--- abstract: 'We adapted the joint-training scheme of the Faster RCNN framework from Caffe to TensorFlow as a baseline implementation for object detection. Our code is made publicly available. This report documents the simplifications made to the original pipeline, with justifications from ablation analysis on both PASCAL VOC 2007 and COCO 2014. We further investigated the role of non-maximal suppression (NMS) in selecting regions-of-interest (RoIs) for region classification, and found that a biased sampling toward small regions helps performance and can achieve mAP on par with NMS-based sampling when sufficiently converged.' author: - | Xinlei Chen\ Carnegie Mellon University\ [[email protected]]{} - | Abhinav Gupta\ Carnegie Mellon University\ [[email protected]]{} title: An Implementation of Faster RCNN with Study for Region Sampling --- Baseline Faster RCNN with Simplification ======================================== We adapted the joint-training scheme of the Faster RCNN detection framework[^1] [@ren2015faster] from Caffe[^2] to TensorFlow[^3] as a baseline implementation. Our code is made publicly available[^4]. During the implementation process, several simplifications are made to the original pipeline, with observations from ablation analysis that they either do not affect or even potentially improve the performance. The ablation analysis has the following default setup: Base network. : Pre-trained VGG16 [@simonyan2014very]. The feature map from `conv5_3` is used for region proposals and fed into region-of-interest (RoI) pooling. Datasets. : Both PASCAL VOC 2007 [@everingham2010pascal] and COCO 2014 [@lin2014microsoft]. For VOC we use the `trainval` split for training, and `test` for evaluation. For COCO we use `train+valminusminival` and `minival`, same as the published model. Training/Testing. : The default end-to-end, single-scale training/testing scheme is copied from the original implementation. The learning rate starts at $0.001$ and is reduced after $50k$/$350k$ iterations. Training finishes at $70k$/$490k$ iterations. Following COCO challenge requirements, for each testing image, the detection pipeline provides at most $100$ detection results. Evaluation. : We use the evaluation toolkits provided by the respective datasets. The metrics are based on detection average precision/recall. The first notable change follows Huang et al. [@google]. Instead of using the RoI pooling layer, we use the `crop_and_resize` operator, which crops and resizes feature maps to $14\times 14$, and then max-pools them to $7\times 7$ to match the input size of `fc6`. Second, we do not aggregate gradients from $N=2$ images and $R=128$ regions [@girshick2015fast]; instead we simply sample $R=256$ regions from $N=1$ image during a single forward-backward pass. Gradient accumulation across multiple batches is slow, and requires extra operators in TensorFlow. Note that $R$ is the number of regions sampled for training the region classifier; for training the region proposal network (RPN) we still use the default $256$ regions. Third, the original Faster RCNN removes small proposals (less than $16$ pixels in height or width in the original scale). We find this step redundant and harmful to performance, especially for small objects.
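Returning to the first change above, a minimal sketch of the `crop_and_resize` pooling step is given below. It is only an illustration under our own naming, assuming TensorFlow 2.x, a batch of `conv5_3` feature maps and RoIs already normalized to $[0,1]$ image coordinates; it is not the released implementation.

```python
import tensorflow as tf

def roi_features(conv5_3, rois, box_indices, crop_size=14):
    """Sketch of the pooling described above (illustrative only).

    conv5_3:     [batch, H, W, C] feature maps.
    rois:        [num_rois, 4] boxes as (y1, x1, y2, x2), normalized to [0, 1].
    box_indices: [num_rois] int32 index of the image each RoI belongs to.
    """
    # Crop each RoI from its feature map and bilinearly resize it to 14x14.
    crops = tf.image.crop_and_resize(
        conv5_3, rois, box_indices, crop_size=[crop_size, crop_size])
    # Max-pool the 14x14 crops down to 7x7 to match the input size of fc6.
    return tf.nn.max_pool2d(crops, ksize=2, strides=2, padding='VALID')
```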
Other minor changes that do not seem to affect the performance include: 1) doubling the learning rate for bias; 2) stopping weight decay on bias; 3) removing aspect-ratio grouping (introduced to save memory); 4) excluding ground-truth bounding boxes from the RoIs during training, since they are not accessible during testing and can bias the input distribution for region classification. For ablation analysis results on VOC 2007, please refer to Table \[tab:voc2007-tf\]. Performance-wise, our implementation is in general on par with the original Caffe implementation. The `crop_and_resize` pooling appears to have a slight advantage over RoI pooling. We further test the pipeline on COCO, see Table \[tab:coco-tf\]. We fix $N=1$ and only use `crop_and_resize` pooling, which in general gives better average recall than RoI pooling. Keeping the small region proposals also gives a consistent boost on small objects. Overall our baseline implementation gives better AP ($+4\%$) and AR ($+5\%$) for small objects. As we vary $R$, we find $256$ gives a good trade-off with the default training scheme, as further increasing $R$ causes potential over-fitting. Training/Testing Speed ---------------------- Ideally, our training procedure can almost cut the total time in half since gradients are only accumulated over $N=1$ image. However, the increased batch size $R=256$ and the use of `crop_and_resize` pooling slow down each iteration a bit. Adding the underlying TensorFlow overhead, the average speed for a COCO net on a Titan X (non-Pascal) GPU for training is $400$ms per iteration, whereas for testing it is $160$ms per image in our experimental environment. A Study of Region Sampling\[sec:inv\] ===================================== We also investigated how the distribution of the region proposals fed into region classification can influence the training/testing process. In the original Faster RCNN, several steps are taken to select a set of regions: - First, take the top $K$ regions according to RPN score. - Then, non-maximal suppression (NMS) with an overlapping ratio of $0.7$ is applied to perform de-duplication. - Third, top $k$ regions are selected as RoIs. For training, $K=12000$ and $k=2000$ are used, and later $R$ regions are sampled for training the region classifier with a pre-defined positive/negative ratio ($0.25/0.75$); for testing $K=6000$ and $k=300$ are used. We refer to this default setting as **NMS**. In Ren et al. [@ren2015faster], a comparable mean average precision (mAP) can be achieved when the top-ranked $K=6000$ proposals are directly selected without NMS during *testing*. This suggests that NMS can be removed at the cost of evaluating more RoIs. However, it is less clear whether NMS de-duplication is necessary during *training*. On a related note, NMS is believed to be crucial for selecting hard examples for Fast RCNN [@ohem_cvpr16]. Therefore, we want to check whether it also holds for Faster RCNN in the joint-training setting. Our first alternative (**ALL**) works by simply feeding all top $K$ regions for positive/negative sampling without NMS. While this alternative appears to optimize the same objective function as the one with NMS, there is a subtle difference: NMS implicitly biases the sampling procedure toward smaller regions. Intuitively, it is more likely for large regions to overlap than small regions, so large regions have a higher chance to be suppressed.
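For concreteness, the default NMS-based selection described by the three steps above can be sketched as follows; the function name and the TensorFlow 2.x calls are our own illustration, not the released code, with the training-time values $K=12000$, $k=2000$ as defaults.

```python
import tensorflow as tf

def select_rois_nms(boxes, scores, K=12000, k=2000, iou_threshold=0.7):
    """Sketch of the default NMS RoI selection (testing would use K=6000, k=300).

    boxes:  [num_proposals, 4] RPN proposals as (y1, x1, y2, x2).
    scores: [num_proposals] RPN objectness scores.
    """
    # Step 1: keep the top-K proposals according to the RPN score.
    K = tf.minimum(K, tf.shape(scores)[0])
    top_scores, top_idx = tf.math.top_k(scores, k=K)
    top_boxes = tf.gather(boxes, top_idx)
    # Steps 2-3: de-duplicate with NMS at IoU 0.7 and keep at most k RoIs.
    keep = tf.image.non_max_suppression(
        top_boxes, top_scores, max_output_size=k, iou_threshold=iou_threshold)
    return tf.gather(top_boxes, keep)
```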
A proper bias in sampling is known to at least help networks converge more quickly [@bansal2016pixelnet], and is actually already used in Faster RCNN: a fixed positive/negative ratio avoids always learning on negative patches. To this end, we add two more alternatives for comparison. The first one (**PRE**) computes the final ratio of a pre-trained Faster RCNN model that uses NMS, and samples regions based on this final ratio. The second one (**POW**) simply fits the sampling ratio to the power law: $r(s)=s^{-\gamma}$ where $r$ is ratio, $s$ is scale, and $\gamma$ is a constant factor (set to $1$). While PRE still depends on a trained model with NMS, POW does not require NMS at all. To fit the target distribution, we keep all regions of the scale with the highest ratio in the distribution, and randomly select regions of other scales according to the relative ratio. E.g., if the distribution is $(0.4,0.2,0.2)$ for scales $(8, 16, 32)$, then all the scale-$8$ regions are kept, and $50\%$ of the other two scales are later sampled. Note that for both of them we set $k=6000$ ($k$ functions as $K$) during training, since roughly half the regions are already thrown away. Following Ren et al. [@ren2015faster], we simply select the top $K$ proposals directly for evaluation. With little or no harm to precision but a direct benefit to recall, mAP generally increases as $K$ gets larger. We set $K=5000$ to trade off speed and performance. This testing scheme is referred to as **TOP**. We begin by showing results on VOC 2007 in Table \[tab:voc2007\]. As can be seen, apart from ALL, other schemes with biased sampling all achieve the same level of mAP (around $71\%$). We also include results (last row) that use NMS during training but switch to TOP for testing. Somewhat to our surprise, it achieves better performance. In fact, we find this advantage of TOP over NMS consistently exists when $K$ is sufficiently large. A more thorough set of experiments was conducted on COCO, which is summarized in Table \[tab:coco\]. Similar to VOC, we find biased sampling (NMS, PRE and POW) in general gives better results than uniform sampling (ALL). In particular, with $490k$ iterations of training, NMS is able to offer a performance similar to PRE/POW after $790k$ iterations. Out of curiosity, we also checked the model trained with $790k$ NMS iterations, which is able to converge to a better AP ($28.3$ on `minival`) with the TOP testing scheme. We did notice that with more iterations, the gap between NMS and POW narrows down from $1.7$ ($490k$) to $1.4$ ($790k$), indicating that the latter may catch up eventually. The difference to VOC suggests that $490k$ iterations are not sufficient to fully converge on COCO. Extra experiments with longer training iterations are needed for a more conclusive answer. A. Bansal, X. Chen, B. Russell, A. Gupta, and D. Ramanan. Pixelnet: Towards a general pixel-level architecture, 2016. M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. [*IJCV*]{}, 88(2):303–338, 2010. R. Girshick. Fast r-cnn. In [*ICCV*]{}, 2015. J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, and K. Murphy. Speed/accuracy trade-offs for modern convolutional object detectors, 2016. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll[á]{}r, and C. L. Zitnick. Microsoft coco: Common objects in context. In [*ECCV*]{}, 2014. S. Ren, K. He, R. Girshick, and J. Sun. 
Faster r-cnn: Towards real-time object detection with region proposal networks. In [*NIPS*]{}, 2015. A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. In [*CVPR*]{}, 2016. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition, 2014. [^1]: <https://github.com/rbgirshick/py-faster-rcnn> [^2]: <https://github.com/BVLC/caffe> [^3]: <https://github.com/tensorflow> [^4]: <https://github.com/endernewton/tf-faster-rcnn>
--- abstract: '[The Buneman instability occurring when an electron population is drifting with respect to the ions is analyzed in the quantum linear and nonlinear regimes. The one-dimensional low-frequency and collisional model of Shokri and Niknam \[Phys. Plasmas, **12** (2005) 062110\] is revisited by introducing the Bohm potential term in the momentum equation. The linear regime is investigated analytically, and quantum effects result in a reduction of the instability. The nonlinear regime is then assessed both numerically and analytically, and pure quantum density oscillations are found to appear during the late evolution of the instability.]{}' title: 'Nonlinear low-frequency collisional quantum Buneman instability' --- The Buneman instability [@Buneman1959] is a basic instability process in classical plasmas. It occurs in beam/plasma systems when there is a significant drift between electrons and ions. It is frequently referred to as the “Farley-Buneman” instability because it was almost simultaneously discovered by Farley [@Farley1963]. In view of the very basic “set-up” needed to trigger it, this instability has been found to play a role in many physical scenarios. In space physics and geophysics, this instability has been invoked in the Earth's ionosphere [@Farley1963] or in the solar chromosphere [@Gogoberidze]. In double-layer and collisionless shock physics, the same instability has been found responsible, under certain circumstances, for the very formation of this kind of structure [@Iisuka1979]. In situations where an electron beam enters a plasma, like in the fast ignition scenario for inertial fusion [@Tabak1994], the electronic return current it prompts has been found to generate exponentially growing Buneman modes through its interaction with the plasma ions [@Lovelace; @Bret2008]. On the other hand, presently quantum plasmas are attracting much attention for a variety of reasons, since they provide a typical state of ionized matter under large densities and/or low temperatures. For instance, quantum effects in plasmas are relevant in ultra-small semiconductor devices, metal clusters, intense laser-solid interaction experiments and compact astrophysical objects like neutron stars and white dwarfs (see [@rmp; @usp] for reviews). Moreover, X-ray Thomson scattering techniques in dense plasmas have been used [@glenzer] to verify the signature of quantum diffraction effects in the dispersion relation for electrostatic waves. Also, in the near future the development of coherent brilliant X-ray radiation sources [@thiele] and keV free electron lasers [@gregori] will provide experimental access to the quantum nature of plasmas under extreme conditions. Some of the most recent developments in the field are the analysis of wave breaking in quantum plasmas [@bleker], the characteristics of bounded quantum plasmas including electron exchange-correlation effects [@Ma], the development of a quantum single-wave theory for nonlinear coherent structures in quantum plasmas [@tzenov], the discussion of waves in quantum dusty plasmas [@Stenflo], the prediction of a fundamental size limit for plasmonic devices due to the quantum broadening of the transition layer [@marklund], as well as the inclusion of spin [@brodin] and relativistic [@tito] effects in quantum plasma modeling. Finally, we note the usefulness of quantum plasma techniques for other, closely related problems, such as the treatment of nonlinear wave propagation in gravitating Bose-Einstein condensates [@ghosh]. 
The aim of the present work is to discuss the quantum analog of the Buneman instability. Our approach is based on the quantum hydrodynamic model for plasmas, which has proven to be very useful for the understanding of nonlinear problems in quantum Coulomb systems [@book]. More specifically, we consider linear and nonlinear low-frequency waves in a collisional electron-ion plasma, extending the model by Shokri and Niknam [@shokri] by means of the inclusion of a quantum term (the so-called Bohm potential) associated with the wave nature of the quantum particles. As verified in what follows, the Bohm potential has a stabilizing influence on the low-frequency linear Buneman instability. In addition, it eventually produces nonlinear oscillatory structures at the late stages of the instability, in sharp contrast to the monotonic character of the classical stationary states. We start with the two-species cold quantum hydrodynamic model [@ms], taking into account the effect of collisions using simple relaxation terms, $$\begin{aligned} \label{ee1} \frac{\partial n_e}{\partial t} + \frac{\partial (n_e v_e)}{\partial x} &=& 0 \,,\\ \frac{\partial n_i}{\partial t} + \frac{\partial (n_i v_i)}{\partial x} &=& 0 \,,\\ \frac{\partial v_e}{\partial t} + v_e \frac{\partial v_e}{\partial x} &=& - \frac{e E}{m}- \nu_e v_e\\ && + \frac{\hbar^2}{2m^2}\frac{\partial}{\partial x}\left(\frac{\partial^{2}\sqrt{n_e}/\partial x^2}{\sqrt{n_e}}\right)\,,\nonumber\\ \frac{\partial v_i}{\partial t} + v_i \frac{\partial v_i}{\partial x} &=& \frac{e E}{M} - \nu_i v_i \,,\\ \label{ee5} \frac{\partial E}{\partial x} &=& \frac{e}{\varepsilon_0} (n_i - n_e) \,.\end{aligned}$$ In Eqs. (\[ee1\])–(\[ee5\]), $n_{e,i}$ are the electron (ion) number densities, $v_{e,i}$ the electron (ion) fluid velocities, $E$ the electrostatic field, $m$ ($M$) the electron (ion) mass, $-e$ the electron charge, $\hbar$ the Planck constant divided by $2\pi$ and $\varepsilon_0$ the vacuum permittivity. Moreover, $\nu_{e,i}$ represent electron (ion) collision frequencies with neutrals. For simplicity, only one spatial dimension is considered in this work. Quantum effects are included in the force equation for electrons by means of the $\sim \hbar^2$ term, the so-called Bohm potential. Due to $m/M \ll 1$, no quantum terms are needed in the ion force equation. We linearize the model around the homogeneous equilibrium $$n_{e,i} = n_0 \,, \quad v_e = -\frac{e E_0}{m\nu_e} \,, \quad v_i = \frac{e E_0}{M\nu_i} \,, \quad E = E_0 \,,$$ where $E_0$ is an external DC electric field. Supposing perturbations $\sim \exp(i[kx-\omega t])$ with wavenumber $k$ and wave frequency $\omega$, the resulting dispersion relation in the reference frame of the drifting ions is $$\begin{aligned} 1 &-& \frac{\omega_{pe}^2}{(\omega - kv_0) (\omega - kv_0 + i\nu_e) - \hbar^2 k^4/(4m^2)}\nonumber\\ &-& \frac{\omega_{pi}^2}{\omega (\omega + i\nu_i)} = 0 \,,\end{aligned}$$ where $\omega_{pe} = (n_0 e^2/(m\varepsilon_0))^{1/2}$ and $\omega_{pi} = (n_0 e^2/(M\varepsilon_0))^{1/2}$ are resp. the electron and ion plasma frequencies and where $$v_0 = - e E_0 \left(\frac{1}{m\nu_e} + \frac{1}{M\nu_i}\right)$$ is the relative electron-ion equilibrium drift velocity. 
In the very low frequency range $\omega \ll \nu_i \ll kv_0$, $\nu_e \ll kv_0$, the dispersion relation reduces to $$\label{dr} \omega = \frac{i\omega_{pi}^2}{\nu_i} \frac{k^2 (v_{0}^2- \hbar^2 k^2/(4 m^2))}{\omega_{pe}^2 - k^2 v_{0}^2 + \hbar^{2}k^4/(4m^2)} \,.$$ This mode is unstable (${\rm Im}(\omega) > 0$) provided $$\label{sp} \omega_{pe}^2 > k^2 v_{0}^2 - \frac{\hbar^2 k^4}{4m^2} > 0 \,,$$ otherwise it is damped. Small wavelengths such that $\hbar^2 k^2 > 4m^2 v_{0}^2$ are automatically stable, due to the quantum effects. Assuming a large ion-neutral collision frequency where $\nu_i \gg \omega_{pi}$, we can suppose a slow temporal dynamics. Moreover, from Eq. (\[sp\]) we have $v_0/\omega_{pe}$ as a natural choice of spatial scale for the development of the instability, at least for not very large quantum effects. Therefore, we consider the following rescaling, $$\begin{aligned} t &\rightarrow& \frac{\omega_{pi}^2 t}{\nu_i} \,, \quad x \rightarrow \frac{\omega_{pe} x}{v_0} \,, \quad v_e \rightarrow \frac{v_e}{v_0} \,, \\ v_i &\rightarrow& \frac{M \nu_i v_i}{m \omega_{pe} v_0} \,, \quad n_{e,i} \rightarrow \frac{n_{e,i}}{n_0} \,, \quad E \rightarrow \frac{e E}{m \omega_{pe} v_0} \,. \nonumber\end{aligned}$$ For simplicity using the same symbols for original and transformed variables, the rescaled system reads $$\begin{aligned} \frac{\omega_{pi}^2}{\nu_i \omega_{pe}} \frac{\partial n_e}{\partial t} + \frac{\partial (n_e v_e)}{\partial x} &=& 0 \,,\\ \frac{\partial n_i}{\partial t} + \frac{\partial (n_i v_i)}{\partial x} &=& 0 \,,\\ \frac{\omega_{pi}^2}{\nu_i \omega_{pe}} \frac{\partial v_e}{\partial t} + v_e \frac{\partial v_e}{\partial x} &=& - E - \frac{\nu_e}{\omega_{pe}} v_e \\ && + \frac{H^2}{2}\frac{\partial}{\partial x}\left(\frac{\partial^{2}\sqrt{n_e}/\partial x^2}{\sqrt{n_e}}\right) \,,\nonumber\\ \frac{\omega_{pi}^2}{\nu_{i}^2} \left(\frac{\partial v_i}{\partial t} + v_i \frac{\partial v_i}{\partial x}\right) &=& E - v_i \,,\\ \frac{\partial E}{\partial x} &=& n_i - n_e \,,\end{aligned}$$ where $$H = \frac{\hbar\omega_{pe}}{mv_{0}^2}$$ is a non-dimensional parameter measuring the relevance of the Bohm potential. Provided $$\nu_i \gg \omega_{pi} \,, \quad \omega_{pe} \gg \nu_e \,,$$ we obtain $$\begin{aligned} \label{e1} \frac{\partial (n_e v_e)}{\partial x} &=& 0 \,,\\ \frac{\partial n_i}{\partial t} + \frac{\partial (n_i v_i)}{\partial x} &=& 0 \,,\\ \label{bohm} v_e \frac{\partial v_e}{\partial x} &=& - E\!+\!\frac{H^2}{2}\frac{\partial}{\partial x}\left(\frac{\partial^{2}\sqrt{n_e}/\partial x^2}{\sqrt{n_e}}\right)\!,\\ E &=& v_i \,,\\ \label{e5} \frac{\partial E}{\partial x} &=& n_i - n_e \,,\end{aligned}$$ where the remaining terms are assumed to be of the same order. The final equations are the same as Eqs. (2)–(6) of ref. [@shokri], with the inclusion of the extra $\sim H^2$ contribution. The purpose of the present work is to investigate the role of this quantum term. We linearize Eqs. (\[e1\])–(\[e5\]) around $n_e = n_i = 1, v_e = 1, v_i = 0, E = 0$. Note that after rescaling the equilibrium ion velocity and electric field are higher-order terms, due to $\nu_{e}/\omega_{pe} \ll 1$. We get the dispersion relation $$\label{wg} \omega = i\gamma \,, \quad \gamma = \frac{k^2 (1 - H^2 k^2/4)}{1 - k^2 + H^2 k^4/4} \,,$$ which is the same as Eq. (\[dr\]), in terms of non-dimensional variables. Moreover in Eq. (\[wg\]) we define $\gamma$, the imaginary part of the frequency. It is interesting to analyze the behavior of $\gamma$ according to the quantum parameter $H$. 
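As a quick illustration (ours, not part of the original analysis), the sign of $\gamma$ in Eq. (\[wg\]) can be scanned numerically over a wavenumber grid for representative values of $H$; the short Python sketch below does this and reports where the stability character changes.

```python
import numpy as np

def growth_rate(k, H):
    """Growth rate gamma(k) from Eq. (wg), in dimensionless variables."""
    return k**2 * (1.0 - H**2 * k**2 / 4.0) / (1.0 - k**2 + H**2 * k**4 / 4.0)

# Scan the sign of gamma(k) in the four regimes discussed in the text:
# H = 0 (classical), 0 < H < 1, H = 1 and H > 1.
k = np.linspace(0.01, 5.0, 2000)
for H in (0.0, 0.5, 1.0, 2.0):
    unstable = growth_rate(k, H) > 0.0
    changes = k[np.flatnonzero(np.diff(unstable.astype(int)))]
    print(f"H = {H}: stability changes near k =", np.round(changes, 2))
```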
In the classical $H = 0$ case, one has $\gamma = k^2/(1-k^2)$ and linear instability for $k < 1$. For the sake of comparison with the non-vanishing $H$ case, in Fig. \[fig1\] we show the corresponding form of the classical linear instability. The asymptote at $k = 1$ points to an explosive instability. However, this singularity is eventually regularized by nonlinear effects, as discussed in [@shokri]. ![Growth rate $\gamma$ in Eq. (\[wg\]) as a function of the wavenumber $k$, in the classical limit ($H = 0$). Dimensionless variables are used. Note the asymptote at $k = 1$. In addition, $\gamma \rightarrow -1$ as $k \rightarrow \infty$. Instability is found for $0 < k < 1$.[]{data-label="fig1"}](fig1.eps){width="45.00000%"} In the semiclassical $ 0 < H < 1$ case, the growth rate from Eq. (\[wg\]) has two asymptotes at $k_{A,B}$ defined by $$\begin{aligned} \label{kab} k_{A}^2 &=& \frac{2}{H^2}\left(1 - [1-H^2]^{1/2}\right),\nonumber\\ k_{B}^2 &=& \frac{2}{H^2}\left(1 + [1-H^2]^{1/2}\right).\end{aligned}$$ Moreover, one has $\gamma > 0$ for $0 < k < k_A$ or $k_B < k < k_C$, where $$\label{kc} k_{C}^2 = k_{A}^2 + k_{B}^2 = \frac{4}{H^2} \,.$$ By coincidence, one has formally the same instability condition as for the quantum two-stream instability described by a quantum Dawson model, see Eqs. (37)–(42) of ref. [@ms]. We note that the instability of small wavelengths, where $k_B < k < k_C$, has no classical counterpart. The behavior of $\gamma$ as a function of the wavenumber in the semiclassical situation when $0 < H < 1$ is shown in Fig. \[fig2\]. Again, $\gamma \rightarrow -1$ as $k \rightarrow \infty$. ![Growth rate $\gamma$ in Eq. (\[wg\]) as a function of the wavenumber $k$, in the semi-classical case ($0 < H < 1$). The value $H = 0.5$ and dimensionless variables were used. Note the asymptotes at $k = k_{A,B}$. In addition, $\gamma \rightarrow -1$ as $k \rightarrow \infty$. Instability is found for $0 < k < k_A$ and also in the small wavelength region $k_B < k < k_C$.[]{data-label="fig2"}](fig2.eps){width="45.00000%"} The case $H = 1$ is special because then $k_A = k_B = \sqrt{2}$, so that the middle stable branch in Fig. \[fig2\] disappears. One still has an explosive instability at $k = \sqrt{2}$. The perturbation is linearly stable for $k \geq k_C = 2$. This is shown in Fig. \[fig3\]. ![Growth rate $\gamma$ in Eq. (\[wg\]) as a function of the wavenumber $k$, in the particular case $H = 1$. Dimensionless variables are used. Note the asymptote at $k = k_A = k_B = \sqrt{2}$. In addition, $\gamma \rightarrow -1$ as $k \rightarrow \infty$. Instability is found for $0 < k < 2$.[]{data-label="fig3"}](fig3.eps){width="45.00000%"} When $H > 1$, the denominator in Eq. (\[wg\]) can be shown to be always positive, so that singularities are ruled out. Instability is found for $k < k_C = 2/H$, with the most unstable wavenumber being $k = \sqrt{2}/H$. The corresponding maximal growth rate is $\gamma_{\rm max} = 1/(H^2-1)$. We see that both the unstable $k$-region and the maximal growth rate shrink to zero as $H$ increases, which is a signature of the ultimate stabilizing nature of the quantum effects. The corresponding function $\gamma(k)$ is shown in Fig. \[fig4\]. ![Growth rate $\gamma$ in Eq. (\[wg\]) as a function of the wavenumber $k$, in the strongly quantum ($H > 1$) case. The value $H = 2$ and dimensionless variables were used. Again, $\gamma \rightarrow -1$ as $k \rightarrow \infty$. 
Instability is found for $0 < k < k_C = 2/H$.[]{data-label="fig4"}](fig4.eps){width="45.00000%"} The overall situation can be visualized in Fig. \[fig5\], where the unstable region in $(k^2,H^2)$ space is shown. This is formally the same as Fig. 1 of ref. [@ms] on the quantum two-stream instability. On the other hand, for the low-frequency collision-dominated Buneman instability one has distinct behaviors of $\gamma(k)$, according to the parameter $H$. Namely, one has the four classes shown in Figs. \[fig1\] to \[fig4\]. In contrast, the quantum two-stream instability exhibits no singularities of the growth rate at specific wavenumbers. ![Stability diagram for the low-frequency collision-dominated quantum Buneman instability. The filled area is unstable. Lower and middle curves: resp. $k_{A}^2$ and $k_{B}^2$ as defined in Eq. (\[kab\]). Upper curve: $k_{C}^2$ as defined in Eq. (\[kc\]).[]{data-label="fig5"}](fig5.eps){width="45.00000%"} The linear instabilities just described are eventually killed by nonlinear effects. In this context, Eqs. (\[e1\])–(\[e5\]) provide a convenient framework for nonlinear studies of the collision-dominated low-frequency quantum Buneman instability. We define the new variables $$\label{v} V = \frac{1}{n_e} \,, \quad \rho = n_i - n_e \,,$$ giving resp. the electron fluid velocity and net charge density. Equations (\[e1\])–(\[e5\]) can then be shown to reduce to $$\begin{aligned} \label{ee6} \rho &=& - \frac{1}{2}\frac{\partial^2}{\partial x^2}\left[V^2 - H^2 \frac{\partial^{2}\sqrt{1/V}/\partial x^2}{\sqrt{1/V}}\right] \,,\\ \label{e6} 0 &=&\frac{\partial V}{\partial t} + V^2 \frac{\partial^2 V}{\partial x^2} \\ &-& \frac{H^2 V^2}{2}\frac{\partial}{\partial x} \left(\frac{1}{V}\frac{\partial}{\partial x}\left[\frac{\partial^{2}\sqrt{1/V}/\partial x^2}{\sqrt{1/V}}\right]\right) \nonumber\\ &-&\!V^2\!\left(\frac{\partial\rho}{\partial t}\!-\! \frac{\partial}{\partial x}\left[\frac{\rho}{2}\frac{\partial}{\partial x}\left(V^2 - H^2 \frac{\partial^{2}\sqrt{1/V}/\partial x^2}{\sqrt{1/V}}\right)\right]\right)\!.\nonumber\end{aligned}$$ Assuming quasineutrality ($\rho = 0$) and $V = 1$ at early times, we find the first three terms in Eq. (\[e6\]) to be initially the most relevant. Linearizing them, we get $$\label{dif} \frac{\partial V}{\partial t} = - \frac{\partial^2 V}{\partial x^2} - \frac{H^2}{4}\frac{\partial^4 V}{\partial x^4} \,,$$ a quantum-modified diffusion equation with a negative diffusion coefficient. Solutions growing in time are easily found, with the corresponding instability described by the dispersion relation $$\omega = i k^2 \left(1 - \frac{H^2 k^2}{4}\right) \,.$$ This is the quasineutral version of Eq. (\[wg\]). However, the instability in time is accompanied by a periodic structure in space, a feature not present in the classical case. Indeed, solutions to Eq. (\[dif\]) oscillatory in $x$ can also be readily constructed [*e.g.*]{} by separation of variables. In addition, again quantum effects provide the stabilization of the small wavelengths such that $k > k_C = 2/H$. After the initial increase of the perturbation and enlargement of the density gradient, the $\sim \rho$ terms in Eq. (\[e6\]) become essential. To examine the stationary states of the model, we set all time derivatives to zero. 
A little algebra then shows that $$\label{e7} \frac{d^2}{dx^2}\left(\frac{V^2}{2} - \frac{H^2}{2} \frac{d^{2}\sqrt{1/V}/dx^2}{\sqrt{1/V}}\right) = \frac{1}{V} \,,$$ assuming symmetric solutions so that $V'(0)=0$ and excluding the case of identically vanishing electric fields. We start solving Eq. (\[e7\]) in the classical ($H = 0$) limit. Defining $$K = \frac{V^2}{2}$$ one obtains the Newton-like equation $$\label{e8} \frac{d^2 K}{dx^2} = \frac{1}{\sqrt{2K}} \,.$$ Assuming $n_{e}(0) = 1$ and a symmetric density profile so that $n_{e}'(0) = 0$, one can integrate Eq. (\[e8\]) twice with $K(0) = 1/2, K'(0) = 0$. In terms of the electron fluid velocity $V$ the result is $$(V -1)^{1/2} (V + 2) = X \,, \quad X \equiv \frac{3\, x}{\sqrt{2}} \,,$$ which is equivalent to a cubic equation for $V$ with only one real root, namely $$\label{e9} V = - 1 + Q + 1/Q \,,$$ where $$Q = \left(\frac{2 + X^2 + \sqrt{X^4 + 4X^2}}{2}\right)^{1/3} \,.$$ This is the classical nonlinear stationary Buneman solution, in full agreement with Ref. [@shokri]. We note that the ion profile follows from $v_i = E$, with $E$ given by Eq. (\[bohm\]), as well as from $n_i = \rho + 1/V$, with $\rho$ given by Eq. (\[ee6\]). It turns out that $n_i \equiv 0$, which is a result of the low-frequency assumption. Indeed, the continuity equation for ions implies $n_i E = \mathrm{const.}$, set to zero in view of the larger ion mass, which in turn implies a negligible stationary ion density. On the other hand, using Eq. (\[e9\]) one obtains an electric field $E \sim - (6x)^{1/3}$ for large $|x|$, arising from the electron fluid bunching. Also observe that it is possible to find more general solutions, including ion density corrections as well as traveling wave forms. However, in this case the algebra becomes much more involved. Incidentally, we note that from Eq. (\[e9\]) we have $$\label{ic} n_{e}(0) = 1 \,, \quad n_{e}'(0) = 0 \,, \quad n_{e}''(0) = - 1 \,, \quad n_{e}'''(0) = 0 \,,$$ a result to be used in the $H \neq 0$ case. In the quantum case, we need to return to Eq. (\[e7\]). Defining $$A \equiv \sqrt{n_e} = \sqrt{1/V} \,,$$ one obtains the fourth-order ordinary differential equation $$\label{e10} H^2 \frac{d^2}{dx^2}\left(\frac{d^{2}A/dx^2}{A}\right) - \frac{d^2}{dx^2}\left(\frac{1}{A^4}\right) + 2A^2 = 0 \,,$$ describing the final, stationary nonlinear stage of the density modulations. For the sake of comparison, we assume $$\label{e11} A(0) = 1 \,, A'(0) = 0 \,, A''(0) = -1/2 \,, A'''(0) = 0 \,,$$ which correspond to the same boundary conditions (\[ic\]) as the classical solution. In other words, the classical solution is used to set also the second and third-order derivatives of the density at $x=0$, which are needed in the quantum case. Equation (\[e10\]) can be numerically solved, yielding periodic modulations of the stationary velocity and density profiles. The amplitude of the modulations is seen to increase with the strength of the quantum effects, as apparent from Figs. \[fig6\] and \[fig7\]. The emergence of new oscillatory structures is ubiquitous in quantum plasmas. In the present case, the ultimate role of the Bohm potential term in Eq. (\[bohm\]) is a qualitative modification of basic equilibrium macroscopic properties like the electron fluid density and velocity. 
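For reference, a minimal numerical sketch of how Eq. (\[e10\]) with the data (\[e11\]) can be integrated is given below; it rewrites the equation as a first-order system in $(A,A',g,g')$ with $g=A''/A$, uses SciPy, and the value $H^2=1$ is only an example (Figs. \[fig6\] and \[fig7\] use $H^2=0.5$, $1.0$ and $1.5$).

```python
import numpy as np
from scipy.integrate import solve_ivp

H2 = 1.0  # example value of H^2

def rhs(x, y):
    """Eq. (e10) with g = A''/A, so that
    g'' = [ (1/A^4)'' - 2 A^2 ] / H^2,  where (1/A^4)'' = 20 A'^2/A^6 - 4 g/A^4."""
    A, Ap, g, gp = y
    gpp = (20.0 * Ap**2 / A**6 - 4.0 * g / A**4 - 2.0 * A**2) / H2
    return [Ap, g * A, gp, gpp]

# Initial data (e11): A(0)=1, A'(0)=0, A''(0)=-1/2  =>  g(0)=-1/2, g'(0)=0.
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, -0.5, 0.0], max_step=0.01)
n_e = sol.y[0]**2   # stationary electron density n_e = A^2
V = 1.0 / n_e       # stationary electron fluid velocity V = 1/n_e
```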
Similar oscillatory patterns of a pure quantum origin appear in the description of weak shocks in quantum plasmas [@bychkov], of the quantum Harris sheet solution in magnetized quantum plasmas [@epl] and in undulations of the equilibrium Wigner function in quantum plasma weak turbulence [@pre]. Considering the extreme quantum case where only the $\sim H^2$ term is taken into account in Eq. (\[e10\]) can be instructive for the physical interpretation of the influence of the Bohm potential. In this $H^2 \gg 1$ situation, one has $A =\cos(x/\sqrt{2})$ in view of the initial conditions in Eq. (\[e11\]). This implies a density $n_e = A^2$ oscillating with a wavelength $\lambda = \sqrt{2}\,\pi$, of the order of $v_0/\omega_{pe}$ using dimensional coordinates. The behavior so described is confirmed by numerical simulations, with corrections arising from the classical terms. For intermediate values of $H$, numerical analysis shows that the wavelength displayed in Figs. \[fig6\] and \[fig7\] is actually not constant, but falls like $\sim x^{-3/4}$ instead. ![Stationary electron fluid velocity. Upper, left: classical exact solution from Eq. (\[e9\]). The remaining panels come from the numerical solution of Eq. (\[e10\]). Upper, right: $H^2 = 0.5$. Bottom, left: $H^2 = 1.0$. Bottom, right: $H^2 = 1.5$.[]{data-label="fig6"}](fig6.eps){width="45.00000%"} ![Stationary electron fluid density. Upper, left: classical exact solution from Eq. (\[e9\]). The remaining panels come from the numerical solution of Eq. (\[e10\]). Upper, right: $H^2 = 0.5$. Bottom, left: $H^2 = 1.0$. Bottom, right: $H^2 = 1.5$.[]{data-label="fig7"}](fig7.eps){width="45.00000%"} Turning attention to the quasineutral regime, we examine the nonlinear stationary states when $\rho \equiv 0$. In this case, Eqs. (\[ee6\])–(\[e6\]) reduce to Eq. (\[ee6\]) only, the other one being redundant. After integrating twice using Eqs. (\[e11\]), the model can be shown to be equivalent to the autonomous Pinney equation [@Pinney] $$\label{pi} H^2 \frac{d^2 A}{dx^2} + \left(1 + \frac{H^2}{2}\right)A = \frac{1}{A^3} \,.$$ Pinney’s equation is endemic in nonlinear analysis and is well-known to be exactly solvable. This is especially true in the present autonomous case, where Eq. (\[pi\]) can be directly integrated twice. Assuming $A(0) = 1, A'(0) = 0$ as before, the solution reads $$\label{qn} A^2 = n_e = \frac{1}{1+H^2/2}\left(1 + \frac{H^2}{2}\cos^2\left[\sqrt{1+H^2/2}\,\,\frac{x}{H}\right]\right) \,,$$ displaying quantum oscillations not existing in the classical case. Once again, the amplitude of the quantum oscillations in space increases with $H$, as shown in Fig. \[fig8\] for the stationary electron fluid density. The bunching present in the non-quasineutral case is eliminated. Similar results apply to the electron fluid velocity. ![Stationary electron fluid density in the quasineutral case, as described by Eq. (\[qn\]). Upper, left: $H^2 = 0$. Upper, right: $H^2 = 0.5$. Bottom, left: $H^2 = 1.0$. Bottom, right: $H^2 = 1.5$.[]{data-label="fig8"}](fig8.eps){width="45.00000%"} In conclusion, we have analyzed the low-frequency collisional quantum Buneman instability, both in the linear and nonlinear regimes through a one-dimensional model. Note that because this electrostatic instability is longitudinal, such a low-dimensional analysis is relevant. The nonlinear evolution of the instability can be studied numerically and analytically, and results in pure quantum density oscillations. 
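As a simple consistency check (ours, not part of the original derivation), the closed-form profile (\[qn\]) can be substituted back into Eq. (\[pi\]) numerically:

```python
import numpy as np

def A_qn(x, H):
    """Quasineutral profile A = sqrt(n_e) from Eq. (qn)."""
    w = np.sqrt(1.0 + 0.5 * H**2)
    return np.sqrt((1.0 + 0.5 * H**2 * np.cos(w * x / H)**2) / (1.0 + 0.5 * H**2))

def pinney_residual(x, H, h=1e-4):
    """Residual of Eq. (pi): H^2 A'' + (1 + H^2/2) A - 1/A^3 (A'' by central differences)."""
    A = A_qn(x, H)
    App = (A_qn(x + h, H) - 2.0 * A + A_qn(x - h, H)) / h**2
    return H**2 * App + (1.0 + 0.5 * H**2) * A - 1.0 / A**3

x = np.linspace(0.0, 10.0, 101)
print(np.max(np.abs(pinney_residual(x, H=1.0))))  # small, limited only by the finite-difference step
```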
[**Acknowledgments**]{} This work was supported by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and by projects ENE2009-09276 of the Spanish Ministerio de Educación y Ciencia and PEII11-0056-1890 of the Consejería de Educación y Ciencia de la Junta de Comunidades de Castilla-La Mancha. [99]{} BUNEMAN O., [*Phys. Rev.*]{}, [**115**]{} (1959) 503. FARLEY T., [*J. Geophys. Res.*]{}, [**68**]{} (1963) 6083. GOGOBERIDZE G. [*et al.*]{}, [*Astrophys. J. Lett.*]{}, [**706**]{} (2009) L12. IIZUKA S., SAEKI K., SATO N. and HATTA Y., [*Phys. Rev. Lett.*]{}, [**43**]{} (1979) 1404. TABAK M., [*Phys. Plasmas*]{}, [**1**]{} (1994) 1626. LOVELACE R. V. and SUDAN R. N., [*Phys. Rev. Lett.*]{}, [**27**]{} (1971) 1256. BRET A. and DIECKMANN M., [*Phys. Plasmas*]{}, [**15**]{} (2008) 012104. SHUKLA P. K. and ELIASSON B., [*Rev. Mod. Phys.*]{}, [**83**]{} (2011) 885. SHUKLA P. K. and ELIASSON B., [*Physics-Uspekhi*]{}, [**53**]{} (2010) 51. GLENZER S. H. and REDMER R., [*Rev. Mod. Phys.*]{}, [**81**]{} (2009) 1625. THIELE R. [*et al.*]{}, [*Phys. Rev. E*]{}, [**82**]{} (2010) 056404. GREGORI G. and GERICKE D. O., [*Phys. Plasmas*]{}, [**16**]{} (2009) 056306. SCHMIDT-BLEKER A., GASSEN W. and KULL H.-J., [*Europhys. Lett.*]{}, [**95**]{} (2011) 5503. MA Y. T., MAO S. H. and XUE J. K., [*Phys. Plasmas*]{}, [**18**]{} (2011) 102108. TZENOV S. I. and MARINOV K. B., [*Phys. Plasmas*]{}, [**18**]{} (2011) 102312. STENFLO L., SHUKLA P. K. and MARKLUND M., [*Europhys. Lett.*]{}, [**74**]{} (2006) 844. MARKLUND M., BRODIN G., STENFLO L. and LIU C. S., [*Europhys. Lett.*]{}, [**84**]{} (2008) 17006. BRODIN G., MARKLUND M., ZAMANIAN J. and STEFAN M., [*Plasma Phys. Control. Fusion*]{}, [**53**]{} (2011) 074013. MENDONÇA J. T., [*Phys. Plasmas*]{}, [**18**]{} (2011) 062101. GHOSH S. and CHAKRABARTI N., [*Phys. Rev. E*]{}, [**84**]{} (2011) 046601. HAAS F., [*Quantum Plasmas: An Hydrodynamic Approach*]{} (Springer, New York) 2011. SHOKRI B. and NIKNAM A. R., [*Phys. Plasmas*]{}, [**12**]{} (2005) 062110. HAAS F., MANFREDI G. and FEIX M., [*Phys. Rev. E*]{}, [**62**]{} (2000) 2703. BYCHKOV M., MODESTOV M. and MARKLUND M., [*Phys. Plasmas*]{}, [**15**]{} (2008) 13. HAAS F., [*Europhys. Lett.*]{}, [**77**]{} (2007) 45004. HAAS F., ELIASSON B., SHUKLA P. K. and MANFREDI G., [*Phys. Rev. E*]{}, [**78**]{} (2008) 056407. PINNEY E., [*Proc. Am. Math. Soc.*]{}, [**1**]{} (1950) 681.
--- abstract: 'In this article we prove the Arnold-Givental conjecture for a class of Lagrangian submanifolds in Marsden-Weinstein quotients which are fixpoint sets of some antisymplectic involution. For these Lagrangians the Floer homology cannot in general be defined by standard means due to the bubbling phenomenon. To overcome this difficulty we consider moment Floer homology whose boundary operator is defined by counting solutions of the symplectic vortex equations on the strip which have better compactness properties than the original Floer equations.' author: - 'Urs Frauenfelder[^1]' title: 'The Arnold-Givental conjecture and moment Floer homology' --- Introduction ============ Assume that $(M,\omega)$ is a $2n$-dimensional compact symplectic manifold, $L \subset M$ is a compact Lagrangian submanifold of $M$, and $R \in \mathrm{Diff}(M)$ is an antisymplectic involution, i.e. $$R^*\omega=-\omega, \quad R^2=\mathrm{id}$$ whose fixpoint set is the Lagrangian $$L=\mathrm{Fix}(R).$$ The Arnold-Givental conjecture gives a lower bound on the number of intersection points of $L$ with a Hamiltonian isotopic Lagrangian submanifold which intersects $L$ transversally in terms of the Betti numbers of $L$, more precisely\ \ **Conjecture (Arnold-Givental)** *For $t \in [0,1]$ let $H_t \in C^\infty(M)$ be a smooth family of Hamiltonian functions on $M$ and denote by $\phi_H$ the time-one map of the flow of the Hamiltonian vector field $X_{H_t}$ of $H_t$. Assume that $L$ and $\phi_H(L)$ intersect transversally. Then the number of intersection points of $L$ and $\phi_H(L)$ can be estimated from below by the sum of the $\mathbb{Z}_2$ Betti numbers of $L$, i.e. $$\#(L \cap \phi_H(L)) \geq \sum_{k=0}^n b_k(L;\mathbb{Z}_2).$$*\ \ Using the fact that the diagonal $\Delta$ is a Lagrangian submanifold of $(M \times M, \omega \oplus -\omega)$ which equals the fixpoint set of the antisymplectic involution of $M \times M$ given by interchanging the two factors, the Arnold conjecture with $\mathbb{Z}_2$ coefficients for arbitrary compact symplectic manifolds is a corollary of the Arnold-Givental conjecture.\ \ **Corollary (Arnold conjecture)** *Assume that for $t \in [0,1]$ there is a smooth family $H_t \in C^\infty(M)$ of Hamiltonian functions such that $1$ is not an eigenvalue of $d \phi_H(x)$ for each fixpoint $x$ of $\phi_H$. Then the number of fixpoints of $\phi_H$ can be estimated from below by the sum of the $\mathbb{Z}_2$ Betti numbers of $M$, i.e. $$\#\mathrm{Fix}(\phi_H) \geq \sum_{k=0}^{2n}b_k(M,\mathbb{Z}_2).$$*\ Up to now, the Arnold-Givental conjecture has only been proven under some additional assumptions. Givental proved it in [@givental] for $(\mathbb{C}P^n,\mathbb{R}P^n)$, Oh proved it in [@oh3] for real forms of compact Hermitian spaces with suitable conditions on the Maslov indices, Lazzarini proved it in [@lazzarini] in the negative monotone case under suitable assumptions on the minimal Maslov number, and Fukaya, Oh, Ohta, Ono proved it in [@fukaya-oh-ohta-ono] for semipositive Lagrangian submanifolds. In my thesis, see [@frauenfelder2], I introduced moment Floer homology and proved the Arnold-Givental conjecture for a class of Lagrangians in Marsden-Weinstein quotients which satisfy some monotonicity condition. In this paper, this monotonicity condition will be removed. Outline of the paper -------------------- In section \[agi\] we give a heuristic argument for why the Arnold-Givental conjecture should hold. This argument is based on Floer theory. 
We will ignore questions of bubbling and of transversality. The main point is that due to the antisymplectic involution some contributions to the boundary operator should cancel. To describe the cancellation process one has to choose the almost complex structure in a non-generic way. For such a choice of almost complex structure one cannot in general expect to achieve transversality. To overcome this problem we consider in section \[transi\] abstract perturbations as in [@fukaya-ono2] and prove a transversality result specific to problems of Arnold-Givental type. We will need that later to compute moment Floer homology. In my thesis, see [@frauenfelder2], this technique was not available to me and hence I could only compute moment Floer homology under some monotonicity assumption. In section \[mofoho\] we introduce moment Floer homology. The chain complex of moment Floer homology is generated by the intersection points of some Lagrangian submanifold of a Marsden-Weinstein quotient which is the fixpoint set of an antisymplectic involution with its image under a generic Hamiltonian isotopy. The boundary operator is defined by counting solutions of the symplectic vortex equations on the strip. These equations contain an equivariant version of Floer’s equation and a condition which relates the curvature to the moment map. Under some topological restrictions on the enveloping manifold, we prove a compactness theorem which allows us to prove that the boundary operator is well-defined. To compute the moment Floer homology we set the Hamiltonian equal to zero. This is the infinite dimensional analogue of a Morse-Bott situation. Combining the techniques developed in section \[transi\] with the approach to Morse-Bott theory described in the appendix, where a boundary operator is defined by counting flow lines with cascades, we prove that moment Floer homology is isomorphic to the singular homology of the Lagrangian in the Marsden-Weinstein quotient with coefficients in some Novikov ring. In the appendix we develop an approach to defining Morse-Bott homology by counting flow lines with cascades. We prove that, for a given Morse-Bott function and for a generic choice of a Riemannian metric on the manifold and of a Morse function and a Riemannian metric on the critical manifold, transversality for flow lines with cascades can be achieved and hence the boundary operator is well-defined. We show that Morse-Bott homology is independent of the Morse-Bott function and hence isomorphic to the ordinary Morse homology. Acknowledgements ---------------- I would like to express my deep gratitude to the supervisor of my thesis, Prof. D. Salamon, for drawing my attention to moment Floer homology, for his encouragement and a lot of lively discussions. I cordially thank the two coexaminers Prof. K. Ono and Prof. E. Zehnder for a great number of valuable suggestions. In particular, I would like to thank Prof. Ono for enabling me to come to Hokkaido University and for explaining to me the theory of Kuranishi structures. Arnold-Givental conjecture {#agi} ========================== We will give in this section a heuristic argument, based on Floer theory, for why the Arnold-Givental conjecture should hold. We refer the reader to Floer’s original papers [@floer1; @floer2; @floer3; @floer4; @floer5] and to [@salamon1] for an introduction to the topic of Floer theory. We will here just introduce the main objects of the theory. Moreover, we completely ignore questions of bubbling and of transversality. 
We will address the question of transversality in section \[transi\]. We hope that these techniques combined with the techniques developed in [@fukaya-ono2] and [@fukaya-oh-ohta-ono] to overcome the bubbling phenomenon should lead to a proof of the Arnold-Givental conjecture in general. However, in this paper we will not pursue such an approach but consider instead in section \[mofoho\] moment Floer homology where no bubbling occurs. We abbreviate by $\Gamma$ the group $$\Gamma=\frac{\pi_2(M,L)}{\mathrm{ker}I_\mu \cap \mathrm{ker}I_\omega}$$ where $I_\omega \colon \pi_2(M,L) \to \mathbb{R}$ and $I_\mu \colon \pi_2(M,L) \to \mathbb{Z}$ are the homomorphisms induced from the symplectic structure $\omega$ and the Maslov number $\mu$, respectively. We refer the reader to [@robbin-salamon1; @robbin-salamon2; @salamon-zehnder] for a discussion of the Maslov index. Following [@hofer-salamon] we define the Novikov ring $\Lambda=\Lambda_\Gamma$ as the ring consisting of formal sums $$r=\sum_{\gamma \in \Gamma} r_\gamma \gamma, \quad r_\gamma \in \mathbb{Z}_2$$ which satisfy the finiteness condition $$\#\{\gamma \in \Gamma: r_\gamma \neq 0,\,\,I_\omega(\gamma) \geq \kappa\} <\infty, \quad \forall \,\,\kappa \in \mathbb{R}.$$ Note that since the coefficients are taken in the field $\mathbb{Z}_2$ the Novikov ring is actually a field. It is naturally graded by $I_\mu$. To define the Floer chain complex we consider pairs $(\bar{x},x) \in C^\infty((\Omega, \Omega \cap \mathbb{R}),(M,L)) \times C^\infty(([0,1],\{0,1\}),(M,L))$ [^2] for the half-disk $\Omega=\{z \in \mathbb{C}: |z| \leq 1,\,\,\mathrm{Im}(z) \geq 0\}$ which satisfy the following conditions $$\dot{x}(t)=X_{H_t}(x(t)), \quad \bar{x}(e^{i\pi t})=x(t), \quad t \in [0,1].$$ We introduce an equivalence relation on these pairs by $$(\bar{x},x) \cong (\bar{y},y) \Longleftrightarrow x=y,\,\, \omega(\bar{x})=\omega(\bar{y}),\,\,\mu(\bar{x})=\mu(\bar{y})$$ and denote the set of equivalence classes by $\mathscr{C}$, recalling the fact that this set may be interpreted as the critical set of an action functional on a covering of the space of paths in $M$ which connect two points of the Lagrangian $L$. The Floer chain complex $CF_*(M,L;H)$ can now be defined as the graded $\mathbb{Z}_2$ vector space consisting of formal sums $$\xi=\sum_{c \in \mathscr{C}}\xi_c c, \quad \xi_c \in \mathbb{Z}_2$$ satisfying the finiteness condition $$\#\{c=[\bar{x},x] \in \mathscr{C}: \xi_c \neq 0,\,\,I_\omega(\bar{x}) \geq \kappa\} <\infty, \quad \forall \,\,\kappa \in \mathbb{R}.$$ The grading of $CF_*$ is induced from the Maslov index. The natural action of $\Gamma$ on $CF_*$ by concatenation induces on $CF_*$ the structure of a graded vector space over the graded field $\Lambda$.\ To define the boundary operator we choose a smooth family of $\omega$-compatible almost complex structures $J_t$ for $t \in [0,1]$ and count for two critical points $[\bar{x},x], [\bar{y},y] \in \mathscr{C}$ solutions $u \in C^\infty([0,1]\times \mathbb{R},M)$ of the following problem $$\begin{aligned} \label{floer} \partial_s u+J_t(u)(\partial_t u-X_{H_t}(u)) &=&0\\ \nonumber u(s,j) \in L, \,\,& &j \in \{0,1\}\\ \nonumber \lim_{s \to -\infty}u(s,t) &=& x(t)\\ \nonumber \lim_{s \to \infty}u(s,t) &=& y(t)\\ \nonumber \bar{x} \#[u]\# \bar{y} &=& 0.\end{aligned}$$ Here the limits are uniform in the $t$-variable with respect to the $C^1$-topology and $\#$ denotes concatenation. 
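We note in passing the standard energy identity for solutions of (\[floer\]): since $J_t\partial_s u=\partial_t u-X_{H_t}(u)$ and $|\partial_s u|^2=\omega(\partial_s u,J_t\partial_s u)$, one has $$E(u):=\int_{-\infty}^{\infty}\int_0^1 |\partial_s u|^2\,dt\,ds = \int_{-\infty}^{\infty}\int_0^1 \omega(\partial_s u,\partial_t u-X_{H_t}(u))\,dt\,ds,$$ which, for the asymptotics prescribed in (\[floer\]) and up to the sign convention chosen for the action functional, equals the difference of the action values of $[\bar{x},x]$ and $[\bar{y},y]$; in particular the action is monotone along solutions of (\[floer\]).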
For a generic choice of the almost complex structures the moduli space $\tilde{\mathcal{M}}([\bar{x},x],[\bar{y},y])$ of solutions of (\[floer\]) is a smooth manifold of dimension $$\mathrm{dim}(\tilde{\mathcal{M}}([\bar{x},x],[\bar{y},y])) =\mu(\bar{x})-\mu(\bar{y}).$$ If $[\bar{x},x]$ is different from $[\bar{y},y]$ the group $\mathbb{R}$ acts freely on the solutions of (\[floer\]) by timeshift $$u(s,t) \mapsto u(s+r,t), \quad r \in \mathbb{R}.$$ We denote the quotient by $$\mathcal{M}([\bar{x},x],[\bar{y},y])=\tilde{\mathcal{M}}([\bar{x},x], [\bar{y},y])/\mathbb{R}.$$ If we ignore the question of bubbling the manifold $\mathcal{M}([\bar{x},x],[\bar{y},y])$ for critical points $[\bar{x},x],[\bar{y},y] \in \mathscr{C}$ satisfying $\mu(\bar{x})-\mu(\bar{y})=1$ is compact and we may define the $\mathbb{Z}_2$ numbers $$n([\bar{x},x],[\bar{y},y]):=\# \mathcal{M}([\bar{x},x],[\bar{y},y]) \,\,\mathrm{mod} \,\,2.$$ The Floer boundary operator is now defined as the linear extension of $$\partial[\bar{x},x]=\sum_{\substack{[\bar{y},y] \in \mathscr{C},\\ \mu(\bar{y})=\mu(\bar{x})-1}}n([\bar{x},x],[\bar{y},y])[\bar{y},y].$$ Ignoring again the bubbling problem one can “prove” $$\partial^2=0$$ and hence one can define $$HF_*(M,L;H,J)=\frac{\mathrm{ker}\partial}{\mathrm{Im}\partial}.$$ One can show - always ignoring the bubbling problem - that for different choices of $H$ and $J$ there are canonical isomorphisms between the graded $\Lambda$ vector spaces and hence Floer homology $HF_*(M,L)$ is defined independently of $H$ and $J$. It follows from its definition that the dimension of $HF_*$ as a $\Lambda$ vector space gives a lower bound on the number of intersection points of $L$ and $\phi_H(L)$.\ To “compute” the Floer homology we consider the case where $H=0$. Actually, in this case $L$ and $\phi_H(L)=L$ do not intersect transversally but still cleanly [^3]. The finite dimensional analogue of clean intersections in Floer theory is given by Morse-Bott functions. In our case the critical manifold consists of different copies of the Lagrangian $L$ indexed by the group $\Gamma$. Following the approach developed in the appendix one can still define Floer homology in the case of clean intersections. To do that one chooses a Morse function on the critical manifold. The critical points of this Morse function give a basis for the chain complex. The boundary operator is defined by counting flow lines with cascades. In our case, a cascade is just a nonconstant solution $u \in C^\infty([0,1]\times \mathbb{R},M)$ of the unperturbed Floer equation $$\begin{aligned} \partial_s u+J_t(u)\partial_t u&=&0,\\ u(s,j) \in L,& & j \in \{0,1\}\end{aligned}$$ which converges uniformly in the $t$-variable to points on $L$ as $s$ goes to $\pm \infty$. A flow line with zero cascades is just a Morse flow line. A flow line with one cascade consists of a piece of a Morse flow line and a cascade which converges on the left to the endpoint of this piece and on the right to the initial point of a second piece of a Morse flow line, see appendix \[morsebott\] for details. The boundary operator now splits naturally $$\partial=\partial^0+\partial^1$$ where $\partial^0$ consists of the flow lines with zero cascades and $\partial^1$ consists of the flow lines with exactly one cascade. Note that $\partial^0$ is precisely the Morse boundary operator. 
If one can show that $\partial^1$ is zero it follows that $$HF_*(M,L)=HM_*(L;\mathbb{Z}_2)\otimes_{\mathbb{Z}_2}\Lambda$$ where $HM_*(L;\mathbb{Z}_2)$ is the Morse homology of $L$ with $\mathbb{Z}_2$ coefficients, which is isomorphic to the singular homology of $L$ with $\mathbb{Z}_2$ coefficients, see [@schwarz1].\ It now remains to explain why one has to expect that $\partial^1$ vanishes. Since the Lagrangian $L$ is the fixpoint set of the antisymplectic involution $R$ there is an induced involution on the space of cascades. More precisely, since we ignore questions of transversality we may now choose the family of $\omega$-compatible almost complex structures $J_t$ independent of the $t$-variable and impose furthermore the condition that $J$ is anti-invariant under $R$, i.e. $$R^*J=-J.$$ When $J$ is independent of the $t$-variable a cascade corresponds to a nonconstant $J$-holomorphic disk satisfying Lagrangian boundary conditions, i.e. $$\begin{aligned} \label{agf} \partial_s u+J(u)\partial_t u &=&0\\ \nonumber u(s,j) \in L, \,\,& &j \in \{0,1\}.\end{aligned}$$ The antisymplectic involution $R$ induces a natural involution on the solutions of the problem above defined by $$I_1 u(s,t)=R(u(s,1-t)).$$ This involution does not act freely in general, but it was observed by K. Ono that on the “lanterns”, the fixpoints of the involution $I_1$, one can define a new involution given by $$I_2 u(s,t)=\left\{\begin{array}{cc} u(s,t+1/2) & 0 \leq t \leq 1/2\\ u(s,t-1/2) & 1/2 \leq t \leq 1 \end{array}\right.$$ Finally, on the “double lanterns”, the fixpoints of the involution $I_2$, one can define a third involution given by $$I_3 u(s,t)=\left\{\begin{array}{cc} u(s,t+1/4) & 0 \leq t \leq 3/4\\ u(s,t-3/4) & 3/4 \leq t \leq 1 \end{array}\right.$$ and so on. Since we consider only nonconstant solutions, there exists $m \in \mathbb{N}$ such that $I_m$ acts freely. Hence one may expect that there is an even number of flow lines with exactly one cascade. Since we are working with $\mathbb{Z}_2$ coefficients this implies that $\partial^1$ is zero. Transversality {#transi} ============== For general symplectic manifolds one cannot expect to find an $R$-anti-invariant $\omega$-compatible almost complex structure which is regular so that the relevant moduli spaces have the structure of finite dimensional manifolds. Hence to make precise the heuristic argument at the end of the last section one has to use abstract perturbations. To do that one considers solutions of (\[agf\]) modulo the natural $\mathbb{R}$ action as the zero set of a section from an infinite dimensional Banach manifold $\mathcal{B}$ into a Banach bundle $\mathcal{E}$ over it. There are natural extensions of the involutions $I_k$ to involutions of the Banach manifold, so that again $I_1$ is defined on the whole space, $I_2$ on the fixpoint set of $I_1$ and so on. The idea of our abstract perturbation is now to perturb our section in such a way that the perturbed section intersects the zero section transversally but that its zero set is still invariant under the involutions. There are two problems we have to overcome. The first one is that one cannot define invariants from the zero set of an arbitrary section in an infinite dimensional space and hence one should define the section in some “finite dimensional” neighbourhood of the zero set of the original section. This problem was solved in [@fukaya-ono2] by using Kuranishi structures. 
The second problem is that for each involution one should find extensions $I_k^{T\mathcal{B}}$ and $I_k^{\mathcal{E}}$ to the tangent space $T\mathcal{B}$ and the bundle $\mathcal{E}$, respectively, restricted to the domain of definition of $I_k$, in order to ensure that the zero set of the perturbed section is invariant under the involutions. Banach spaces ------------- We interpret solutions of (\[agf\]) as the zero-set of a smooth section from a Banach manifold $\tilde{\mathcal{B}}$ to a Banach bundle $\tilde{\mathcal{E}}$ over $\tilde{\mathcal{B}}$. To define $\tilde{\mathcal{B}}$ we first have to introduce some weighted Sobolev norms. Choose a smooth cutoff function $\beta \in C^\infty(\mathbb{R})$ such that $\beta(s)=0$ for $s<0$ and $\beta(s)=1$ for $s>1$. Choose $\delta>0$ and define $\gamma_\delta \in C^\infty(\mathbb{R})$ by $$\gamma_\delta(s):=e^{\delta \beta(s) s}.$$ Let $\Omega$ be an open subset of the strip $\mathbb{R}\times [0,1]$. For $1 \leq p \leq \infty$ and $k \in \mathbb{N}_0$ we define the $||\,||_{k,p,\delta}$-norm for $v \in W^{k,p}(\Omega)$ by $$||v||_{k,p,\delta}:=\sum_{i+j \leq k}||\gamma_\delta \cdot \partial^i_s\partial^j_t v||_p.$$ We introduce the following weighted Sobolev spaces $$W^{k,p}_\delta(\Omega):=\{v \in W^{k,p}(\Omega):||v||_{k,p,\delta}<\infty\} =\{v \in W^{k,p}(\Omega):\gamma_\delta v \in W^{k,p}(\Omega)\}.$$ We abbreviate $$L^p_\delta(\Omega):=W^{0,p}_\delta(\Omega).$$ Fix a real number $p>2$ and a Riemannian metric $g$ on $TM$. The Banach manifold $\tilde{\mathcal{B}}= \tilde{\mathcal{B}}^{1,p}_\delta(M,L)$ consists of $W^{1,p}_{loc}$-maps $u$ from the strip $\mathbb{R}\times [0,1]$ to $M$ which map the boundary of the strip to the Lagrangian $L$ and satisfy in addition the following conditions. B1: : There exists a point $x^- \in L$, a real number $T_1$, and an element $v_1 \in W^{1,p}_\delta((-\infty,-T_1)\times [0,1],T_{x^-}M)$ such that $$u(s,t)=\exp_{x^-}(v_1(s,t)), \quad s < -T_1.$$ Here the exponential map is taken with respect to the metric $g$. B2: : There exists a point $x^+ \in L$, a real number $T_2$, and an element $v_2 \in W^{1,p}_\delta((T_2,\infty)\times [0,1],T_{x^+}M)$ such that $$u(s,t)=\exp_{x^+}(v_2(s,t)), \quad s >T_2.$$ We introduce the Banach bundle $\tilde{\mathcal{E}}$ over the Banach manifold $\tilde{\mathcal{B}}$ whose fiber over $u \in \tilde{\mathcal{B}}$ is given by $$\tilde{\mathcal{E}}_u:=L^p_\delta(u^*TM).$$ Define the section $\tilde{\mathcal{F}} \colon \tilde{\mathcal{B}} \to \tilde{\mathcal{E}}$ by $$\tilde{\mathcal{F}}(u):=\partial_s u+J(u)\partial_t u$$ for $u \in \tilde{\mathcal{B}}$. Note that the zero set $\tilde{\mathcal{F}}^{-1}(0)$ consists of solutions of (\[agf\]). The vertical differential of $\tilde{\mathcal{F}}$ at $u \in \tilde{\mathcal{F}}^{-1}(0)$ is given by $$\tilde{D}_u \xi:=D\tilde{\mathcal{F}}(u)\xi=\partial_s \xi+J(u)\partial_t \xi+\nabla_\xi J(u)\partial_t u, \quad \xi \in T_u \tilde{\mathcal{B}},$$ where $\nabla$ denotes the Levi-Civita connection of the metric $g(\cdot,\cdot)=\omega(\cdot,J\cdot)$. 
There is a natural action of the group $\mathbb{R}$ on $\tilde{\mathcal{B}}$ and $\tilde{\mathcal{E}}$ given by time shift $$u(s,t) \mapsto u(s+r,t), \quad r \in \mathbb{R}.$$ We denote the quotient by $$\mathcal{B}:=\tilde{\mathcal{B}}/\mathbb{R}, \quad \mathcal{E}:=\tilde{\mathcal{E}}/\mathbb{R},$$ by $$\mathcal{F} \colon \mathcal{B} \to \mathcal{E}$$ we denote the section induced from $\tilde{\mathcal{F}}$ and by $D_u$ we denote the vertical differential of $\mathcal{F}$ for $u \in \mathcal{F}^{-1}(0)$. By induction we define in the obvious way for every $k \in \mathbb{N}$ smooth involutions $I_k$ on $\mathcal{B}_k$, where $\mathcal{B}_1=\mathcal{B}$ and $\mathcal{B}_k=\mathrm{Fix}(I_{k-1})$ for $k \geq 2$. Our aim is to construct natural extensions of these involutions to $T\mathcal{B}$ and $\mathcal{E}$ restricted to their domain of definition. We prove the following theorem. \[invo\] For each $k \in \mathbb{N}$ there exist smooth involutative bundle maps $$I^{T\mathcal{B}}_k \colon T\mathcal{B}|_{\mathcal{B}_k} \to T\mathcal{B}|_{\mathcal{B}_k}, \quad I^{\mathcal{E}}_k \colon \mathcal{E}|_{\mathcal{B}_k} \to \mathcal{E}|_{\mathcal{B}_k}$$ which extend $I_k$ and which have the following properties. (i) : The bundle maps commute on their common domain of definition, i.e. $$I_k^{T\mathcal{B}} \circ I_{\ell}^{T\mathcal{B}}\xi= I_{\ell}^{T\mathcal{B}}\circ I_k^{T\mathcal{B}}\xi, \quad I_k^{\mathcal{E}} \circ I_{\ell}^{\mathcal{E}}\eta= I_{\ell}^{\mathcal{E}}\circ I_k^{\mathcal{E}}\eta, \quad \xi \in T\mathcal{B}|_{\mathcal{B}_{\ell}},\,\, \eta \in \mathcal{E}|_{\mathcal{B}_{\ell}}$$ for $\ell \geq k$. (ii) : For $k \in \mathbb{N}$ and $u \in \mathcal{F}^{-1}(0) \cap \mathcal{B}_k$ the vertical differential of $\mathcal{F}$ commutes with the involutions modulo a compact operator. More precisely, there exists a compact operator $Q_u \colon T_u \mathcal{B} \to \mathcal{E}_u$ such that $$I^{\mathcal{E}}_k \circ D_u-D_u \circ I^{T\mathcal{B}}_k =I_k^{\mathcal{E}} \circ Q_u-Q_u \circ I_k^{T\mathcal{B}}.$$ Moreover, $I_k^{\mathcal{E}} \circ Q_u-Q_u \circ I_k^{T\mathcal{B}}$ vanishes on $T_u\mathcal{B}_k$. (iii) : The restriction of $I_k^{T\mathcal{B}}$ to the tangent space of $\mathcal{B}_k$ equals the differential of $I_k$, i.e. $$I_k^{T\mathcal{B}}|_{T\mathcal{B}_k}=d I_k.$$

A sequence of involutions
-------------------------

The aim of this subsection is to prove Theorem \[invo\]. Since the involutions we want to construct are independent of the $s$-variable it is most convenient to define them on the space of paths whose endpoints lie on the Lagrangian $L$. We define for a real number $p>1$ the path space $$\mathscr{P}:=\mathscr{P}^{1,p}(M,L):=W^{1,p}(([0,1],\{0,1\}),(M,L)).$$ For $x \in \mathscr{P}$ the tangent space of $\mathscr{P}$ at $x$ is given by $$T_x\mathscr{P}=W^{1,p}(([0,1],\{0,1\}),(T^*_x M,T^*_x L)).$$ We define the bundle $\mathscr{E}$ over $\mathscr{P}$ whose fiber over $x \in \mathscr{P}$ is given by $$\mathscr{E}_x=L^p([0,1],T_x^*M).$$ Note that $T \mathscr{P}$ is a subbundle of $\mathscr{E}$. As before we set $$\mathscr{P}_1:=\mathscr{P}$$ and define the first involution $\mathscr{I}_1 \in \mathrm{Diff}(\mathscr{P}_1)$ by $$\mathscr{I}_1 x(t):=R(x(1-t)), \quad x \in \mathscr{P}_1.$$ By induction on $k \in \mathbb{N}$ we define for $k \geq 2$ $$\mathscr{P}_k:=\mathrm{Fix}(\mathscr{I}_{k-1}), \quad \mathscr{I}_k x(t):=x(t+\frac{1}{2^{k-1}}-\lfloor t+\frac{1}{2^{k-1}} \rfloor), \quad x \in \mathscr{P}_k.$$ Here $\lfloor \,\, \rfloor$ denotes the Gauss bracket, i.e. 
the largest integer less than or equal to a given real number $$\lfloor r \rfloor:=\max\{n \in \mathbb{Z}: n \leq r\}, \quad r \in \mathbb{R}.$$ Observe that if the index of integrability $p$ is greater than two and $u \in \tilde{\mathcal{B}}$ has the property that $u_s(t):=u(s,t) \in \mathscr{P}_k$ for every $s \in \mathbb{R}$ and some $k \in \mathbb{N}$, then $\mathscr{I}_k$ induces an involution on such maps $u$ which commutes with the $\mathbb{R}$-action on $\tilde{\mathcal{B}}$, so that the induced map on the quotient $\mathcal{B}=\tilde{\mathcal{B}}/\mathbb{R}$ coincides with the involution $I_k$. We will find in this subsection an extension of these involutions to smooth involutative bundle maps $\mathscr{I}_k \colon \mathscr{E}|_{\mathscr{P}_k} \to \mathscr{E}|_{\mathscr{P}_k}$ such that the following properties are satisfied. (i) : The tangent bundle of the path space is invariant under the involutions $$\mathscr{I}_k(T\mathscr{P}|_{\mathscr{P}_k})=T\mathscr{P}|_{\mathscr{P}_k}.$$ Moreover, $$\mathscr{I}_k|_{T\mathscr{P}_k}=d \mathscr{I}_k|_{\mathscr{P}_k}.$$ (ii) : The involutions commute on their common domain of definition $$\mathscr{I}_k \circ \mathscr{I}_\ell \xi=\mathscr{I}_\ell \circ \mathscr{I}_k \xi, \quad \xi \in \mathscr{E}|_{\mathscr{P}_\ell}$$ for $\ell \geq k$. We will then define the involutions $I^{T\mathcal{B}}_k$ and $I^{\mathcal{E}}_k$ of Theorem \[invo\] as the induced maps of $\mathscr{I}_k$ on $T\mathcal{B}|_{\mathcal{B}_k}$ respectively $\mathcal{E}|_{\mathcal{B}_k}$. Property (i) of the involutions $\mathscr{I}_k$ guarantees that $I^{T\mathcal{B}}_k$ is well defined and yields assertion (iii) of Theorem \[invo\]. Assertion (i) of Theorem \[invo\] follows from property (ii) of the involutions $\mathscr{I}_k$, and assertion (ii) of Theorem \[invo\] will follow from our construction. For $x \in \mathscr{P}$ the extension of the first involution to $\mathscr{E}$ is defined in the following way $$\mathscr{I}_1 \xi(t):=R^* \xi(1-t) \in \mathscr{E}_{\mathscr{I}_1 x}, \quad \xi \in \mathscr{E}_x.$$ If $x \in \mathscr{P}_2$, then $\mathscr{I}_1$ is a bounded linear involutative map of the Banach space $\mathscr{E}_x$ and leads to a decomposition $$\mathscr{E}_x=\mathscr{E}_{x,-1} \oplus \mathscr{E}_{x,1},$$ where $\mathscr{E}_{x,-1}$ is the eigenspace of $\mathscr{I}_1|_{\mathscr{E}_x}$ for the eigenvalue $-1$ and $\mathscr{E}_{x,1}$ is the eigenspace for the eigenvalue $1$. The two projections to the eigenspaces are given by $$\pi_{x,1}= \frac{1}{2}\bigg(\mathrm{id}|_{\mathscr{E}_x}+\mathscr{I}_1|_{\mathscr{E}_x} \bigg) \colon \mathscr{E}_x \to \mathscr{E}_{x,1}, \quad \pi_{x,-1}= \frac{1}{2}\bigg(\mathrm{id}|_{\mathscr{E}_x}-\mathscr{I}_1|_{\mathscr{E}_x} \bigg) \colon \mathscr{E}_x \to \mathscr{E}_{x,-1}.$$ We first define the involutions on the subspace $\mathscr{E}_{x,-1}$. For $\xi \in \mathscr{E}_{x,-1}$ the second involution $\mathscr{I}_2$ is defined by the formula $$\mathscr{I}_2 \xi(t)=(-1)^{\lfloor t+1/2 \rfloor} J(\mathscr{I}_2 x(t))\xi(t+1/2-\lfloor t+1/2 \rfloor) \in \mathscr{E}_{\mathscr{I}_2 x}.$$ To see that $\mathscr{I}_2 \xi$ satisfies condition (i), i.e. maps the tangent space of the path space to itself, we have to check that if $\xi$ is in $W^{1,p}$ and satisfies the Lagrangian boundary conditions, then $\mathscr{I}_2 \xi$ satisfies these conditions, too. 
Since $\xi$ lies in the eigenspace of the eigenvalue $-1$ of the first involution $\mathscr{I}_1$, it follows that $$\xi(1/2)=-dR(x(1/2))\xi(1/2)$$ and using anticommutativity of the almost complex structure $J$ with the antisymplectic involution $R$ together with the fact that the Lagrangian $L$ equals the fixpoint set of $R$ it follows that $$J(x(1/2))\xi(1/2) \in T_{x(1/2)}L$$ and hence $\mathscr{I}_2 \xi$ satisfies the required boundary condition. To prove that $\mathscr{I}_2 \xi \in W^{1,p}$ one has to check continuity at the point $t=1/2$. This is done similarly as above by noting that $$\xi(0)=-\xi(1) \in T_{x(0)}L=T_{x(1)}L.$$ Moreover, a straightforward calculation shows that $$\mathscr{I}_1(\mathscr{I}_2 \xi)=-\mathscr{I}_2 \xi$$ and hence $$\mathscr{I}_2 \xi \in \mathscr{E}_{\mathscr{I}_2 x, -1}.$$ Now assume that $x \in \mathscr{P}_m$ for some integer $m>2$. To define the involutions $\mathscr{I}_3, \ldots, \mathscr{I}_m$ on $\mathscr{E}_{x,-1}$ we first consider the following linear maps $$\mathscr{L}_k \colon \mathscr{E}_{x,-1} \to \mathscr{E}_ {\mathscr{I}_{k+2}x,-1}, \quad k \in \{1,\ldots,m-2\}$$ defined by $$\mathscr{L}_k \xi(t)=\sum_{j=0}^{2^k-1} (-1)^{\lfloor \frac{j}{2^{k-1}}\rfloor+\lfloor t+\frac{1}{2^{k+1}}+ \frac{j}{2^k}\rfloor}\xi\bigg(t+\frac{1}{2^{k+1}}+\frac{j}{2^k} -\lfloor t+\frac{1}{2^{k+1}}+\frac{j}{2^k}\rfloor\bigg).$$ The maps $\mathscr{L}_k$ for $1 \leq k \leq m-2$ are well-defined, i.e. $\mathscr{I}_1(\mathscr{L}_k \xi)=-\mathscr{L}_k \xi$. They have the following properties. (i) : They leave the tangent bundle of the path space invariant, i.e. for $T_{x,-1}\mathscr{P}:=T_x \mathscr{P} \cap \mathscr{E}_{x,-1}$ we have $$\mathscr{L}_k(T_{x,-1}\mathscr{P})=T_{\mathscr{I}_{k+2}x,-1}\mathscr{P}.$$ (ii) : They commute with each other and with $\mathscr{I}_2|_{\mathscr{E}_{x,-1}}$. (iii) : Setting $\mathscr{L}_0:=\mathrm{id}$ their squares can be determined recursively from the formula $$\label{square} \mathscr{L}_{k+1}^2=2\Bigg(\mathscr{L}_k^2+\mathscr{L}_k \bigg(\sum_{i=0}^{k-1} \mathscr{L}_i\bigg)\Bigg).$$ (iv) : If $p=2$ then $\mathscr{L}_k$ is selfadjoint with respect to the $L^2$-inner product $$\langle \xi, \eta \rangle=\int_0^1 \xi(t)\eta(t)dt.$$ (v) : The maps $\mathscr{L}_k$ are injective. 
**Proof:** To prove that the maps $\mathscr{L}_k$ are well defined we calculate $$\begin{aligned} & &R^*\mathscr{L}_k \xi(1-t)\\ &=&\sum_{j=0}^{2^k-1}(-1)^ {\lfloor \frac{j}{2^{k-1}}\rfloor+\lfloor 1-t+\frac{1}{2^{k+1}}+\frac{j}{2^k} \rfloor}\\ & &\quad \cdot R^*\xi\bigg(1-t+\frac{1}{2^{k+1}}+\frac{j}{2^k}- \lfloor 1-t+\frac{1}{2^{k+1}}+\frac{j}{2^k}\rfloor\bigg)\\ &=&-\sum_{j=0}^{2^k-1}(-1)^ {\lfloor \frac{j}{2^{k-1}}\rfloor+\lfloor 1-t+\frac{1}{2^{k+1}}+\frac{j}{2^k} \rfloor}\\ & &\quad \cdot \xi\bigg(t-\frac{1}{2^{k+1}}-\frac{j}{2^k} +\lfloor 1-t+\frac{1}{2^{k+1}}+\frac{j}{2^k}\rfloor\bigg)\\ &=&-\sum_{i=0}^{2^k-1}(-1)^ {\lfloor 2-\frac{i}{2^{k-1}}-\frac{1}{2^{k-1}}\rfloor +\lfloor 1-t+\frac{1}{2^{k+1}}+1-\frac{i}{2^k}-\frac{1}{2^k}\rfloor}\\ & &\quad \cdot \xi\bigg(t-\frac{1}{2^{k+1}}-1+\frac{i}{2^k}+\frac{1}{2^k}+ \lfloor 1-t+\frac{1}{2^{k+1}}+1-\frac{i}{2^k}-\frac{1}{2^k}\rfloor\bigg)\\ &=&-\sum_{i=0}^{2^k-1}(-1)^{\lfloor -\frac{i}{2^{k-1}}-\frac{1}{2^{k-1}} \rfloor+\lfloor -t-\frac{1}{2^{k+1}}-\frac{i}{2^k}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1}{2^{k+1}}+\frac{i}{2^k}+1+ \lfloor -t-\frac{1}{2^{k+1}}-\frac{i}{2^k}\rfloor\bigg)\\ &=&-\sum_{i=0}^{2^k-1}(-1)^{-\lfloor \frac{i}{2^{k-1}}-\frac{1}{2^{k-1}} \rfloor-1-\lfloor t+\frac{1}{2^{k+1}}+\frac{i}{2^k}\rfloor-1}\\ & &\quad \cdot \xi\bigg(t+\frac{1}{2^{k+1}}+\frac{i}{2^k}- \lfloor t+\frac{1}{2^{k+1}}+\frac{i}{2^k}\rfloor\bigg)\\ &=&-\mathscr{L}_k \xi(t).\end{aligned}$$ The second last equality only holds in the case, when $t \neq \frac{j}{2^k}+\frac{1}{2^{k+1}}$ for $j \in \{0,\ldots, 2^k-1\}$. However, the equality above still holds for $\xi$ in the $L^p$-sense. To prove assertion (i), one has to show that if $\xi \in T_{x,-1}\mathscr{P}$, then $\mathscr{L}_k \xi$ satisfies the Lagrangian boundary condition and is continuous at the points $t=\frac{1}{2^{k+1}}+\frac{j}{2^k}$ for $j \in \{0, \ldots 2^k-1\}$. To prove the boundary conditions one checks that $$dR \mathscr{L}_k \xi(0)=\mathscr{L}_k \xi(0), \quad dR \mathscr{L}_k \xi(1)=\mathscr{L}_k \xi(1).$$ To prove continuity one uses $$\xi(0)=-\xi(1)$$ which follows from the assumption that $\xi \in T_{x,-1}\mathscr{P}$. To prove assertion (ii) one checks using the formula $$\lfloor r-\lfloor s \rfloor \rfloor=\lfloor r \rfloor -\lfloor s \rfloor$$ that $$\begin{aligned} & &\mathscr{L}_k \mathscr{L}_\ell \xi(t)\\ &=&\sum_{j=0}^{2^k-1}\sum_{i=0}^{2^\ell-1} (-1)^{\lfloor \frac{j}{2^{k-1}}\rfloor+\lfloor \frac{i}{2^{\ell-1}}\rfloor +\lfloor t+\frac{1}{2^{k+1}}+\frac{1}{2^{\ell+1}}+ \frac{j}{2^k}+\frac{i}{2^\ell}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1}{2^{k+1}}+\frac{1}{2^{\ell+1}}+ \frac{j}{2^k}+\frac{i}{2^\ell}- \lfloor t+\frac{1}{2^{k+1}}+\frac{1}{2^{\ell+1}}+ \frac{j}{2^k}+\frac{i}{2^\ell}\rfloor\bigg).\end{aligned}$$ This formula is symmetric in $k$ and $\ell$ and hence commutativity follows. In a similar way one proves commutativity with $\mathscr{I}_2|_{\mathscr{E}_{x,-1}}$. It is straightforward to check that (\[square\]) holds if $k=0$. We now assume $k \geq 1$. 
Using the formula $$\sum_{i=0}^{k-1}\mathscr{L}_i \xi(t)=\sum^{2^k-1}_{\substack{\ell=0\\ \ell \neq 2^{k-1}}}(-1)^{\lfloor \frac{\ell}{2^{k-1}}\rfloor+ \lfloor t+\frac{\ell}{2^k}\rfloor}\xi\bigg(t+\frac{\ell}{2^k}- \lfloor t+\frac{\ell}{2^k}\rfloor\bigg)$$ we deduce that $$\begin{aligned} \mathscr{L}_k\bigg( \sum_{i=0}^{k-1}\mathscr{L}_i\bigg)\xi(t) &=& \sum_{\substack{\ell=0\\ \ell \neq 2^{k-1}}}^{2^k-1}\sum_{j=0}^{2^k-1} (-1)^{\lfloor \frac{\ell}{2^{k-1}}\rfloor+\lfloor \frac{j}{2^{k-1}}\rfloor +\lfloor t+\frac{\ell}{2^k}+\frac{j}{2^k}+\frac{1}{2^{k+1}}\rfloor}\\ & &\quad\cdot \xi\bigg(t+\frac{\ell}{2^k}+\frac{j}{2^k}+\frac{1}{2^{k+1}} -\lfloor t+\frac{\ell}{2^k}+\frac{j}{2^k}+\frac{1}{2^{k+1}}\rfloor\bigg).\end{aligned}$$ We calculate $$\begin{aligned} & &\Bigg(\mathscr{L}_{k+1}^2-2\mathscr{L}_k^2-2\mathscr{L}_k \bigg(\sum_{i=0}^{k-1}\mathscr{L}_i \bigg)\Bigg)\xi(t)\\ &=&\sum_{j=0}^{2^{k+1}-1}\sum_{i=0}^{2^{k+1}-1} (-1)^{\lfloor \frac{j}{2^k}\rfloor+\lfloor \frac{i}{2^k}\rfloor+ \lfloor t+\frac{1+j+i}{2^{k+1}}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+j+i}{2^{k+1}}-\lfloor t+\frac{1+j+i}{2^{k+1}}\rfloor \bigg) \\ & &-2\sum_{j=0}^{2^k-1}\sum_{i=0}^{2^k-1} (-1)^{\lfloor \frac{j}{2^{k-1}}\rfloor+\lfloor \frac{i}{2^{k-1}}\rfloor+ \lfloor t+\frac{1+j+i}{2^k}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+j+i}{2^k}-\lfloor t+\frac{1+j+i}{2^k}\rfloor \bigg)\\ & &-2\sum_{\substack{j=0\\ j \neq 2^{k-1}}}^{2^k-1}\sum_{i=0}^{2^k-1} (-1)^{\lfloor \frac{j}{2^{k-1}}\rfloor+\lfloor \frac{i}{2^{k-1}}\rfloor+ \lfloor t+\frac{1+2j+2i}{2^{k+1}}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+2j+2i}{2^{k+1}}-\lfloor t+\frac{1+2j+2i}{2^{k+1}} \rfloor \bigg)\end{aligned}$$ $$\begin{aligned} &=&\sum_{j=0}^{2^{k+1}-1}\sum_{i=0}^{2^{k+1}-1} (-1)^{\lfloor \frac{j}{2^k}\rfloor+\lfloor \frac{i}{2^k}\rfloor+ \lfloor t+\frac{1+j+i}{2^{k+1}}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+j+i}{2^{k+1}}-\lfloor t+\frac{1+j+i}{2^{k+1}}\rfloor \bigg) \\ & &-\sum_{j=0}^{2^k-1}\sum_{i=0}^{2^k-1} (-1)^{\lfloor \frac{2j+1}{2^k}\rfloor+\lfloor \frac{2i}{2^k}\rfloor+ \lfloor t+\frac{1+2j+1+2i}{2^{k+1}}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+2j+1+2i}{2^{k+1}}- \lfloor t+\frac{1+2j+1+2i}{2^{k+1}}\rfloor \bigg)\\ & &-\sum_{j=0}^{2^k-1}\sum_{i=0}^{2^k-1} (-1)^{\lfloor \frac{2j}{2^k}\rfloor+\lfloor \frac{2i+1}{2^k}\rfloor+ \lfloor t+\frac{1+2j+2i+1}{2^{k+1}}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+2j+2i+1}{2^{k+1}}- \lfloor t+\frac{1+2j+2i+1}{2^{k+1}}\rfloor \bigg)\\ & &-\sum_{j=0}^{2^k-1}\sum_{i=0}^{2^k-1} (-1)^{\lfloor \frac{2j}{2^k}\rfloor+\lfloor \frac{2i}{2^k}\rfloor+ \lfloor t+\frac{1+2j+2i}{2^{k+1}}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+2j+2i}{2^{k+1}}-\lfloor t+\frac{1+2j+2i}{2^{k+1}} \rfloor \bigg)\\ & &-\sum_{j=1}^{2^k}\sum_{i=0}^{2^k-1} (-1)^{\lfloor \frac{2j-1}{2^k}\rfloor+\lfloor \frac{2i+1}{2^k}\rfloor+ \lfloor t+\frac{1+2j-1+2i+1}{2^{k+1}}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1+2j-1+2i+1}{2^{k+1}}-\lfloor t+\frac{1+2j-1+2i+1}{2^{k+1}} \rfloor \bigg)\\ &=&0\end{aligned}$$ This proves (\[square\]). 
To prove selfadjointness we calculate $$\begin{aligned} & &\int_0^1 \mathscr{L}_k \xi(t) \eta(t)dt\\ &=&\sum_{j=0}^{2^k-1}\int_0^1 (-1)^{\lfloor \frac{j}{2^{k-1}}\rfloor+ \lfloor t+\frac{1}{2^{k+1}}+\frac{j}{2^k}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1}{2^{k+1}}+\frac{j}{2^k}-\lfloor t+\frac{1}{2^{k+1}}+ \frac{j}{2^k}\rfloor\bigg)\eta(t)dt\\ &=&\sum_{j=0}^{2^k-1}\int_0^1 (-1)^{\lfloor \frac{j}{2^{k-1}}\rfloor+ \lfloor s-\frac{1}{2^{k+1}}-\frac{j}{2^k}-\lfloor s-\frac{1}{2^{k+1}}- \frac{j}{2^k}\rfloor+\frac{1}{2^{k+1}}+\frac{j}{2^k}\rfloor }\\ & &\quad \cdot \xi(s)\eta\bigg(s-\frac{1}{2^{k+1}}-\frac{j}{2^k}- \rfloor s-\frac{1}{2^{k+1}}-\frac{j}{2^k}\rfloor\bigg)ds\\ &=&\sum_{j=0}^{2^k-1}\int_0^1 (-1)^{\lfloor \frac{j}{2^{k-1}}\rfloor+ \lfloor s-\lfloor s-\frac{1}{2^{k+1}}- \frac{j}{2^k}\rfloor \rfloor }\\ & &\quad \cdot \xi(s)\eta\bigg(s-\frac{1}{2^{k+1}}-\frac{j}{2^k}- \rfloor s-\frac{1}{2^{k+1}}-\frac{j}{2^k}\rfloor\bigg)ds\\ &=&\sum_{i=0}^{2^k-1}\int_0^1 (-1)^{\lfloor 2-\frac{1}{2^{k-1}}-\frac{i}{2^{k-1}}\rfloor +\lfloor s-\lfloor s-\frac{1}{2^{k+1}}-1+\frac{1}{2^k}+\frac{i}{2^k}\rfloor \rfloor}\\ & &\quad \cdot \xi(s)\eta\bigg(s-\frac{1}{2^{k+1}}-1+\frac{1}{2^k} +\frac{i}{2^k}-\lfloor s-\frac{1}{2^{k+1}}-1+\frac{1}{2^k} +\frac{i}{2^k}\rfloor \bigg)\\ &=&\sum_{i=0}^{2^k-1} \int_0^1 (-1)^{\lfloor-\frac{1}{2^{k-1}}-\frac{i}{2^{k-1}}\rfloor +\lfloor s \rfloor -\lfloor s+\frac{1}{2^{k+1}}+\frac{i}{2^k}\rfloor+1}\\ & &\quad \cdot \xi(s)\eta\bigg(s+\frac{1}{2^{k+1}}+\frac{i}{2^k} -\lfloor s+\frac{1}{2^{k+1}}+\frac{i}{2^k}\rfloor\bigg)\\ &=&\sum_{i=0}^{2^k-1} \int_0^1 (-1)^{-\lfloor \frac{1}{2^{k-1}}+\frac{i}{2^{k-1}}\rfloor -\lfloor s+\frac{1}{2^{k+1}}+\frac{i}{2^k}\rfloor}\\ & &\quad \cdot \xi(s)\eta\bigg(s+\frac{1}{2^{k+1}}+\frac{i}{2^k} -\lfloor s+\frac{1}{2^{k+1}}+\frac{i}{2^k}\rfloor\bigg)\\ &=&\int_0^1 \xi(s)\mathscr{L}_k \eta(s)ds\end{aligned}$$ To prove injectivity we observe that using the formula $$\begin{aligned} \mathscr{L}_k \xi(t+\frac{i}{2^k})&=& \sum_{\ell=0}^{2^k-1}(-1)^{\lfloor \frac{\ell-i}{2^{k-1}}\rfloor +\lfloor \frac{\ell-i}{2^k}\rfloor+\lfloor t+\frac{1}{2^{k+1}}+ \frac{\ell}{2^k}\rfloor}\\ & &\quad \cdot \xi\bigg(t+\frac{1}{2^{k+1}}+\frac{\ell}{2^k} -\lfloor t+\frac{1}{2^{k+1}}+\frac{\ell}{2^k}\rfloor\bigg)\end{aligned}$$ for $i \in \{0,\ldots,2^k-1\}$ and $0 \leq t < 1/2^k$ it suffices to show that the matrix $A(t)\in \mathbb{R}^{2^k \times 2^k}$ whose entries are given by $$A_{i,\ell}(t)=(-1)^{\lfloor \frac{\ell-i}{2^{k-1}}\rfloor +\lfloor \frac{\ell-i}{2^k}\rfloor+\lfloor t+\frac{1}{2^{k+1}}+ \frac{\ell}{2^k}\rfloor}$$ is nondegenerate. Using the elementary fact that the determinant of a matrix is invariant if one subtracts from a row a different row together with the formulas $$|A_{i+1,\ell}(t)-A_{i,\ell}(t)|=2 \delta_{i+2^{k-1},\ell}, \quad i \in \{0,\ldots, 2^{k-1}-1\}$$ $$|A_{i+1,\ell}(t)-A_{i,\ell}(t)|=2 \delta_{i-2^{k-1},\ell}, \quad i \in \{2^{k-1}-1, \ldots, 2^k-2\}$$ $$|A_{0,\ell}(t)-A_{2^k-1,\ell}(t)|=2\delta_{2^{k-1}-1,\ell}$$ one concludes that $$|\mathrm{det}(A)|=2^{(2^k)}.$$ This proves injectivity and hence the lemma follows. $\square$\ \ It follows from (\[square\]) and from (ii) in the preceding lemma that for each $k \in \{1,\ldots,m-2\}$ the spectrum of $\mathscr{L}_k^2 \colon \mathscr{E}_{x,-1} \to \mathscr{E}_{x,-1}$ consists of a finite number of eigenvalues. These eigenvalues do not depend on $x$ and can be computed recursively from (\[square\]). 
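Before listing these eigenvalues for small $k$, we note that, since the maps $\mathscr{L}_k$ act by dyadic shifts and signs, the recursion (\[square\]) and the assertions of the preceding lemma can be spot-checked numerically: sampling $\xi$ at the points $t_i=i/M$ with $M$ a multiple of $2^{k+1}$ represents $\mathscr{L}_k$ exactly by a matrix built from signed shifts, where the factor $(-1)^{\lfloor t+a\rfloor}$ becomes a sign on wrap-around (the finitely many jump points are irrelevant in $L^p$). The following sketch is only such a sanity check and is not part of the argument; all names in it are ad hoc.

```python
import numpy as np

def L_matrix(k, M):
    # Matrix representing L_k on samples xi[i] ~ xi(i/M).
    # The shift by 1/2^(k+1) + j/2^k is exact on the grid when 2^(k+1) | M,
    # and the factor (-1)^floor(t + shift) becomes a sign on wrap-around.
    A = np.zeros((M, M))
    for j in range(2 ** k):
        shift = M // 2 ** (k + 1) + j * (M // 2 ** k)
        for i in range(M):
            wrap = (i + shift) // M                      # 0 or 1
            A[i, (i + shift) % M] += (-1) ** (j // 2 ** (k - 1) + wrap)
    return A

M = 32                                                   # divisible by 2^4
L1, L2, L3 = (L_matrix(k, M) for k in (1, 2, 3))
I = np.eye(M)

# recursion [square] with L_0 = id, for k = 1 and k = 2
assert np.allclose(L2 @ L2, 2 * (L1 @ L1 + L1 @ I))
assert np.allclose(L3 @ L3, 2 * (L2 @ L2 + L2 @ (I + L1)))

# assertion (iv): L_k is selfadjoint; injectivity: spectrum of L_k^2 is positive
for Lk in (L1, L2, L3):
    assert np.allclose(Lk, Lk.T)
    assert np.linalg.eigvalsh(Lk @ Lk).min() > 1e-12

# distinct eigenvalues of L_2^2: expected 4 - 2*sqrt(2) and 4 + 2*sqrt(2)
print(sorted(set(np.round(np.linalg.eigvalsh(L2 @ L2), 8))))
```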
For $\mathscr{L}_1^2$ the unique eigenvalue is given by $$\lambda_1^1=2,$$ for $\mathscr{L}_2^2$ the two eigenvalues are given by $$\lambda_1^2=4+2 \cdot \sqrt{2}, \quad \lambda_2^2=4-2 \cdot \sqrt{2}$$ and for $\mathscr{L}_3^2$ the four eigenvalues are given by $$\begin{aligned} \lambda_1^3&=&2\Bigg (4+2\cdot \sqrt{2}+\sqrt{4+2 \cdot \sqrt{2}}\bigg(1+\sqrt{2}\bigg)\Bigg)\\ \lambda_2^3&=&2\Bigg (4-2\cdot \sqrt{2}+\sqrt{4-2 \cdot \sqrt{2}}\bigg(1-\sqrt{2}\bigg)\Bigg)\\ \lambda_3^3&=&2\Bigg (4+2\cdot \sqrt{2}-\sqrt{4+2 \cdot \sqrt{2}}\bigg(1+\sqrt{2}\bigg)\Bigg)\\ \lambda_4^3&=&2\Bigg (4-2\cdot \sqrt{2}-\sqrt{4-2 \cdot \sqrt{2}}\bigg(1-\sqrt{2}\bigg)\Bigg).\end{aligned}$$ We claim that all the eigenvalues of the maps $\mathscr{L}_k^2$ for $1 \leq k \leq m-2$ are positive. It follows from assertion (v) in the preceding lemma that no eigenvalue of $\mathscr{L}_k$ is zero. To see that the eigenvalues are nonnegative we use the fact that they are independent of the path $x$ and the index of integrability $p$. For $k<m$ the map $\mathscr{L}_k$ maps $\mathscr{E}_{x,-1}$ to itself and moreover $\mathscr{L}_k$ is selfadjoint with respect to the $L^2$-inner product on $L^2([0,1],T^* M)$ by assertion (iv). It follows that all the eigenvalues of $\mathscr{L}_k$ are real and hence all eigenvalues of $\mathscr{L}_k^2$ are nonnegative. Using independence of the eigenvalues from $x$ and $p$ the claim follows. For $1 \leq k \leq m-2$ denote by $\Pi_{k,\lambda}$ for $\lambda \in \sigma(\mathscr{L}^2_k)$ the projection to the eigenspace of $\lambda$ in $\mathscr{E}_{x,-1}$. For $3 \leq k \leq m$ we now extend the involution $\mathscr{I}_k$ to $\mathscr{E}_{x,-1}$ by the formula $$\mathscr{I}_k \xi:=\sum_{\lambda \in \sigma(\mathscr{L}^2_{k-2})} \frac{1}{\sqrt{\lambda}}\mathscr{L}_{k-2} \circ \Pi_{k-2,\lambda}\xi, \quad \xi \in \mathscr{E}_{x,-1}.$$ It remains to extend the involutions also to $\mathscr{E}_{x,1}$, the eigenspace of the first involution for the eigenvalue $1$. To do that we introduce the maps $$\mathscr{H}_k \colon \mathscr{P}_{k+1} \to \mathscr{P}_k, \quad \mathscr{D}_k \colon \mathscr{P}_k \to \mathscr{P}_{k+1}, \quad k \in \mathbb{N}$$ by $$\mathscr{H}_k x(t):=x(t/2), \quad 0 \leq t \leq 1,\,\,x \in \mathscr{P}_{k+1}$$ and $$\mathscr{D}_k x(t)=\left\{\begin{array}{cc} x(2t) & 0 \leq t \leq 1/2\\ R(x(2-2t)) & 1/2 \leq t \leq 1. \end{array}\right.$$ We extend these maps in the obvious way to bundle maps $$\mathscr{H}_k \colon \mathscr{E}_x \to \mathscr{E}_{\mathscr{H}_k(x)}, \quad x \in \mathscr{P}_{k+1}$$ and $$\mathscr{D}_k \colon \mathscr{E}_x \to \mathscr{E}_{\mathscr{D}_k(x)}, \quad x \in \mathscr{P}_k$$ by setting $$\mathscr{H}_k\xi(t):=\xi(t/2), \quad 0 \leq t \leq 1,\,\,\xi \in \mathscr{E}_x$$ and $$\mathscr{D}_k \xi(t)=\left\{\begin{array}{cc} \xi(2t) & 0 \leq t \leq 1/2\\ R^*\xi(2-2t) & 1/2 \leq t \leq 1. \end{array}\right.$$ Note that $$\mathscr{H}_k \circ \mathscr{D}_k=\mathrm{id}|_{\mathscr{E}_x}.$$ We now define recursively for $\xi \in \mathscr{E}_{x,1}$ $$\mathscr{I}_{k+1} \xi:=\mathscr{D}_{k}\circ \mathscr{I}_k \circ \mathscr{H}_k \xi, \quad k \in \{1,\ldots, m-1\}.$$ \[commute\] Recall that the vertical differential of the section $\mathcal{F}$ is given by $$D_u\xi=\partial_s \xi+J(u)\partial_t \xi+\nabla_\xi J(u)\partial_t u$$ where $u \in \mathcal{F}^{-1}(0)$, $\xi \in T_u \mathcal{B}$, and $\nabla$ denotes the Levi-Civita connection of the metric $g(\cdot,\cdot)=\omega(\cdot,J\cdot)$. Observe that the last term anticommutes with the almost complex structure $J$. 
The compact operator in assertion (ii) of Theorem \[invo\] is given by $$Q_u\xi=\nabla_\xi J(u)\partial_t u, \quad \xi \in T_u\mathcal{B}.$$ If the almost complex structure is integrable, i.e. $\nabla J=0$, then $Q_u$ vanishes and $D_u$ commutes with $J$ and hence interchanges the two involutions. Kuranishi structures -------------------- We recall here the definition of Kuranishi structure as defined in [@fukaya-ono2], see also [@fukaya-oh-ohta-ono]. Our definition will be less general than the one in [@fukaya-ono2] since our Kuranishi neighbourhoods consist of manifolds instead of orbifolds, which is sufficient for our purposes. Let $X$ be a compact topological Hausdorff space. A Kuranishi structure assigns $(V_p, E_p, \psi_p,s_p)$ to each $p \in X$ and $(V_{pq},\hat{\phi}_{pq}, \phi_{pq} )$ to points $p,q \in X$ which are close to each other. They are required to satisfy the following properties: K1: : $V_p$ is a smooth manifold, and $E_p$ is a smooth vector bundle on it. K2: : $s_p$ is a continuous section of $E_p$. K3: : $\psi_p$ is a homeomorphism from $s_p^{-1}(0)$ to a neighbourhood of $p \in X$. K4: : $V_{pq}, \hat{\phi}_{pq},\phi_{pq}$ are defined if $q \in \psi_p(s_p^{-1}(0))$. K5: : $V_{pq}$ is an open subset of $V_q$ containing $\psi_q^{-1}(q)$. K6: : $(\hat{\phi}_{pq},\phi_{pq})$ is a map of vector bundles $E_q|_{V_{pq}} \to E_p$. K7: : $\hat{\phi}_{pq}s_q=s_p\phi_{pq}$. K8: : $\psi_q=\psi_p \phi_{pq}$. K9: : If $r \in \psi_q(s_q^{-1}(0) \cap V_{pq})$, then $$\hat{\phi}_{pq} \circ \hat{\phi}_{qr}=\hat{\phi}_{pr}$$ in a neighbourhood of $\psi_r^{-1}(r)$. K10: : $\mathrm{dim}V_p-\mathrm{rank}E_p$ does only depend on the connected component of $X$ in which $p$ lies and is called the *virtual dimension* of the Kuranishi structure of the connected component of $p$. Following [@fukaya-ono2] we will say that our Kuranishi structure has a *tangent bundle* if there exists a family of isomorphisms $$\Phi_{pq}\colon N_{V_p}V_q \cong E_p/E_q$$ satisfying the usual compatibility conditions. Here $N_{V_p}V_q$ denotes the normal bundle of $V_q$ in $V_p$. We say that a compact topological Hausdorff space $X$ and a sequence of continuous involutions $\{I_k\}_{1 \leq k \leq m}$ defined on closed subspaces $X_m \subset X_{m-1} \cdots \subset X_1=X$ of $X$ are of ***Arnold-Givental type***, if the following holds. (i) : The domain of the first involution $X_1$ is the whole space $X$, and the domain $X_k$ of $I_k$ for $2 \leq k \leq m$ is the fixpoint set of the previous involution, i.e. $$X_k=\mathrm{Fix}(I_{k-1}).$$ (ii) : The last involution acts freely, i.e. $$\mathrm{Fix}(I_m)=\emptyset.$$ We say that a space of Arnold-Givental type $(X,\{I_k\}_{1 \leq k \leq m})$ has a ***Kuranishi structure*** if $X$ admits a Kuranishi structure in the sense of Fukaya-Ono, such that in addition for each $p \in X_k$ there exist involutions $I_{p,j}$ for $1 \leq j \leq k$ where $I_{p,1}$ is defined on $V_{p,1}:=V_p$ and $I_{p,j}$ is defined on $V_{p,j}:=\mathrm{Fix}(I_{p,j-1})$ for $2 \leq j \leq k$, and extensions of $I_{p,j}$ to smooth bundle involutions $$\hat{I}_{p,j}\colon E_p|_{V_{p,j}} \to E_p|_{V_{p,j}},$$ where the following conditions are satisfied. 
(i) : If $p \in X_k \setminus X_{k+1}$ and $q \in \psi_p(s^{-1}_p(0)) \cap X_j$, then $j \leq k$, $I_{p,k}$ acts freely on $V_{p,k}$, and $$I_j(q)=\psi_p \circ I_{p,j} \circ \psi_p^{-1}(q).$$ (ii) : If $p,q \in X$ are close enough, $x \in V_{pq} \cap V_{q,j}$, and $\xi \in (E_q)_x$ then $$I_{p,j}(\phi_{pq}(x))=\phi_{pq}(I_{q,j}(x))$$ and $$\hat{I}_{p,j} \circ \hat{\phi}_{pq} \xi=\hat{\phi}_{pq}\circ \hat{I}_{q,j} \xi.$$ (iii) : The bundle involutions commute on their common domain of definition, i.e. if $x \in V_{p,j}$ and $\xi \in (E_p)_x$, then $$\hat{I}_{p,\ell} \circ \hat{I}_{p,j}\xi=\hat{I}_{p,j} \circ \hat{I}_{p,\ell}\xi$$ for $1 \leq \ell \leq j$. \[restriction\] Assume that $(X,\{I_k\}_{1 \leq k \leq m})$ has Arnold-Givental type. Then also $(X_j,\{I_k\}_{j \leq k \leq m})$ for $1 \leq j \leq m$ has Arnold-Givental type. Moreover, if $X$ has a Kuranishi structure, then also $X_j$ has a Kuranishi structure. A Kuranishi neighbourhood is constructed in the following way. For $p \in X_k$ with $k \geq j$ take $$V_{p,j}=\mathrm{Fix}(I_{p,j-1}|_{V_p})$$ and as obstruction bundle take the intersection of the eigenspaces to the eigenvalue $1$ of the previous involutions, i.e. $$E_{p,j}:=\bigcap_{1 \leq i \leq j-1}\mathrm{ker}(\hat{I}_{p,i}|_{E_p|V_p^j} -\mathrm{id}|_{E_p|V_p^j}).$$ The other ingredients of the Kuranishi structure are then given by the obvious restrictions. Note that since the involutions commute on their common domain of definition $E_{p,j}$ is invariant under $\hat{I}_{p,i}$ for $j \leq i \leq k$. We say that a space $(X,\{I_k\}_{1 \leq k \leq m})$ of Arnold-Givental which admits a Kuranishi structure has a ***tangent bundle*** if the Kuranishi spaces $X_j$ admit a tangent bundle in the sense of Fukaya-Ono and the isomorphisms $\Phi_{pq,j}$ for $p, q \in X_j$ close enough are obtained by restriction of $\Phi_{pq,1}=\Phi_{pq}$ for $1 \leq j \leq m$. In order to do useful perturbation theory we have to extend the involutions to the tangent bundle. Since by assertion (ii) of Theorem \[invo\] the vertical differential of the section $\mathcal{F}$ commutes with the involutions only modulo a compact operator the tangent bundle will in general only admit a “stable” Arnold-Givental structure. A similar phenomenon appeared in [@fukaya-ono2] where the Kuranishi structure in general only admitted a stable almost complex structure. We first have to recall the following terminology from [@fukaya-ono2]. A tuple $((F_{1,p},F_{2,p}),(\Phi_{1,pq},\Phi_{2,pq},\Phi_{pq}))$ is a bundle system over the Kuranishi space $X=(X,(V_p,E_p,\psi_p,s_p))$ if $F_{1,p}$ and $F_{2,p}$ are two vector bundles over $V_p$ for every $p \in X$, $\Phi_{1,pq}\colon F_{1,q} \to F_{1,p}|_{V_q}$ and $\Phi_{2,pq} \colon F_{2,q} \to F_{2,p}|_{V_q}$ are embeddings for $q$ sufficiently close to $p$, and $$\Phi_{pq} \colon \frac{F_{1,p}|_{V_q}}{F_{1,q}} \to \frac{F_{2,p}|_{V_q}}{F_{2,q}}$$ are isomorphisms of vector bundles. These maps are required to satisfy some compatibility conditions. Moreover, there is some obvious notion of isomorphism, Whitney sum, tensor product etc. for bundle systems. We refer the reader to [@fukaya-ono2] for details. If a Kuranishi structure has a tangent bundle, i.e. a family of isomorphisms $\Phi_{pq} \colon N_{V_p}V_q \cong E_p/E_q$, then one can define a bundle system $$TX=(TV_p, E_p, \hat{\phi}_{pq},d \phi_{pq}, \Phi_{pq})$$ over X. 
If the space $X$ is of Arnold-Givental type and admits a tangent bundle, then we define the normal bundle $$\begin{aligned} NX&=&\{TV_{p,j}|_{V_{p,j+1}}/TV_{p,j+1},\\ & &E_{p,j}|_{V_{p,j+1}}/E_{p,j+1},\\ & &d\phi_{pq,j}|_{V_{pq,j+1}}/d\phi_{pq,j+1},\\ & &\hat{\phi}_{pq,j}|_{V_{pq,j+1}}/\hat{\phi}_{pq,j+1},\\ & &\Phi_{pq,j}|_{V_{pq,j+1}}/\Phi_{pq,j+1}\}_{1 \leq j \leq m-1}\end{aligned}$$ as a bundle system over $\bigcup_{j=1}^{m-1}X_{j+1}$. The real $K$-group $KO(X)$ of a space with Kuranishi structure $X$ was defined in [@fukaya-ono2] as the quotient of the free abelian group generated by the set of all isomorphism classes of bundle systems modulo the relations $$[((F_{1,p},F_{2,p}),(\Phi_{1,pq},\Phi_{2,pq},\Phi_{pq})) \oplus((F'_{1,p},F'_{2,p}),(\Phi'_{1,pq},\Phi'_{2,pq},\Phi'_{pq}))]=$$ $$[((F_{1,p},F_{2,p}),(\Phi_{1,pq},\Phi_{2,pq},\Phi_{pq}))]+ [((F'_{1,p},F'_{2,p}),(\Phi'_{1,pq},\Phi'_{2,pq},\Phi'_{pq}))]$$ $$[((F_{1,p},F_{2,p}),(\Phi_{1,pq},\Phi_{2,pq},\Phi_{pq}))]=0$$ $$\mathrm{if}\,\,((F_{1,p},F_{2,p}),(\Phi_{1,pq},\Phi_{2,pq},\Phi_{pq})) \,\,\textrm{is trivial.}$$ If the Kuranishi structure has a tangent bundle $[TX]$ denotes the class of $TX$ in $KO(X)$. If the Kuranishi structure is of Arnold-Givental type and admits a tangent bundle $[NX]$ denotes the class of $NX$ in $\bigoplus_{j=2}^{m}KO(X_j)$. Assume that $(X,\{I_k\}_{1 \leq k \leq m})$ is a space of Arnold-Givental type which admits a Kuranishi structure $(X,(V_p,E_p,s_p,\psi_p,\phi_{pq}, \hat{\phi}_{pq}))$ and assume further that $((F_{1,p},F_{2,p}),(\Phi_{1;pq}, \Phi_{2;pq},\Phi_{pq}))$ is a bundle system on it. We say that $((F_{1,p},F_{2,p}),(\Phi_{1;pq},\Phi_{2;pq},\Phi_{pq}))$ is a ***Bundle System of Arnold-Givental type*** if for every $p \in X_k$ there exist extensions of the involutions $I_{p,j}$ for $1 \leq j \leq k$ to smooth involutative bundle maps $\hat{I}_{1,p,j} \colon F_{1,p}|_{V_{p,j}} \to F_{1,p}|_{V_{p,j}}$ and $\hat{I}_{2,p,j} \colon F_{2,p}|_{V_{p,j}} \to F_{2,p}|_{V_{p,j}}$ such that the following conditions are satisfied. Compatibility: : The transition maps $\Phi_{pq}$, $\Phi_{1;pq}$, and $\Phi_{2;pq}$ restricted to the domain of definition of the involutions interchange them. Commutativity: : The involutions commute on their common domain of definition. The Whitney sum of two bundle systems of Arnold-Givental type still has Arnold-Givental type and hence one can consider the $K$-group $K_{AG}(X)$ of bundle systems of Arnold-Givental type over $X$. There is an obvious map $$K_{AG}(X) \to KO(X).$$ If $X$ admits a tangent bundle then there is an obvious extension of $I_{p,j}$ to the bundle $E_p$ given by $\hat{I}_{p,j}$. However there is no obvious extension of $I_{p,j}$ to $T V_p$ in general. This motivates the following definition. Assume that $(X,\{I_k\}_{1 \leq k \leq m})$ is a space of Arnold-Givental type which has a Kuranishi structure with tangent bundle. We say that $X$ has a ***Kuranishi structure of Arnold-Givental type*** if the normal bundle $NX$ is a bundle system of Arnold-Givental type in $\bigcup_{j=2}^{m}X_j$. We say that $X$ has a ***Kuranishi structure of stable Arnold-Givental type*** if $[NX]$ is in the image of $\bigoplus_{j=2}^m K_{AG}(X_j) \to \bigoplus_{j=2}^m KO(X_j)$. Note that a Kuranishi structure of Arnold-Givental type is also a Kuranishi structure of stable Arnold-Givental type. \[kurtra\] We assume that $(X,\{I_k\}_{1 \leq k \leq m})$ has a Kuranishi structure of Arnold-Givental type whose virtual dimension is zero. 
Then for each $p \in X$, there exist smooth sections $\tilde{s}_p$ such that the following holds. (i) : $\tilde{s}_p\circ \phi_{pq}=\hat{\phi}_{pq}\circ \tilde{s}_q$, (ii) : Let $x \in V_{pq}$. Then the restriction of the differential of the composition of $\tilde{s}_p$ and the projection $E_p \to E_p/E_q$ coincides with the isomorphism $\Phi_{pq}\colon N_{V_p}V_q \cong E_p/E_q$. (iii) : The sections $\tilde{s}_p$ are transversal to $0$, and if $p \in X_k$ then $\tilde{s}_p^{-1}(0)$ is invariant under $I_{p,j}$ for $1 \leq j \leq k$. The sections $\tilde{s}_p$ will not necessarily be invariant under the involutions, only their zero set will be. We need the following lemma. \[chnorz\] Assume that $Y$ is a compact manifold, $N$ and $E$ are two vector bundle over $Y$ and $U \subset N$ is an open neighbourhood of $Y \subset N$. Denote by $\pi_N \colon N \to Y$ the canonical projection. Then there exists a section $s \colon N \to \pi^*_N E$ which satisfies the following conditions. (i) : The zero section $s^{-1}(0)$ is invariant under the involutative bundle map on $N$ defined by $(n,y) \mapsto (-n,y)$ for $y \in Y$ and $n \in N_y$. (ii) : Outside $U$ the section $s$ is invariant under the involutative bundle map of $\pi_N^* E$ given by $(e,n,y) \mapsto (-e,-n,y)$ for $y \in Y$, $n \in N_y$, and $e \in \pi_N^*E_{(n,y)}$. (iii) : The boundary of the set of transversal points has codimension at least one. More precisely, there exists a manifold $\Omega$ of dimension $\mathrm{dim}(\Omega)=\mathrm{dim}(Y)-\mathrm{rk}(E)+\mathrm{rk}(N)-1$ and a smooth map $f \colon \Omega \to N$ such that the $\omega$-limit set of the $\mathrm{dim}(Y)-\mathrm{rk}(E)+\mathrm{rk}(N)$-dimensional manifold of points of transversal intersection $$\mathscr{T}:=\{n \in s^{-1}(0): Ds(n)\,\,onto\}$$ is contained in the image of $\Omega$, i.e. $$\bigcap_{K \subset \mathscr{T}\,\,compact}\mathrm{cl} (\mathscr{T}\setminus K) \subset f(\Omega).$$ **Proof:** Choose a bundle map $\Phi \colon N \to E$ and define $$Q:=\{y \in Y: \mathrm{dim}(\mathrm{ker}\Phi(y))>0\}.$$ Let $\Psi \colon Y \to E$ be a section such that $$\Psi|_Q=0, \quad \Psi(y) \cap \Phi(N_y)=\{0\},\quad \forall\,\, y \in Y.$$ Choose further a smooth cutoff function $\beta \colon N \to \mathbb{R}$ such that $$\beta|_Y=1, \quad \mathrm{supp}(\beta) \subset U.$$ Define a section $s \colon N \to \pi_N^* E$ by $$s(n):=\beta(n)\pi_N^*\Psi(\pi_N(n))+\pi_N^*\Phi(n), \quad n \in N.$$ Then $s$ satisfies conditions (i) and (ii) and for generic choice of $\Psi$ and $\Phi$ also condition (iii). This proves the lemma. $\square$\ \ **Proof of Theorem \[kurtra\]:** We continue the notation of Remark \[restriction\]. For $j \in \{1,\cdots, m\}$ denote by $s_p^j$ for $p \in X_j$ the section from $V_{p,j}$ to $E_{p,j}$ which is induced from $s_p$. By induction on $j$ from $m$ to $1$ we find sections $\tilde{s}_p^j$ of $s_p^j$ which satisfy conditions (i) and (ii) of Theorem \[kurtra\] as well invariance of the zero set under the involutions, but instead of the transversality condition we impose the condition that the boundary of the manifold of points of transversal intersection has codimension at least one. 
For each $j \in \{1, \cdots, m\}$ and for each $p \in X_j$ there exists a manifold $\Omega_p^j$ of dimension $\mathrm{dim}(\Omega_p^j)=d^j_p-1$ where $d^j_p$ is the virtual dimension of the connected component of $p$ of the Kuranishi structure of $\mathrm{X}_j$ and a smooth map $f_p^j \colon \Omega_p^j \to V_p^j$ such that the $\omega$-limit set of the set of points of transversal intersection $$\mathscr{T}^j_p:=\{q \in (\tilde{s}^j_p)^{-1}(0): D\tilde{s}_p^j(q) \,\,onto\}$$ is contained in the image of $\Omega_p^j$, i.e. $$\bigcap_{K \subset \mathscr{T}^j_p\,\,compact} \mathrm{cl}( \mathscr{T}^j_p \setminus K) \subset f_p^j(\Omega_p^j).$$ To prove the induction step we observe that by the assumption that the Kuranishi structure is of Arnold-Givental type it follows that the two vector bundles $TV_{p,j}|_{(s_p^{j+1})^{-1}(0)}/ TV_{p,j+1}|_{(s_p^{j+1})^{-1}(0)}$ and $E_{p,j}|_{(s_p^{j+1})^{-1}(0)}/E_{p,j+1}|_{(s_p^{j+1})^{-1}(0)}$ induce bundles on the quotient $(s_p^{j+1})^{-1}(0)/I_{p,j+1}$. The induction step can now be concluded from Lemma \[chnorz\]. Since the Kuranishi structure of $X=X_1$ has virtual dimension zero it follows that for every $p \in X_1$ the set $\Omega^1_p$ is empty and hence condition (iii) holds. This proves the theorem. $\square$\ \ We recall from [@fukaya-ono2] that if $Y$ is a topological space and $X$ is a space with Kuranishi structure, a *strongly continuous (smooth) map* $f: X \to Y$ is a family of continuous (smooth) maps $f_p \colon V_p \to Y$ for each $p \in X$ such that $f_p \circ \phi_{pq}=f_q$. If $n$ is the virtual dimension of the Kuranishi structure we define the homology class $$f([X]) \in H_n(Y;\mathbb{Z}_2)$$ as in [@fukaya-ono2]. We are now able to draw the following Corollary from Theorem \[kurtra\]. Assume that $(X,\{I_k\}_{1 \leq k \leq m})$ has a Kuranishi neighbourhood of stable Arnold-Givental type whose virtual dimension is zero. Suppose further that $Y$ is a topological space and $f \colon X \to Y$ is a strongly continuous map. Then $$f([X])=0 \in H_0(Y;\mathbb{Z}_2).$$ If the space $\mathcal{M}$ consisting of $J$-holomorphic disks whose boundary is mapped to the Lagrangian $L$ is compact with respect to the Gromov topology, then $\mathcal{M}$ together with the involutions described in section \[agi\] is of Arnold-Givental type. We next prove that under the compactness assumption $\mathcal{M}$ has a Kuranishi neighbourhood of Arnold-Givental type. In our proof we mainly follow [@fukaya-ono2]. The new ingredient is to choose the obstruction bundle in such a way that it is invariant under the involutions. In general one cannot expect that $\mathcal{M}$ is compact due to the bubbling phenomenon. We hope that the approach pursued in this article together with the techniques developped in [@fukaya-ono2] and [@fukaya-oh-ohta-ono] will allow us to show that the compactification of $\mathcal{M}$ by bubble trees has a Kuranishi neighbourhood of Arnold-Givental type. \[kusta\] Assume that $\mathcal{M}$ is compact with respect to the Gromov topology. Then $(\mathcal{M},\{I_k\}_{1 \leq k \leq m})$ admits a Kuranishi structure of stable Arnold-Givental type. **Proof:** We first choose local trivialisations of the vector bundle $\mathcal{E}$. More precisely, for each $q \in \mathcal{B}$ we choose an open neighbourhood $U_q \subset \mathcal{B}$ of $q$ and a smooth family of Banach space isomorphisms $$\mathcal{P}^p_q \colon \mathcal{E}_q \to \mathcal{E}_p, \quad p \in U_q$$ such that the following conditions are satisfied. 
(T1) : If $q \in \mathcal{B}_k \setminus \mathcal{B}_{k+1}$ for $k \in \mathbb{N}$, then $U_q$ is invariant under $I_j$ for $1 \leq j \leq k$ and $I_k$ acts freely on $U_q$. (T2) : For $1 \leq j \leq k$ the trivialisations commute with the involutions, i.e. $$\mathcal{P}_q^p \circ I^{\mathcal{E}}_j=I^{\mathcal{E}}_j \circ \mathcal{P}_{I_j q}^{I_j p}, \quad p \in U_q.$$ Now choose for every $p \in \mathcal{M}$ an open neighbourhood $\hat{V}_p \subset U_p$ of $p$ in $\mathcal{B}$, choose a finite set $Q \subset \mathcal{M}$, for each $q \in Q$ a closed neighbourhood $\hat{U}_q \subset U_q$ of $q$ in $\mathcal{B}$, and a finite dimensional subspace $\hat{E}_q \subset \mathcal{E}_q$ consisting of smooth sections with the following properties: (i) : For every $k \in \mathbb{N}$ the sets $\hat{V}_p$ for every $p \in \mathcal{M} \cap (\mathcal{B}_k \setminus \mathcal{B}_{k+1})$ and the set $\bigcup_{q \in Q \cap (\mathcal{B}_k\setminus \mathcal{B}_{k+1})} \hat{U}_q$ are invariant under $I_j$ for $1 \leq j \leq k$ and $I_k$ acts freely on them. (ii) : For $1 \leq k \leq m$ the family of vectorspaces $\bigcup_{q \in \mathcal{M}_k} \hat{E}_q$ is invariant under $I_k^{\mathcal{E}}$. (iii) : For $p \in \mathcal{B}$ let $Q_p:=\{q \in Q: p \in \hat{U}_q\}$. Assume that $p \in \mathcal{M}$ and $p' \in \hat{V}_p$. Then the sum $\bigoplus_{q \in Q_p}\mathcal{P}_q^{p'}\hat{E}_q$ is direct, i.e. for every $q_0 \in Q_p$ we have $\mathcal{P}_{q_0}^{p'}\hat{E}_{q_0} \cap \sum_{q \in Q_p \setminus \{q_0\}}\mathcal{P}_q^{p'}\hat{E}_q=\emptyset.$ (iv) : For $p \in \mathcal{M}$ and $p' \in \hat{V}_p$ the operator $$\Pi^{p'}_p \circ D_{p'} \colon T_{p'}\mathcal{B} \to \mathcal{E}_{p'} /\bigoplus_{q \in Q_p} \mathcal{P}^{p'}_q\hat{E}_q$$ is surjective, where $$\Pi^{p'}_p \colon \mathcal{E}_{p'} \to \mathcal{E}_{p'} / \bigoplus_{q \in Q_p} \mathcal{P}_q^{p'}\hat{E}_q$$ denotes the canonical projection. Our first aim is to define the manifold $V_p$ of $p$ which occurs in the definition of Kuranishi structure. In our construction this manifold will be a small neighbourhood of zero in the kernel of the map $\Pi^p_p \circ D_p$. The size of this neighbourhood will depend on the domain of definition of the smooth injective evaluation maps $$\mathrm{ev}_p \colon V_p \to \hat{V}_p$$ which we have to define first. By (iv) we can choose a smooth family of uniformly bounded right inverses $$R^{p'}_p \colon \mathcal{E}_{p'}/\bigoplus_{q \in Q_p} \mathcal{P}_q^{p'}\hat{E}_q \to T_{p'}\mathcal{B}$$ of $\Pi^{p'}_p \circ D_{p'}$, i.e. $$\Pi^{p'}_p \circ D_{p'} \circ R^{p'}_p =\mathrm{id} |_{\mathcal{E}_{p'}/\bigoplus_{q \in Q_p}\mathcal{P}_q^{p'}\hat{E}_p}.$$ We need in addition some compatibility of the right inverses with the involutions. To state it we observe that it follows from condition (T2) on the local trivialisations together with conditions (i) and (ii) that for $p \in \mathcal{M}_k$ and $p' \in \hat{V}_p \cap \mathcal{B}_j$ for $j \leq k$ the family of vector spaces $\bigoplus_{q \in Q_p} \mathcal{P}_q^{p'}\hat{E}_q \cup \bigoplus_{q \in Q_{I_i p}}\mathcal{P}^{I_i p'}_{I_i q}\hat{E}_{I_i q}$ is invariant under $I_i$ for $1 \leq i \leq j$. 
Hence the involutions $I_j^{\mathcal{E}}$ induce involutions $$I_j^{\mathcal{E},p} \colon \bigcup_{p' \in \hat{V}_p \cap \mathcal{B}_j}\mathcal{E}_{p'}/\bigoplus_{q \in Q_p}\mathcal{P}^{p'}_q \hat{E}_q \to \bigcup_{p' \in \hat{V}_p \cap \mathcal{B}_j}\mathcal{E}_{p'}/\bigoplus_{q \in Q_p}\mathcal{P}^{p'}_q \hat{E}_q.$$ Using the fact that by (ii) of Theorem \[invo\] the operator $I^{\mathcal{E}}_k \circ D_u-D_u\circ I^{T\mathcal{B}}_k$ vanishes on $T_u\mathcal{B}_k$ together with (iii) of Theorem \[invo\], we can impose the following compatibility condition of the right inverse $R^{p'}_p$ and the involutions $$\label{inv1} R_p^{p'} \circ I_k^{\mathcal{E},p}\big| _{\bigcap_{j=1}^{k-1}\mathrm{ker}(I_j^{\mathcal{E},p}-\mathrm{id})} =I_k^{T\mathcal{B}}\circ R_p^{p'}\big| _{\bigcap_{j=1}^{k-1}\mathrm{ker}(I_j^{\mathcal{E},p}-\mathrm{id})} , \quad \forall\,\,p' \in \hat{V}_p \cap \mathcal{B}_k.$$ Now we are able to define the manifold $V_p$ which occurs in the definition of the Kuranishi structure. For $p \in \mathcal{B}$ and for $\xi \in T_p\mathcal{B}$ small enough we define $$\exp_p \xi \in \mathcal{B}$$ as the pointwise exponential map with respect to the metric $g(\cdot,\cdot)=\omega(\cdot,J\cdot)$. Note that if $p \in \mathcal{B}_k$ and $\xi \in T_p \mathcal{B}_k$ then $\exp_p \xi \in \mathcal{B}_k$. Now we choose $V_p$ as an open neighbourhood of zero in $\mathrm{ker}(\Pi_p\circ D_p) \subset T_p \mathcal{B}$ which is invariant under $I^{T\mathcal{B}}_k$ if $p \in \mathcal{B}_k$ and which is so small that we are able to define $$\mathrm{ev}^0_p \colon V_p \to \hat{V}_p, \quad \mathrm{ev}^0_p:=\exp_p|_{V_p}$$ and be recursion for $\nu \in \mathbb{N}$ $$\mathrm{ev}^\nu_p\colon V_p \to \hat{V}_p,\quad \xi \mapsto \exp_{\mathrm{ev}^{\nu-1}_p \xi}\bigg(R^{\mathrm{ev}^{\nu-1}_p \xi}_p \circ \Pi^{\mathrm{ev}^{\nu-1}_p \xi}_p \circ \mathcal{F} \circ \mathrm{ev}^{\nu-1}_p \xi\bigg),$$ and finally $$\mathrm{ev}_p \colon V_p \to \hat{V}_p, \quad \mathrm{ev}_p:= \lim_{\nu \to \infty} \mathrm{ev}^\nu_p.$$ We now define for each $p \in \mathcal{M}$ the obstruction bundle $E_p \to V_p$ by $$E_p:=(\mathrm{ev}_p)^*\bigoplus_{q \in Q_p}\mathcal{P}_q^{\mathrm{ev}_p}\hat{E}_q,$$ and the section $s_p \colon V_p \to E_p$ by $$s_p:=(\mathrm{ev}_p)^*\mathcal{F}.$$ The homeomorphisms $\psi_p$ from $s_p^{-1}(0)$ to a neighbourhood of $p \in \mathcal{M}$ are defined by $$\psi_p:=\mathrm{ev}_p|_{s_p^{-1}(0)}.$$ Perhaps after shrinking the manifolds $V_p$ we may assume using the assumption that the set $\hat{U}_q$ are closed for every $q \in Q$ that $$\label{Vpq} Q_{\psi_p(x)} \subset Q_p, \quad \forall \,\,p \in \mathcal{M},\,\,\forall \,\,x \in s_p^{-1}(0).$$ This implies that for $p \in \mathcal{M}$ and $x \in s_p^{-1}(0)$ the set $$V_{p \psi_p(x)}:=\mathrm{ev}_{\psi_p(x)}^{-1} \bigg(\mathrm{ev}_{\psi_p(x)}(V_{\psi_p(x)}) \cap \mathrm{ev}_p(V_p)\bigg)$$ is open in $V_{\psi_p(x)}$. We now define $$\phi_{p\psi_p(x)}\colon V_{p\psi_p(x)} \to V_p, \quad \phi_{p\psi_p(x)}:=\mathrm{ev}_p^{-1}\circ \mathrm{ev}_{\psi_p(x)}.$$ Using again (\[Vpq\]) we observe that for every $y \in V_{p\psi_p(x)}$ we have $$\bigoplus_{q \in Q_{\psi_p(x)}}\mathcal{P}_q^{\mathrm{ev}_{\psi_p(x)}(y)} \hat{E}_q \subset \bigoplus_{q \in Q_p}\mathcal{P}_q^{\mathrm{ev}_p \circ \phi_{p\psi_p(x)}(y)} \hat{E}_q.$$ We now define the bundle maps $\hat{\phi}_{p\psi_p(x)} \colon E_{\psi_p(x)}|_{V_{p\psi_p(x)}} \to E_p$ as the map induced by the above inclusion. 
To define the isomorphisms $\Phi_{p\psi_p(x)} \colon N_{V_p} V_{\psi_p(x)} \to E_p/E_{\psi_p(x)}$ which are required for the tangent bundle of the Kuranishi structure, we observe that there is a natural identification of the normal bundle of $\mathrm{ev}_{\psi_p(x)}(V_{p\psi_p(x)})$ in $\mathrm{ev}_p(V_p)$ $$N_{\mathrm{ev}_p(V_p)} \mathrm{ev}_{\psi_p(x)}(V_{p\psi_p(x)}) \cong \bigoplus_{q \in Q_p \setminus Q_{\psi_p(x)}}\mathcal{P}_q^ {\mathrm{ev}_{\psi_p(x)}}\hat{E}_q$$ due to the fact that $$\Pi_{\psi_p(x)}^{\mathrm{ev}_{\psi_p(x)}(y)} \circ D_{\mathrm{ev}_{\psi_p(x)}(y)} \colon T_{\mathrm{ev}_{\psi_p(x)}(y)}\mathcal{B} \to \mathcal{E}_{\mathrm{ev}_{\psi_p(x)}(y)}/\bigoplus_{q \in Q_{\psi_p(x)}} \mathcal{P}_q^{\mathrm{ev}_{\psi_p(x)}(y)}\hat{E}_q$$ is already surjective. The isomorphisms $\Phi_{p\psi_p(x)}$ are defined to be the induced isomorphisms of the identification above. Finally, using (\[inv1\]), the involutions $I^{\mathcal{E}}_k$ induce involutions $\hat{I}_{p,k}$ on the obstruction bundle $E_p$ for $p \in \mathcal{M}_k$. Hence we have proved that $\mathcal{M}$ admits a Kuranishi structure. We next show in the integrable case the Kuranishi structure is of Arnold-Givental type. As it was explained in Remark \[commute\], if the almost complex structure is integrable the operator $D_u$ will interchange the involutions, i.e. for $k \in \mathbb{N}$ and $u \in \mathcal{F}^{-1}(0) \cap \mathcal{B}_k$ it holds that $$\label{comm} I_k^{\mathcal{E}}\circ D_u=D_u \circ I_k^{T\mathcal{B}}.$$ If $p \in \mathcal{M}_k$ and $x \in V_{p,j}$ for $j \leq k$ then $$T_x V_{p,j}=\mathrm{ev}_p^*\bigg(\mathrm{ker}\big(\Pi^{\mathrm{ev}_p(x)}_p \circ D_{\mathrm{ev}_p(x)}\big) \cap \bigcap_{i=1}^{j-1} \big(\mathrm{ker}I_i^{T\mathcal{B}}-\mathrm{id}|_{T\mathcal{B}}\big)\bigg),$$ and it follows from (\[comm\]) and the invariance of the obstruction bundle under the involutions, that $I_i^{T\mathcal{B}}$ for $1 \leq i \leq j$ induce involutions on $TV_{p,j}$. Using these involutions one can endow the normal bundle with the structure of a bundle system of Arnold-Givental type. It remains to treat the non-integrable case. In general (\[comm\]) does not hold but by assertion (ii) of Theorem \[invo\] we can homotop $D_u$ through Fredholm operators to a Fredholm operator $D_u^1$ which interchanges the involutions by setting $$D_u^\lambda:=D_u-\lambda Q_u, \quad \lambda \in [0,1].$$ Now choose $\hat{V}^1_p \subset U_p$ for every $p \in \mathcal{B}$, a finite set $Q^1 \in \mathcal{B}$, and for every $q \in Q^1$ a finite dimensional subspace $\hat{E}_q^1 \subset \mathcal{E}_q$ consisting of smooth sections, which satisfy again assertions (i) to (iii) but assertion (iv) replaces by (iv)’ : For $p \in \mathcal{M}$ and $p' \in \hat{V}_p$ the operator $$\Pi^{p',1}_p \circ D^1_{p'} \colon T_{p'}\mathcal{B} \to \mathcal{E}_{p'}/\bigoplus_{q \in Q^1_p}\mathcal{P}^{p'}_q \hat{E}^1_q$$ is surjective. Define for $p \in \mathcal{M}_k$ and $x \in V_{p,j}$ for $j \leq k$ $$W_{p,j}:=\mathrm{ev}_p^*\bigg(\mathrm{ker}\big(\Pi^{\mathrm{ev}_p(x),1}_p \circ D^1_{\mathrm{ev}_p(x)}\big) \cap \bigcap_{i=1}^{j-1} \big(\mathrm{ker}I_i^{T\mathcal{B}}-\mathrm{id}|_{T\mathcal{B}}\big)\bigg).$$ Note that $W_{p,j}$ is invariant under the involutions $I_i^{T\mathcal{B}}$ for $1 \leq i \leq j$. 
One can now define a bundle system of Arnold-Givental type $N^1 X$ over $\bigcup_{j=2}^{m}X_j$ where $F_{1,p}$ is given by $\bigcup_{j=2}^m W_{p,j}|_{V_{p,j+1}}/W_{p,j+1}$, $F_{2,p}=\bigcup_{j=2}^m E_{p,j}|_{V_{p,j+1}}/E_{p,j+1}$, and the transition functions are defined in a similar manner as in the case of the normal bundle. Using the homotopy between $D_u$ and $D^1_u$ one shows that $$[N^1 X]=[NX] \in \bigoplus_{j=2}^m KO(X_j).$$ It follows that the Kuranishi structure is of stable Arnold-Givental type. This proves the theorem. $\square$

Moment Floer homology {#mofoho}
=====================

Moment Floer homology was introduced in [@frauenfelder2]. It is a tool to count intersection points of certain Lagrangians in Marsden-Weinstein quotients which are fixpoint sets of antisymplectic involutions. In general, due to the bubbling phenomenon, the ordinary Floer homology for Lagrangians in Marsden-Weinstein quotients cannot be defined by standard means, see [@cho; @cho-oh] for a computation of the Floer homology of Lagrangian torus fibers of Fano toric manifolds. To overcome the bubbling problem one replaces Floer’s equations by the symplectic vortex equations to define the boundary operator. Under some topological assumptions on the enveloping manifold one can prove compactness of the relevant moduli spaces of the symplectic vortex equations. In the special case where the two Lagrangians are Hamiltonian isotopic to each other one can use the antisymplectic involution to prove that moment Floer homology is equal to the singular homology of the Lagrangian with coefficients in some Novikov ring. This leads to a proof of the Arnold-Givental conjecture for a class of Lagrangians in Marsden-Weinstein quotients which are fixpoint sets of antisymplectic involutions. In this section we give proofs of the main properties of the symplectic vortex equations needed to define moment Floer homology and refer to [@frauenfelder2] for complete details. To compute moment Floer homology we will need the techniques of section \[transi\]. These techniques were not available in [@frauenfelder2], and hence moment Floer homology could only be computed there under an additional monotonicity assumption, which is removed here.

The set-up
----------

In this subsection we introduce the notation needed to define the symplectic vortex equations and formulate the hypotheses under which compactness of the relevant moduli spaces can be proven. Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$ which acts covariantly on a manifold $M$, i.e. there exists a smooth homomorphism $\psi: G \to \mathrm{Diff}(M)$. We will often drop $\psi$ and identify $g$ with $\psi(g)$. For $\xi \in \mathfrak{g}$ we denote by $X_\xi$ the vector field $M \to TM$ generated by the one-parameter subgroup associated to $\xi$, i.e. $$X_\xi(x):=\frac{d}{dt}\bigg|_{t=0}\exp(t \xi)(x), \quad \forall \,\,x \in M.$$ We shall use the linear mapping $L_x: \mathfrak{g} \to T_x M$ defined by $$L_x \xi:=X_\xi(x) \in T_x M.$$ We will denote the adjoint action of $G$ on $\mathfrak{g}$ by $$g \xi g^{-1}=\mathrm{Ad}(g)\xi =\frac{d}{dt}\bigg|_{t=0}g \exp(t\xi) g^{-1}.$$ If $I$ is an open interval, $t_0 \in I$, and $g:I \to G$ is a smooth path, we will write $$(g^{-1}\partial_t g)(t_0):= d\mathcal{L}_{g(t_0)}^{-1}(g(t_0))\partial_t g(t_0) \in T_{\mathrm{id}} G=\mathfrak{g}$$ where $\mathcal{L}_g \in \mathrm{Diff}(G)$ is the left-multiplication by $g$. 
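As a concrete illustration of this notation (the example is only meant for orientation), take $G=S^1=\{e^{i\theta}\}$ acting on $M=\mathbb{C}^n$ by scalar multiplication, so that $\mathfrak{g}=i\mathbb{R}$. For $\xi=ia \in \mathfrak{g}$ one has $$X_\xi(z)=\frac{d}{dt}\bigg|_{t=0}e^{iat}z=iaz, \quad L_z(ia)=iaz,$$ and the adjoint action is trivial since $G$ is abelian. For a matrix group $G \subset U(n)$ the expression $(g^{-1}\partial_t g)(t_0)$ is simply the matrix product $g(t_0)^{-1}\dot{g}(t_0) \in \mathfrak{g}$.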
Assume that the Lie algebra is endowed with an inner product $\langle \cdot,\cdot \rangle$ which is invariant under the adjoint action of the Lie group. If $(M,\omega)$ is symplectic, we say that the action of $G$ is Hamiltonian if there exists a moment map for the action, i.e. an equivariant function $\mu: M \to \mathfrak{g}$ [^4] where the action of $G$ on $\mathfrak{g}$ is the adjoint action, such that for every $\xi \in \mathfrak{g}$ $$d \langle \mu (\cdot),\xi \rangle=\iota(X_\xi)\omega.$$ Note that the function $\langle \mu( \cdot),\xi \rangle$ is a Hamiltonian function for the vector field $X_\xi$. Observe that if $\xi \in Z(\mathfrak{g})$ [^5], then $\mu_\xi(\cdot):=\mu(\cdot)+\xi$ is also a moment map for the action of $G$. Hence the moment map is determined by the action up to addition of a central element in each connected component of $M$. We assume now that $(M,\omega)$ is a symplectic (not necessarily compact) connected manifold, and $G$ a compact connected Lie group that acts on $M$ by Hamiltonian symplectomorphisms as above. We assume that the action is effective, i.e. the homomorphism $\psi$ is injective. Let $L_0$ and $L_1$ be two closed Lagrangian submanifolds of $M$. We do not require that the Lagrangians are $G$-invariant but we assume throughout this section the following compatibility condition with $G$. (H1) : *For $j \in \{0,1\}$ there exist antisymplectic involutions $R_j \in \mathrm{Diff}(M)$, i.e. $$R_j^*\omega=-\omega, \quad R_j^2=\mathrm{id},$$ which commute with $G$, i.e. for every $g \in G$ the symplectomorphism $R_j \psi(g) R_j$ lies in the image of $\psi$, such that $$L_j=\mathrm{Fix}(R_j)=\{x \in M: R_j(x)=x\}.$$* The maps $R_j$ lead to Lie group automorphisms $S_j:G \to G$ defined by $$\label{S} S_j(g):=\psi^{-1}(R_j \psi(g) R_j), \quad \forall \,\, g \in G.$$ Note that the $S_j$ are involutative, i.e. $$S^2_j=\mathrm{id}.$$ We assume that the inner product on the Lie algebra is also invariant under the differentials of $S_j$ at the identity. These are determined by the formula $$X_{dS_j(\mathrm{id})(\xi)}(x)=dR_j(R_j(x))^{-1}X_\xi(R_j x), \quad \forall\,\, x \in M.$$ In the following we will write $\dot{S}_j$ for $dS_j(\mathrm{id})$. If one identifies $G$ with $\psi(G)$, then formally $$\dot{S}_j=\mathrm{Ad}(R_j).$$ Let $\mu$ be a moment map for the action of $G$ on $M$. We further impose the following hypothesis throughout this section. (H2) : *The moment map $\mu$ is proper, zero is a regular value of $\mu$, and $G$ acts freely on $\mu^{-1}(0)$, i.e. $\psi(g)p=p$ for $p \in \mu^{-1}(0)$ implies that $g=\mathrm{id}$.* The Marsden-Weinstein quotient is defined to be the set of $G$-orbits in $\mu^{-1}(0)$ $$\bar{M}:=M//G:=\mu^{-1}(0)/G,$$ i.e. $x, y \in \mu^{-1}(0)$ are equivalent if there exists $g \in G$ such that $\psi(g)x=y$. It follows from hypothesis (H2) that $\bar{M}$ is a compact manifold of dimension $$\mathrm{dim}(\bar{M})=\mathrm{dim}(M)-2\mathrm{dim}(G).$$ The Marsden-Weinstein quotient carries a natural symplectic structure induced from the symplectic structure on $M$, see [@mcduff-salamon1 Proposition 5.40]. We denote by $$G_{L_j}:=\{g \in G:gL_j=L_j\}$$ for $j \in \{0,1\}$ the isotropy subgroup of the Lagrangian $L_j$. It follows directly from the definitions that $G_{S_j}:=\{g \in G: S_j g=g\}$ is a subgroup of $G_{L_j}$. If $\mu^{-1}(0) \cap L_j \neq \emptyset$, then the two groups agree. To see that, note that by (H2) there exists $p \in L_j \cap \mu^{-1}(0)$ whose isotropy subgroup is trivial, i.e. 
$G_p:=\{g \in G :gp=p\}=\{\mathrm{id}\}.$ If $g \in G_{L_j}$ then $g^{-1} S_j(g) p=p$ and hence $g \in G_{S_j}$. We denote by $\mathfrak{g}_{L_j}$ the Lie algebra of $G_{L_j}$. Note that if $\mu^{-1}(0) \cap L_j \neq \emptyset$, then $$\mathfrak{g}_{L_j}=\{\xi \in \mathfrak{g}: \dot{S}_j(\xi)= \xi\}, \quad \mathfrak{g}_{L_j}^\perp= \{\xi \in \mathfrak{g}: \dot{S}_j(\xi)=-\xi\},$$ where $\perp$ denotes the orthogonal complement with respect to the invariant inner product defined above. The following proposition says that the two Lagrangians induce Lagrangian submanifolds in the Marsden-Weinstein quotient. \[indlag\] Assume $(H1)$ and $(H2)$. Then the subsets of $\bar{M}$ $$\bar{L}_j:=G(L_j \cap \mu^{-1}(0))/G$$ are Lagrangian submanifolds of $\bar{M}$ and they are naturally diffeomorphic to $$(L_j \cap \mu^{-1}(0))/G_{L_j}.$$ The following example shows that Proposition \[indlag\] will in general fail if we do not assume hypothesis $(H1)$. Consider the standard action of $S^1$ on $\mathbb{C}^2$ given by $$(z_1,z_2) \mapsto (e^{i\theta}z_1, e^{i\theta} z_2).$$ A moment map for this action is given by $$\mu(z):=\frac{i}{2}(|z|^2-1)$$ and the Marsden-Weinstein quotient is the two-sphere. Consider now the family of Lagrangian submanifolds of $\mathbb{C}^2$ given by $$L_a:=\{(x_1+ia,x_2): x_1,x_2 \in \mathbb{R}\}$$ where $a \in \mathbb{R}$. Note that if $a=0$ then $L_a$ equals the fixpoint set of the antisymplectic involution on $\mathbb{C}^2$ which is given by complex conjugation. This involution commutes with the $S^1$-action. For $a \neq 0$ the Lagrangians $L_a$ do not satisfy hypothesis $(H1)$. Consider the chart $$\bigg\{\frac{z_2}{z_1}:|z_1|^2+|z_2|^2=1,\,\,z_1 \neq 0\bigg\} \cong \mathbb{C}$$ of $\mu^{-1}(0)/S^1 \cong S^2$. Then the images of $\bar{L}_a$ are given by $$\bigg\{\frac{t \sqrt{1-t^2-a^2}}{\sqrt{1-t^2}} \pm\frac{iat}{\sqrt{1-t^2}}: |t| \leq 1-a^2\bigg\}.$$ For $a \neq 0$ these are figure eights with nodal point $(0,0)$. To prove Proposition \[indlag\] we need two lemmas. \[clean\] Assume $(H1)$ and $(H2)$. The Lagrangians $L_j$ intersect cleanly with $\mu^{-1}(0)$, i.e. $\mu^{-1}(0) \cap L_j$ is a submanifold of $\mu^{-1}(0)$ and for every $p \in \mu^{-1}(0) \cap L_j$ we have $T_p \mu^{-1}(0) \cap T_p L_j=T_p( \mu^{-1}(0) \cap L_j)$. **Proof:** We may assume without loss of generality that $\mu^{-1}(0) \cap L_j \neq \emptyset$. For $p \in \mu^{-1}(0) \cap L_j$ we claim that $$\label{tracl} d \mu(p) T_p L_j=\mathfrak{g}_{L_j}^\perp.$$ We first calculate for $v \in T_p L_j$ and $\xi \in \mathfrak{g}_{L_j}$ $$\begin{aligned} \langle d \mu(p) v,\xi \rangle=d\langle \mu,\xi \rangle(p) v =\omega(X_\xi(p),v)=0\end{aligned}$$ and hence $$d \mu(p) T_pL_j \subset \mathfrak{g}^\perp_{L_j}.$$ To prove that equality holds in (\[tracl\]) it suffices to show the following implication $$\label{tracl2} \xi \in \mathfrak{g}_{L_j}^\perp,\,\,\langle \xi, d\mu(p)v \rangle=0\,\, \forall v \in T_p L_j \quad \Longrightarrow \quad \xi=0.$$ Assume that $\xi$ satisfies the assumption in (\[tracl2\]). Let $w \in T_p M$. Then $w=w_1+w_2$, where $w_1 \in T_p L_j$, i.e. $dR_j w_1=w_1$, and $dR_j w_2=-w_2$. 
We calculate $$\begin{aligned} \langle \xi,d\mu(p) w \rangle &=& \langle \xi d \mu(p) w_2 \rangle\\ &=& \omega(X_\xi(p),w_2)\\ &=& \omega(X_\xi(p),-dR_j w_2)\\ &=& \omega(dR_j X_\xi(p),w_2)\\ &=& \omega(X_{\dot{S}_j \xi}(p),w_2)\\ &=&-\omega(X_\xi(p),w_2)\\ &=&-\langle \xi, d\mu(p) w \rangle\end{aligned}$$ and hence $$\langle \xi, d \mu(p) w \rangle=0 \,\, \forall w \in T_p M.$$ Since $0=\mu(p)$ is a regular value of $\mu$ it follows that $d\mu(p)$ is surjective and hence $\xi=0$. This proves (\[tracl2\]) and hence (\[tracl\]). If $U$ is a sufficiently small open neighbourhood of $0$ in $\mathfrak{g}_{L_j}^\perp$, then $\mu^{-1}(U)$ is a submanifold of $M$. It follows from (\[tracl\]), that $\mu^{-1}(0)$ and $L_j \cap \mu^{-1}(U)$ intersect transversally in $\mu^{-1}(U)$. Hence $\mu^{-1}(0)$ and $L_j$ intersect cleanly. $\square$ \[base\] Assume $(H1)$ and $(H2)$. If $p \in L_j$, $G_p=\{g \in G: gp=p\}=\{\mathrm{id}\}$ and $\psi(g)p \in L_j$ for some $g \in G$, then $g \in G_{L_j}.$ **Proof:** Because $L_j=\mathrm{Fix}(R_j)$, $$R_jgR_jp=gp.$$ Since $R_jG=GR_j$ there exists $\tilde{g} \in G$ such that $$\tilde{g}x=R_jgR_jx \quad \forall \,\,x \in M.$$ Hence $$(\tilde{g})^{-1} g p=p$$ and because $G_p=\{\mathrm{id}\}$ $$\tilde{g}=g.$$ Hence $g=R_jgR_j$ and for every $q \in L_j$ $$gq=R_jgq.$$ This implies that $gq \in L_j$ and hence $g \in G_{L_j}$. $\square$\ \ **Proof of Proposition \[indlag\]:** It follows from Lemma \[clean\] and the fact that $G_{L_j}$ acts freely on $L_j \cap \mu^{-1}(0)$ that $(L_j \cap \mu^{-1}(0))/G_{L_j}$ is a manifold. There is an obvious surjective map from $(L_j \cap \mu^{-1}(0))/G_{L_j}$ to $\bar{L}_j$ which assigns to a representative $x \in L_j \cap \mu^{-1}(0)$ of an equivalence class in $(L_j \cap \mu^{-1}(0))/G_{L_j}$ the equivalence class of $x$ in $\bar{L}_j$. It follows from Lemma \[base\] that this map is an injection. $\square$\ \ In addition we make the following topological assumptions. (H3) : *$\pi_2(M)$, $\pi_1(M)$, $\pi_1(L_j)$, and $\pi_0(L_j)$ for $j \in \{0,1\}$ are trivial.*[^6] Convex structures for Hamiltonian group actions on symplectic manifolds were introduced in [@cieliebak-mundet-salamon]. We give a similar definition which takes care of the Lagrangian submanifolds. Recall that an almost complex structure is called $\omega$-compatible if $$\langle \cdot, \cdot \rangle=\omega(\cdot, J\cdot )$$ is a Riemannian metric on $TM$. We say that an almost complex structure is $G$-invariant, if $$J(z)=g_* J(z):=d\psi(g)^{-1}(gz)J(gz) d\psi(g)z, \quad \forall \,\,g \in G, \,\,\forall \,\,z \in M.$$ It follows from [@mcduff-salamon1 Proposition 2.50] that the space of $G$-invariant compatible almost complex structures is nonempty and contractible. \[convexstr\] A ***convex structure*** on $(M,\omega,\mu,L_0,L_1)$ is a pair $(f,J)$ where $J$ is a $G$-invariant $\omega$-compatible almost complex structure on $M$ which satisfies $$\label{Rinv} dR_j(R_jz)J(R_jz)dR_j(z)=-J(z), \quad \forall z \in M.$$ for $j \in \{0,1\}$ and $f:M \to [0,\infty)$ is a smooth function satisfying the following conditions. (C1) : $f$ is $G$-invariant and proper. (C2) : There exists a constant $c_0>0$ such that $$f(x) \geq c_0 \quad \Longrightarrow \quad \langle \nabla_\xi \nabla f(x),\xi\rangle \geq 0, \,\, df(x)J(x)X_{\mu(x)}(x) \geq 0,\,\, \mu(x) \neq 0$$ for every $x \in M$ and every $\xi \in T_xM$. Here $\nabla$ denotes the Levi-Civita connection of the metric $\langle \cdot, \cdot \rangle=\omega(\cdot,J\cdot)$. 
(C3) : For every $p \in L_j$ it holds that $\nabla f(p) \in T_p L_j$. As our fourth hypothesis we assume that a convex structure exists (H4) : *There exists a convex structure $(f,J_0)$ on $(M,\omega,\mu,L_0,L_1)$.* A convex structure will guarantee, that solutions of our gradient equation will remain in a compact domain.\ The main examples we have in mind, are of the following form. The symplectic manifold $(M,\omega)$ equals the complex vector space $\mathbb{C}^n$ endowed with its canonical symplectic structure $\omega_0$, the Lagrangians equal some linear Lagrangian subspace of $\mathbb{C}^n$, and the group action $\psi$ is given by some injective linear representation of a connected compact Lie group $G$ to $U(n)$. For a linear Lagrangian subspace $L$ of $\mathbb{C}^n$ there is a $\mathbb{R}$-linear splitting $$\mathbb{C}^n=L \oplus J_0 L$$ where $J_0$ is the standard complex structure given by multiplication with $i$. Let $R=R_L$ be the canonical antisymplectic involution given by $$R(x+J_0y)=x-J_0y$$ for $x,y \in L$. Hypothesis (H1) means $$\rho(G)R=R\rho(G).$$ In the special case where $\rho=\textrm{id}$ and $L=\mathbb{R}^n$, the induced involution $S$ on $G$ is given by complex conjugation. To see that, choose $A \in U(n)$ and $z \in \mathbb{C}$. We calculate $$S(A)z=RAR(z)=RA\bar{z}=\bar{A}z$$ and hence $S(A)=\bar{A}$. The Lie algebra $u(n)$ of $U(n)$ carries a natural invariant inner product given by $$\langle A,B \rangle:=\textrm{trace}(A^*B),$$ where $A^*$ is the complex conjugated transposed of $A$. Let $\dot{\rho}: \mathfrak{g} \to u(n)$ be the induced representation of $\rho$. Endow $\mathfrak{g}$ with the invariant inner product $$\langle \xi_1,\xi_2 \rangle_\rho= \langle \dot{\rho}(\xi_1),\dot{\rho}(\xi_2)\rangle, \quad \forall \,\, \xi_1,\xi_2 \in \mathfrak{g}.$$ Let $\dot{\rho}^*: u(n) \to \mathfrak{g}$ be the adjoint of $\dot{\rho}$, i.e. $$\langle \dot{\rho}(\xi),\eta \rangle=\langle \xi, \dot{\rho}^*(\eta)\rangle_\rho \quad \forall \,\,\xi \in \mathfrak{g}, \eta \in u(n),$$ then a moment map $\mu$ for the action of $G$ is given by $$\mu(z)=-\frac{1}{2} \dot{\rho}^*(i z z^*)-\tau$$ where $\tau$ is a central element of $\mathfrak{g}$. Since $\dot{\rho}$ is isometric with respect to our inner products, $\langle \cdot, \cdot \rangle$ is $\dot{S}$ invariant. A convex structure on $(\mathbb{C}^n,\omega_0, \mu,L)$ is defined for example by $$(f,J)=(\frac{1}{2}|z|^2,J_0).$$ Let $A$ be a $k \times n$-matrix of rank $k$ whose entries are positive integers. Let the $k$-torus $T^k$ act on $\mathbb{C}^n$ by $$z \mapsto \exp(2 \pi i A \theta^T)z$$ where $\theta=(\theta_1, \ldots,\theta_k) \in \mathbb{R} / \mathbb{Z} \times \ldots \times \mathbb{R} / \mathbb{Z}=T^k$ and $z=(z_1, \ldots ,z_n) \in \mathbb{C}^n$. For some $\tau \in i\mathbb{R}^k$ which is equal to the Lie algebra of the torus a moment map for the torus action above is defined by $$\mu(z)=\frac{1}{2i}(A^T A)^{-1}A^T \left(\begin{array}{c} |z_1|^2\\ \vdots\\ |z_n|^2 \end{array}\right)-\tau.$$ If $T^k$ acts freely on $\mu^{-1}(0)$, then the Marsden-Weinstein quotient $\mathbb{C}^n //T^k=\mu^{-1}(0)/T^k$ is called a toric manifold. There is a natural action of the unitary group $U(k)$ on $\mathbb{C}^{n\times k}$, the space of $k$-frames in $\mathbb{C}^n$, having the moment map $$\mu(B)=\frac{1}{2i}(B^* B-\textrm{id}).$$ The symplectic quotient is the complex Grassmannian manifold $G_\mathbb{C}(n,k)$. Let $L=\mathbb{R}^{n \times k}$ be the space of real $k$-frames. 
Its isotropy subgroup is the orthogonal group $O(k)$ and the induced Lagrangian in the symplectic quotient equals the real Grassmannian $G_\mathbb{R}(n,k)$. \[natur\] For $U \in U(n)$ let the representation $\rho_U$ be given by $$\rho_U(g)=U\rho(g) U^{-1},\quad \forall g \in G.$$ Then a moment map for this action is given by $$\mu_U(z)=\mu(U^{-1}z)$$ and there is a natural induced isomorphism from $\bar{M}$ to $\bar{M}_U:=\mu_U^{-1}(0)/G$ given by $$[z] \mapsto [Uz].$$ Defining the linear Lagrangian subspace $$L_U:=U(L)$$ the image of $\bar{L}$ under the above isomorphism equals $$\bar{L}_U:=G(\mu^{-1}_U(0) \cap L_U)/G.$$ Because the group $U(n)$ acts transitively on the set of Lagrangian subspaces of $\mathbb{C}^n$ one can always assume after applying some $U$ as above, that $L=\mathbb{R}^n$. The symplectic vortex equations on the strip -------------------------------------------- In this subsection we show how one can derive the symplectic vortex equations from an action functional. We define the path space $\mathscr{P}$ by $$\label{pathspa} \mathscr{P}:=\{(x,\eta) \in C^\infty([0,1],M \times \mathfrak{g}): x(j) \in L_j,\,\, \eta(j) \in \mathfrak{g}_{L_j}^\perp, \,\, j \in \{0,1\}\},$$ The assumption (H3) implies that $\mathscr{P}$ is connected and simply connected. The gauge group $\mathcal{H}$ is defined by $$\mathcal{H}:=\{g \in C^\infty([0,1],G):g(j) \in G_{L_j},\,\, g(j)^{-1}\partial_t g(j) \in \mathfrak{g}_{L_j}^\perp,\,\, j \in \{0,1\}\}.$$ The group structure is the pointwise multiplication of $G$. The gauge group $\mathcal{H}$ acts on $\mathscr{P}$ as follows $$g_*(x,\eta)=(g x,g \eta g^{-1}-\partial_t g g^{-1}), \quad g \in \mathcal{H}.$$ Choose a path $x_0: [0,1] \to M$ with $x_0(j) \in L_j$ for $j \in \{0,1\}$. For a smooth family of $G$-invariant functions $H_t:M \to \mathbb{R}$ for $t \in [0,1]$, we define the action functional $$\mathcal{A}_{\mu,H}:\mathscr{P} \to \mathbb{R}$$ by $$\mathcal{A}_{\mu,H}(x,\eta)=-\int_{[0,1]\times [0,1]}\bar{x}^*\omega +\int_0^1\big(\langle \mu(x(t)),\eta(t)\rangle-H_t(x(t))\big)dt,$$ where $\bar{x}:[0,1] \times [0,1] \to M$ is a smooth map, which satisfies $$\bar{x}(t,1)=x(t),\quad \bar{x}(t,0)=x_0(t),\quad \bar{x}(0,s) \in L_0,\quad \bar{x}(1,s) \in L_1.$$ Since $\omega$ is closed and vanishes on the Lagrangians the assumption that $\pi_2(M)=0$ and $\pi_1(L_j)=0$ for $j \in \{0,1\}$ together with Stokes theorem implies that the value of $\mathcal{A}_{\mu,H}(x,\eta)$ does not depend on the choice of $\bar{x}$. Moreover, $\mathcal{A}_{\mu,H}$ is invariant under the action of $\mathcal{H}_0$, the path-connected component of the identity of $\mathcal{H}$. To see this, let $g \in \mathcal{H}_0$, then there exists $h: [0,1]\times [0,1] \to G$ with $$h(t,1)=g(t), \quad h(t,0)=\mathrm{id}, \quad h(j,s) \in G_{L_j}, \quad h^{-1}\partial_t h(j,s) \in \mathfrak{g}_{L_j}^\perp,\,\,j \in \{0,1\}.$$ The claim follows with $\overline{g^*x}=h \bar{x}$. 
The tangent space $T_{(x,\eta)}\mathscr{P}$ of the path space $\mathscr{P}$ at $(x,\eta) \in \mathscr{P}$ is defined as the vector space $$\{(\hat{x},\hat{\eta}) \in C^\infty([0,1],x^*TM \times \mathfrak{g}): \hat{x}(j) \in T_{x(j)}L_j,\,\,\hat{\eta}(j) \in \mathfrak{g}^\perp_{L_j},\,\, j \in \{0,1\}\}.$$ A family of $G$-invariant, $\omega$-compatible, almost complex structures $J_t$ determines an inner product on $\mathscr{P}$ by $$\label{metric} \langle (\hat{x}_1,\hat{\eta}_1),(\hat{x}_2,\hat{\eta}_2)\rangle =\int_0^1(\langle \hat{x}_1(t),\hat{x}_2(t)\rangle_t+ \langle \hat{\eta}_1(t),\hat{\eta}_2(t)\rangle) dt$$ for $(\hat{x}_1,\hat{\eta}_1),(\hat{x}_2,\hat{\eta}_2) \in T_{(x,\eta)}\mathscr{P}$, where $$\langle \cdot,\cdot \rangle_t=\langle \cdot, \cdot \rangle_{J_t} =\omega( \cdot, J_t \cdot).$$ The gradient of $\mathcal{A}_{\mu,H}$ with respect to the above inner product as usual defined by $$d \mathcal{A}_{\mu,H}(x,\eta)[\hat{x},\hat{\eta}]=\langle \mathrm{grad} \mathcal{A}_{\mu,H}(x,\eta),[\hat{x},\hat{\eta}]\rangle$$ is given by $$\textrm{grad}\mathcal{A}_{\mu,H}(x,\eta)= \left(\begin{array}{c} J_t(\dot{x}+X_\eta(x)-X_{H_t}(x))\\ \mu(x) \end{array}\right).$$ The set $\mathrm{crit}(\mathcal{A}) \subset \mathscr{P}$ of critical points of $\mathcal{A}_{\mu,H}$ consists of paths $(x,\eta):[0,1] \to M \times \mathfrak{g}$ which satisfy $$\dot{x}+X_\eta(x)=X_{H_t}(x), \quad \mu(x)=0, \quad x(j) \in L_j, \quad \eta(j) \in \mathfrak{g}_{L_j}^\perp,\,\,j \in \{0,1\}.$$ Since $G$ acts freely on $\mu^{-1}(0)$ the group $\mathcal{H}$ acts freely on $\mathrm{crit}(\mathcal{A})$. If $\bar{H}$ is the induced Hamiltonian function of $H$ in the Marsden-Weinstein quotient $\bar{M}$ and $\phi^t_{\bar{H}}$ its flow, i.e. $$\frac{d}{dt}\phi^t_{\bar{H}}=X_{\bar{H}_t} \circ \phi^t_{\bar{H}}, \quad \phi^0_{\bar{H}}=\mathrm{id},$$ then we will prove in Lemma \[zehnder\] below that there is a natural bijection $$\mathrm{crit}(\mathcal{A})/\mathcal{H} \cong \phi^1_{\bar{H}} (\bar{L}_0) \cap \bar{L}_1.$$ Let $$\Theta=\{z=s+it \in \mathbb{C}:0 \leq t \leq 1\}$$ be the strip. The flow lines of the vector field $\textrm{grad}\mathcal{A}_{\mu,H}$ are pairs $(u,\Psi) \in C^\infty_{loc}(\Theta,M \times \mathfrak{g})$, which satisfy the following partial differential equation $$\label{eq1} \begin{array}{c} \partial_s u+J_t(u)(\partial_tu+X_\Psi(u)-X_{H_t}(u))=0\\ \partial_s \Psi+\mu(u)=0\\ u(s,j)\in L_j, \quad\eta(s,j) \in \mathfrak{g}_{L_j}^\perp,\,\,\, j\in\{0,1\}. \end{array}$$ We define further the gauge group $$\mathcal{G}_{loc}= \{g \in C^\infty_{loc}(\Theta,G):g(s,j) \in G_{L_j},\, g^{-1}\partial_t g(s,j) \in \mathfrak{g}_{L_j}^\perp, \, j \in \{0,1\}\}.$$ Solutions of the problem (\[eq1\]) are invariant under the action of $\mathcal{H}$ but not of $\mathcal{G}_{loc}$. To make the problem invariant under the gauge group $\mathcal{G}_{loc}$, we introduce an additional variable $\Phi$. Given a solution $(u_0,\Psi_0)$ of (\[eq1\]) and $g \in \mathcal{G}_{loc}$ then $(u,\Psi,\Phi)=(g u_0,g \Psi_0 g^{-1}-g^{-1}\partial_t g,-g^{-1}\partial_s g)$ is a solution of the so called **symplectic vortex equations on the strip** $$\label{eq} \begin{array}{c} \partial_s u+X_\Phi(u)+J_t(u)(\partial_tu+X_\Psi(u)-X_{H_t}(u))=0\\ \partial_s \Psi-\partial_t\Phi+[\Phi,\Psi]+\mu(u)=0\\ u(s,j)\in L_j, \quad \Phi(s,j)\in \mathfrak{g}_{L_j}, \quad \Psi(s,j) \in \mathfrak{g}_{L_j}^\perp \quad j\in\{0,1\}. 
\end{array}$$ Moreover, (\[eq\]) is invariant under the action of $g \in \mathcal{G}_{loc}$ given by $$g_*(u,\Psi,\Phi)=(g u, g \Psi g^{-1}-\partial_t g g^{-1},g \Phi g^{-1}-\partial_s g g^{-1}).$$ On the other hand each solution of (\[eq\]) is gauge equivalent to a solution of (\[eq1\]). To see that, let $(u,\Psi,\Phi)$ be a solution of (\[eq\]) and take the solution $g:\Theta \to G$ of the following ordinary differential equation on the strip $$\partial_s g=g \Phi, \quad g(0,t)=\textrm{id}.$$ Then $g \in \mathcal{G}_{loc}$ and $g_*\Phi=0$. In the terminology of gauge theory, this means that solutions of (\[eq1\]) are solutions of (\[eq\]) in so called **radial gauge**. \[connection\] If one introduces the connection $A=\Phi ds+\Psi dt$ on the trivial $G$-bundle over the strip, then the first two equations of (\[eq\]) can be written as $$\bar{\partial}_{J,H,A}(u)=0, \qquad *F_A+\mu(u)=0.$$ These equations were discovered independently by D.Salamon and I.Mundet (see [@cieliebak-gaio-salamon], [@cieliebak-mundet-salamon], and [@mundet]). In the physics literature they are known as gauged sigma models. \[naturality\] Solutions of the problem (\[eq\]) have the following properties. Let $K_t$ be some smooth family of $G$-invariant functions on $M$ and let $\psi_K^t:M \to M$ be the Hamiltonian symplectomorphism defined by $$\frac{d}{dt}\psi^t_K=X_{K_t} \circ \psi^t_K, \quad \psi_K^0=\mathrm{id}.$$ If $(u,\Psi,\Phi)$ is a solution of (\[eq\]), then $$(\tilde{u},\tilde{\Psi},\tilde{\Phi})(s,t):=(\psi_K^{-t} \circ u, \Psi,\Phi)(s,t)$$ is also a solution of (\[eq\]) with $H,J,L_0,L_1$ replaced by $$\tilde{H}_t:=(H_t-K_t) \circ \psi^t_K, \quad \tilde{J}_t:=(\psi_K^t)^*J_t, \quad \tilde{L}_0=L_0, \quad \tilde{L}_1=\psi^{-1}_K L_1.$$ In particular, by choosing $H_t=K_t$ one can always assume that $H \equiv 0$. \[zehnder\] There is a natural bijection between $\mathrm{crit}(\mathcal{A})/\mathcal{H}$ and $\phi^1_{\bar{H}}(\bar{L}_0) \cap \bar{L}_1$. **Proof:** By Remark \[naturality\] we may assume without loss of generality that $H=0$. Denote by $\pi$ the canonical projection from $\mu^{-1}(0)$ to $\bar{M}=\mu^{-1}(0)/G$. If $q \in \bar{L}_0 \cap \bar{L}_1$, then there exists $x_0 \in L_0$, $x_1 \in L_1$, and $h \in G$ such that $$\pi(x_0)=\pi(x_1)=q, \quad x_1=hx_0.$$ Choose a smooth path $g \in C^\infty([0,1],G)$ such that $$g(0)=\mathrm{id}, \quad g(1)=h, \quad (\partial_t g) g^{-1}(0) \in \mathfrak{g}_{L_0}^\perp, \quad (\partial_t g) g^{-1}(1) \in \mathfrak{g}_{L_1}^\perp.$$ Such a path exists, since $G$ is connected. Now define $$I: \bar{L}_0 \cap \bar{L}_1 \to \mathrm{crit}(\mathcal{A})/\mathcal{H}, \quad q \mapsto [(g(t)x_0,-(\partial_t g) g^{-1}(t))].$$ Here $[ \cdot,\cdot]$ denotes the equivalence class in $\mathrm{crit}(\mathcal{A})/\mathcal{H}$.\ We have to show that $I$ is well defined. To see that choose another quadruple $(\tilde{x}_0,\tilde{x}_1,\tilde{h},\tilde{g})$ which satisfies the relations above. It follows from Lemma \[base\] that there exists $h_j \in G_{L_j}$ for $j \in \{0,1\}$ such that $$\tilde{x}_j=h_j x_j.$$ Define $$\gamma(t):=g(t) h_0^{-1}\tilde{g}(t)^{-1}.$$ Then $$\gamma_*(g(t)x_0,-(\partial_t g) g^{-1})= (\tilde{g}(t)\tilde{x}_0,-(\partial_t \tilde{g}) \tilde{g}^{-1})$$ and $$\gamma \in \mathcal{H}.$$ This shows that $I$ is well defined. The verification that $I$ is a bijection is easy, namely to construct the inverse of $I$ map $(x,\eta) \in \mathrm{crit}(\mathcal{A})$ to $\pi(x)(0)$. $\square$ \[extens\] Every solution can be extended to the whole complex plane. 
To see that let $(u,\Psi,\Phi)$ be a solution of (\[eq\]) and assume that $$\label{R-inv} J_j(z)=-dR_j(R_jz) J_j(R_jz)dR_j(z), \quad z \in M,\,\, j \in \{0,1\}.$$ For simplicity, assume also that $H \equiv 0$. Let $\hat{J}_t$ for $t \in \mathbb{R}$ be the unique $G$-invariant extension of $\omega$-compatible almost complex structures on $M$ defined by the following conditions $$\begin{aligned} \hat{J}|_{[0,1] \times M}&=&J\\ \hat{J}_{2n-t}(z)&=&-dR_0(R_0 z) \hat{J}_{2n+t}(R_0 z) dR_0(z), \quad n \in \mathbb{Z},\,\,\,t \in (0,1]\\ \hat{J}_{2n+1-t}(z)&=&-dR_1(R_1 z) \hat{J}_{2n+1+t}(R_1 z) dR_1(z), \quad n \in \mathbb{Z},\,\,\,t \in (0,1].\end{aligned}$$ For $n \in \mathbb{Z}$ and $t \in (0,1]$ let $(\hat{u}, \hat{\Psi},\hat{\Phi}) \in W^{1,p}_{loc}(\mathbb{C},M \times \mathfrak{g} \times \mathfrak{g})$ be defined by the conditions $$\begin{aligned} (\hat{u},\hat{\Psi},\hat{\Phi})|_{\Theta}&=&(u,\Psi,\Phi),\\ (\hat{u},\hat{\Psi},\hat{\Phi})(s,2n-t)&=& (R_0 \hat{u},-\dot{S}_0(\hat{\Psi}), \dot{S}_0(\hat{\Phi}))(s,2n+t),\\ (\hat{u},\hat{\Psi},\hat{\Phi})(s,2n+1-t)&=& (R_1 \hat{u},-\dot{S}_1(\hat{\Psi}), \dot{S}_1(\hat{\Phi}))(s,2n+1+t).\end{aligned}$$ Here $S_j$ for $j \in \{0,1\}$ was defined in (\[S\]). The map $(\hat{u},\hat{\Psi},\hat{\Phi})$ solves $$\label{eqext} \begin{array}{c} \partial_s \hat{u}+X_{\hat{\Phi}}(\hat{u})+J_t(\hat{u}) (\partial_t\hat{u}+X_{\hat{\Psi}}(\hat{u}))=0\\ \partial_s \hat{\Psi}-\partial_t\hat{\Phi}+ [\hat{\Phi},\hat{\Psi}]+\mu(\hat{u})=0\\ (\hat{u},\hat{\Psi},\hat{\Phi})(s+2,t)=[(R_0) \circ (R_1) \hat{u}, (\dot{S}_0) \circ (\dot{S}_1)\hat{\Psi}, (\dot{S}_0) \circ (\dot{S}_1) \hat{\Phi})(s,t). \end{array}$$ Solutions of (\[eqext\]) are invariant under the action of the gauge group $$\hat{\mathcal{G}}_{loc}:=\{g \in C^\infty_{loc}(\mathbb{C},G): g(s,t+2)=(S_0) \circ (S_1) g(s,t)\}.$$ Compactness ----------- The energy of a solution of (\[eq\]) is defined by $$E(u,\Psi,\Phi):=\int_\Theta \bigg(|\partial_s u+X_\Phi(u)|^2 +|\mu(u)|^2 \bigg)dsdt.$$ The aim of this subsection is to prove that every sequence of finite energy solutions of (\[eq\]) has a convergent subsequence modulo gauge invariance. The main ingredient in the proof is Uhlenbeck’s compactness theorem, which states that a connection with an $L^p$-bound on the curvature is gauge equivalent to a connection which satisfies an $L^p$-bound on all its first derivatives. Compactness fails if solutions of (\[eq\]) can escape to infinity. To make sure that this cannot happen we have to choose our almost complex structure and the Hamiltonian function appropriately. Fix some convex structure $\mathcal{K}=(f,\tilde{J})$ on $(M,\omega,\mu,L_1,L_2)$. Let $\mathcal{J}(M,\omega,\mu,\mathcal{K})$ be the space of all $G$-invariant $\omega$-compatible almost complex structures $J$ on $(M,\omega)$ which equal $\tilde{J}$ outside of a compact set in $M$. It is proven in Proposition 2.50 in [@mcduff-salamon1] that the space $\mathcal{J}(M,\omega,\mu,\mathcal{K})$ is nonempty and contractible. We define the **space of admissible families of almost complex structures** $$\mathcal{J}:=\mathcal{J}([0,1],M,\omega,\mu,\mathcal{K}) \subset C^\infty([0,1],\mathcal{J}(M,\omega,\mu,\mathcal{K}))$$ as the space consisting of smooth families of $J_t \in \mathcal{J}(M,\omega,\mu,\mathcal{K})$ which satisfy (\[R-inv\]). 
Let $C^\infty_{0,G}(M)$ be the space of smooth $G$-invariant functions on $M$ with compact support, and $$\mathrm{Ham}:=\mathrm{Ham}(M,G):= \{H \in C_0^\infty([0,1]\times M): H_t \in C^\infty_{0,G}(M)\}$$ the space of $G$-invariant functions parametrised by $t \in [0,1]$. \[compactn\] Let $(u_\nu,\Psi_\nu,\Phi_\nu) \in C^\infty_{loc}(\Theta,M\times \mathfrak{g} \times \mathfrak{g})$ be a sequence of solutions of (\[eq\]) with respect to a smooth family of almost complex structures $J_t \in \mathcal{J}$ and to a smooth family of Hamiltonian functions $H_t \in \mathrm{Ham}$. If the energies are uniformly bounded, then there exists a sequence of gauge transformations $g_\nu \in \mathcal{G}_{loc}$ such that a subsequence of $(g_\nu)_*(u_\nu,\Psi_\nu,\Phi_\nu)$ converges in the $C^{\infty}_{loc}$-topology to a smooth solution $(u,\Psi,\Phi)$ of the vortex problem(\[eq\]). Instead of Theorem \[compactn\] we prove the following stronger theorem. \[uh\] Let $(u_\nu,\Psi_\nu) \in C^\infty_{loc}(\Theta,M\times\mathfrak{g})$ be a sequence of solutions of (\[eq1\]) with respect to a smooth family of almost complex structures $J_t \in \mathcal{J}$ and to a smooth family of Hamiltonian functions $H_t \in \mathrm{Ham}$. If the energies are uniformly bounded, then there exists a sequence of gauge transformations $g_\nu \in \mathcal{H}$ such that a subsequence of $(g_\nu)_*(u_\nu,\Psi_\nu)$ converges in the $C_{loc}^{\infty}$-topology to a smooth solution $(u,\Psi)$ of the gradient equation (\[eq1\]). **Proof:** Let $(\hat{u}_\nu,\hat{\Psi}_\nu) \in C^\infty_{loc}(\mathbb{C},M \times \mathfrak{g})$ be the extension of $(u_\nu,\Psi_\nu)$ as in Remark \[extens\]. We will prove the theorem in four steps.\ \ **Step 1:** *For every compact subset $K$ of $\mathbb{C}$ there exists a sequence of gauge transformations $g_\nu \in C^\infty(K,G)$ such that $(g_\nu)_*(\hat{u}_\nu,\hat{\Psi}_\nu,0)|_K$ converges in the $C^\infty$-topology.*\ \ Step 1 was proved in [@cieliebak-mundet-salamon Theorem 3.4.]. The main ideas are the following. Let $\hat{A}_\nu=\hat{\Psi}_\nu dt$ be the connection on the trivial $G$-bundle over $\mathbb{C}$. By convexity $\hat{u}_\nu(K)$ is contained in a compact subset of $M$. The curvature of $\hat{A}_\nu$ is given by $F_{\hat{A}_\nu}=\partial_s \hat{\Psi}_\nu ds \wedge dt$. Hence the equation $\partial_s \hat{\Psi}_\nu+\mu(\hat{u}_\nu)=0$ implies that the curvature is uniformly bounded. Moreover, because of (H4) there is no bubbling and hence $$\sup_\nu||\partial_s \hat{u}_\nu|_K||_\infty<\infty.$$ Step 1 follows now from a combination of Uhlenbecks compactness theorem, see [@uhlenbeck; @wehrheim], and the Compactness Theorem for the Cauchy-Riemann operator (see for example [@mcduff-salamon2 Appendix B]).\ \ **Step 2:** *There exists a sequence of gauge transformations $g_\nu \in C^\infty_{loc}(\mathbb{C},G)$ such that a subsequence of $(g_\nu)_*(\hat{u}_\nu,\hat{\Psi}_\nu,0)$ converges in the $C^\infty_{loc}$-topology.*\ \ We use the fact that $\mathbb{C}$ can be exhausted by compact sets, $\mathbb{C}=\bigcup_{n \in \mathbb{N}}B_n$ where $B_n:=\{z \in \mathbb{C}:|z|=n\}$. Let $g_\nu^n$ be the sequence of gauge transformations on $B_n$ for $n \in \mathbb{N}$ obtained in step 1. 
We show that we may assume that $$\label{induc} g^n_\nu|_{B_{n-1}}=g^{n+1}_\nu|_{B_{n-1}}, \quad \forall \,\,\nu \in \mathbb{N}.$$ To prove (\[induc\]) we first observe that there exists $h^n \in C^\infty(B_n,G)$ such that $h^n_\nu:=(g_\nu^{n+1})^{-1} \circ g_\nu^n|_{B_n}$ has a subsequence which converges in the $C^\infty$-topology to $h^n$. Hence there exists a sequence $\tilde{h}^n_\nu$ which has a converging subsequence and satisfies $$\tilde{h}^n_\nu(z):=\left\{\begin{array}{cc} h^n_\nu(z) & z \in B_{n-1}\\ \mathrm{id} & z \in B_{n+1} \setminus B_n. \end{array}\right.$$ Now replace $g^{n+1}_\nu$ by $g^{n+1}_\nu \circ \tilde{h}^n_\nu$, which satisfies (\[induc\]). Now define $g_\nu$ by $$g_\nu|_{B_n}:=g_\nu^{n+1}|_{B_n}, \quad \forall \,\,n \in \mathbb{N}.$$ **Step 3:** *The sequence of gauge transformations $g_\nu(s+it) \in C^\infty_{loc}(\mathbb{C},G)$ in step 2 may be chosen independent of $s$.*\ \ We use an idea from [@jones-rawnsley-salamon]. Denote the limit of the sequence $(g_\nu)_*(\hat{u}_\nu,\hat{\Psi}_\nu,0)$ as $\nu$ goes to infinity by $(\hat{u},\hat{\Psi},\hat{\Phi})$. Choose $h\in C^{\infty}_{loc}(\mathbb{C},G)$ such that $$h_*\hat{\Phi}=0.$$ Observe that $$\lim_{\nu \to \infty}(\partial_s(h \circ g_\nu))(h \circ g_\nu)^{-1}) =\lim_{\nu \to \infty}(h \circ g_\nu)_*(0)=h_*\hat{\Phi}=0.$$ It follows that $\partial_s (h \circ g_\nu)$ converges to zero in the $C^{\infty}_{loc}$-topology. Now set $$\tilde{g}_\nu(s,t):=h \circ g_\nu(0,t).$$ Obviously, $\tilde{g}_\nu$ is independent of $s$. Moreover, since $\tilde{g}_\nu \circ (h \circ g_\nu)^{-1}$ converges to the identity in the $C^\infty_{loc}$-topology, $\tilde{g}_\nu$ satisfies the assumptions of step 2. We denote $\tilde{g}_\nu$ by $g_\nu$ as before.\ \ **Step 4:** *We prove the theorem.*\ \ We have to modify further our sequence of gauge transformations $g_\nu$, such that they satisfy the boundary conditions. Because the energy of the sequence $(u_\nu,\Psi_\nu)$ is bounded, the energy of $(\hat{u},\hat{\Psi})|_{\Theta}$ is bounded. Using the fact that finite energy solutions converge uniformly at the two ends of the strip, see [@frauenfelder2], we conclude that $\mu(\hat{u})(s,t)$ converges to zero as $s$ goes to infinity uniformly in the $t$-variable. Since $G$ acts freely on $\mu^{-1}(0)$, there exists $s_0 \in \mathbb{R}$ such that $$G_{u(s_0,0)}=G_{u(s_0,1)}=\{\textrm{id}\}.$$ Since by assumption $\hat{u}_\nu(s_0,j) \in L_j$ for $j \in \{0,1\}$ it follows that $\hat{u}(s_0,j) \in GL_j$. Now choose $h \in C^\infty([0,1],G)$ such that $h(0)\hat{u}(s_0,0) \in L_0$ and $h(1)\hat{u}(s_0,1) \in L_1$. This is possible, because $G$ is connected. Choose a sequence $h_\nu \in C^\infty([0,1],G)$ converging to the identity satisfying $$(h_\nu(j) \circ h(j) \circ g_\nu(j))u_\nu(s_0,j) \in L_j , \quad j \in \{0,1\}.$$ We can assume without loss of generality that $$G_{u_\nu(s_0,0)}=G_{u_\nu(s_0,1)}=\{\textrm{id}\}$$ for every $\nu$. By Lemma \[base\], $h_\nu(j) \circ h(j) \circ g_\nu(j) \in G_{L_j}$ for $j \in \{0,1\}$. In particular, $$(h_\nu \circ h \circ g_\nu)u_\nu(s,j) \in L_j, \quad j \in \{0,1\}.$$ Hence $h_*\hat{u}(s,j) \in L_j$ for $j \in \{0,1\}$. By a similar procedure as above we may find a further sequence of gauge transformations $\tilde{h}_\nu \in C^\infty([0,1],G)$ which satisfy the following conditions. 
(i) : $\tilde{h}_\nu(j) \in G_{L_j}$ for $j \in \{0,1\}$, (ii) : $(\tilde{h}_\nu)_* (g_\nu)_* h_*\hat{\Psi}_\nu(s,j) \in \mathfrak{g}_{L_j}^\perp$ for every $\nu$ and $j \in \{0,1\}$, (iii) : There exists $\tilde{h} \in C^\infty([0,1],G)$ such that $\tilde{h}_\nu$ converges as $\nu$ goes to infinity in the $C^\infty$-topology to $\tilde{h}$. Now set $$\tilde{g}_\nu:=\tilde{h}_\nu \circ h_\nu \circ h \circ g_\nu|_{[0,1]} \in C^\infty([0,1],G).$$ Moreover, using the assumption that $\hat{\Psi}_\nu(j) \in \mathfrak{g}_{L_j}^\perp$ for $j \in \{0,1\}$, one calculates $$\tilde{g}_\nu(j) \in G_{L_j}, \quad \tilde{g}_\nu^{-1}\partial_t \tilde{g}_\nu(j) \in \mathfrak{g}_{L_j}^\perp, \quad j \in \{0,1\},$$ and hence $\tilde{g}_\nu \in\mathcal{H}$. Set $$(u,\Psi):=\tilde{h}_* h_*(\hat{u},\hat{\Psi})|_\Theta.$$ Now the theorem follows with $g_\nu$ replaced by $\tilde{g}_\nu$. $\square$ Moduli spaces ------------- We assume that the Hamiltonian $H \in \mathrm{Ham}$ has the property that the Lagrangians $\phi^1_{\bar{H}}(\bar{L}_0)$ and $\bar{L}_1$ in the Marsden-Weinstein quotient intersect transversally. Under this assumption it can be shown, see [@frauenfelder2], that finite energy solutions of the symplectic vortex equation (\[eq\]) are gauge equivalent to solutions which decay exponentially fast at the two ends of the strip. More precisely, define for some small number $\delta>0$ and some smooth cutoff function $\beta$ satisfying $\beta(s)=-1$ if $s<0$ and $\beta(s)=1$ if $s>1$ $$\gamma_\delta \in C^\infty(\mathbb{R}), \quad s \mapsto e^{\delta \beta(s)s}.$$ For an open subset $S \subset \Theta$ we define the $||\,\,||_{C^k_\delta}$-norm for some smooth function $f \colon S \to \mathbb{R}$ by $$||f||_{C^k_\delta}:=||\gamma_\delta \cdot f||_{C^k}$$ and denote $$\label{Cdelta} C^\infty_\delta(S):=\{f \in C^\infty(S) \colon ||f||_{C^k_\delta}<\infty,\,\,\forall \,\,k \in \mathbb{N}\}.$$ We now introduce the Fréchet manifold $\mathcal{B}=\mathcal{B}_\delta$ as the set consisting of all $w=(u,\Psi,\Phi) \in C^\infty_{loc}(\Theta,M \times \mathfrak{g}) \times C^\infty_{\delta}(\Theta,\mathfrak{g})$ which satisfy the following conditions: (i) : $w$ maps $(s,j)$ to $L_j \times \mathfrak{g}_{L_j} \times \mathfrak{g}_{L_j}^{\perp}$ for $j \in \{0,1\}$ and $s \in \mathbb{R}$. (ii) : There exists a critical point of the action functional $(x_1,\eta_1) \in \mathrm{crit}(\mathcal{A})$, a real number $T_1 \in \mathbb{R}$, and $(\xi_1,\psi_1) \in C^\infty_\delta((-\infty,T_1] \times [0,1],x^*_1 TM \times \mathfrak{g})$ such that $$(u,\Psi)(s,t)=(\mathrm{exp}_{x_1(t)}(\xi_1(s,t)),\eta_1(t) +\psi_1(s,t)), \quad s \leq -T_1.$$ (iii) : There exists a critical point of the action functional $(x_2,\eta_2) \in \mathrm{crit}(\mathcal{A})$, a real number $T_2 \in \mathbb{R}$, and $(\xi_2,\psi_2) \in C^\infty_\delta ([T_2,\infty) \times [0,1],x_2^*TM\times \mathfrak{g})$ such that $$(u,\Psi)(s,t)=(\mathrm{exp}_{x_2(t)}(\xi_2(s,t)),\eta_2(t)+ \psi_2(s,t)).$$ The theorem about exponential decay proved in [@frauenfelder2] now tells us, that for $\delta>0$ chosen small enough every finite energy solution of the symplectic vortex equations is gauge equivalent to an element in $\mathcal{B}_\delta$. Moreover, one can prove that every solution of (\[eq\]) which lies in $\mathcal{B}_{\delta}$ has finite energy. 
In a similar vein we define the gauge group \[banachgauge\] $\mathcal{G}=\mathcal{G}_\delta$ consisting of gauge transformations $g \in \mathcal{G}_{loc}$ which decay exponentially fast at the two ends of the strip to elements of $\mathcal{H}_0$ the connected component of the identity of the gauge group $\mathcal{H}$. Note that there are natural evaluation maps $$\mathrm{ev}_1, \mathrm{ev}_2 \colon \frac{\{(\ref{eq})\} \cap \mathcal{B}}{\mathcal{G}} \to \pi_0(\mathrm{crit}(\mathcal{A})) \cong (\phi^1_{\bar{H}}(\bar{L}_0) \cap \bar{L}_1) \times \pi_0(\mathcal{H}),$$ induced by the maps $w \mapsto (x_1,\eta_1)$ and $w \mapsto (x_2,\eta_2)$. One can check that the energy of an element $w \in (\{(\ref{eq})\}\cap \mathcal{B})/\mathcal{G}$ is given by the difference of the actions $$E(w)=\mathcal{A}(\mathrm{ev}_1(w))-\mathcal{A}(\mathrm{ev}_2(w)) \footnote{Since the action functional is invariant under the action of $\mathcal{H}_0$ we denote by abuse of notation the function induced from the action functional on $\pi_0(\mathrm{crit}(\mathcal{A}))$ also by $\mathcal{A}$.}.$$ It can be shown, see [@frauenfelder2] that the linearization of the symplectic vortex equations considered as an operator between suitable Banach spaces is a Fredholm operator. Moreover, using hypothesis (H3) it follows that the path space $\mathscr{P}$ defined in (\[pathspa\]) is connected and simply connected. This implies that there exists a function $$I \colon \pi_0(\mathrm{crit}(\mathcal{A})) \to \mathbb{Z}$$ such that the Fredholm index of the linearized symplectic vortex equations at a point $w \in \mathcal{B}/\mathcal{G}$ is given by the difference $I(\mathrm{ev}_1(w))-I(\mathrm{ev}_2(w))$. We can now introduce on $\pi_0(\mathrm{crit}(\mathcal{A}))$ the following equivalence relation. We say that two connected components of $\mathrm{crit}(\mathcal{A})$ are equivalent if they project to the same intersection point of $\phi^1_{\bar{H}}(\bar{L}_0) \cap \bar{L}_1$ and the action functional $\mathcal{A}$ and the index $I$ agree on them. We denote the set of such equivalence classes by $\mathscr{C}=\mathscr{C}(\mathcal{A})$. For $c_1, c_2 \in \mathscr{C}$ we define the moduli space $$\label{modulispa} \tilde{\mathcal{M}}(c_1,c_2):=\{w \in (\{\ref{eq}\}\cap \mathcal{B}) /\mathcal{G}: \mathrm{ev}_1(w) \in c_1,\,\,\mathrm{ev}_2(w) \in c_2\}.$$ Note that the moduli spaces depend on the choice of the almost complex structure $J \in \mathcal{J}$, i.e. $\tilde{\mathcal{M}}(c_1,c_2)=\tilde{\mathcal{M}}_J(c_1,c_2)$. It is proved in [@frauenfelder2] that for generic choice of the almost complex structure the Fredholm operators obtained by linearizing the symplectic vortex equations are surjectiv. In particular, the moduli spaces $\tilde{\mathcal{M}}_J(c_1,c_2)$ are smooth manifolds whose dimension is given by the difference of the Fredholm indices of $c_1$ and $c_2$. There exists a subset $\mathcal{J}_{reg} \subset \mathcal{J}$ of second category such that $\tilde{\mathcal{M}}_J(c_1,c_2)$ for every $c_1,c_2 \in \mathscr{C}$ are smooth finite dimensional manifolds whose dimension is given by $$\mathrm{dim}(\tilde{\mathcal{M}}_J(c_1,c_2))=I(c_1)-I(c_2).$$ The group $\mathbb{R}$ acts on $\tilde{\mathcal{M}}(c_1,c_2)$ by timeshift We define the path space $\mathscr{P}$ by $$w(s,t) \mapsto w(s+r,t), \quad r \in \mathbb{R}.$$ If $c_1 \neq c_2$ then this action is free and the quotient $\tilde{\mathcal{M}}(c_1,c_2)/\mathbb{R}$ is again a manifold. 
Using the compactness result in Theorem \[compactn\] one can show as in [@salamon1] that the only obstruction to compactness of the spaces $\tilde{\mathcal{M}}(c_1,c_2)$ is breaking off of flow lines, see [@frauenfelder2]. A Novikov ring -------------- If $c_1, c_2 \in \pi_0(\mathrm{crit}(\mathcal{A}))$ and $h \in \mathcal{H}$ then $$\mathcal{A}(c_1)-\mathcal{A}(c_2)=\mathcal{A}(h c_1)-\mathcal{A}(h c_2), \quad I(c_1)-I(c_2)=I(hc_1)-I(hc_2)$$ where $I$ is the Fredholm-index introduced in the previous subsection. It follows that $I(hc)-I(c)$ and $\mathcal{A}(hc)-\mathcal{A}(c)$ is independent of the choice of $c \in \pi_0(\mathrm{crit}(\mathcal{A}))$. Hence we may define the maps $$I_{\mathcal{H}} \colon \mathcal{H} \to \mathbb{Z}, \quad E_{\mathcal{H}} \colon \mathcal{H} \to \mathbb{R}, \quad h \mapsto I(hc)-I(c), \quad h \mapsto \mathcal{A}(hc)-\mathcal{A}(c)$$ for some arbitrary $c \in \pi_0(\mathrm{crit}(\mathcal{A}))$. One easily checks that these maps are group homomorphisms. Moreover they vanish on $\mathcal{H}_0$, the connected component of the identity of $\mathcal{H}$. For the special case where $M=\mathbb{C}^n$, $L_0=L_1=\mathbb{R}^n$, and $G$ acts on $\mathbb{C}^n$ by a linear injective representation $\rho$ one can show, see [@frauenfelder2], that the index map $I_{\mathcal{H}}$ is given by $$I_{\mathcal{H}}(h)=\mathrm{deg}(\mathrm{det}^2_{\mathbb{C}}(\rho(h)))$$ for $h \in \mathcal{H}$. We define $$\Gamma=\frac{\mathcal{H}} {\ker I_\mathcal{H} \cap \ker E_\mathcal{H}}.$$ To the group $\Gamma$ we associate the Novikov ring $\Lambda=\Lambda_\Gamma$ whose elements are formal sums $$r=\sum_{\gamma \in \Gamma} r_\gamma \gamma$$ with coefficients $r_\gamma \in \mathbb{Z}_2$ which satisfy the finiteness condition $$\#\{\gamma \in \Gamma:r_\gamma \neq 0, E_{\mathcal{H}}(\gamma) \geq \kappa\}<\infty$$ for every $\kappa>0$. The multiplication is given by $$r*s=\sum_{\gamma \in \Gamma}\bigg (\sum_{\substack{\gamma_1,\gamma_2 \in \Gamma\\ \gamma_1 \circ \gamma_2=\gamma}}r_{\gamma_1} s_{\gamma_2}\bigg)\gamma.$$ Since the coefficients $r_\gamma$ are taken in a field, the Novikov ring is actually a field. The ring comes with a natural grading defined by $$\deg(\gamma)=I_{\mathcal{H}}(\gamma)$$ and we shall denote by $\Lambda_k$ the elements of degree $k$. Note in particular that $\Lambda_0$ is a subfield of $\Lambda$. Moreover, the multiplication maps $\Lambda_j \times \Lambda_k \to \Lambda_{j+k}$. Definition of the homology -------------------------- We assume that $H \in \mathrm{Ham}$ has the property that $\phi^1_{\bar{H}}(\bar{L}_0)$ and $\bar{L_1}$ intersect transversally and $J \in \mathcal{J}_{reg}=\mathcal{J}_{reg}(H)$, i.e. the Fredholm operators obtained by linearizing the symplectic vortex equations are surjective. Recall $$\label{critfloer} \mathscr{C}=\mathscr{C}(\mathcal{A})= \frac{\textrm{crit}(\mathcal{A})}{\ker I_\mathcal{H} \cap \ker E_{\mathcal{H}}} \cong (\phi^1_{\bar{H}}(\bar{L}_0) \cap \bar{L}_1) \times \Gamma.$$ We define the chain complex $CF_*(H,L_0,L_1,\mu)$ as a module over the Novikov ring $\Lambda$. More precisely, $CF_k(H,L_0,L_1,\mu)$ are formal sums of the form $$\xi=\sum_{\substack{c \in \mathscr{C}\\I(c)=k}}\xi_c c$$ with $\mathbb{Z}_2$-coefficients $\xi_c$ satisfying the finiteness condition $$\label{fino} \#\{c:\xi_c \neq 0,E(c)\geq \kappa\}<\infty$$ for every $\kappa>0$. The action of $\Gamma$ on $\mathscr{C}$ is the induced action of $\mathcal{H}$ on $\mathrm{crit}(\mathcal{A})$. 
The Novikov ring acts on $CF_*$ by $$r*\xi=\sum_{c \in \mathscr{C}} \sum_{\substack{c' \in \mathscr{C}, \gamma' \in \Gamma\\ \gamma' c'=c}}\bigg(r_{\gamma'}\xi_{\gamma'}\bigg)c.$$ $CF_k$ is invariant under the action of $\Lambda_0$. In particular, $CF_k$ which may be an infinite dimensional vector space over the field $\mathbb{Z}_2$, is a finite dimensional vector space over the field $\Lambda_0$. Recall that for $c_1,c_2 \in \mathscr{C}$ the moduli space is defined by $$\tilde{\mathcal{M}}(c_1,c_2):=\{w \in (\{\ref{eq}\}\cap \mathcal{B}) /\mathcal{G}: \mathrm{ev}_1(w) \in c_1, \,\mathrm{ev}_2(w) \in c_2\}.$$ Let $\mathcal{T}$ be the group $$\mathcal{T}:=\frac{\ker I_\mathcal{H} \cap \ker E_\mathcal{H}} {\mathcal{H}_0},$$ where $\mathcal{H}_0$ is the connected component of the identity of $\mathcal{H}$. Then $\mathcal{T}\times \mathbb{R}$ act on $\tilde{\mathcal{M}}$ by $$w(s) \mapsto g_*w(s+r), \quad (g,r) \in \mathcal{T} \times \mathbb{R}$$ and we define $$\mathcal{M}(c_1,c_2):= \frac{\tilde{\mathcal{M}}(c_1,c_2)}{\mathcal{T}\times \mathbb{R}}.$$ Assume that $c_1 \neq c_2$. Under this assumption $\mathcal{T}\times \mathbb{R}$ acts freely on $\tilde{\mathcal{M}}(c_1,c_2)$ and since $J \in \mathcal{J}_{reg}$ the moduli spaces are manifolds of dimension $$\dim \mathcal{M}(c_1,c_2)=\dim \tilde{\mathcal{M}}(c_1,c_2)-1= I(c_1)-I(c_2)-1.$$ Using the fact that the only obstruction to compactness for strips of finite energy is the breaking off phenomenon, which cannot happen in the case where the index equals zero, we conclude that for $c_1,c_2 \in \mathscr{C}$ with $I(c_1)-I(c_2)=1$ and $\kappa>0$ we have $$\label{finite} \sum_{\substack{\gamma \in \Gamma\\E_\mathcal{H}(\gamma) \geq \kappa, \,\, I_\mathcal{H}(\gamma)=0}} \#\mathcal{M}(c_1,\gamma c_2)<\infty.$$ Set $$n(c_1,c_2):=\#\mathcal{M}(c_1,c_2) \mod 2$$ and define the boundary operator $\partial_k:CF_k \to CF_{k-1}$ as linear extension of $$\partial_k c=\sum_{I(c')=k-1}n(c,c')c'$$ for $c \in \mathscr{C}$ with $I(c)=k$. Note that (\[finite\]) guarantees the finiteness condition (\[fino\]) for $\partial_k c$. As in the standard theory (see [@schwarz1; @schwarz2; @hofer-salamon] one shows that $$\partial^2=0.$$ This gives rise to homology groups $$HF_k(H,J,L_0,L_1,\mu;\Lambda):= \frac{\ker \partial_{k+1}}{\textrm{im} \partial_k}.$$ A standard argument (see [@schwarz1] or [@hofer-salamon]) shows that $HF_k(H,J,,L_0,L_1,\mu;\Lambda)$ is actually independent of the regular pair $(H,J)$. Hence we set for some regular pair $(H,J)$ $$HF_k(L_0,L_1,\mu;\Lambda):=HF_k(H,J,L_0,L_1,\mu;\Lambda).$$ We call the graded $\Lambda$ vector space $HF_*(L_0,L_1,\mu;\Lambda)$ the **moment Floer homology**. Computation of the homology {#agiv} --------------------------- In this subsection we compute moment Floer homology for case where the two Lagrangians coincide, i.e. $L_0=L_1=L$. We will see that in this case, the moment Floer homology equals the singular homology of the induced Lagrangian in the Marsden-Weinstein quotient $\bar{L}$ tensored with the Novikov ring introduced above. As a corollary we get a proof of the Arnold-Givental conjecture for $\bar{L}$. \[arnold\] Assume that the two Lagrangians coincide, i.e. $L_0=L_1=L$, then $$HF_*(L,\mu;\Lambda) :=HF_*(L,L,\mu;\Lambda)=H\bar{L}_*(\mathbb{Z}_2)\otimes_{\mathbb{Z}_2} \Lambda.$$ The Arnold-Givental conjecture holds for $\bar{L}$, i.e. 
under the transversality assumption $\bar{L} \pitchfork \phi^1_{\bar{H}} \bar{L}$, $$\#(\bar{L} \cap \phi^1_{\bar{H}}\bar{L}) \geq \sum_k b_k(\bar{L},\mathbb{Z}_2).$$ To prove Theorem \[arnold\] we consider the case where the Hamiltonian $H$ vanishes. In this case the Lagrangians in the quotient $\bar{L}$ and $\phi^1_{\bar{H}}(\bar{L})=\bar{L}$ coincide. In particular, they do not intersect transversally but still cleanly, i.e. there intersection is still a manifold whose tangent space is given by the intersection of the two tangent spaces. This is the infinite dimensional analogon of a Morse-Bott situation. In our case the critical manifold can be identified with $\bar{L} \times \Gamma$. Following the approach explained in the appendix, we still can define the homology in this case by choosing a Morse function on the critical manifold. To define the boundary operator one has to count flow lines with cascades. There is a natural splitting of the boundary operator into two parts. The first part takes account of the flow lines with zero cascades, i.e. Morse flow lines on the critical manifold, and the second part takes account of flow lines with at least one cascade. To prove the theorem we have to show that the second part of the boundary operator vanishes. Using the antisymplectic involution we construct successive involutions on the cascades which endowes the space of cascades with the structure of a space of Arnold-Givental type which admits the structure of a Kuranishi structure of stable Arnold-Givental type. Using this it follows that the second part of the boundary vanishes.\ We now define moment Floer homology for the case where the Hamiltonian $H=0$. We think of $\mathcal{H}(\mu^{-1}(0) \cap L)$ as the critical manifold of the action functional $\mathcal{A}=\mathcal{A}_0$ of the unperturbed symplectic vortex equations. A Morse function on the induced Lagrangian in the Marsden-Weinstein quotient $\bar{L}=\mu^{-1}(0)/G_L$ will lift to a $\mathcal{H}$-invariant Morse function on the critical manifold of $\mathcal{A}$. We first describe the elements which are needed to define the chain complex. Choose a Riemannian metric $g$ and a Morse-function $f$ on $\bar{L}$ which satisfy the Morse-Smale condition, i.e. stable and unstable manifolds intersect transversally, and lift it to a $G_L$-equivariant metric $\tilde{g}$ and a $G_L$-equivariant Morse-function $\tilde{f}$ on $\mu^{-1}(0) \cap L$. Recall that for $x \in M$ and $\eta \in \mathfrak{g}$ the linear map $L_x \colon \mathfrak{g} \to T_x M$ was defined by $$L_x \eta=X_\eta(x)=\frac{d}{dr}\bigg|_{r=0}\exp(r\eta)(x).$$ Let $\tilde{\mathscr{C}}_0=\tilde{\mathscr{C}}_0(f)$ be the set of smooth maps $(x,\eta):[0,1] \to M \times \mathfrak{g}$ satisfying $$\dot{x}(t)+L_{x(t)} \eta(t)=0, \,\mu(x(t))=0,\,t \in [0,1],$$ $$x(j) \in \mathrm{crit}(\tilde{f}), \, \eta(j) \in \mathfrak{g}_L^\perp, \,j \in \{0,1\}.$$ Note that $\eta$ is completely determined by $x$ through the formula $$\eta(t)=-(L_{x(t)}^* L_{x(t)})^{-1}L_{x(t)}^* \dot{x}(t),$$ where $L_x^*$ is the adjoint of $L_x$ with respect to the fixed invariant inner product on $\mathfrak{g}$ and the inner product $\omega_x(\cdot,J_t(x) \cdot)$ on $T_{x(t)}M$. 
Moreover, it follows from Proposition \[zehnder\] that there exists an element $g_x$ of the gauge group $\mathcal{H}$ such that $$x(t)=g_x(t)x(0).$$ Denote $$\label{critmorse} \mathscr{C}_0:=\mathscr{C}_0(f):=\frac{\tilde{\mathscr{C}}_0} {\ker I_\mathcal{H} \cap \ker E_\mathcal{H}}.$$ Then the map $$(x,\eta) \mapsto (x(0),g_x)$$ defines a natural bijection $$\tilde{\mathscr{C}}_0 \cong\mathrm{crit}(\tilde{f}) \times \mathcal{H}$$ and induces a bijection in the quotient $$\mathscr{C}_0 \cong \frac{\mathrm{crit}(\tilde{f})}{G_L} \times \Gamma \cong \mathrm{crit}(f) \times\Gamma.$$ If $\mathrm{ind}_f$ is the Morse-index, then the index of a critical point $c=(q,h) \in \tilde{\mathscr{C}}_0$ is defined to be $$I(q,h):=\mathrm{ind}_{f}(\pi(q))+I_\mathcal{H}(h)$$ for the canonical projection onto the Marsden-Weinstein quotient $\pi:\mu^{-1}(0) \to \bar{M}=\mu^{-1}(0)/G$. We define the energy of a critical point by $$E(q,h):=E_\mathcal{H}(h).$$ By abuse of notation we will also denote by $I$ and $E$ the induced index and the induced energy on the quotient $\mathscr{C}_0$. We next introduce flow lines with cascades which are needed to define the boundary operator. For $c_1=(q_1,h_1),c_2=(q_2,h_2) \in \tilde{\mathscr{C}}_0$ and $m \in \mathbb{N}$ a ***flow line from $c_1$ to $c_2$ with $m$ cascades*** $$v= ((w_k)_{1 \leq k \leq m},(T_k)_{1 \leq k \leq m-1})=$$ $$((u_k,\Psi_k,\Phi_k)_{1 \leq k \leq m},(T_k)_{1 \leq k \leq m-1})$$ consists of the triple of functions $(u_k,\Psi_k,\Phi_k) \in C^\infty_{loc}(\Theta,M\times \mathfrak{g}) \times C^\infty_\delta(\Theta,\mathfrak{g})$ [^7] and the nonnegative real numbers $T_k \in \mathbb{R}_\geq:=\{r \in \mathbb{R}:r \geq 0\}$ which have the following properties: (i) : $(u_k,\Psi_k, \Phi_k)$ are nonconstant, finite energy solutions of (\[eq\]) with Hamiltonian equal to zero, i.e. $$\label{eqd} \begin{array}{c} \partial_s u_k+X_{\Phi_k}(u_k)+J_t(u_k)(\partial_tu_k+ X_{\Psi_k}(u_k))=0\\ \partial_s \Psi_k-\partial_t\Phi_k+[\Phi_k,\Psi_k]+\mu(u_k)=0\\ u_k(s,j)\in L, \quad \Phi_k(s,j)\in \mathfrak{g}_L, \quad \Psi_k(s,j) \in \mathfrak{g}_L^\perp, \end{array}$$ where $j \in \{0,1\}$. (ii) : There exist points $p_1 \in W^u_{\tilde{f}}(q_1)$ and $p_2 \in W^s_{\tilde{f}}(q_2)$ such that $\lim_{s \to -\infty}u_1(s,t)=h_1(t)p_1$ and $\lim_{s \to \infty}u_m(s,t)=h_2(t)p_2$ uniformly in the $t$-variable. (iii) : For $1 \leq k \leq m-1$ there exist Morse-flow lines $y_k:(-\infty,\infty) \to \mu^{-1}(0) \cap L$, i.e. solutions of $$\dot{y}_k=-\nabla_{\tilde{g}} \tilde{f}(y_k),$$ and $g_k \in \mathcal{H}$ such that $$\lim_{s \to \infty}u_k(s,t)=g_k(t)y_k(0)$$ and $$\lim_{s \to -\infty}u_{k+1}(s,t)=g_k(t)y_k(T_k),$$ where the two limites are uniform in the $t$-variable. A ***flow line with zero cascades from $c_1=(q_1,h)$ to $c_2=(q_2,h)$*** is a tuple $(y,h)$ where y is just an ordinary Morse flow line from $q_1$ to $q_2$. Rccall from p. that the gauge group $\mathcal{G}$ consists of smooth maps from the strip to $G$, which satisfy appropriate boundary conditions and which decay exponentially. For $m \in \mathbb{N}$ the group $\mathcal{G}_m$ consists of $m$-tuples $$\textbf{g}=(g_k)_{1 \leq k \leq m}$$ where $g_k \in \mathcal{G}$, which have the additional property that they form a chain, i.e. 
$$\mathrm{ev}_2 (g_k)=\mathrm{ev}_1(g_{k+1}) ,\quad 1 \leq k \leq m-1.$$ For $m \geq 1$ the group $\mathcal{G}_m\times \mathbb{R}^m$ acts on the space of flow lines with $m$ cascades as follows $$(u_k,\Psi_k,\Phi_k)(s) \mapsto (g_k)_*(u_k,\Psi_k,\Phi_k)(s+s_k)$$ for $1 \leq k \leq m$ and $(g_k,s_k) \in \mathcal{G} \times \mathbb{R}$. For $m=0$ the group $\frac{\mathcal{H}}{\ker I_{\mathcal{H}} \cap \ker E_{\mathcal{H}}} \times \mathbb{R}$ acts on the space of flow lines with zero cascades by $$(y(s),h) \mapsto (y(s+s_0),g \circ h).$$ For $c_1,c_2 \in \mathscr{C}_0$ we denote the quotient of flow lines with $m$ cascades from $c_1$ to $c_2$ for $m \in \mathbb{N}_0$ by $$\mathcal{M}_m(c_1,c_2).$$ We define the **space of flow lines with cascades from $c_1$ to $c_2$** by $$\mathcal{M}(c_1,c_2):= \bigcup_{m \in \mathbb{N}_0}\mathcal{M}_m(c_1,c_2).$$ Using the transversality result for the symplectic vortex equations in [@frauenfelder2], one proves in the same way as Theorem \[manifold\] that the moduli spaces of flow lines with cascades are finite dimensional manifolds for generic choice of the almost complex structure. \[mani0\] For each pair of a Morse function $f$ on $\bar{L}=(\mu^{-1}(0) \cap L)/G_L$ and a Riemannian metric $g$ on $\bar{L}$ which satisfy the Morse-Smale condition, i.e. its stable and unstable manifolds intersect transversally, there exists a subset of the space of admissible families of almost complex structures $$\mathcal{J}_{reg}=\mathcal{J}_{reg}(f,g) \subset \mathcal{J}$$ which is of the second category, i.e. $\mathcal{J}_{reg}$ is a countable intersection of open and dense subsets of $\mathcal{J}$, and which is regular in the following sense. For any two critical points $c_1, c_2 \in \mathscr{C}_0$ the space $\mathcal{M}(c_1,c_2)=\mathcal{M}(c_1,c_2;J,f,g)$ is a smooth finite dimensional manifold. Its dimension is given by $$\mathrm{dim}(\mathcal{M}(c_1,c_2))=I(c_1)-I(c_2)-1.$$ If $I(c_1)-I(c_2)-1=0$, then $\mathcal{M}(c_1,c_2)$ is compact and hence a finite set. We are now able to define moment Floer homology in the case where the Hamiltonian vanishes. Choose a triple $(f,g,J)$ where $f$ is a Morse function on $\bar{L}=(\mu^{-1}(0)\cap L)/G_L$, $g$ is a Riemannian metric on $\bar{L}$, such that all the stable and unstable manifolds of $(f,g)$ intersect transversally, and $J \in \mathcal{J}_{reg}(f,g)$. As in the transversal case we define the chain complex $CF_k(f,g,J,L,\mu;\Lambda)$ as the $\mathbb{Z}_2$ vector space consisting of formal sums of the form $$\xi=\sum_{\substack{c \in \mathscr{C}_0\\I(c)=k}}\xi_c c$$ with $\mathbb{Z}_2$-coefficients $\xi_c$ satisfying the finiteness condition $$\#\{c:\xi_c \neq 0,E(c) \geq \kappa\}<\infty$$ for every $\kappa>0$. The Novikov ring $\Lambda=\Lambda_\Gamma$ acts naturally on $CF_*$. Defining the boundary operator $\partial_k: CF_k \to CF_{k-1}$ in the usual way, we obtain homology groups $$HF_k(f,g,J,L,\mu;\Lambda):=\frac{\mathrm{ker}\partial_{k+1}} {\mathrm{im}\partial_k}.$$ As in theorem \[continuation\] one shows, that $HF_*(f,g,J,L,\mu;\Lambda)$ is canonically isomorphic to the moment Floer homology groups $HF_*(L,\mu;\Lambda)$. To compute moment Floer homology we show that contribution of the cascades vanishes. To do that we endow the space of cascades with the structure of a space of Arnold-Givental type. Recall that $\dot{S}=dS(\mathrm{id})$ is the involution on the Lie algebra induced by the antisymplectic involution $R$. 
First note that $R$ induces an involution $R_*$ on the path space $\mathscr{P}$ by $$R_*(x,\eta)(t):=(Rx,-\dot{S}(\eta))(1-t).$$ One easily checks that if $c \in \tilde{\mathscr{C}}_0$ then $c$ and $R_* c$ represent the same element in $\mathscr{C}_0$. Choose now an almost complex structure $J \in \mathcal{J}$ which is independent of the $t$-variable and satisfies $$R^*J=-J.$$ If $(u,\Psi,\Phi)$ is a cascade then one verifies that $$R_*(u,\Psi,\Phi)(s,t):=(Ru,-\dot{S}(\Psi),\dot{S}(\Phi))(s,1-t)$$ is also a cascade. Moreover, one verifies that $$\mathrm{ev}_j(R_*(u,\Psi,\Phi))=R_*\mathrm{ev}_j(u,\Phi,\Psi), \quad j \in \{0,1\}.$$ The following lemma shows us the relation between fixed gauge orbits of $R_*$ and fixpoints of $R_*$. \[halb\] Assume that $(u,\Psi,\Phi)$ is a cascade and $g \in \mathcal{G}_{loc}$ is a gauge transformation such that $$(u,\Psi,\Phi)=g_*(R_*(u,\Psi,\Phi)).$$ Then there exists $h \in \mathcal{G}_{loc}$ such that $$\label{bokyo} h_*(u,\Psi,\Phi)=R_*(h_*(u,\Psi,\Phi)).$$ **Proof:** Choose $h \in \mathcal{G}_{loc}$ such that $$h_* \Phi=0, \quad \lim_{s \to \infty}h_*(u,\Psi)(s,t)=(p,0)$$ where $p \in L \cap \mu^{-1}(0)$. Using the formula $$h_*(u,\Psi,\Phi)= (h g (Sh)^{-1})_* \circ R_* \circ h_*(u,\Psi,\Phi).$$ we get $$0=h_*\Phi=(hg(Sh)^{-1})_*(0)=(hg(Sh)^{-1})^{-1}\partial_s (hg(Sh)^{-1})$$ and hence $hg(Sh)^{-1}$ is independent of $s$. Moreover, taking the limit $s \to \infty$ and recalling $p=Rp$ we have $$(hg(Sh)^{-1})(t)p=p.$$ Since $G$ acts freely on $\mu^{-1}(0)$ we obtain $$hg(Sh)^{-1} \equiv \mathrm{id}$$ and hence $h$ is the required gauge transformation, i.e. (\[bokyo\]) holds. $\square$\ \ **Proof of Theorem \[arnold\]:** In view of the Lemma \[halb\] we can find in each fixed gauge orbit a fixed point. For fixed points we can define a sequence of involutions whose domain is the fixed point set of the previous one in the same way as for the pseudo-holomorphic disks in section \[agi\]. Using some equivariant version of Theorem \[kusta\] it follows that the space of cascades admits a Kuranishi structure of Arnold-Givental type. In particular, the only contribution to the boundary comes from the flow lines with zero cascades, i.e. the Morse-flow lines. This proves the theorem. $\square$ Morse-Bott theory {#morsebott} ================= In this appendix we define a homology for Morse-Bott functions. Using an idea of Piunikhin, Salamon and Schwarz, see [@piunikhin-salamon-schwarz], we define Morse-Bott homology by counting flow lines with cascades. The homology is independent of the choice of the Morse-Bott function and hence isomorphic to the ordinary Morse homology. Morse-Bott functions -------------------- Let $(M,g)$ be a Riemannian manifold. A smooth function $f \in C^\infty(M,\mathbb{R})$ is called **Morse-Bott** if $$\mathrm{crit}(f):=\{x \in M: df(x)=0\}$$ is a submanifold of $M$ and for each $x \in \mathrm{crit}(f)$ we have $$T_x\mathrm{crit}(f)=\ker(\mathrm{Hess}(f)(x)).$$ Let $M=\mathbb{R}^{k_0} \times \mathbb{R}^{k_1} \times \mathbb{R}^{k_2}$. Write $x=(x_0,x_1,x_2)$ according to the splitting of $M$. Then $$f(x_0,x_1,x_2)=x_1^2-x_2^2$$ is a Morse-Bott function on $M$. \[nobot\] Let $M=\mathbb{R}$. Then $$f(x)=x^4$$ is no Morse-Bott function on $M$. \[exp\] Let $(M,g)$ be a compact Riemannian manifold and $f$ a Morse-Bott function on it. 
Let $y: \mathbb{R} \to M$ be a solution of $$\dot{y}(s)=-\nabla f(y(s)).$$ Then there exists $x \in \mathrm{crit}(f)$ and positive constants $\delta$ and $c$ such that $$\lim_{s \to \infty}y(s)=x, \quad |\dot{y}(s)| \leq ce^{-\delta s}.$$ An analoguous result holds as $s$ goes to $-\infty$. Without the Morse-Bott condition Theorem \[exp\] will in general not hold. Let $M$ and $f$ be as in Example \[nobot\]. Then $$y(s):=\frac{1}{\sqrt{8s}}$$ is a solution of the gradient equation which converges to the critical point $0$ as $s$ goes to $\infty$. But the convergence is not exponential. **Proof of Theorem \[exp\]:** Since $M$ is compact and $\mathrm{crit}(f)$ is normally hyperbolic there exists $x \in \mathrm{crit}(f)$ such that $$\lim_{s \to \infty} y(s)=x.$$ Set $$A(s):=f(y(s))-f(x).$$ Then for some $\epsilon>0$ we have $$\dot{A}(s)=-|\dot{y}(s)|^2=-|\nabla f(y(s))|^2 \leq -\epsilon A(s).$$ The last inequality follows from the Morse-Bott assumption. Hence there exists a constant $c_0>0$ such that $$A(s) \leq c_0 e^{-\epsilon s}.$$ This proves the theorem. $\square$ Flow lines with cascades {#flwc} ------------------------ Let $(M,g)$ be a compact Riemannian manifold, $f$ a Morse-Bott function on $M$, $g_0$ a Riemannian metric on $\mathrm{crit}(f)$, and $h$ a Morse-function on $\mathrm{crit}(f)$. We assume that $h$ satisfies the Morse-Smale condition, i.e. stable and unstable manifolds intersect transversally. For a critical point $c$ on $h$ let $\mathrm{ind}_f(c)$ be the number of negative eigenvalues of $\mathrm{Hess}(f)(c)$ and $\mathrm{ind}_h(c)$ be the number of negative eigenvalues of $\mathrm{Hess}(h)(c)$. We define $$\mathrm{Ind}(c):=\mathrm{Ind}_{f,h}(c):= \mathrm{ind}_f(c)+\mathrm{ind}_h(c).$$ \[casc\] For $c_1,c_2 \in \mathrm{crit}(h),$ and $m \in \mathbb{N}$ a ***flow line from $c_1$ to $c_2$ with $m$ cascades*** $$(\textbf{x},\textbf{T})=((x_k)_{1 \leq k \leq m},(t_k)_{1 \leq k \leq m-1})$$ consists of $x_k \in C^\infty(\mathbb{R},M)$ and $t_k \in \mathbb{R}_{\geq}:=\{r \in \mathbb{R}: r \geq 0\}$ which satisfy the following conditions: (i) : $x_k \in C^\infty(\mathbb{R},M)$ are nonconstant solutions of $$\dot{x}_k=-\nabla f(x_k).$$ (ii) : There exists $p \in W^u_h(c_1) \subset \mathrm{crit}(f)$ and $q \in W^s_h(c_2)$ such that $\lim_{s \to -\infty}x_1(s)=p$ and $\lim_{s \to \infty}x_m(s)=q$. (iii) : for $1 \leq k \leq m-1$ there are Morse-flow lines $y_k \in C^\infty(\mathbb{R},\mathrm{crit}(f))$ of $h$, i.e. solutions of $$\dot{y}_k=-\nabla h(y_k),$$ such that $$\lim_{s \to \infty}x_k(s)=y_k(0)$$ and $$\lim_{s \to -\infty}x_{k+1}(s)=y_k(t_k).$$ A ***flow line with zero cascades from $c_1$ to $c_2$*** is an ordinary Morse flow line of $h$ on $\mathrm{crit}(f)$ from $c_1$ to $c_2$. In Definition \[casc\] we do not require that the Morse-flow lines are nonconstant. It may happen that a cascade converges to a critical point of $h$, but the flow line will only remain for a finite time on the critical point. We denote the space of flow lines with $m$ cascades from $c_1$ to $c_2 \in \mathrm{crit}(h)$ by $$\tilde{\mathcal{M}}_m(c_1,c_2).$$ The group $\mathbb{R}$ acts by timeshift on the set of solutions connecting two critical points on the same level $\tilde{\mathcal{M}}_0(c_1,c_2)$ and the group $\mathbb{R}^m$ acts on $\tilde{\mathcal{M}}_m(c_1,c_2)$ by timeshift on each cascade, i.e. 
$$x_k(s) \mapsto x_k(s+s_k).$$ We denote the quotient by $$\mathcal{M}_m(c_1,c_2).$$ We define the **set of flow lines with cascades from $c_1$ to $c_2$** by $$\mathcal{M}(c_1,c_2):=\bigcup_{m \in \mathbb{N}_0}\mathcal{M}_m(c_1,c_2).$$ Immediately from the gradient equation the following lemma follows. If $f(c_1)<f(c_2)$ then $\mathcal{M}(c_1,c_2)$ is empty. If $f(c_1)=f(c_2)$ then $\mathcal{M}(c_1,c_2)$ contains only flow lines with zero cascades. If $f(c_1)>f(c_2)$ then $\mathcal{M}(c_1,c_2)$ contains no flow line with zero cascades. A sequence of flow lines with cascades may break up in the limit into a connected chain of flow lines with cascades. To deal with this phenomenon, we make the following definitions. Let $c,d \in \mathrm{crit}(h)$. A ***broken flow line with cascades*** from $c$ to $d$ $$\textbf{v}=(v_j)_{1 \leq j \leq \ell}$$ for $\ell \in \mathbb{N}$ consists of flow lines with cascades $v_j$ from $c_{j-1}$ to $c_j \in \mathrm{crit}(h)$ for $0 \leq j \leq \ell$ such that $c_0=c$ and $c_\ell=d$. \[fgc\] Assume that $c,d \in \mathrm{crit}(h)$. Suppose that $v^\nu$ for $\nu \in \mathbb{N}$ is a sequence of flow lines with cascades which satisfies the following condition. There exists $\nu_0 \in \mathbb{N}$ such that for every $\nu\geq \nu_0$ it holds that $v^\nu$ is a flow line with cascades from $c$ to $d$. There are two cases. In the first case $c$ and $d$ lie on the same level and hence $v^\nu \in C^\infty(\mathbb{R},\mathrm{crit}(f))$ is a flow line with zero cascades for every $\nu \geq \nu_0$, in the second case $c$ and $d$ lie on different levels and hence $v^\nu=((x_k^\nu)_{1 \leq k \leq m^\nu},(t_k)_{1 \leq k \leq m^\nu-1})$ is a flow line with at least one cascade for every $\nu \geq \nu_0$. We say that $v^\nu$ ***Floer-Gromov converges*** to a broken flow line with cascades $\textbf{v}=(v_j)_{1 \leq j \leq \ell}$ from $c$ to $d$ if the following holds. (a) : In the first case, where the $v^\nu$’s are flow lines with zero cascades for large enough $\nu$’s, all $v_j$’s are flow lines with zero cascades and there exists real numbers $s_j^\nu$ for $\nu \geq \nu_0$ such that $(s^\nu_j)_*(v^\nu)(\cdot):=v^\nu(\cdot+s^\nu_j)$ converges in the $C^\infty_{loc}$-topology to $v_j$. (b) : In the second case, where the $v^\nu$’s have at least one cascade for $\nu$ large enough, we require the following conditions. (i) : If $v_j \in C^\infty(\mathbb{R},\mathrm{crit}(f))$ is a flow line with zero cascades, then there exists a sequence of solutions $y^\nu_j \in C^\infty(\mathbb{R},\mathrm{crit}(f))$ of $\dot{y}^\nu_j=-\nabla h(y^\nu_j)$ converging in $C^\infty_{loc}$ to $v_j$, a sequence of real numbers $s^\nu_j$, and a sequence of integers $k^\nu \in [1,m^\nu]$ such that either $\lim_{s \to -\infty}x^\nu_{k^\nu}(s)=y^\nu_j(s^\nu_j)$ or $\lim_{s \to \infty}x^\nu_{k^\nu}(s)=y^\nu_j(s^\nu_j)$. (ii) : If $v_j$ is a flow line with at least one cascade, then we write $v_j=((x_{i,j})_{1 \leq i \leq m_j},(t_{i,j})_{1 \leq i \leq m_j-1}) \in \tilde{\mathcal{M}}_{m_j}(c_{j-1},c_j)$ for $m_j \geq 1$. We require that there exist surjective maps $\gamma^\nu \colon [1, \sum_{p=1}^\ell m_p] \to [1,m^\nu]$, which are monotone increasing, i.e. 
$\gamma^\nu(\lambda_1) \leq \gamma^\nu(\lambda_2)$ for $\lambda_1 \leq \lambda_2$, and real numbers $s^\nu_\lambda$ for every $\lambda \in [1,\sum_{p=1}^\ell m_p]$, such that $$(s^\nu_\lambda)_*x^\nu_{\gamma^\nu(\lambda)}(\cdot) =x^\nu_{\gamma^\nu(\lambda)}(\cdot+s^\nu_\lambda) \stackrel{C^\infty_{loc}}{\longrightarrow} x_\lambda$$ where $x_\lambda=x_{i,j}$ such that $\lambda=\sum_{p=1}^jm_p+i$. For $\lambda \in [1,\sum_{p=1}^\ell m_p-1]$ we set $$\tau_\lambda=\left\{\begin{array}{cc} t_{i,j} & \lambda=\sum_{p=1}^jm_p+i, \quad 0 <i<m_{j+1}\\ \infty & \lambda=\sum_{p=1}^jm_p\end{array}\right.$$ and $$\tau^\nu_\lambda=\left\{\begin{array}{cc} t^\nu_{\gamma^\nu(\lambda)} & \lambda=\mathrm{max}\{ \lambda' \in [1,\sum_{p=1}^\ell m_p-1]: \gamma^\nu(\lambda')= \gamma^\nu(\lambda)\}\\ 0 & \mathrm{otherwise.} \end{array}\right.$$ Now we require, that $$\lim_{\nu \to \infty} \tau^\nu_\lambda=\tau_\lambda.$$ Here we use the convention that a sequence of real numbers $\tau^\nu$ converges to infinity, if for every $n \in \mathbb{N}$ there exists a $\nu_0(n) \in \mathbb{N}$ such that $\tau^\nu \geq n$ for $\nu \geq \nu_0(n)$. \[fcc\] Let $v^\nu$ be a sequence of flow lines with cascades. Then there exists a subsequence $v^{\nu_j}$ and a broken flow line with cascades $\textbf{v}$ such that $v^{\nu_j}$ Floer-Gromov converges to $\textbf{v}$. **Proof:** First assume that there exists a subsequence $\nu_j$ such that $v^{\nu_j}$ are flow lines with zero cascades. In this case Floer-Gromov convergence to a broken flow line without cascades follows from the classical case, see [@schwarz1 Proposition 2.35]. Otherwise pick a subsequence $\nu_j$ of $\nu$ such that $v^{\nu_j}$ are flow lines with at least one cascade. Since the number of critical points of $h$ is finite, we can perhaps after passing over to a further subsequence assume that all the $v^{\nu_j}$ are flow lines with cascades from a common critical point $c$ of $h$ to a common critical point $d$ of $h$. Since the number of connected components of $\mathrm{crit}(f)$ is finite, we can assume by passing to a further subsequence that the number of cascades $m^\nu=m$ is independent on $\nu$.\ For simplicity of notation we denote the subsequence $\nu_j$ again by $\nu$. We consider the sequence of points $p^\nu:=\lim_{s \to \infty}x^\nu_1(s) \in \mathrm{crit}(f).$ Let $y^\nu \in C^\infty(\mathbb{R},\mathrm{crit}(f))$ be the unique solution of the problem $$\dot{y}^\nu(s)=-\nabla h(y^\nu(s)), \quad y^\nu(0)=p^\nu.$$ Note that for every $\nu$ $$\lim_{s \to -\infty}y^\nu(s)=c.$$ Using again [@schwarz1 Proposition 2.35] it follows that perhaps passing over to a subsequence (also denoted by $\nu$) there exists $p \in \mathrm{crit}(f)$, a nonnegative integer $m_0 \in \mathbb{N}$, and a sequence of Morse-flow lines $v_j \in C^\infty(\mathbb{R},\mathrm{crit}(f))$ with respect to the gradient flow of $h$ for $1 \leq j \leq m_0$ such that the following holds. (i) : $\lim_{\nu \to \infty} p^\nu=p$, (ii) : If $m_0=0$, then $p \in W^u_h(c)$, (iii) : If $m_0 \geq 1$, then $$\begin{aligned} \lim_{s \to -\infty}v_1(s)&=&c, \\ \lim_{s \to \infty}v_j(s)&=&\lim_{s \to -\infty}v_{j+1}(s), \quad 1 \leq j \leq m_0-1, \\ p &\in& W^u_h(\lim_{s \to \infty}v_{m_0}(s)).\end{aligned}$$ Using induction on $\mu \in [1,m]$, where $m$ is the number of cascades of each $v^\nu$, the following claim follows as [@schwarz1 Proposition 2.35].\ \ **Claim:** Given the claim for $\mu=m$, the Theorem follows now by applying [@schwarz1 Proposition 2.35] again. 
$\square$ \[manifold\] Let $c_1,c_2 \in \mathrm{crit}(h)$. For generic choice of the Riemannian metric $g$ of $M$ the space $\mathcal{M}(c_1,c_2)$ is a smooth finite dimensional manifold. Its dimension is given by $$\dim \mathcal{M}(c_1,c_2)=\mathrm{Ind}(c_1)-\mathrm{Ind}(c_2)-1.$$ If $\mathrm{Ind}(c_1)-\mathrm{Ind}(c_2)=1$ then $\mathcal{M}(c_1,c_2)$ is compact and hence a finite set. The rest of this subsection is devoted to the proof of Theorem \[manifold\]. Choose $0<\delta<\min\{|\lambda|: \lambda \in \sigma(\mathrm{Hess}(f)(x)) \setminus \{0\},\, x \in \mathrm{crit}(f)\}$. For a smooth cutoff function $\beta$ such that $\beta(s)=-1$ if $s<0$ and $\beta(s)=1$ if $s>1$, define $$\gamma_\delta:\mathbb{R} \to \mathbb{R}, \quad s \mapsto e^{\delta \beta (s)s}.$$ Let $\Omega$ be an open subset of $\mathbb{R}$. We define the $|| \,||_{k,p,\delta}$-norm for a locally integrable function $f: \Omega \to \mathbb{R}$ with weak derivatives up to order $k$ by $$||f||_{k,p,\delta}:=\sum_{i=0}^k||\gamma_\delta \partial^i f||_p.$$ We denote $$W^{k,p}_\delta(\Omega):=\{f \in W^{k,p}(\Omega): ||f||_{k,p,\delta}<\infty\}=\{f \in W^{k,p}(\Omega):\gamma_\delta f \in W^{k,p}(\Omega)\}.$$ We also set $$L^p_\delta(\Omega):=W^{0,p}_\delta(\Omega).$$ Let $$T_h(t) \in \mathrm{Diff}(\mathrm{crit}(f))$$ be the smooth family of diffeomorphisms which assigns to $p \in \mathrm{crit}(f)$ the point $x_p(t)$, where $x_p$ is the unique flow line of $h$ with $x_p(0)=p$. We define $$\mathcal{B}:=\mathcal{B}_\delta^{1,p}(M,f,h)$$ as the Banach manifold consisting of all tuples $v=((x_j)_{1 \leq j \leq m},(t_j)_{1 \leq j \leq m-1}) \in (W^{1,p}_{loc}(\mathbb{R},M))^{m}\times \mathbb{R}_+^{m-1}$ where $\mathbb{R}_+:=\{r \in \mathbb{R}: r>0\}$ and $m \in \mathbb{N}$ which satisfy the following conditions: (Asymptotic behaviour) : For $1 \leq j \leq m$ there exist $p_j, q_j \in \mathrm{crit}(f)$, $\xi_{1,j} \in W^{1,p}_\delta((-\infty,-T], T_{p_j}M)$, $\xi_{2,j} \in W^{1,p}_\delta([T,\infty),T_{q_j}M)$ for some $T \in \mathbb{R}$ such that $$x_j(s)=\exp_{p_j}(\xi_{1,j}(s)), \quad s \leq -T, \qquad x_j(s)=\exp_{q_j}(\xi_{2,j}(s)), \quad s \geq T,$$ where $\exp$ is taken with respect to the Riemannian metric $g$ of $M$. (Connectedness) : $p_{j+1}=T_h(t_j) q_j$ for $1 \leq j \leq m-1$. To define local charts on $\mathcal{B}$ choose $v=((x_j)_{1 \leq j \leq m},(t_j)_{1 \leq j \leq m-1}) \in \mathcal{B}$ such that all the $x_j$ for $1 \leq j \leq m$ are smooth and define a neighbourhood of $v$ in $\mathcal{B}$ via the exponential map of $g$ [^8].
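A brief remark on the role of the weight (added here for orientation; it only uses the definitions just given and Theorem \[exp\]): for $s \leq 0$ and $s > 1$ we have $\gamma_\delta(s)=e^{\delta|s|}$, so any continuous function satisfying $|\xi(s)| \leq ce^{-\lambda|s|}$ with $\lambda>\delta$ lies in $L^p_\delta(\mathbb{R})$, since $$\int_{\mathbb{R}}|\gamma_\delta(s)\xi(s)|^p\,ds \leq C+c^p\int_{|s| > 1}e^{-p(\lambda-\delta)|s|}\,ds<\infty.$$ The weighted spaces therefore single out maps whose ends converge exponentially, as gradient flow lines of $f$ do by Theorem \[exp\]; the requirement $\delta<\min\{|\lambda|\}$ made above enters again in the Fredholm theory below, where it guarantees that the shifted asymptotic operators $B_j=A_j+(-1)^j\delta$ are invertible.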
There are two natural smooth evaluation maps $$\mathrm{ev}_1, \mathrm{ev}_2: \mathcal{B} \to \mathrm{crit}(f), \qquad \mathrm{ev}_1(v)=p_1, \quad \mathrm{ev}_2(v)=q_m.$$ After choosing cutoff functions and smooth trivializations $\chi_{p_j}$ and $\chi_{q_j}$ of $TM$ near $p_j$ respectively $q_j$ the tangent space of $\mathcal{B}$ at $v$ can naturally be identified with tuples $$\zeta=((\xi_{j,0},\xi_{j,1},\xi_{j,2})_{1 \leq j \leq m},(\tau_j)_{1 \leq j \leq m-1}) \in$$ $$\bigoplus_{j=1}^m (W^{1,p}_\delta(\mathbb{R},x^*_j TM) \times T_{p_j}\mathrm{crit}(f) \times T_{q_j}\mathrm{crit}(f)) \times \mathbb{R}^{m-1}$$ which satisfy $$d T_h(t_j) \xi_{j,2}+\frac{d}{dt}(T_h(0)q_j)\tau_j=\xi_{j+1,1} \quad 1 \leq j \leq m-1.$$ $T_v \mathcal{B}$ is a Banach space with norm $$\label{norm} ||\zeta||:=\sum_{j=1}^m(||\xi_{j,0}||_{1,p,\delta}+||\xi_{j,1}|| +||\xi_{j,2}||)+\sum_{j=1}^{m-1}|\tau_j|$$ Let $\mathcal{E}$ be the Banachbundle over $\mathcal{B}$ whose fiber at $v \in \mathcal{B}$ is given by $$\mathcal{E}_v:=\bigoplus_{j=1}^m L^p_\delta(\mathbb{R},x^*_j TM).$$ Set $$\tilde{\mathcal{M}}:=\mathcal{F}^{-1}(0)$$ where $$\mathcal{F}:\mathcal{B} \to \mathcal{E}, \quad v \mapsto (\dot{x}_k+\nabla f(x_k))_{1 \leq k \leq m}.$$ Where $\nabla=\nabla_g$ is the Levi-Civita connection of the Riemannian metric $g$ on $M$. Note that $\mathcal{F}=\mathcal{F}_g$ depends on $g$. Let $$D_v:=D \mathcal{F}(v): T_v \mathcal{B} \to \mathcal{E}_v$$ be the vertical differential of $\mathcal{F}$ at $v \in \mathcal{F}^{-1}(0)$. If $p \in \mathrm{crit}(f)$ denote by $\dim_p \mathrm{crit}(f)$ the local dimension of $\mathrm{crit}(f)$ at $p$. Note that it follows from the Morse-Bott condition that $\dim_p \mathrm{crit}(f)$ equals the dimension of the kernel of $\mathrm{Hess}(f)(p)$. Then we have $D_v$ is a Fredholm-operator of index $$\mathrm{ind}(D_v)= \mathrm{ind}_f(\mathrm{ev}_1(v))+\dim_{\mathrm{ev}_1(v)} \mathrm{crit}(f) -\mathrm{ind}_f(\mathrm{ev}_2(v))+m-1,$$ where $m=m(v)$ is the number of cascades. **Proof:** For $1 \leq j \leq m$ let $$D_{v,j}: W^{1,p}_\delta(\mathbb{R},x^*_j TM) \to L^p_\delta(\mathbb{R}, x^*_j TM)$$ be the restriction of $D_v$ to $W^{1,p}_\delta(\mathbb{R},x^*_j TM)$. It suffices to show, that $D_{v,j}$ is a Fredholm-operator of index $$\mathrm{ind}(D_{v,j})=\mathrm{ind}_f(p_j)-\mathrm{ind}_f(q_j) -\dim_{q_j}\mathrm{crit}(f).$$ Write $$D_{v,j}:=\partial_s +A(s).$$ Then $$A_1:=\lim_{s \to -\infty}A(s)=\mathrm{Hess}(f)(p_j), \quad A_2:=\lim_{s \to \infty}A(s)=\mathrm{Hess}(f)(q_j).$$ Define the continuous isomorphisms $$\phi_1: L^p_\delta \to L^p, \quad f \mapsto f \gamma_\delta, \qquad \phi_2: W^{1,p}_\delta \to W^{1,p}, \quad f \mapsto f \gamma_\delta.$$ Define $$\tilde{D}_v:W^{1,p}(\mathbb{R}, x^*_j TM) \to L^p(\mathbb{R},x^*_j TM)$$ by $$\tilde{D}_v:=\phi_1 D_v \phi_2^{-1}.$$ $D_v$ is exactly a Fredholm-operator if $\tilde{D}_v$ is a Fredholm-operator. 
In this case $$\mathrm{ind}(D_v)=\mathrm{ind}(\tilde{D}_v).$$ For $\xi \in W^{1,p}(\mathbb{R},x_j^* TM)$ we calculate $$\begin{aligned} \tilde{D}_v \xi&=&\phi_1 D_v \phi_2^{-1}\xi\\ &=&\phi_1 D_v(\xi \gamma_{-\delta})\\ &=&\phi_1(\partial_s(\xi \gamma_{-\delta})+A(s)\xi \gamma_{-\delta})\\ &=&\phi_1((\partial_s \xi)\gamma_{-\delta}+(\delta \beta(s)+ \delta \partial_s \beta(s) s)\xi\gamma_{-\delta}+A(s)\xi \gamma_{-\delta})\\ &=&\partial_s \xi+(A(s)+\delta(\beta(s)+\partial_s \beta(s) s)\mathrm{id})\xi.\end{aligned}$$ Hence $\tilde{D}_v$ is given by $$\tilde{D}_v=\partial_s +B(s)$$ where $$B(s)=A(s)+\delta(\beta(s)+\partial_s \beta(s)s)\mathrm{id}.$$ Let $$B_j:=A_j+(-1)^j \delta, \quad j \in \{1,2\}.$$ Then $$\lim_{s \to -\infty} B(s)=B_1, \quad \lim_{s \to \infty} B(s)=B_2$$ and the $B_j$ are invertible by our choice of $\delta$. If $p=2$ then it follows from [@robbin-salamon1] that $\tilde{D}_v$ is a Fredholm-operator of the required index. For general $p$ the lemma follows from [@salamon1]. See also [@schwarz2], for an alternative proof. $\square$\ \ For $n \in \mathbb{N}$ we define the evaluation maps $$\mathrm{EV}_n: \tilde{\mathcal{M}}^n \to \mathrm{crit}(f)^n \times \mathrm{crit}(f)^n \cong \mathrm{crit}(f)^{2n},$$ $$(v_1,\ldots,v_n) \mapsto (\mathrm{ev}_1(v_j)_{1 \leq j \leq n}, \mathrm{ev}_2(v_j)_{1 \leq j \leq n}),$$ where $\tilde{\mathcal{M}}=\mathcal{F}^{-1}(0)$ as introduced above. For critical points $c_1,c_2$ of $h$ we define $A_n(c_1,c_2)$ to be the submanifold of $\mathrm{crit}(f)^n \times \mathrm{crit}(f)^n$ consisting of all tuples $((p_j)_{1 \leq j \leq n},(q_j)_{1 \leq j \leq n}) \in \mathrm{crit}(f)^n \times \mathrm{crit}(f)^n$ such that $p_1 \in W^u_h(c_1)$, $q_n \in W^s_h(c_2)$, and $p_{j+1}=q_j$ for $1 \leq j \leq n-1$. We shall prove the following theorem. \[mani\] For generic Riemannian metric $g$ on $M$ the set $\tilde{\mathcal{M}}$ has the structure of a finite dimensional manifold. Its local dimension $$\label{dim} \dim_v \tilde{\mathcal{M}} = \mathrm{ind} D_v =\mathrm{ind}_f(\mathrm{ev}_1(v))+\dim_{\mathrm{ev}_1(v)} \mathrm{crit}(f) -\mathrm{ind}_f(\mathrm{ev}_2(v))+m-1,$$ where $m=m(v)$ is the number of cascades. Moreover, for every $n \in \mathbb{N}$ and for every $c_1,c_2 \in \mathrm{crit}(h)$ the evaluation maps $\mathrm{EV}_n$ intersects $A_n(c_1,c_2)$ transversally. \[fhg0reg\] Assume that $f$ is Morse-Bott function on a compact manifold $M$, $h$ is a Morse-function on $\mathrm{crit}(f)$, and $g_0$ is a Riemannian metric on $\mathrm{crit}(f)$, such that $h$ and $g_0$ satisfy the Morse-Smale condition, i.e. stable and unstable manifolds of the gradient flow of $h$ with respect to $g_0$ on $\mathrm{crit}(f)$ intersect transversally. We say that a Riemannian metric $g$ on $M$ is ***[$(f,h,g_0)$-regular]{}*** if it satisfies the conditions of Theorem \[mani\]. **Proof of Theorem \[mani\]:** For a positive integer $\ell$ let $\mathcal{R}^\ell$ be the Banach manifold of Riemannian metrics on $M$ of class $C^\ell$. Let $$\mathcal{F}: \mathcal{R}^\ell \times \mathcal{B} \to \mathcal{E}$$ be defined by $$\mathcal{F}(g,v)=(\dot{x}_k+\nabla_g f(x_k))_{1 \leq k \leq m} =(\dot{x}_k+g^{-1}df(x_k))_{1 \leq k \leq m},$$ where $g^{-1} \colon T^*M \to TM$ is defined as the inverse of the map defined by $$g(\nabla_g f, \cdot)=df(\cdot).$$ We will prove that the universal moduli space $$\mathcal{U}^\ell:=\{(v,g) \in \mathcal{B} \times \mathcal{R}^\ell: \mathcal{F}(v,g)=0\}$$ is a separable manifold of class $C^\ell$. 
To show that, we have to verify that $$D_{v,g}:T_v \mathcal{B} \times T_g \mathcal{R}^\ell \to \mathcal{E}_v$$ given by $$\begin{aligned} D_{v,g}(\zeta,A) &=& (\partial_s \zeta_k+\nabla_{\zeta_k}\nabla f(x_k))_{1\leq k \leq m} -(g^{-1}Ag^{-1}df(x_k))_{1 \leq k \leq m}\\ &=& D_v\zeta-(g^{-1}Ag^{-1}df(x_k))_{1 \leq k \leq m}\end{aligned}$$ is onto for every $(v,g) \in \mathcal{U}^\ell$. Here $\nabla_{\zeta_k}$ is the Levi-Civita connection of the metric $g$. The tangent space of $\mathcal{R}^\ell$ at $g \in \mathcal{R}^\ell$ consists of all symmetric $C^\ell$-sections from $M$ to $T^*M \times T^* M$. Since $D_v$ is a Fredholm operator, it has a closed range and a finite dimensional cokernel. Hence $D_{v,g}$ has a closed range and a finite dimensional cokernel and it only remains to prove that its range is dense. To see this, let $$\eta \in (\mathcal{E}_v)^*=\bigoplus_{j=1}^m L^q_{-\delta} (\mathbb{R}, x^*_j TM), \quad \frac{1}{p}+\frac{1}{q}=1,$$ such that $\eta$ vanishes on the range of $D_{v,g}$, i.e. $$\label{reg1+} \sum_{j=1}^m \int_{\mathbb{R}}\langle \eta_j, (D_v \zeta)_j\rangle ds =0$$ for every $\zeta \in T_v \mathcal{B}$ and $$\label{reg2+} \sum_{j=1}^m \int_{\mathbb{R}}\langle \eta_j,g^{-1}A g^{-1}df(x_j)\rangle ds =0$$ for every $A \in T_g \mathcal{R}^\ell$. It follows from (\[reg1+\]) that $\eta_j$ is continuously differentiable for $1 \leq j \leq m$. Now (\[reg2+\]) implies that $\eta$ vanishes identically. This proves that $D_{v,g}$ is onto for every $(v,g) \in \mathcal{U}^\ell$. Now it follows from the implicit function theorem that $\mathcal{U}^\ell$ is a Banach-manifold. The differential $d \pi^\ell$ of the projection $$\pi^\ell: \mathcal{U}^\ell \to \mathcal{R}^\ell, \quad (v,g) \mapsto g$$ at a point $(v,g) \in \mathcal{U}^\ell$ is just the projection $$d \pi^\ell(v,g)\colon T_{(v,g)}\mathcal{U}^\ell \to T_g \mathcal{R}^\ell, \quad (\zeta,A) \mapsto A.$$ The kernel of $d\pi^\ell(v,g)$ is isomorphic to the kernel of $D_v$. Its image consists of all $A$ such that $(g^{-1} A g^{-1} df(x_k))_{1 \leq k \leq m} \in \mathrm{im}D_v$. Moreover, $\mathrm{im} d\pi^\ell(v,g)$ is a closed subspace of $T_g \mathcal{R}^\ell$, and, since $D_{v,g}$ is onto, it has the same (finite) codimension as the image of $D_v$. Hence $d\pi^\ell(v,g)$ is a Fredholm operator of the same index as $D_{v,g}$. In particular, the projection $\pi^\ell$ is a Fredholm map and it follows from the Sard-Smale theorem that for $\ell$ sufficiently large, the set $\mathcal{R}^\ell_{reg}$ of regular values of $\pi^\ell$ is dense in $\mathcal{R}^\ell$. Note that $g \in \mathcal{R}^\ell$ is a regular value of $\pi^\ell$ exactly if $D_v$ is surjective for every $v \in \mathcal{F}_g^{-1}(0)$. Here $\mathcal{F}_g=\mathcal{F}(\cdot,g)$. For $c>0$ let $\mathcal{U}^{c,\ell} \subset \mathcal{U}^\ell$ be the set of pairs $(v,g) \in \mathcal{U}^\ell$ such that $$||\partial_s x_j(s)||\leq ce^{-|s|/c}, \quad 1 \leq j \leq m, \qquad \frac{1}{c} \leq t_k \leq c, \quad 1 \leq j \leq m-1.$$ The space $$\tilde{\mathcal{M}}^{\ell,c}(g):=\{v:(v,g) \in \mathcal{U}^{\ell,c}\}$$ is compact for every $g$. Indeed, the uniform exponential decay prevents the cascades from breaking up into several pieces. It follows that the set $\mathcal{R}^{\ell,c}_{reg}$ consisting of all $g \in \mathcal{R}^\ell$ such that $D_v$ is onto for all $(v,g) \in \tilde{\mathcal{M}}^{\ell,c}(g)$ is open and dense in $\mathcal{R}^\ell$. 
Hence the set $$\mathcal{R}^{\infty,c}_{reg}:=\mathcal{R}^{\ell,c}_{reg} \cap \mathcal{R}$$ is dense in $\mathcal{R}^\ell$ with respect to the $C^\ell$-topology. Here $\mathcal{R}=\mathcal{R}^\infty$ denotes the Fréchet manifold of smooth metrics on $M$. Since this holds for every $\ell$ it follows that $\mathcal{R}^{\infty,c}_{reg}$ is dense in $\mathcal{R}$ with respect to the $C^\infty$-topology. Using compactness again one obtains that $\mathcal{R}^{\infty,c}_{reg}$ is also $C^\infty$-open. It follows that the set $$\mathcal{R}_{reg}:=\bigcap_{c \in \mathbb{N}} \mathcal{R}^{\infty,c}_{reg}$$ is a countable intersection of open and dense subsets of $\mathcal{R}$. To prove that the evaluation maps $\mathrm{EV}_n$ intersect $A_n(c_1,c_2)$ transversally for generic $g$, we show that the evaluation maps $\mathrm{ev}_j: \mathcal{U}^\ell \to \mathrm{crit}(f)$ for $j \in \{1,2\}$ are submersive. Let $(v,g) \in \mathcal{U}^\ell$ and $\xi \in T_{\mathrm{ev}_j(v,g)}\mathrm{crit}(f)$. We have to show that there exists $(\zeta,A) \in \mathrm{ker} D_{v,g}$ such that $$\label{submers} d(\mathrm{ev}_j)(v,g)(\zeta,A)=\xi.$$ Choose some arbitrary $(\zeta_0,A_0) \in T_v \mathcal{B} \times T_g \mathcal{R}^\ell$ such that $$d(\mathrm{ev}_j)(v,g)(\zeta_0,A_0)=\xi.$$ In the same way as one proved that $D_{v,g}$ is surjective one can also show that already $D_{v,g}$ restricted to $\{\zeta \in T_v \mathcal{B}: d (\mathrm{ev}_j) \zeta=0\} \times T_g \mathcal{R}^\ell$ is surjective. Hence there exists $(\zeta_1,A_1) \in T_v \mathcal{B} \times T_g \mathcal{R}^\ell$ such that $$D_{v,g}(\zeta_0,A_0)=D_{v,g}(\zeta_1,A_1)$$ and $$d(\mathrm{ev}_j)(v,g)(\zeta_1,A_1)=0.$$ Now set $$(\zeta,A):=(\zeta_0-\zeta_1,A_0-A_1).$$ Then $(\zeta,A)$ lies in the kernel of $D_{v,g}$ and satisfies (\[submers\]). This proves the theorem. $\square$\ \ For $c_1,c_2 \in \mathrm{crit}(h) \subset \mathrm{crit}(f)$ define $$\begin{aligned} \tilde{\mathcal{M}}^-(c_1)&:=&\{v \in \tilde{\mathcal{M}}: \mathrm{ev}_1(v) \in W^u_h(c_1)\},\\ \tilde{\mathcal{M}}^+(c_2)&:=&\{v \in \tilde{\mathcal{M}}: \mathrm{ev}_2(v) \in W^s_h(c_2)\},\\ \tilde{\mathcal{M}}(c_1,c_2)&:=& \tilde{\mathcal{M}}^-(c_1) \cap \tilde{\mathcal{M}}^+(c_2).\end{aligned}$$ The following corollary follows immediately from Theorem \[mani\]. For generic Riemannian metric $g$ on $M$ the spaces $\tilde{\mathcal{M}}^-(c_1)$, $\tilde{\mathcal{M}}^+(c_2)$, and $\tilde{\mathcal{M}}(c_1,c_2)$ are finite dimensional manifolds. Their local dimensions are $$\begin{aligned} \dim_v \tilde{\mathcal{M}}^-(c_1)&=&\mathrm{Ind}(c_1) -\mathrm{ind}_f(\mathrm{ev}_2(v))+m-1\\ \dim_v \tilde{\mathcal{M}}^+(c_2)&=&\mathrm{ind}_f(\mathrm{ev}_1(v)) +\dim_{\mathrm{ev}_1(v)}\mathrm{crit}(f)-\mathrm{Ind}(c_2)+m-1\\ \dim_v \tilde{\mathcal{M}}(c_1,c_2)&=&\mathrm{Ind}(c_1)-\mathrm{Ind}(c_2) +m-1.\end{aligned}$$ It follows from the Corollary above that for generic Riemannian metric $g$ the moduli space of flow lines with cascades $\mathcal{M}(c_1,c_2)$ is a manifold of dimension $\mathrm{Ind}(c_1)-\mathrm{Ind}(c_2)-1$ in a neighbourhood of an element $v=((x_j)_{1 \leq j \leq m},(t_j)_{1 \leq j \leq m-1})$ all of whose $t_j>0$. It remains to consider the case where some of the $t_j$’s are zero. In a neighbourhood of such an element the number of cascades may vary and we need a gluing theorem to parametrise such a neighbourhood.
In view of the transversality, the subspace $\mathcal{B}(c_1,c_2)$ of $\mathcal{B}$ for $c_1,c_2 \in \mathrm{crit}(h)$ defined by $$\mathcal{B}(c_1,c_2):=\{v \in \mathcal{B}: \mathrm{ev}_1(v) \in W^u_h(c_1),\,\,\mathrm{ev}_2(v) \in W^s_h(c_2)\}$$ is actually a submanifold. Recall that the evaluation map $\mathrm{EV}_n \colon \tilde{\mathcal{M}}^n \to \mathrm{crit}(f)^{2n}$ was defined by $\mathrm{EV}_n(v_1,\cdots,v_n) \mapsto (\mathrm{ev}_1(v_j)_{1 \leq j \leq n}, \mathrm{ev}_2(v_j)_{1 \leq j \leq n})$ and $A_n(c_1,c_2) \subset \mathrm{crit}(f)^{2n}$ was defined as the subspace of all tuples $((p_j)_{1 \leq j \leq n},(q_j)_{1 \leq j \leq n}) \in \mathrm{crit}(f)^{2n}$ satisfying $p_1 \in W^u_h(c_1), q_n \in W^s_h(c_2)$, and $p_{j+1}=p_j$ for $1 \leq j \leq n-1$. We next define for $T$ large enough the **pregluing map** $$\#^0:\mathrm{EV}_n^{-1}(A_n(c_1,c_2)) \times (T,\infty)^{n-1} \to \mathcal{B}(c_1,c_2).$$ Let $\textbf{v}=(v_k)_{1 \leq k \leq n}= ((x_{k,j})_{1 \leq j \leq m_k},( t_{k,j})_{1 \leq j \leq m_k-1}))_{1 \leq k \leq n} \in \mathrm{EV}_n^{-1}(A_n(c_1,c_2))$ and $\textbf{R}=(R_j)_{1 \leq j \leq n-1} \in (T,\infty)^{n-1}$. For $1 \leq p \leq n$ and $$\sum_{k=1}^{p-1} m_k-p+2 \leq i < \sum_{k=1}^p m_k-p+1$$ set $$s_i:=t_{p,i-\sum_{k=1}^{p-1}m_k+p-1}.$$ For $$\sum_{k=1}^{p-1} m_k-p+2 < i < \sum_{k=1}^p m_k-p+1$$ put $$y_i:=x_{p,i-\sum_{k=1}^{p-1}m_k+p-1}.$$ Define in addition $$y_1:=x_{1,1}, \quad y_{\sum_{k=1}^n m_k-n+1}:=x_{n,m_n}.$$ Recall that $x_{p,m_p}(s)$ converges as $s$ goes to $\infty$ to $\mathrm{ev}_2(v_p)$ and $x_{p+1,1}(s)$ converges as $s$ goes to $-\infty$ to $\mathrm{ev}_2(v_p)$ as well. In particular, it follows that there exist $\xi_p(s), \eta_p(s) \in T_{\mathrm{ev}_2(v_p)} M$ such that $x_{p,m_p}(s)=\exp_{\mathrm{ev}_2(v_p)}(\xi_p(s))$ for large negative $s$ and $x_{p+1,1}(s)=\exp_{\mathrm{ev}_2(v_p)}(\eta_p(s))$ for large positive $s$ For $1 \leq p \leq n-1$ and $$i=\sum_{k=1}^p m_k-p+1$$ put $$y_i:=x_{p,m_p} \#^0_{R_p} x_{p+1,1}$$ where $$x_{p,m_p} \#^0_{R_p}x_{p+1,1}:=$$ $$\left\{ \begin{array}{cc} x_{p,m_p}(s+R_p), & s \leq -R_p/2-1,\\ \exp_{\mathrm{ev}_2(v_p)}(\beta(-s-R_p/2)\eta_p(s+R_p)), & -R_p/2-1 \leq s \leq -R_p/2,\\ \mathrm{ev}_2(v_p), & -R_p/2 \leq s \leq R_p/2,\\ \exp_{\mathrm{ev}_2(v_p)}(\beta(s-R_p/2)\xi_p(s-R_p)), & R_p/2 \leq s \leq R_p/2+1,\\ x_{p+1,1}(s-R_p), & s \geq R_p/2+1. \end{array}\right.$$ Here $\beta: \mathbb{R} \to [0,1]$ is a cutoff function equal to $1$ for $s \geq 1$ and equal to $0$ for $s \leq 0$. We abbreviate for the number of cascades of the image of the pregluing map $\#^0$ $$\ell=\sum_{k=1}^n m_k-n+1.$$ Then we define $$v^0_{\textbf{R}}:=\#^0(\textbf{v},\textbf{R}):=((y_i)_{1 \leq i \leq \ell}, (s_i)_{1 \leq i \leq \ell-1}).$$ $v^0_{\textbf{R}}$ will in general only lie “close” to $\tilde{\mathcal{M}}$. We will next construct $v_{\textbf{R}} \in \tilde{\mathcal{M}}$ in a small neighbourhood of $v_{\textbf{R}}^0$. It can be shown that there exists a Riemannian metric $g$ on $M$ such that $\mathrm{crit}(f)$, $W^u_h(c_1)$, and $W^s_h(c_2)$ are totally geodesic with respect to $g$. For $\zeta \in T_v \mathcal{B}(c_1,c_2)$ let $$\rho(v,\zeta):\mathcal{E}_v \to \mathcal{E}_{\exp_v(\zeta)} \footnote{We denote the restriction of $\mathcal{E}$ to $\mathcal{B}(c_1,c_2)$ also by $\mathcal{E}$.}$$ be the parallel transport along the path $\tau \mapsto \exp_v(\tau \zeta)$ for $\tau \in [0,1]$. 
Define $$F_v:T_v \mathcal{B}(c_1,c_2) \to \mathcal{E}_v, \quad \zeta \mapsto \rho(v,\zeta)^{-1}\mathcal{F}(\exp_v(\zeta)).$$ Note that $$DF_v(0)=D\mathcal{F}(v).$$ For large $R \in \mathbb{R}$ and $i \in \mathbb{N}_0$ define $\gamma^i_{\delta,R}:\mathbb{R} \to \mathbb{R}$ by $$\gamma^i_{\delta,R}(s):=\left\{\begin{array}{cc} (1-\beta[-s])\gamma_\delta(s+R)+\beta(s)\gamma_\delta(s-R) & i \geq 1\\ (1-\beta[-s-R])\gamma_\delta(s+R)+\beta(s+R)\gamma_\delta(s-R) & i=0 \end{array}\right.$$ Here $\beta$ is a smooth cutoff function which equals $0$ for $s\leq -1$ and equals $1$ for $s \geq 0$. For a locally integrable real valued function $f\colon \mathbb{R} \to \mathbb{R}$ which has weak derivatives up to order $k$ we define the norms $$||f||_{k,p,\delta,R}:= \left\{\begin{array}{cc}||\gamma^1_{\delta,R}f||_p & k=0\\ \sum_{i=0}^k||\gamma^i_{\delta,R}\partial^i f||_p+||f(0)|| & k>0. \end{array}\right.$$ The $||\,||_{k,p,\delta, R}$-norm is equivalent to the $||\,||_{k,p,\delta}$-norm, but their ratio diverges as $R$ goes to infinity. For $1 \leq i \leq \ell$ set $$R^i:=\left\{ \begin{array}{cc} R_q & i=\sum_{k=1}^q m_k-q+1 \\ 0 & \mathrm{else}\end{array}\right.$$ We modify the Banachspace norm (\[norm\]) on $T_{v^0_{\textbf{R}}}\mathcal{B}(c_1,c_2)$ in the following way. For $\zeta \in T_{v^0_{\textbf{R}}}\mathcal{B}(c_1,c_2)$ we define $$||\zeta||_{\textbf{R}}:=\sum_{j=1}^\ell( ||\xi_{j,0}||_{1,p,\delta,R^j}+||\xi_{j,1}||+||\xi_{j,2}||)+\sum_{j=1}^{\ell-1} |\delta_j|.$$ Analoguously, we define for $\eta \in \mathcal{E}_{v^0_{\textbf{R}}}$ the norm $$||\eta||_{\textbf{R}}:=\sum_{j=1}^\ell||\eta_j||_{p,\delta,R^j}.$$ These norms were introduced in [@fukaya-oh-ohta-ono Section 18] and are required to guarantee the uniformity of the constant $c$ in (\[rightinv\]) below. Abbreviate $$D_{\textbf{R}}:=DF_{v^0_{\textbf{R}}}(0)=D_{v^0_{\textbf{R}}}.$$ Recall that $R_j>T$ for $1 \leq j \leq n-1$. It is shown in [@schwarz1 Chapter 2.5] that if $T$ is large enough then there exists a constant $c>0$ independent of $\textbf{R}$ and a right inverse $Q_{\textbf{R}}$ of $D_{\textbf{R}}$, i.e. $$\label{Q} D_{\textbf{R}} \circ Q_{\textbf{R}}=\mathrm{id},$$ such that $$\label{orth} \mathrm{im} Q_{\textbf{R}}=\mathrm{ker}(D_{\textbf{R}})^\perp$$ and $$\label{rightinv} ||Q_{\textbf{R}} \eta||_{\textbf{R}} \leq c||\eta||_{\textbf{R}},$$ for every $\eta \in \mathcal{E}_{v^0_{\textbf{R}}}$. Moreover, it follows from the construction of the pregluing map that there exist constants $c_0>0$ and $\kappa>0$ such that $$\label{est} ||F_{v^0_{\textbf{R}}}(0)||_{\textbf{R}} \leq c_0 e^{-\kappa ||\textbf{R}||}.$$ Now (\[Q\]), (\[orth\]), (\[rightinv\]), (\[est\]) together with the Banach Fixpoint theorem imply that for $||\textbf{R}||$ large enough there exists a constant $c>0$ and a unique $\xi_{\textbf{R}}:= \xi_{\textbf{v},\textbf{R}} \in \mathrm{ker}(D_{\textbf{R}})^\perp$ such that $$\label{glue} F_{v^0_{\textbf{R}}}(\xi_{\textbf{R}})=0, \quad ||\xi_{\textbf{R}}||_{\textbf{R}} \leq c||F_{v^0_{\textbf{R}}}(0)||.$$ For details see [@schwarz1 Chapter 2.5]. Let $U(\textbf{v})$ be a small neighbourhood of $\textbf{v}$ in $\mathrm{EV}^{-1}_n(A_n(c_1,c_2))$. Define the gluing map $$\#:U(\textbf{v}) \times (T,\infty)^{n-1} \to \tilde{\mathcal{M}}(c_1,c_2)$$ by $$\#(\textbf{w},\textbf{R}):= \exp_{w^0_{\textbf{R}}}(\xi_{\textbf{w},\textbf{R}}).$$ Let $$N:=\sum_{k=1}^n m_k.$$ Then $\mathbb{R}^N$ acts on $U(\textbf{v})$ by timeshift on each cascade of the first factor. 
The gluing map induces a map $$\hat{\#}:(U(\textbf{v})/\mathbb{R}^N) \times (T,\infty)^{n-1} \to \mathcal{M}(c_1,c_2)$$ which is an embedding for $T$ large enough.\ \ **Proof of Theorem \[manifold\]:** For $m \in \mathbb{N}$ let $\mathbb{N}_m:=\{1,\ldots,m\}$. For $I \subset \mathbb{N}_{m-1}$ let $$\mathcal{M}_{m,I}(c_1,c_2) \subset \mathcal{M}_m(c_1,c_2)$$ be the set of flow lines with cascades $((x_k)_{1 \leq k \leq m},(t_k)_{1 \leq k \leq m-1})$ in $\mathcal{M}_m(c_1,c_2)$ such that $$t_k>0 \quad \mathrm{if}\,\, k \in I, \qquad t_k=0 \quad \mathrm{if} \,\, k \notin I.$$ It follows from Theorem \[mani\] that for generic Riemannian metric $g$ on $M$ the space $\mathcal{M}_{m,I}(c_1,c_2)$ is a smooth manifold. Note that $$\mathcal{M}_m(c_1,c_2)=\bigcup_{I \subset \mathbb{N}_{m-1}} \mathcal{M}_{m,I}(c_1,c_2)$$ and $$\mathrm{int}\mathcal{M}_m(c_1,c_2)=\mathcal{M}_{m,\mathbb{N}_{m-1}} (c_1,c_2).$$ It follows from (\[dim\]) in Theorem \[mani\] that $$\dim\mathcal{M}_{m,\mathbb{N}_{m-1}}(c_1,c_2)=\mathrm{Ind}(c_1) -\mathrm{Ind}(c_2)-1.$$ In particular, $\mathcal{M}_m(c_1,c_2)$ has for generic $g$ the structure of a manifold with corners of dimension $\mathrm{Ind}(c_1)-\mathrm{Ind}(c_2)-1$. We put for $n \in \mathbb{N}$ $$\mathcal{M}_{\leq n}:=\bigcup_{1 \leq m \leq n} \mathcal{M}_m(c_1,c_2).$$ We show by induction on $n$ that for generic $g$ the set $\mathcal{M}_{\leq n}(c_1,c_2)$ can be endowed with the structure of a manifold of dimension $\mathrm{Ind}(c_1)-\mathrm{Ind}(c_2)-1$. This is clear for $n=1$. It follows from gluing that $\mathcal{M}_{\leq n}(c_1,c_2)$ can be compactified to a manifold with corners $\bar{\mathcal{M}}_{\leq n}(c_1,c_2)$ such that $$\partial \bar{\mathcal{M}}_{\leq n}(c_1,c_2) =\bigcup_{I \subsetneq \mathbb{N}_n} \mathcal{M}_{n+1,I}(c_1,c_2) =\partial \mathcal{M}_{n+1}(c_1,c_2).$$ Hence $$\mathcal{M}_{\leq n+1}(c_1,c_2)=\mathcal{M}_{\leq n}(c_1,c_2) \cup \mathcal{M}_{n+1}(c_1,c_2)$$ can be endowed with the structure of a manifold such that $$\dim \mathcal{M}_{\leq n+1}(c_1,c_2)=\dim\mathcal{M}_{\leq n}(c_1,c_2) =\mathrm{Ind}(c_1)-\mathrm{Ind}(c_2)-1.$$ This proves the theorem. $\square$ Morse-Bott homology ------------------- We assume in this subsection that $M$ is a compact manifold. We say that a quadruple $(f,h,g,g_0)$ consisting of a Morse-Bott function $f$ on $M$ a Morse function $h$ on $\mathrm{crit}(f)$, a Riemannian metric $g$ on $M$ and a Riemannian metric $g_0$ on $\mathrm{crit}(f)$ is a ***[regular Morse-Bott quadruple]{}*** if the following conditions hold. (i) : $h$ and $g_0$ satisfy the Morse-Smale condition, i.e. stable and unstable manifolds of the gradient of $h$ with respect to $g_0$ on $\mathrm{crit}(f)$ intersect transversally. (ii) : $g$ is $(f,h,g_0)$-regular, in the sense of Definition \[fhg0reg\]. Since the Morse-Smale condition is generic, see [@schwarz1 Chapter 2.3], it follows from Theorem \[mani\], that regular Morse-Bott quadruples exist in abundance. In particular, every pair $(f,g)$ consisting of a Morse function $f$ on $M$ and a Riemannian metric $g$ on $M$ which satisfy the Morse-Smale condition gives a regular Morse-Bott quadruple. For a pair $(f,h)$ consisting of a Morse-Bott function $f$ on $M$ and a Morse-function $h$ on $\mathrm{crit}(f)$, we define the chain complex $CM_*(M;f,h)$ as the $\mathbb{Z}_2$ vector space generated by the critical points of $h$ which is graded by the index. 
More precisely, the elements of $CM_k(M;f,h)$ are formal sums of the form $$\xi=\sum_{\substack{c \in \mathrm{crit}(h)\\ \mathrm{ind}(c)=k}}\xi_c c$$ with $\xi_c \in \mathbb{Z}_2$. For generic pairs $(g,g_0)$ of a Riemannian metric $g$ on $M$ and a Riemannian metric $g_0$ on $\mathrm{crit}(f)$, the moduli space of flow lines with cascades $\mathcal{M}(c_1,c_2)$ is a smooth manifold of dimension $$\dim{\mathcal{M}}(c_1,c_2)=\mathrm{ind}(c_1)-\mathrm{ind}(c_2)-1.$$ If $\dim{\mathcal{M}}(c_1,c_2)$ equals $0$, then $\mathcal{M}(c_1,c_2)$ is compact by Theorem \[fcc\]. Set $$n(c_1,c_2):=\#\mathcal{M}(c_1,c_2) \,\,\mathrm{mod}\,2.$$ We define a boundary operator $$\partial_k: CM_k(M;f,h) \to CM_{k-1}(M;f,h)$$ by linear extension of $$\partial_k c=\sum_{\mathrm{ind}(c')=k-1}n(c,c')c'$$ for $c \in \mathrm{crit}(h)$ with $\mathrm{ind}(c)=k$. The usual gluing and compactness arguments imply that $$\partial^2=0.$$ This gives rise to homology groups $$HM_k(M;f,h,g,g_0):=\frac{\mathrm{ker}\partial_{k}} {\mathrm{im}\partial_{k+1}}.$$ \[continuation\] Let $(f^\alpha,h^\alpha,g^\alpha,g_0^\alpha)$ and $(f^\beta,h^\beta,g^\beta,g_0^\beta)$ be two regular quadruples. Then the homologies $HM_*(M;f^\alpha,h^\alpha,g^\alpha,g_0^\alpha)$ and $HM_*(M;f^\beta,h^\beta,g^\beta,g_0^\beta)$ are naturally isomorphic. **Proof:** Pick some $\ell \in \mathbb{N}$ and choose for $1 \leq k \leq \ell$ smooth functions $f_k \in C^\infty(\mathbb{R} \times M, \mathbb{R})$ and smooth families of Riemannian metrics $g_{k,s}$ on $TM$ with $f_k(s, \cdot)$ and $g_{k,s}$ independent of $s$ for $|s| \geq T$ for some large enough constant $T>0$ such that $$f_1(-T)=f^\alpha, \quad f_\ell(T)=f^\beta, \quad f_k(T)=f_{k+1}(-T),\,\, 1 \leq k \leq \ell-1$$ and $$g_{1,-T}=g^\alpha, \quad g_{\ell,T}=g^\beta, \quad g_{k,T}=g_{k+1,-T},\,\, 1 \leq k \leq \ell-1.$$ We assume further that $f_k(T)$ is Morse-Bott for $1 \leq k \leq \ell-1$. For $2 \leq k \leq \ell$ let $r_k \in \mathbb{R}_{\geq}$ be nonnegative real numbers. Choose smooth Morse functions $h_1 \in C^\infty((-\infty,0] \times \mathrm{crit}(f^\alpha),\mathbb{R})$, $h_{\ell+1} \in C^\infty([0,\infty) \times \mathrm{crit}(f^\beta),\mathbb{R})$, and $h_k \in C^\infty([0,r_k] \times \mathrm{crit}(f_{k+1}(T)),\mathbb{R})$ and smooth families of Riemannian metrics $g_{0,1,s}$ on $\mathrm{crit}(f^\alpha)$ for $s \in (-\infty,0]$ and $g_{0,\ell+1,s}$ on $\mathrm{crit}(f^\beta)$ for $s \in [0,\infty)$ and $g_{0,k,s}$ on $\mathrm{crit}(f_{k+1}(T))$ for $s \in [0,r_k].$ They are required to fulfill $$h_1(s)=h^\alpha,\,\,g_{0,1,s}=g_0^\alpha,\,\,s \leq -T,$$ $$h_{\ell+1}(s)=h^\beta,\,\, g_{0,\ell+1,s}=g_0^\beta,\,\, s \geq T.$$ For $c_1 \in \mathrm{crit}(h^\alpha)$, $c_2 \in \mathrm{crit}(h^\beta)$, $m_1,m_2 \in \mathbb{N}_0$ we consider the following flow lines from $c_1$ to $c_2$ with $m=m_1+m_2+\ell$ cascades $$(\textbf{x},\textbf{T})=((x_k)_{1 \leq k \leq m},(t_k)_{1 \leq k \leq m-1})$$ for $x_k \in C^\infty(\mathbb{R},M)$ and $t_k \in \mathbb{R}_{\geq}:=\{r \in \mathbb{R}: r \geq 0\}$ which satisfy the following conditions: (i) : $x_k$ are nonconstant solutions of $$\dot{x}_k(s)=-\nabla_{\tilde{g}_{k,s}} \tilde{f}_k(s,x_k),$$ where $$\tilde{f}_k=\left\{\begin{array}{cc} f^\alpha & 1 \leq k \leq m_1\\ f_{k-m_1} & m_1+1 \leq k \leq m_1+\ell\\ f^\beta & m_1+\ell+1 \leq k \leq m \end{array}\right.$$ and $$\tilde{g}_k=\left\{\begin{array}{cc} g^\alpha & 1 \leq k \leq m_1\\ g_{k-m_1} & m_1+1 \leq k \leq m_1+\ell\\ g^\beta & m_1+\ell+1 \leq k \leq m.
\end{array}\right.$$ (ii) : There exists $p_1 \in W^u_{h^\alpha}(c_1)$ and $p_2 \in W^s_{h^\beta}(c_2)$ such that $\lim_{s \to -\infty}x_1(s)=p_1$ and $\lim_{s \to \infty}x_m(s)=p_2$. (iii) : denote $$\tilde{h}_k=\left\{\begin{array}{cc} h^\alpha & 1 \leq k \leq m_1-1\\ h_{k-m_1+1} & m_1 \leq k \leq m_1+\ell-1\\ f^\beta & m_1+\ell \leq k \leq m-1 \end{array}\right.$$ and $$\tilde{g}_{0,k}=\left\{\begin{array}{cc} g^\alpha_0 & 1 \leq k \leq m_1-1\\ g_{0,k-m_1+1} & m_1 \leq k \leq m_1+1+\ell-1\\ g^\beta_0 & m_1+\ell \leq k \leq m-1. \end{array}\right.$$ For $1 \leq k \leq m-1$ there are Morse-flow lines $y_k$ of $h$, i.e. solutions of $$\dot{y}_k(s)=-\nabla_{\tilde{h}_{0,k,s}}\tilde{h}_k(s,y_k),$$ such that $$\lim_{s \to \infty}x_k(s)=y(0)$$ and $$\lim_{s \to -\infty}x_{k+1}(s)=y(t_k).$$ (iv) : $t_{k+m_1-1}=r_k$ for $2 \leq k \leq \ell$. For generic choice of the data the space of solutions of (i) to (iv) is a smooth manifold whose dimension is given by the difference of the indices of $c_1$ and $c_2$. If $I(c_1)=I(c_2)$ then this manifold is compact and we denote by $n(c_1,c_2) \in \mathbb{Z}_2$ its cardinality modulo 2. We define a map $$\Phi^{\alpha \beta}: CM_*(M;f^\alpha,h^\alpha) \to CM_*(M;f^\beta,h^\beta)$$ by linear extension of $$\Phi^{\alpha \beta} c=\sum_{\substack{c' \in \mathrm{crit}(h^\beta)\\ \mathrm{ind}(c')=\mathrm{ind}(c)}} n(c,c') c'$$ where $c \in \mathrm{crit}(f^\alpha)$. Standard arguments, see [@schwarz1 Chapter 4.1.3], show that $\Phi^{\alpha\beta}$ induces isomorphisms on homologies $$\phi^{\alpha \beta}: HM_*(M;f^\alpha,h^\alpha,g^\alpha,g_0^\alpha) \to HM_*(M;f^\beta,h^\beta,g^\beta,g_0^\beta)$$ which satisfy $$\phi^{\alpha \beta} \circ \phi^{\beta \gamma}=\phi^{\alpha \gamma}, \quad \phi^{\alpha \alpha}=\mathrm{id}.$$ This proves the theorem. $\square$\ \ Theorem \[continuation\] allows us to define the **Morse-Bott homology** of $M$ by $$HM_*(M):=HM_*(M;f,h,g,g_0)$$ for some regular quadruple $(f,h,g,g_0)$. Either for the special case of a Morse function or the case where $f$ vanishes identically one obtains that Morse-Bott homology is isomorphic to Morse homology. Hence we have proved the following Corollary. Morse-Bott homology of a compact manifold $M$ is isomorphic to the Morse-homology of $M$ and hence also to the singular homology of $M$. [99]{} K.Cieliebak, R.Gaio, I.Mundet, D.Salamon,*The symplectic vortex equations and invariants of Hamiltonian group actions* Preprint 2001. K.Cieliebak, R.Gaio, D.Salamon, *J-holomorphic curves, moment maps, and invariants of Hamiltonian group actions*, IMRN **10** (2000),813-882. C.-H.Cho *Holomorphic discs, spin structures and the Floer cohomology of the Clifford torus*, Thesis, Univ. of Wisconsin-Madison, (2003). C.-H.Cho, Y. Oh *Floer cohomology and disc instantons of Lagrangian torus fibers in fano toric manifolds*, math.SG/0308225 (2003). A.Floer, H.Hofer, D.Salamon, *Transversality in elliptic Morse theory for the symplectic action*, Duke Math.Journal, **80**(1996), 251-292. A.Floer, Morse theory for Lagrangian intersections, *J. Diff. Geom.* **28** (1988), 513-547. A.Floer, A relative Morse index for the symplectic action, *Comm. Pure Appl. Math.* **41** (1988), 393-407. A.Floer, The unregularized gradient flow of the symplectic action, *Comm. Pure Appl. Math.* **41** (1988), 775-813. A.Floer, Wittens complex and infinite dimensional Morse theory, *J. Diff. Geom.* **30** (1989), 207-221. A.Floer, Symplectic fixed points and holomorphic spheres, *Comm. Math. Phys.* **120** (1989), 575-611. 
H.Hofer, C.Taubes, A.Weinstein, E.Zehnder (editors), *The Floer Memorial Volume*, Birkhäuser, Basel, 1995. K.Fukaya, K.Ono, *Arnold conjecture and Gromov-Witten invariants*, Topology *38* (1999), 933-1048. K.Fukaya, Y.Oh, H.Ohto, K.Ono *Lagrangian Intersection Floer theory - Anomaly and Obstruction -*, Kyoto University, 2000. U.Frauenfelder, *Floer homology of symplectic quotients and the Arnold-Givental conjecture*, thesis, (2003). A.Givental, *Periodic maps in symplectic topology*, Funct. Anal. Appl. **23:4** (1989), 37-52. R.Gaio, D.Salamon *Gromov-Witten invariants of symplectic quotients and adiabatic limits* H.Hofer, D.Salamon, *Floer homology and Novikov rings*, in [@floer-memorial]. J.Jones, J.Rawnsley, D.Salamon, *Instanton homology and donaldson polynomials*, unpublished notes. L.Lazzarini, *Pseudoholomorphic curves and the Arnold-Givental conjecture*, Thesis 1999. D.McDuff and D.Salamon, *Introduction to Symplectic Topology*, Oxford University Press, 1995, 2nd edition 1998. D.McDuff and D.Salamon, *J-holomorphic Curves and Quantum Cohomology*, AMS, University Lecture Series, Vol. **6**, Providence, Rhode Island, 1994. I.Mundet, *Yang-Mills-Higgs theory for symplectic fibrations*, PhD thesis, Madrid, April 1999. Y.Oh, *Floer cohomology of Lagrangian intersections and pseudoholomorphic disks I*, Communications in Pure and Applied Mathematics, **46** (1993), 949-994. Y.Oh, *Floer cohomology of Lagrangian intersections and pseudoholomorphis disks II*, Communications in Pure and Applies Mathematics, **46** (1993), 995-1012. Y.Oh, *Floer cohomology of Lagrangian intersections and pseudoholomorphic disks III. Arnold-Givental conjecture*, in [@floer-memorial]. Y.Oh, *Floer cohomology, spectral sequences, and the Maslov class of Lagrangian embeddings*, Internat.Math.Res.Notices (1996),no.7, 305-346. S.Piunikhin, D.Salamon. M.Schwarz *Symplectic Floer-Donaldson theory and quantum cohomology*, Contact and Symplectic Geometry, edited by C.B. Thomas, Newton Institute Publications, Cambridge University Press, 1996, 171-200. J.Robbin and D.Salamon, *The Spectral flow and the Maslov index*, Bulletin of the LMS **27** (1995), 1-33. J.Robbin and D.Salamon, *The Maslov index for paths*, Topology **32** (1993), 827-844. D.Salamon, *Morse theory, the Conley index and Floer homology*, Bull.London Math.Soc. **22** (1990), 113-140. D.Salamon, *Lectures on Floer homology*, In Symplectic Geometry and Topology, edited by Y.Eliashberg and L.Traynor, IAS/Park City Mathematics series, Vol **7** (1999), 143-230. D.Salamon and E.Zehnder, *Morse theory for periodic solutions of Hamiltonian systems and the Maslov index*, Comm. Pure. Appl. Math. **45** (1992), 1303-1360. M.Schwarz, *Morse homology*, Birkhäuser Verlag (1993). M.Schwarz, *Cohomology operations from $S^1$-cobordisms in floer theory*, Thesis 1995. K.Uhlenbeck, *Connections with $L^p$-bounds on curvature*, Comm.Math.Phys.**83** (1982), 31-42. K.Wehrheim, *Uhlenbeck compactness*, textbook to appear. (U. Frauenfelder) Departement of Mathematics, Hokkaido University, Sapporo 060-0810, Japan E-mail address: [email protected] [^1]: Partially supported by Swiss National Science Foundation [^2]: We abbreviate for manifolds $A_2 \subset A_1$ and $B_2 \subset B_1$ by $C^\infty((A_1,A_2),(B_1,B_2))$ the space of smooth maps from $A_1$ to $B_1$ which map $A_2$ to $B_2$. 
[^3]: Two submanifolds $L_1,L_2 \subset M$ are said to intersect cleanly if their intersection is a manifold such that for each $x \in L_1 \cap L_2$ it holds that $T_x (L_1 \cap L_2)=T_x L_1 \cap T_x L_2$. [^4]: Some authors use the convention that the moment map takes values in the dual of the Lie algebra. Since we have an inner product we can identify the Lie algebra with its dual. [^5]: The centraliser $Z(\mathfrak{g})$ consists of all $\xi \in \mathfrak{g}$ such that $[\xi,\eta]=0$ for every $\eta \in \mathfrak{g}$ [^6]: Most of the results of this paper could be generalized to the case, where we replace (H3) by the following weaker assumption $(H3')$\ \ **(H3’)** *For every smooth map $v:(B,\partial B) \to (M,L_j)$, we have $$\int_{B} v^* \omega=0.$$* However, if we only assume (H3’) instead of (H3), then our path space will in general neither be connected nor simply connected. In particular, there will be no well defined action functional. [^7]: See (\[Cdelta\]) for the definition of the space $C^\infty_\delta$. [^8]: Note that the differentiable structure of $\mathcal{B}$ is independent of the metric $g$ on $M$
--- abstract: 'Stellar streams result from the tidal disruption of satellites and star clusters as they orbit a host galaxy, and can be very sensitive probes of the gravitational potential of the host system. We select and study narrow stellar streams formed in a Milky-Way-like dark matter halo of the Aquarius suite of cosmological simulations, to determine if these streams can be used to constrain the present day characteristic parameters of the halo’s gravitational potential. We find that orbits integrated in static spherical and triaxial NFW potentials both reproduce the locations and kinematics of the various streams reasonably well. To quantify this further, we determine the best-fit potential parameters by maximizing the amount of clustering of the stream stars in the space of their actions. We show that using our set of Aquarius streams, we recover a mass profile that is consistent with the spherically-averaged dark matter profile of the host halo, although we ignored both triaxiality and time evolution in the fit. This gives us confidence that such methods can be applied to the many streams that will be discovered by the Gaia mission to determine the gravitational potential of our Galaxy.' author: - 'Robyn E. Sanderson' - 'Johanna Hartke & Amina Helmi' bibliography: - 'literature.bib' title: | Modeling the gravitational potential of a cosmological dark matter halo\ with stellar streams --- Introduction {#sec:intro} ============ Stellar streams are the result of the accretion of globular clusters or dwarf galaxies onto a more massive host galaxy, and are formed as their stars are tidally stripped under the influence of the potential of the host. The knowledge of a stream’s trajectory provides a constraint on the potential and thus on the matter distribution and shape of the dark halo [@2009BinneyA; @2010LawA; @2009Willett; @2010Newberg; @2013MNRAS.433.1826S]. There exist many streams in the Milky Way halo; among the most well-studied is the stream associated with the Sagittarius dwarf galaxy [@1994Ibata; @2000Ivezic; @2000Yanny]. This stream is a good example of how assumptions about the symmetry, shape, and functional form of the Milky Way’s dark halo, combined with incomplete knowledge of the phase-space of stars in the tidal stream, can lead to conflicting conclusions about the mass distribution of the Milky Way (MW). Based on 3D positions and radial velocities (a total of 4 phase-space coordinates) of about 75 carbon stars, @2001Ibata argued that the dark halo should be nearly spherical since the stars’ positions roughly followed a great circle on the sky. Analysis of parts of the stream discovered with the Sloan Digital Sky Survey suggested that the mass distribution could be oblate [@2004Martinez], and a similar conclusion was reached using the precession of the orbital plane of the stream’s M giants from 2MASS [@2005Johnston]. On the other hand, the radial velocities of the leading stream’s M giants from 2MASS clearly favored a prolate shape [@2004Helmi]. These works used the 4D phase-space coordinates of several hundred stars. The conundrum was solved by @2010LawA who showed that a triaxial halo could reconcile the angular position of the stream with its radial velocities. In their model, a logarithmic density profile was assumed with axis ratios and orientation constant with radius, leading to a best fit where the disk was parallel to the intermediate axis of the dark halo; a dynamically untenable situation. 
More recently, @2013Vera-Ciro argued that it is not necessary to assume a constant shape with radius, and that there is enough freedom in the data to allow for a model that is oblate and aligned with the disk at small radii. Furthermore, the authors find that if the gravitational contribution of the Large Magellanic Cloud is included, the resulting best-fit Milky Way halo resembles those in cosmological simulations at large radii. The @2010LawA model in the outskirts may be seen as an effective potential, which is the sum of that of the LMC and of the underlying halo of the Galaxy. Although @2013Ibata argue that a spherical halo with an unusual rising rotation curve out to nearly 50 kpc can fit the data, the velocities are clearly less well reproduced in this model. @Belokurov2014 also argue that the azimuthal precession rate can be used to measure the radial dependence of the mass distribution. Clearly, thus far not all of the available data has been used optimally nor has the modelling been sufficiently general from assumptions about the mass distribution: its shape, radial profile, and even the effect of substructures like the LMC, to reach its full potential. How then can the mass distribution of the MW’s dark halo be unambiguously determined? One could for example try to fit more than one stream, or try to obtain the remaining phase-space coordinates for the stream being fitted, in the hopes that this could break the degeneracy. Given that stars in a single stream all have similar orbits, one would expect that even perfect and complete data on a single stream would be insufficient to break all the degeneracies. It is unclear how many streams, or which ones, would be sufficient to do this. However, the Gaia mission [@2001Perryman], launched in December 2013, will at least make it possible to analyze multiple streams simultaneously, by measuring the positions and velocities of 1 billion Milky Way stars, including many halo stars. Gaia will measure full six-dimensional phase-space coordinates for roughly 15 percent of these stars, and five-dimensional coordinates for the remaining 85 percent. This dataset will likely contain hundreds of streams [@1999Helmi] and its uniformity will enable simultaneous analysis of multiple streams. Even in this case, however, the fact would remain that the MW’s dark halo does not precisely follow a particular functional form; it is clear from the example of the Sgr stream that our assumptions do affect the results of the fit. In this work we explore how Gaia’s upcoming observations can help resolve some of these problems, and test how simplifying assumptions about the potential are projected onto both the ability of orbits to resemble a given stream, and the resulting best-fit profile. Since many stream-fitting algorithms [e.g., @2011MNRAS.417..198V; @2014MNRAS.443..423S; @2014ApJ...794....4P; @2014ApJ...795...94B] compare the positions and velocities of stream stars to models in 6-dimensional phase space (or less), our first goal is to determine whether a center-of-mass orbit integrated in an spherical instead of triaxial potential lined up equally well (or badly) for streams on different kinds of orbits. Though streams do not exactly follow orbits [e.g. 
@2009BinneyA], the degree to which an integrated orbit lines up with the stars of a stream is a simple proxy for whether a given potential will be able to produce a stream that fits the observations; in fact to save computing time the best-fit single orbit is often used as a starting point for the progenitor’s center-of-mass orbit when searching parameter space with N-body simulations [e.g. @fardal:2006aa; @2010Law]. We wish to determine whether this strategy will be effective for the streams produced self-consistently from satellites in the Aquarius simulations. Additionally, the degree to which streams lie in or out of a plane has been used to conjecture about the spherical symmetry (or lack thereof) of the MW halo [e.g. @2001Ibata]. We wish to determine whether the Aquarius streams can be successfully used in this way, and if so, which streams are most sensitive. Our second goal is to test the robustness of the results of a new potential-fitting algorithm, based on maximizing the information content of the action space of stream stars [@2015ApJ...801...98S], if the potential being fit was substantially less complicated than the real potential. Unlike many methods this one does not directly compare positions and velocities of stream stars with a model, but does analyze multiple streams simultaneously. We want to test whether the results from fitting a simple spherical potential will still reflect the true mass distribution of the halo to the extent permitted by such an oversimplified model. To conduct the two tests we selected stellar streams produced from the cosmological, dark-matter-only N-body simulation Aquarius A [@2008Springel hereafter S08] via stellar tagging according to a semianalytic model of star formation [@2010Cooper] and used them to evaluate different potential models. In Section \[sec:data\], we describe the two models for the dark halo: a triaxial and a spherical NFW potential both fit to the known dark-matter distribution of the simulated halo. In Section \[sec:integration\] we selected 15 structures based on their streamy appearance and low mass (i.e. narrow width) and integrated center-of-mass orbits for each stream in the two different potential models, to see how well the orbits traced the streams. Then in Section \[sec:KLD\] we fit a spherical NFW model simultaneously to all 15 streams using the @2015ApJ...801...98S action-clustering method and compared the best-fit result to the spherically averaged DM distribution from the N-body simulation. In Section \[sec:discussion\] we discuss our results and implications for future work. Data and models {#sec:data} =============== ![image](allsky.png){width=".95\hsize"} The Aquarius project is a suite of N-body simulations of Milky-way sized halos run in a $\Lambda$CDM cosmology (S08). Six different halos, labeled A to F, were simulated, each of them at different resolutions, labeled by the numbers 1 (highest) to 5 (lowest). Stellar populations were associated to subsets of CDM particles [@2010Cooper] using the semi-analytic <span style="font-variant:small-caps;">Galform</span> model [@2000Cole]. In this work we focus on the halo Aquarius-A-2 (Aq-A-2) at redshift $z = 0$. The mass, shape and orientation of the halo change over time [@2011Vera], and it contains a population of subhalos resolved down to about $10^5$ solar masses (S08). 
From the set of dark matter particles tagged by @2010Cooper, we select those associated with infalling luminous satellites with total stellar mass between $1\times 10^{3}\;M_{\odot}$ and $5\times 10^{5}\;M_{\odot}$ and which gave rise to structures that appeared spatially coherent or “streamy" (i.e. long and thin) in position space [@2011Helmi]. The selected streams are shown in Figure \[fig:all\_streams\] as a sky projection viewed from the centre-of-mass of the host halo. Different streams are shown in different colors in this and subsequent plots. Potentials and parameters {#ssec:potentials} ------------------------- We use the Navarro-Frenk-White (NFW) profile [@1996NFW] to describe the mass profile of a dark halo with scale radius $r_s$ and scale density $\rho_s$. In order to avoid degeneracies in action space when using the KLD method (cf. section \[sec:KLD\]), we use the enclosed mass at the scale radius, $$M_s \equiv M_{encl}(r_s) = 4\pi\rho_s r_s^3 \left( \ln 2 - \frac{1}{2} \right), \label{eq:MRsrhos}$$ as one of the two parameters in the potential rather than $\rho_s$ (the other parameter is the scale radius $r_s$). In terms of $M_s$ and $r_s$, the potential is $$\Phi(r) = -G \frac{M_s}{\ln 2 - 1/2} \frac{\ln (1 + r/r_s)}{r} \label{eq:PhiMsRs}$$ where $G$ is Newton’s constant. For a spherical NFW halo, the radius $r$ is simply defined as $r^2 = x^2 + y^2 + z^2$. To produce a triaxial NFW halo with the same overall mass and scale radius, we follow @2008Vogelsberger and define the ellipsoidal radius $$\label{eq:rEllipsoidal} r_E^2 \equiv \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 + \left(\frac{z}{c}\right)^2,$$ where $a$, $b$, and $c$ are the relative lengths of the major, intermediate, and minor axes respectively. In this orientation the major axis of the ellipsoid is therefore aligned with the $x$ axis, and so forth. In order to maintain the proper normalization we require $a^2 + b^2 + c^2 = 3$. We then replace $r$ in the spherical NFW potential with the quantity $$\label{eq:rTilde} \tilde{r} = \frac{(r_a + r)r_E}{r_a + r_E},$$ where $r_a$ is the scale over which the potential shape transitions from ellipsoidal to near spherical. As in @2008Vogelsberger we set $r_a=2r_s$. This produces a halo that is ellipsoidal in the center and becomes spherical for $r \gg r_a$. We use the axis ratios for the potential of Aq-A determined by [@2011Vera] using the method for defining isopotential contours described in [@2007Hayashi]. The shape of Aq-A changes as a function of radius; we take the axis ratios of the potential at the scale radius from Figure A2 of [@2011Vera] ($b/a=0.90$, $c/a=0.85$) and scale the axis lengths to the proper normalization for the ellipsoidal radius, obtaining $a = 1.09$, $b = 0.98$, and $c = 0.93$. The rotation matrix that transforms from the coordinate system of the Aquarius simulations $(x_A,y_A,z_A)$ to the one aligned with the ellipsoidal axes $(x,y,z)$ was determined by @2011Vera: $$\begin{pmatrix} x\\ y\\ z\\ \end{pmatrix} = \begin{pmatrix} 0.24 & -0.73 & 0.64\\ -0.12 & 0.63 & 0.76\\ -0.96 & -0.27 & 0.07\\ \end{pmatrix} \begin{pmatrix} x_A\\ y_A\\ z_A\\ \end{pmatrix}$$ We use this matrix to rotate the coordinates of the selected stream stars. 
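For concreteness, the potential models just described can be evaluated with a few lines of Python. The sketch below is our own illustration (the function names, the kpc–km s$^{-1}$–$M_\odot$ unit system, and the numerical value of $G$ are choices not specified in the text), and it assumes the input coordinates have already been rotated into the frame aligned with the ellipsoid axes using the rotation matrix above:

```python
import numpy as np

G = 4.30092e-6  # gravitational constant in kpc (km/s)^2 / Msun (our unit choice)

def nfw_potential(r, M_s, r_s):
    """Spherical NFW potential with M_s the mass enclosed at the scale radius r_s."""
    return -G * M_s / (np.log(2.0) - 0.5) * np.log(1.0 + r / r_s) / r

def effective_radius(x, y, z, r_s, a=1.09, b=0.98, c=0.93):
    """Effective radius r~: ellipsoidal (radius r_E) in the centre,
    approaching the spherical radius r for r >> r_a = 2 r_s."""
    r = np.sqrt(x**2 + y**2 + z**2)
    r_E = np.sqrt((x / a)**2 + (y / b)**2 + (z / c)**2)
    r_a = 2.0 * r_s
    return (r_a + r) * r_E / (r_a + r_E)

def triaxial_nfw_potential(x, y, z, M_s, r_s):
    """Triaxial NFW potential: the spherical profile evaluated at r~.
    (x, y, z) must already be in the ellipsoid-aligned frame."""
    return nfw_potential(effective_radius(x, y, z, r_s), M_s, r_s)

# Example with the S08-based parameters used for the orbit integrations:
M_s, r_s = 1.87e11, 15.19  # Msun, kpc
print(triaxial_nfw_potential(30.0, 10.0, 5.0, M_s, r_s))
```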
[rlcccc|c]{} $M_{200}$ & $10^{12} M_\odot$ & 1.842 & – & 1.842 & $1.322$ & $1.530^{+0.818}_{-0.402}$\ $r_{200}$ & kpc & 245.88 & – & 245.88 & – & $230.28^{+35.35}_{-22.26}$\ $\delta_V$ & $10^4$ & 2.060 & 2.130 & 2.038 & 1.529 & $1.230^{+1.364}_{-0.692}$\ $c_{NFW}$ & – & – & 16.19 & – & – & $13.18^{+4.54}_{-3.76}$\ $r_{-2}$ & kpc/$h$ & – & – & 11.15 & – & $12.82^{+3.38}_{-1.82}$\ $\rho_{-2}$ & $10^6 h^2 M_\odot$ [$\textrm{kpc}^{-3}$]{} & – & 10.199 & 7.332 & – & $6.156^{+6.828}_{-3.465}$\ $\rho_{s}$ & $10^6 M_\odot$ [$\textrm{kpc}^{-3}$]{} & – & $21.98$ & – & $15.78$ & $13.27^{+14.72}_{-7.47}$\ $r_s$ & kpc & – & 15.19 & – & 15.19 & $17.47^{+4.60}_{-2.48}$\ $M_s$ & $10^{12} M_\odot$ & – & $0.187$ & – & $0.134$ & $0.172^{+0.057}_{-0.020}$\ ${\ensuremath{r_{\mathrm{max}}}}$ & kpc & 28.14 & 32.87 & 28.14 & 32.87 & $37.77^{+9.95}_{-5.36}$\ ${\ensuremath{v_{\mathrm{max}}}}$ & km [$\textrm{s}^{-1}$]{} & 208.49 & 248.97 & 208.49 & 210.96 & $217.44^{+53.58}_{-35.80}$\ \[tab:par\] For the mass and scale radius of the Aq-A-2 halo, we have several different options. S08 and @2010Navarro [hereafter N10] both determine slightly different sets of halo parameters for Aq-A-2 that lead to different values for $M_s$ assuming a spherical NFW halo; both papers find the same value for $r_s$ under this assumption. To obtain a value for $r_s$, S08 determine the radius $r_{200}$ within which the virial mass $M_{200}$ is enclosed (the mass enclosed in a sphere with average density 200 times the critical value). Then, from the peak value of the circular velocity curve ${\ensuremath{v_{\mathrm{max}}}}$ at ${\ensuremath{r_{\mathrm{max}}}}$, they determine the characteristic density contrast $\delta_V$, $$\delta_V \equiv 2 \left(\frac{{\ensuremath{v_{\mathrm{max}}}}}{H_0 {\ensuremath{r_{\mathrm{max}}}}}\right)^2,$$ which can be converted to the standard NFW concentration $c$ via [@1996NFW] $$\label{eq:deltaV} \delta_c = 7.213 \delta_V = \frac{200}{3} \frac{c^3}{\ln(1 + c) - \frac{c}{1 + c}}.$$ This results in $c = 16.19$ for Aq-A-2, assuming that the halo is well fit by an NFW profile. The virial radius is then related to the NFW scale radius by $$r_s = r_{200}/c = 15.19\;\text{kpc}. \label{eq:Rsr200c}$$ $M_{200}$ and $r_{200}$ are related to the scale density $\rho_s$ via $$\begin{aligned} M_{200} &=& M_{\text{encl}}(r_{200}) = 4\pi\rho_s (r_{200}/c)^3 \left( \ln (1 + c) - \frac{c}{1 + c} \right) \nonumber \\ \rho_s &=& \frac{M_{200}}{4\pi (r_{200}/c)^3 \left( \ln (1 + c) - \frac{c}{1 + c} \right)}, \label{eqn:rho1}\end{aligned}$$ or, equivalently, to $\delta_V$ via: $$\rho_s = \delta_c \rho_{crit} = 7.213 \delta_V \frac{3 H^2}{8\pi G}. \label{eqn:rho2}$$ Both relations result in a value for $M_s$ (using Equation \[eq:MRsrhos\]) of $1.87\times10^{11}\ M_{\odot}$. On the other hand, N10 characterize the same halo by determining the radius $r_{-2}$ where the logarithmic slope of the profile, $\gamma (r) = -\text{d} \ln \rho / \text{d} \ln r$, equals the isothermal value, $\gamma = 2$. The density at $r_{-2}$ is denoted as $\rho_{-2}$. For an NFW profile, $r_{-2} = r_s$ and $4\rho_{-2} = \rho_s$. The value obtained for $r_s$ this way is identical to that obtained by finding $r_{200}$ and applying Equation \[eq:Rsr200c\], but the value obtained for $M_s$ using this method is somewhat smaller, $1.34\times10^{11} \ M_{\odot}$. The different results are summarized in Table \[tab:par\].
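As a numerical cross-check of the conversions above, the sketch below solves Equation \[eq:deltaV\] for the concentration with `scipy` and then recovers $r_s$ and $M_s$ from the S08 values of $\delta_V$, $r_{200}$ and $M_{200}$ in Table \[tab:par\]; the variable names and the bracketing interval are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# S08-derived quantities for Aq-A-2 from Table (tab:par)
delta_V, r200, M200 = 2.060e4, 245.88, 1.842e12   # -, kpc, Msun

def mu(c):
    """NFW mass function m(c) = ln(1 + c) - c/(1 + c)."""
    return np.log(1.0 + c) - c / (1.0 + c)

# solve Eq. (eq:deltaV): 7.213 delta_V = (200/3) c^3 / m(c)
c = brentq(lambda cc: 7.213 * delta_V - (200.0 / 3.0) * cc**3 / mu(cc), 1.0, 100.0)

r_s = r200 / c                               # Eq. (eq:Rsr200c); close to the quoted 15.19 kpc
M_s = M200 * (np.log(2.0) - 0.5) / mu(c)     # from Eqs. (eq:MRsrhos) and (eqn:rho1); ~1.9e11 Msun
print(c, r_s, M_s)                           # c comes out close to the quoted 16.19
```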
The parameters from S08 give the correct $M_{200}$ by definition, but the NFW profile with this $M_s$ and $r_s$ has a significantly higher peak circular velocity than is measured directly from the numerical simulation of the halo. On the other hand, the parameters from N10 give close to the correct peak of the circular velocity curve, but significantly underestimate the enclosed mass at $r_{200}$. This discrepancy occurs because the Aq-A halo mass profile is not strictly NFW and these two methods normalize the mass profile at two different radii. The N10 profile agrees best with the spherically averaged mass profile within 40 kpc, while the S08 profile is too high until near the virial radius. The streams we will use to fit the halo move on orbits spanning a radial range from inside $r_{-2}$ out to about half of $r_{200}$. Thus it is likely that our fit will match the empirical mass profile best over some radial range intermediate to these two, and in Section \[sec:KLD\] we will compare our fit results to both values of $M_s$ since they bracket the possible masses obtained by fitting an NFW profile to the halo, depending on which range of radii is used for the fit. For integrating orbits, we used the values determined by S08, but the range of masses we explore includes the N10 profile. Since Aq-A is not well fit by a single NFW profile at all radii, we also determined its radial mass profile directly from the dark matter particle data to compare to our fit results. The empirical mass profile was obtained by first removing all bound substructures identified by the structure finder <span style="font-variant:small-caps;">Subfind</span> [@2001MNRAS.328..726S also used to produce the stellar stream catalog], then binning the remaining particles in spherical radius. Because the bound substructures are removed, this empirical profile has a slightly lower virial mass than that of S08. Orbit integration {#sec:integration} ================= Methods {#ssec:integrationMethods} ------- We integrated center-of-mass orbits for each selected stream from Aq-A-2 in the spherical and triaxial NFW potentials described above, using as initial conditions the position and velocity of a particle chosen by eye to lie about midway along each stream. In the case of stream 1051588, we tried using a range of different particles at different positions along the stream, but none of the orbits we integrated traced the stream closely at all. The equations of motion were integrated numerically with `scipy` using a fourth-order Runge-Kutta algorithm [@1988dopri], with a timestep of 0.01 Gyr for both the spherical and the triaxial potentials. Each orbit was integrated forward and backward in time, starting from the initial conditions, for 2 Gyr in each direction. The center-of-mass orbits in each potential were compared to the current positions of the stream stars, to determine whether the spherical or triaxial potential produced an orbit that more closely followed the stream. Inspired by comparisons used in fitting orbits to streams, we calculated for each stream the minimum distance between the integrated orbit of the central particle and each star in the stream. The phase-space location along the orbit $({\ensuremath{{\mathbf{x}}_{\mathrm{orb}}}},{\ensuremath{{\mathbf{v}}_{\mathrm{orb}}}})$ was tabulated at each timestep using the orbit integration and compared to the phase-space location of each star particle by summing the squares of the minimum distances, in the spirit of a chi-squared.
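A minimal sketch of this integration, assuming the spherical potential of Equation \[eq:PhiMsRs\] and working in kpc, km s$^{-1}$ and Gyr, is given below; `scipy`'s adaptive `RK45` (Dormand–Prince) stepper, tabulated every 0.01 Gyr, stands in here for the fixed-step integrator, and the unit conversions and tolerances are our own choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 4.498e-6            # kpc^3 Msun^-1 Gyr^-2 (approximate)
KMS_TO_KPC_GYR = 1.023  # 1 km/s in kpc/Gyr (approximate)

def nfw_accel(pos, M_s, r_s):
    """Acceleration -dPhi/dr * r_hat for the spherical NFW potential of Eq. (eq:PhiMsRs)."""
    r = np.linalg.norm(pos)
    pref = G * M_s / (np.log(2.0) - 0.5)
    dphi_dr = pref * (np.log(1.0 + r / r_s) / r**2 - 1.0 / (r * (r + r_s)))
    return -dphi_dr * pos / r

def integrate_orbit(x0_kpc, v0_kms, M_s, r_s, t_max=2.0, dt=0.01):
    """Integrate forward and backward for t_max Gyr, tabulated every dt Gyr."""
    w0 = np.concatenate([np.asarray(x0_kpc, float), np.asarray(v0_kms, float) * KMS_TO_KPC_GYR])
    rhs = lambda t, w: np.concatenate([w[3:], nfw_accel(w[:3], M_s, r_s)])
    t_eval = np.linspace(0.0, t_max, int(round(t_max / dt)) + 1)
    fwd = solve_ivp(rhs, (0.0, t_max), w0, t_eval=t_eval, method="RK45", rtol=1e-9)
    bwd = solve_ivp(rhs, (0.0, -t_max), w0, t_eval=-t_eval, method="RK45", rtol=1e-9)
    # 6 x N array of (x, y, z, vx, vy, vz) along the orbit, in kpc and kpc/Gyr
    return np.hstack([bwd.y[:, ::-1], fwd.y[:, 1:]])
```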
We calculate this statistic independently for position and velocity and normalize by the number of stars in each stream: $$\begin{aligned} \chi^2_x &\equiv & \frac{1}{N_*} \sum_{i=1}^{N_*} \textrm{min}\left[ ({\ensuremath{{\mathbf{x}}_{\mathrm{orb}}}} - {\mathbf{x}}_i)^2 \right] \nonumber \\ \chi^2_v &\equiv & \frac{1}{N_*} \sum_{i=1}^{N_*} \textrm{min}\left[ ({\ensuremath{{\mathbf{v}}_{\mathrm{orb}}}} - {\mathbf{v}}_{i})^2\right] \label{eq:chisq}\end{aligned}$$ To eliminate a few outliers, we discard any stars in the streams whose minimum distance from the orbit is larger than 25 kpc. For most streams no stars are thrown out; for a few of the largest a handful of stars are discarded. The largest number of discarded stars is 66, from stream 1025754, which contains 5158 stars in total. Most of these are well outside this distance, and our results do not change appreciably with the cutoff distance. We also explored how changing the parameters $M_s$ and $r_s$ affected the agreement between the integrated orbit and the stream. Having determined which potential (spherical or triaxial) produced the best stream-orbit agreement with the known parameters, we then varied each parameter from 0.25 to 2 times its known value while holding the other fixed. Results {#ssec:orbits} ------- ![Relative difference in “chi-squared” (Equations \[eq:chisq\]) between the orbit and stream when the orbit is integrated in a spherical rather than triaxial potential. The x axis shows the difference in position, while the y axis shows the difference in velocity. The four highlighted streams shown as red squares are shown in several projections in Figure \[fig:streamsorbits1\]; streams 1040180 and 1030962 are also shown in Figures \[fig:varymass1\] and \[fig:varyr1\]. []{data-label="fig:chisq"}](chisq_new.png){width="45.00000%"} ![image](streams_examples.png){width="0.85\hsize"} ![image](selection_M_new.png){width="\hsize"} ![image](selection_R_new.png){width="\hsize"} We compared the alignment between the stars in each selected stream and the integrated orbits. We do not expect the streams to align exactly with the integrated orbits, not only because the spread of energies in the stream stars produces a slight misalignment, but also because the true potential in which the streams have evolved is both lumpy and time-evolving, as opposed to our smooth and static models. However, the goal is to see if the assumption of spherical symmetry makes a significant difference in how closely the integrated orbits follow the streams. The values of the average minimum distances $\chi^2_x$ and $\chi^2_v$ (Equations \[eq:chisq\]) for all the streams in our sample are compared for the spherical and triaxial potentials in Figure \[fig:chisq\]. The streams highlighted as red squares and labeled with their ID numbers in Figure \[fig:chisq\] are shown in a few different projections in Figure \[fig:streamsorbits1\]. In all projections, the stream stars are shown as black points; the central particle whose orbit is integrated forward and backward is marked with a large turquoise circle, and the integrated spherical and triaxial orbits are shown as blue and green solid lines, respectively. We also show for comparison the orbit integrated in the best-fit spherical potential determined with the KLD fit as a red dashed line, with accompanying red dot-dashed lines spanning the range of uncertainty of the fit. Figure \[fig:chisq\] shows that in general there is a slight preference for a triaxial orbit over a spherical one in terms of average minimum distance between stars and orbit.
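For reference, the per-stream statistic of Equation \[eq:chisq\] and the 25 kpc outlier cut described above might be evaluated from a tabulated orbit as in the sketch below; the array layout follows the orbit tabulation sketched earlier, and the stellar positions and velocities are assumed to be in the same units as the orbit.

```python
import numpy as np

def stream_orbit_chisq(orbit_xv, star_x, star_v, cut=25.0):
    """Average squared minimum distance between stars and a tabulated orbit (Eq. eq:chisq).

    orbit_xv : (6, N_t) array of orbit positions and velocities
    star_x, star_v : (N_*, 3) arrays of stellar positions and velocities
    cut : discard stars whose minimum positional distance exceeds this value (kpc)
    """
    dx = star_x[:, None, :] - orbit_xv[:3].T[None, :, :]    # (N_*, N_t, 3)
    dv = star_v[:, None, :] - orbit_xv[3:].T[None, :, :]
    d2x = np.min(np.sum(dx**2, axis=-1), axis=1)            # per-star minimum squared distances
    d2v = np.min(np.sum(dv**2, axis=-1), axis=1)
    keep = d2x < cut**2                                      # outlier rejection in position space
    return d2x[keep].mean(), d2v[keep].mean()
```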
There are two streams for which using a spherical instead of triaxial halo nearly doubles the average minimum distance; both these streams are on very radial orbits and the spherical orbit fails to incorporate precession out of the plane that is especially apparent in the stars at the edges of the stream. One of these, stream 1040180, is shown in the second row of Figure \[fig:streamsorbits1\]; the other stream has similar characteristics. Although (as expected) even the triaxial orbit does not perfectly line up with the stream edges, especially in the $x$-$z$ projection, it does a somewhat better job than the spherical orbit. Other than these two outliers, for many of the streams it is hard to see by eye whether the triaxial or spherical halo is a better fit. Stream 1030962, shown in the third row of Figure \[fig:streamsorbits1\], is a good example of this situation. The orbits in the two potentials are nearly identical and the stream is much wider than the distance between the two orbits, and so it cannot discriminate. This stream, like most in the sample, shows some signs of discontinuity that also complicate the choice between potentials. Finally, one stream appears to slightly prefer the spherical potential, which is surprising given that we know the Aquarius halo is triaxial. This stream, 55000000, is shown in the first row of Figure \[fig:streamsorbits1\]. Stream 55000000 is another case similar to 1030962, where the stream is much wider than the difference in orbits and small differences end up producing a chi-squared that slightly favors the spherical potential rather than triaxial in this case. We can estimate the sensitivity of the chi-squared test using stream 1051588 (fourth row of Figure \[fig:streamsorbits1\]), for which neither orbit lines up very well with the stream at all, so that in this case the differing chi-squared values are just choosing between two equally bad options. For this stream the chi-squared in position space favors a spherical potential while the chi-squared in velocity space favors a triaxial potential, but the fractional differences are only about ten percent. This illustrates that differences of this order in the chi-squared are not really indicative of a preference for one potential over the other. On the other hand, differences on the order of 30 percent and larger in chi-squared, such as the differences shown by stream 1040180, are produced primarily when comparing the positions of the ends of the streams and orbits. This is consistent with the expectation that longer streams do a better job constraining the shape of the halo, since precession induced by departures from spherical symmetry has a more noticeable effect on a longer stream. The influence of the halo mass on the orbit can be probed by varying the parameter $M_s$, as shown in Figure \[fig:varymass1\] for two example streams, one where the triaxial potential is clearly a better fit (1040180, top row) and one where the two orbits are nearly indistinguishable from one another (1030962, bottom row). Increasing the halo mass shifts the orbit’s apo- and pericenter inwards. The $r-v_r$ curve becomes more elongated for decreasing mass resulting in a larger radial range and a smaller velocity range covered. Looking at the projections onto different axes, orbits with $M_s \substack{+25\%\\ -50\%}$ all follow most of the stars in the stream. For reference, we also show the error bounds of the orbit using the best-fit parameters; these are typically a smaller range than we can distinguish by eye. 
We also probe the influence of the scale radius on the orbits, shown in Figure \[fig:varyr1\] for the same two streams as in Figure \[fig:varymass1\]. A larger scale radius results in a shift of apo- and pericenter towards larger radii and a more elongated radial velocity curve, which implies a larger radial range covered but a smaller range in radial velocities. Thus there is a partial degeneracy, in terms of orbital characteristics, between increasing the scale radius and lowering the scale mass (and vice versa). The orbit is generally less sensitive to $r_s$ than to $M_s$: even when the scale radius is increased to 1.75 times its measured value, the orbits still follow most of the stream. Thus, although the degree to which an orbit in a trial potential lies along a stream can select a rough range of potentials around the correct one, this method of determining the potential is neither very accurate nor very precise. We will show in the following section that a method that represents streams more realistically, as collections of stars on neighboring orbits, gives a better result in terms of both accuracy and precision. Potential fitting using action-space clustering {#sec:KLD} =============================================== To fit a spherical NFW potential to the selected Aquarius streams, we use the method described in @2015ApJ...801...98S, which maximizes the clustering of the selected stars in the space of their actions, ${\mathbf{J}}$, by varying the potential parameters ${\mathbf{a}} \equiv (M_s, r_s)$ used to calculate the actions from the stars’ positions and velocities. The potential parameters giving rise to the most clustered distribution of actions are chosen as the best fit, ${\mathbf{a}}_0$. Measuring clustering with the KLD --------------------------------- The amount of action-space clustering is measured statistically by calculating the Kullback-Leibler divergence (KLD), $${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}} =\int f_{{\mathbf{a}}}\left({\mathbf{J}}\right) \log \frac{f_{{\mathbf{a}}}\left({\mathbf{J}}\right)}{{\ensuremath{f_{{\mathbf{a}}}^{\mathrm{shuf}}}}\left({\mathbf{J}}\right)}\ d^3{\mathbf{J}}, \label{eqn:KLDI}$$ between the distribution of stellar actions for a specific set of potential parameters, $f_{{\mathbf{a}}}({\mathbf{J}})$, and the product of its marginal distributions, ${\ensuremath{f_{{\mathbf{a}}}^{\mathrm{shuf}}}}({\mathbf{J}})$. The product of marginals is constructed by computing the actions for a particular ${\mathbf{a}}$, then shuffling the different components of each action relative to one another to break correlations between actions, so we call it the “shuffled” distribution. We use a modified Breiman density estimator to infer $f_{{\mathbf{a}}}$ and ${\ensuremath{f_{{\mathbf{a}}}^{\mathrm{shuf}}}}$ from the set of stellar actions, then calculate the KLD using numerical integration over a regular grid of ${\mathbf{J}}$, which replaces the integral in Equation \[eqn:KLDI\] with a sum over grid squares. More details on the numerical methods are available in @2015ApJ...801...98S. The larger the KLD, the more clustered the action space; the best-fit parameters are those for which the KLD is maximized. Using the KLD as a figure of merit when fitting the potential has two advantages. First, because we measure clustering statistically there is no need to assign stars to a particular stream.
Second, once the best-fit is found we can then use the KLD to set error contours on the best-fit parameters, ${\mathbf{a}}_0$, by comparing the action distribution for the best-fit parameter values, $f_{{\mathbf{a}}_0}({\mathbf{J}})$, to the distribution for other trial values of the parameters, $f_{{\ensuremath{{\mathbf{a}}_{\mathrm{trial}}}}}({\mathbf{J}})$: $${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}} \equiv \int f_{{\mathbf{a}}_0}({\mathbf{J}}) \log \frac{f_{{\mathbf{a}}_0}({\mathbf{J}})}{f_{{\ensuremath{{\mathbf{a}}_{\mathrm{trial}}}}}({\mathbf{J}})} d{\mathbf{J}}.$$ This KLD is related to the conditional probability of the potential parameters ${\ensuremath{{\mathbf{a}}_{\mathrm{trial}}}}$ relative to ${\mathbf{a}}_0$, averaged over the stars in the sample: $${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}} = \langle \log \frac{\mathcal{P}({\mathbf{a}}_0|{\mathbf{J}})}{\mathcal{P}({\ensuremath{{\mathbf{a}}_{\mathrm{trial}}}}|{\mathbf{J}})} \rangle_{{\mathbf{J}}}. \label{eq:KLD2interp}$$ A full discussion of this interpretation is in @2015ApJ...801...98S. Qualitatively, this expression measures how well the KLD can distinguish between the action distribution produced by the best fit parameters and the distributions produced by other parameters. Interpreting this as an uncertainty requires assuming that the distribution produced using the best-fit parameters is correct and comparing other distributions to it; hence the appearance of a conditional probability in Equation . As an example, if for some ${\ensuremath{{\mathbf{a}}_{\mathrm{trial}}}}$ we get ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}=1$, it means that those parameters are $e$ times less likely than the best-fit ${\mathbf{a}}_0$ to have produced the distribution of actions associated with the best-fit parameters (we are using natural logs everywhere). In a Gaussian probability distribution, (68, 95, 99) percent of the probability is inside the region where $\log P > -1/2 (-2, -9/2)$, so in this work we show the ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}=1/2 (2,9/2)$ contours as rough analogs to one-(two-, three-)sigma uncertainties in the Gaussian case. However, analogies with Gaussian uncertainties should not be carried too far, since what our uncertainties really measure is the ability of the information in the action distribution to distinguish between different potentials, rather than the probability that the stream stars are drawn from a generative model of the action-space distribution (based on some potential parameters). Because we assume no generative model for the action-space distribution, the quoted uncertainties cannot be interpreted in a chi-squared sense. The level where e.g. $\log P = -1/2$ is more properly construed as the set of potential parameters that produce a distribution of stellar actions for which the probability that they are drawn from the most clustered distribution is $e^{-1/2} = 0.61.$ This interpretation takes into account 1) the unknown number of clumps in action space and their unknown positions, 2) the limited resolution of the distribution thanks to the finite number of stars in the sample, and 3) the way in which the action-space distribution changes as the potential parameters are changed. Computing the actions --------------------- ![Distribution in action space of the streams in Figure \[fig:all\_streams\], for the spherical NFW potential parameters derived from S08. 
The colors of the streams correspond to Figure \[fig:all\_streams\].[]{data-label="fig:actions1"}](actions_spr.png){width=".8\hsize"} A spherical potential has three independent actions that can be expressed in several different ways. We use the set composed of the radial action $J_r$, the absolute value of the total angular momentum $L$, and the $z$ component of the angular momentum, which in our coordinate system points along the minor axis of the dark matter halo (though the potential to be fit is spherical). Thus our actions are $(J_r, L, L_z)$, of which only $J_r$ depends on the potential parameters. The other actions are included when calculating the KLD because although they do not change with the potential, they are still clumpy and correlated with $J_r$, so they improve the contrast between better and worse choices of the potential parameters. As when integrating the center-of-mass orbits, we use the potential of Equation \[eq:PhiMsRs\] to represent the spherical NFW halo, so our parameters ${\mathbf{a}}$ are the scale radius $r_s$ and the enclosed mass at the scale radius $M_s$. The angular momenta $L$ and $L_z$ are calculated from the stars’ positions ${\mathbf{x}}$ and velocities ${\mathbf{v}}$: $${\mathbf{L}} \equiv {\mathbf{x}} \times {\mathbf{v}}; \qquad L \equiv |{\mathbf{L}}|; \qquad L_z \equiv {\mathbf{L}}\cdot\hat{z}.$$ The radial action $J_r$ is calculated by numerical integration of $$J_r = \frac{1}{\pi} \int_{{\ensuremath{r_{\mathrm{min}}}}}^{{\ensuremath{r_{\mathrm{max}}}}} dr\ \sqrt{2E - 2\Phi(r) - \frac{L^2}{r^2}}$$ where $r \equiv |{\mathbf{x}}|$ and the energy $E$ is $$E = {\frac{1}{2}}{\mathbf{v}}\cdot {\mathbf{v}} + \Phi(r).$$ The potential $\Phi(r)$, which depends on the parameters $r_s$ and $M_s$, is given by Equation \[eq:PhiMsRs\]. The integral endpoints ${\ensuremath{r_{\mathrm{min}}}}$ and ${\ensuremath{r_{\mathrm{max}}}}$ are determined by finding (also numerically) the two roots of $$2E - 2\Phi(r) - \frac{L^2}{r^2} = 0.$$ The range of $J_r$ varies as a function of the scale mass $M_s$, so to avoid undersampling and comparison issues when calculating the KLD (discussed further in Section 5 of @2015ApJ...801...98S) we scale the radial action such that $$\label{eq:JrScaling} {\ensuremath{J_r^{\mathrm{scaled}}}} \equiv \frac{J_r}{GM_s/\left(\ln 2 - 1/2\right)},$$ which keeps the overall range of $J_r$ roughly constant, and comparable to the range of $L$ and $L_z$, for different $M_s$. Figure \[fig:actions1\] shows the distribution of $(J_r, L_z)$ for the stars in the selected Aquarius streams, calculated using the values for $r_s$ and $M_s$ derived from ${\ensuremath{M_{\mathrm{200}}}}$ and ${\ensuremath{r_{\mathrm{200}}}}$ via the method outlined in S08 and Section \[ssec:potentials\]. Although we are calculating the actions using a spherical approximation to the potential instead of the true triaxial one, we still see that both $J_r$ and $L_z$ are clumpy (as is $L$, not shown here) and that the clumps correspond to different streams (shown here in different colors, though the fitting method does not use this information). Thus the central assumption underlying our fitting method—that streams correspond to action-space clumps—is still satisfied. ![Contours of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$. The largest value of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$ ($\vec{a}_0$) for the full sample is marked with a green cross; the small purple points show the best-fit values from leave-one-out samples (see Section \[ssec:uncert\]).
The dashed vertical line is the value of $M_s$ derived from S08, the solid vertical line is $M_s$ calculated from N10, and the solid horizontal line is the measured value of $r_s$ (the same in both papers; see Table \[tab:par\]).[]{data-label="fig:kldMencl1"}](kld1_new_plus_loo.png){width="\hsize"} Finding the best fit -------------------- In order to find the best fit we compute ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$ (Equation \[eqn:KLDI\]) for a grid of parameter points, increasing the grid resolution adaptively in regions where the KLD is changing rapidly. We used five levels of adaptive refinement to converge on the locations of the few highest KLD values. Figure \[fig:kldMencl1\] shows the contours of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$. We find that the best-fit parameters lie on a ridge of high ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$ with the very highest value (green cross) in between the scale mass derived from the N10 parameters and that derived from the S08 parameters. This is not surprising since the S08 values describe the mass profile best close to the virial radius (246 kpc), while the N10 parameters describe the mass profile better near the scale radius (15 kpc). Most of the “stars” in our fitting sample are at distances somewhere in between these two radii, with an average distance around 40 kpc but reaching to about 120 kpc, so we expect our best-fit mass to interpolate between these two values. The scale radius value we obtain is slightly larger than the S08/N10 value; it is mainly determined by matching the enclosed mass at the average distance of the fitting sample, which gives rise to the degeneracy seen in the contours of Figure \[fig:kldMencl1\]. ![Contours of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$. The red, yellow and green contours denote analogs to 68, 95 and 99% confidence contours under the assumption of a roughly Gaussian probability distribution (as discussed in Section \[sec:KLD\]) on the best-fit value $\vec{a}_0$, shown as a green cross. The small purple points show the best-fit values from leave-one-out samples (see Section \[ssec:uncert\]). The dashed vertical line is the value of $M_s$ derived from S08, the solid vertical line is $M_s$ calculated from N10, and the solid horizontal line is the measured value of $r_s$ (the same in both papers; see Table \[tab:par\]). Black triangles indicate the extrema given by the one-dimensional uncertainties (red shaded region in Figure \[fig:mencl\]); green stars indicate extrema taking into account the sense of the degeneracy between parameters in Step 1 (red dashed lines in Figure \[fig:mencl\]).[]{data-label="fig:kldMencl2"}](kld2_new_plus_loo.png){width="\hsize"} Determination of uncertainties on the best-fit value {#ssec:uncert} ---------------------------------------------------- To determine uncertainties we use ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$, defined in Section \[sec:KLD\], to compare the distribution of actions at the points in the parameter grid with the one at the best-fit values identified in the first step, and thereby determine the error bounds on the best fit. In order to compensate for the different ranges in the radial action $J_r$ at different grid points in parameter space, we scale it with the prefactor of the potential, $G M_s/\left(\ln 2 - 1/2\right)$, before comparing the action distributions with the KLD.
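A sketch of the action computation described in the previous subsection, including the rescaling of $J_r$ by the potential prefactor, is given below; the bracketing intervals for the root finding, the value of $G$ and the unit system (kpc, km s$^{-1}$) are our own choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

G = 4.301e-6   # kpc (km/s)^2 Msun^-1 (approximate); positions in kpc, velocities in km/s

def phi_nfw(r, M_s, r_s):
    return -G * M_s / (np.log(2.0) - 0.5) * np.log(1.0 + r / r_s) / r

def actions_spherical(x, v, M_s, r_s):
    """Return (J_r scaled as in Eq. eq:JrScaling, L, L_z) for a single star."""
    L_vec = np.cross(x, v)
    L, L_z = np.linalg.norm(L_vec), L_vec[2]
    r = np.linalg.norm(x)
    E = 0.5 * np.dot(v, v) + phi_nfw(r, M_s, r_s)

    def p_r2(rr):                      # radial "momentum" squared: 2E - 2 Phi(r) - L^2/r^2
        return 2.0 * E - 2.0 * phi_nfw(rr, M_s, r_s) - (L / rr) ** 2

    # bracket the two roots (peri- and apocentre) on either side of the current radius
    r_peri = brentq(p_r2, 1e-3, r) if p_r2(1e-3) < 0.0 else 1e-3
    r_apo = brentq(p_r2, r, 1e4)
    J_r = quad(lambda rr: np.sqrt(max(p_r2(rr), 0.0)), r_peri, r_apo)[0] / np.pi
    return J_r / (G * M_s / (np.log(2.0) - 0.5)), L, L_z
```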
Figure \[fig:kldMencl2\] shows the contours of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$ relative to the best fit (green cross). It is clear from these contours that the analogy with Gaussian uncertainties that motivated our choice of levels of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$ is only plausible for ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}} \lesssim 1/2$ (green contour) and certainly not beyond that. To validate the use of Equation and the choice of which value of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$ to use for setting uncertainties on the best-fit values, we performed a leave-one-out analysis on the set of 15 streams used for the fit. In practice this would not be possible, since we do not require membership information of stars in individual streams. For the same reason, this analysis should underestimate the uncertainty, since it does not fully account for our lack of assumptions about either stream membership or the number and locations of the streams in action space. However it does give a sense of the contribution that each stream makes to determining the best-fit model, which should drive the fit uncertainty. We created 15 new samples by removing one of the 15 streams for each sample (presuming perfect membership knowledge), and re-ran step I on each sample to determine the best fit. We then compared the range of parameter values obtained in the leave-one-out fits to the range predicted for the full sample by Equation for ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}\leq1/2$. The different parameter values obtained by the leave-one-out fits are superposed as purple points on the full-sample contours of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$ in Figure \[fig:kldMencl1\] and ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$ in Figure \[fig:kldMencl2\]. Figure \[fig:mencl\] compares the range of mass profiles obtained by the leave-one-out fits (cyan lines) with the range predicted by ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$ for the full sample (red lines and shaded region) and with several other ways of obtaining the mass profile. As seen in Figure \[fig:kldMencl1\], the leave-one-out fits select different points on a degenerate curve in parameter space, corresponding roughly to a constant enclosed mass at the mean radius of stars in the fitting sample, that is also followed by the contours of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$ from the full sample. The primary difference between the range of parameter values predicted by Equation and the range obtained from leave-one-out analysis, as shown in Figure \[fig:kldMencl2\], is that the KLD is insensitive to this enclosed-mass degeneracy when comparing neighboring action-space distributions. This is a reflection of the fact that in the first step of the analysis the KLD is used to estimate the degree of clustering at a given choice of parameters, which is sensitive to the fit degeneracy seen in Figure \[fig:kldMencl1\], while in calculating an uncertainty we use the KLD to compare action distributions at neighboring points in parameter space, which is insensitive to the degeneracy as is clear from the shape of the contours in Figure \[fig:kldMencl2\]. It is also expected that the leave-one-out points will cluster more tightly than our fit results since we have implicitly used stream membership information to generate the different fitting samples. 
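For completeness, the two divergences might be estimated along the following lines. This is only a schematic stand-in: the analysis above uses a modified Breiman density estimator and an adaptively refined parameter grid, whereas the sketch below substitutes `scipy`'s Gaussian kernel density estimate, a fixed action-space grid for ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$, and a Monte Carlo estimate of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}$ evaluated at the best-fit actions themselves.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kld_clustering(actions, n_grid=40, seed=0):
    """D_KL^I of Eq. (eqn:KLDI): clustering of an (N, 3) action sample against its shuffled marginals."""
    rng = np.random.default_rng(seed)
    shuffled = np.column_stack([rng.permutation(actions[:, k]) for k in range(actions.shape[1])])
    f = gaussian_kde(actions.T)                 # stand-in for the modified Breiman estimator
    f_shuf = gaussian_kde(shuffled.T)
    edges = [np.linspace(col.min(), col.max(), n_grid) for col in actions.T]
    grid = np.stack(np.meshgrid(*edges, indexing="ij"), axis=0).reshape(len(edges), -1)
    dV = np.prod([e[1] - e[0] for e in edges])  # volume of one grid cell
    p = f(grid)
    q = np.maximum(f_shuf(grid), 1e-300)        # guard against log(0)
    return np.sum(p * np.log(p / q)) * dV

def kld_compare(actions_best, actions_trial):
    """D_KL^II between the best-fit action distribution and a trial one (used for the error contours)."""
    f0 = gaussian_kde(actions_best.T)
    ft = gaussian_kde(actions_trial.T)
    p0 = f0(actions_best.T)
    pt = np.maximum(ft(actions_best.T), 1e-300)
    # Monte Carlo estimate of the integral, using the best-fit actions as samples from f_{a0}
    return np.mean(np.log(p0 / pt))
```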
The range of $r_s$ values allowed by ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}\leq1/2$ is also slightly larger than the range from leave-one-out, while the mass range is slightly narrower. This is an indication that the action-space distribution is more sensitive to changes in the mass parameter than the scale radius parameter, which is consistent with our results from orbit integrations in position and velocity space (Section \[ssec:orbits\]). Figure \[fig:mencl\] illustrates the range of allowed mass profiles resulting from different approaches to determining the uncertainty. The thick red line shows our best fit (the maximum value of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$) for the full sample, while the thinner cyan lines show the results for the different leave-one-out samples. The red shaded region is the allowed range in the mass profile at the one-dimensional extrema of the green contour in Figure \[fig:kldMencl2\]; that is, the upper limit is the mass profile with the maximum allowed value of $M_s$ and the minimum allowed value of $r_s$, while the lower limit is the profile with the minimum $M_s$ and maximum $r_s$. These points are marked as black triangles in Figure \[fig:kldMencl2\]. Because this method of determining the uncertainty ignores the mass-radius degeneracy it produces a wider range of allowed profiles than the spread of the leave-one-out fits. If one takes the extrema of the best-fit parameters obtained by the leave-one-out fits in the same way, in the opposite sense from the mass-radius degeneracy (shown as thick dashed cyan lines) the range of allowed profiles is nearly as wide. Conversely, if we follow the sense of degeneracy outlined by the ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$ contours in Figure \[fig:kldMencl1\] in choosing points on the ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}\leq1/2$ contour (marked with green stars in Figure \[fig:kldMencl2\]) then we get the range shown by the thick red dashed lines in Figure \[fig:mencl\], which is comparable to the spread in profiles from the leave-one-out analysis, although wider at small radii. The difference in the spread of allowable profiles reflects the difference in information between the leave-one-out analysis, which includes perfect membership assignments for all stars, and the KLD strategy, which does not use membership information at all. Our choice of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{II}}}}\leq1/2$ as the uncertainty range is also supported by examining the difference between the best-fit distribution (top right panel of Figure \[fig:actions\]) and the distribution generated by a point on the $\log P = -1/2$ contour (bottom right panel of Figure \[fig:actions\]). The “1-sigma” distribution is visibly less clustered than the best-fit distribution. This comparison indicates two possible routes to determining the range of allowed mass profiles without using membership information. The most conservative option, represented by the shaded red area in Figure \[fig:mencl\], is to report the full range of allowed parameter values in each dimension and allow all the combinations of parameters in that range as possible mass profiles. Slightly less conservative is to include information on the degeneracy between parameters obtained in the first step by reporting the range of allowed profiles following the sense of the degeneracy revealed by the contours of ${\ensuremath{{\ensuremath{D_{\mathrm{KL}}}}^{\mathrm{I}}}}$. 
This approach will become more difficult as the mass model becomes more sophisticated. Combining the results of step I and step II the best-fit enclosed mass is $M_{s_0} = 1.72\substack{+0.57 \\ -0.20}\times 10^{11}\, M_{\odot}$ and the best-fit scale radius is $r_{s_0} = 17.46\substack{+4.60 \\ -2.48}$ kpc, where the one-dimensional uncertainties here and in Table \[tab:par\] are the extrema of the 1-sigma contour marked as the two black triangles marked on Figure \[fig:kldMencl2\]. The second best-fit mass is $M_{s_1} = 1.81\times10^{11} \, M_{\odot}$ enclosed in $r_s = 17.78$ kpc. Since we find a larger scale radius than either S08 or N10, our value of the scale mass is closer to S08; however at their value of the scale radius (15.19 kpc) our best-fit mass profile has $M(R=15.19\textrm{ kpc}) = 1.43\substack{+0.90\\-0.52}\times 10^{11}\ M_{\odot}$, which is in between S08 and N10, and includes both values in its range of uncertainties. ![image](actions_new.png){width="\textwidth"} Results {#ssec:results} ------- Figure \[fig:actions\] shows the actions $L_z$ and $J_r$ at different points in parameter space. It can be seen that the KLD indeed recovers the most clustered action distribution: the actions are more clumped for the best-fit potential (upper right panel) than for the spherical potential with either set of parameters measured directly from the simulated halo (upper left and upper center panels). The lower two panels of the figure show that the actions get progressively less clustered when moving to parameter configurations farther from the best fit: the second-best fit (lower left panel) is visibly less clumpy at higher $J_r$, while a point on the $1\sigma$-equivalent contour produces a less clumpy distribution everywhere. Figure \[fig:mencl\] shows the mass profiles for the different spherical halo models. Our best fit from the KLD method is shown in red, with shaded error bounds showing the range of mass (dashed) and scale radius (dotted) within the red contour in Figure \[fig:kldMencl2\]. The profiles using the parameters determined by S08 (green solid line) and N10 (blue solid line) bracket the empirical mass profile obtained by binning the dark matter particles in the smooth component of the simulated halo (black line). Our best fit is similar to the N10 fit at small radius where both agree with the spherically-averaged mass profile, but grows more steeply than N10 beyond about 20 kpc as does the spherically averaged profile. It never quite reaches the S08 mass profile, which agrees with the empirical mass profile close to the virial radius. We attribute this to the absence of stars in our fitting sample beyond 120 kpc (about half the virial radius of the halo). We also show in Figure \[fig:mencl\] the approximate mass profile along the three axes of symmetry in the triaxial halo (blue dotted lines with symbols), to give a sense of the degree to which the halo departs from spherical. More specifically, we take the average axis ratios of the *density*, in the range 10-100 kpc, determined for Aq-A by @2011Vera in their Figure A2, obtaining $b/a = 0.7$ and $c/a = 0.55$. Renormalizing the axis lengths so that $a^2 + b^2 + c^2=3$ as in the Vogelsberger prescription for the triaxial NFW potential gives $a=1.29$, $b=0.91$, and $c=0.71$. 
We then plot three *spherical* NFW mass profiles, using the mass and scale radius from N10, where the radius variable is rescaled by each of the three axis lengths: ${\ensuremath{M_{\mathrm{NFW}}}}(r/a)$ (circles), ${\ensuremath{M_{\mathrm{NFW}}}}(r/b)$ (triangles), and ${\ensuremath{M_{\mathrm{NFW}}}}(r/c)$ (squares). These are not precisely the mass within isodensity contours along each symmetry axis, but do serve to give a rough sense of the variation of the mass profile between different directions in the halo. The S08 mass profile overestimates the mass compared to the empirical profile, which is (partly) due to the fact that subhalos are excluded from the empirical profile, but included when determining $r_{200}$ and $M_{200}$, which S08 use to set the profile parameters. N10 set their normalization at a much smaller radius, where most of the material is not in subhalos, and so are not as affected by the exclusion of bound substructure. The best-fit mass profile traces the spherical NFW mass profile from N10 at small radii and the empirical mass profile at larger radii. We get the best agreement between our fit and the empirical mass profile at the radius at which we have the most stars, around $40$ kpc. The mass $M_s$ at the scale radius $r_s$ is marked with a diamond in the figure, corresponding to the crosshairs in Figures \[fig:kldMencl1\] and \[fig:kldMencl2\]. The KLD method recovers a larger scale radius $r_s$, and hence a larger enclosed mass $M_s$, than the profiles which were fit to the whole halo. The error bars on the fit are approximately as wide as the span between the profiles in S08 and N10, reflecting the inability of a single NFW profile to fit the Aq-A halo out to the virial radius. Compared to this difficulty, the effect of triaxiality is relatively small, as shown by the span of the mass profiles along the three principal axes in the triaxial potential. Discussion and Conclusion {#sec:discussion} ========================= In this work we investigated how assuming a smooth, time-independent potential with either spherical or triaxial symmetry affects the analysis of streams formed in a cosmological dark matter halo that is lumpy and time-evolving. First, we integrated the center-of-mass orbits for various streams in both spherical and triaxial potentials and compared how well the orbits traced the streams using the average minimum distance between the orbit and the stream stars (Section \[sec:integration\]). We find that orbits integrated in a smooth, static potential resembling the present-day Aquarius A halo can trace the stellar streams extracted from the halo via stellar tagging, when starting from the position and velocity of a star midway through the stream. This agreement is striking given that the streams formed in a dynamic halo whose potential evolved with time and included many subhalos of various masses. In the majority of cases, the best agreement between orbit and stream was indeed achieved using a triaxial potential, which is not surprising since it is more representative of the true halo shape. However, in many cases the triaxiality of Aq-A was not enough to produce an appreciable difference (more than 10 percent in the average distance) between orbits integrated in spherical and triaxial potentials. 
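The renormalization of the axis ratios and the three rescaled profiles plotted in Figure \[fig:mencl\] amount to the short calculation sketched below; the N10 values of $M_s$ and $r_s$ are taken from Table \[tab:par\], and the radial grid is arbitrary.

```python
import numpy as np

# density axis ratios of Aq-A averaged over 10-100 kpc (values quoted in the text)
b_over_a, c_over_a = 0.70, 0.55

# renormalize so that a^2 + b^2 + c^2 = 3
a = np.sqrt(3.0 / (1.0 + b_over_a**2 + c_over_a**2))
b, c = a * b_over_a, a * c_over_a
print(a, b, c)                       # approximately 1.29, 0.91, 0.71, as quoted above

def m_nfw(r, M_s=1.34e11, r_s=15.19):
    """Spherical NFW enclosed mass normalized to M_s at r_s (N10 values, Table tab:par)."""
    x = r / r_s
    return M_s * (np.log(1.0 + x) - x / (1.0 + x)) / (np.log(2.0) - 0.5)

r = np.logspace(0.0, np.log10(250.0), 100)                         # 1 to 250 kpc
profiles = {ax: m_nfw(r / s) for ax, s in zip("abc", (a, b, c))}   # M_NFW(r/a), M_NFW(r/b), M_NFW(r/c)
```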
For streams where we get good agreement with an orbit, the mass and scale of the halo can be roughly estimated by visually comparing the stream with a series of integrated orbits: the scale mass could be determined to within about 50%, but the scale radius to within only about a factor of 2. Among the streams we compared, there are occasionally hints of the additional structure in the potential that was ignored; for example, in the one case where neither potential could produce an orbit that traced the stream, there appear to be gaps in the stream stars that might point to an interaction with a subhalo. However in general we find that the streams encode the present-day potential, and that ignoring substructure will not interfere catastrophically with the general tendency of streams to lie near orbits. This is similar to what @2006ApJ...645..240P found using numerical experiments with analytic potentials. Furthermore, as expected, the degree to which streams can distinguish between triaxial and spherical potentials via orbit fitting varies, depending on the progenitor’s orbit and the extent of the stream. The most diagnostic streams in our sample were long and on very radial orbits; the extreme ends of these streams were most diagnostic in choosing a triaxial over a spherical potential. Second, we tried fitting a smooth, spherical NFW mass profile to the entire set of 15 streams using the KLD method (Section \[sec:KLD\]). We found that over the range of radii covered by stars in the fitting sample the best-fit profile followed the empirical, spherically-averaged profile computed directly from the dark matter distribution at the present day, roughly interpolating between the profile found by N10 that fits well at small radii and the profile found by S08 that fits well near the virial radius. The results confirm that this type of fit is insensitive to the adiabatic time-evolution of the host halo. This is expected, since the actions are adiabatic invariants; reassuringly, our fitting method has smaller uncertainties in $M_s$ and $r_s$ than simply comparing how well orbits line up with the streams by eye. Furthermore, stream-subhalo interactions in this model halo are not frequent or intense enough to destroy the action-space coherence of individual streams; neither is the poor assumption of a spherical rather than triaxial potential. In fact, our best-fit model produces a clumpier action distribution (and better agreement over a wider range of galactocentric distances) than two common ways of determining the same parameters directly from simulations: either by finding $M_{200}$ and $r_{200}$ (as in S08) or by determining $\rho_{-2}$ and $r_{-2}$ (as in N10). Although these two methods of parameterizing halos are initially independent of assumptions about the functional form of the potential, imposing the NFW functional form on either set of parameters to obtain $M_s$ and $r_s$ will only produce a good fit over a limited range of radii. Our best-fit mass profile, which is effectively normalized around 40 kpc where we have the largest number of stream stars, agrees with the empirical profile from the simulation over a wider range of radii than either of the parameterizations, while also recovering $M_{200}$ within 10% and $r_{200}$ within 4% of the values determined directly from the dark matter distribution. 
Our difficulty fitting the Aq-A halo with an NFW profile is consistent with recent results by @2015arXiv150700771H, who find that the parameters they obtain using their fitting method are biased by up to 30% for Aq-A because it differs significantly from the NFW form. We see indications that our model is not fully representative of the halo in the fact that there seem to be two sets of preferred parameters, according to $D_{KL}^{I}$ (Figure \[fig:kldMencl1\]), that occupy a ridge in parameter space. The $M_s$ values of the two fits are nearly the same (and are within each others’ approximate $1\sigma$ confidence contours) but the two preferred $r_s$ values differ substantially. Additionally, the uncertainty in $M_{200}$ for our best-fit model is comparable to the difference between normalizing the enclosed mass directly at $r_{200}$ and normalizing $\rho_{-2}$ at $r_{-2}$, and also to the variation of the mass profile along the different axes of symmetry thanks to its triaxiality, which was ignored in the fit. @2014ApJ...795...94B [hereafter B14] explored the effect of assuming a smooth halo on the ability to determine the Milky Way’s mass from fits to individual extremely narrow streams like those from globular clusters (GCs), evolved in the cosmological potential of the Via Lactea II simulation which, like Aquarius A, is both lumpy and time-evolving. They found that mass estimates from fits to single streams obtained using the streakline method [@2012MNRAS.420.2700K] could indeed be highly biased, and (as we do) that this bias was worse for streams closer to the Galactic center. They further found that assuming a smooth analytic potential limited the fundamental accuracy of mass estimates even when fitting many streams. Our work differs from this, and extends their results, in a few respects. First, the streams we study in this work more closely resemble those from small satellite galaxies than GCs: though their total stellar mass in some cases is similar, they tend to be much less concentrated in phase space initially than a GC would be. Second, the orbital distribution of streams within the Aquarius stellar halo is in our case determined by the cosmological simulation, whereas B14 inserted their GCs by hand at systematic locations. @2010Cooper show that the density profiles of the stellar halos derived from the Aquarius suite, and the luminosity-radius relation of the satellites that built up these halos, are consistent with observations of the stellar halo and satellites of the Milky Way, so we expect that the distribution and width of the Aquarius streams we study in this paper should reflect what is expected for the Milky Way. B14 find that mass estimates from streams beyond 70 kpc are significantly less biased, but the bulk of the stellar halo, and especially the part that will be observed by Gaia, is likely located at smaller galactocentric distances than this , so it is important to account properly for the radial distribution of stream stars when considering how well we will be able to determine the Galactic potential. Second, we fit a limited sample of thin streams simultaneously rather than combining individual results, and use a method that does not require assigning stars to particular streams. We expect the action-clustering method to be more robust to scattering by lumpiness in the potential than methods, like the one used by B14, that compare distributions in 6D phase space. 
The effect of unaccounted-for lumps in the potential on the action-space distribution is mainly to dilute or subdivide the existing clumps of stars in a way uncorrelated between different streams. This dilution will slightly lower the maximum information attained by the fit and thereby increase the size of the error contour, but because all streams are fitted simultaneously it should not introduce a bias in the final outcome. Motivated by this reasoning, as well as the number of narrow streams predicted by the Aquarius stellar halo model, we work with a much smaller number of streams than B14: 15 in our sample versus 256 in their case. Our results demonstrate that fitting the potential using action-space clustering rather than comparisons in position- and velocity-space avoids some of the pitfalls illustrated by B14 in their paper. First, simultaneous fitting allows us to use relatively few cold streams and still recover $M_{200}$ within 10%. Second, analyzing the adiabatically invariant action-space distribution for goodness of fit, rather than making comparisons in position and velocity space, means we can use streams that are closer to the Galactic center: most of our stars are within 50 kpc and all are within 120 kpc, while B14 found that single streams closer than 70 kpc tended to have larger biases in the recovered mass. Finally, although we also find (like B14) that our assumptions lead to a slight under-estimate of the total mass, the true value lies well within our uncertainties, which reflect the degree to which our assumptions fail to match the true shape of the smooth potential rather than bias among individual streams. Most stream-fitting methods that have been tested to date, like the streakline method employed by B14 and its many recent variations [@2014MNRAS.443..423S; @2014ApJ...795...95B; @2015MNRAS.450..575A; @2015MNRAS.452..301F], require membership information for each stream to be fit and can therefore also preserve and use information about the angular phases of the stars in each stream, which is discarded by the KLD fit. Most of these tests have looked at the results of fitting individual streams and have found that single streams can produce either an excellent estimate of the potential or an extremely biased one, depending on the details of both the method used and the particular stream being fit. For example, @2013MNRAS.436.2386L found that fitting orbits to mock streams created in a smooth static halo produced estimates of the shape parameters and depth of a potential that were biased to about 20 percent, and that streams on different orbits placed different constraints on the potential parameters. Their results pointed toward using multiple streams fit simultaneously as we do here. Like our method, @2013MNRAS.433.1826S takes an action-space approach, but leverages the distinctive shape of the angle-frequency distribution of streams rather than their clumpiness in action space, by requiring that the slopes in angle and frequency be the same for a given stream in the correct potential. Their tests, using a single globular-cluster-like stream in a smooth static potential, show that without errors the circular velocity and shape parameter are recovered almost perfectly with this method. (Their tests used a scale-free isothermal potential, so were not subject to the mass-scale degeneracy we observe in our method.) 
However, they also find that the likelihood landscape is complex, with many local minima, and that introducing observational errors can create biases much larger than the formal uncertainties on the parameter estimates, though this can be improved by binning along the stream. They additionally find that the stream’s length and overall orbital phase affects fit performance, with longer streams giving better constraints and streams at apocenter performing better than those near pericenter. These authors thus also argue for combining multiple streams to get a better fix on the potential. A more extended comparison of these different methods, and a treatment of the impact of using multiple streams, will be the subject of a forthcoming paper developed at the Gaia Challenge workshop series[^1] (Sanderson et al. in prep). Our results come with a few caveats. Although we do not need stream membership information to perform our fit, we did use it to select which streams to include in the sample, deliberately preferring thinner streams with a relatively narrow range of masses. This sort of sample is ideal for getting the best possible performance from our fitting algorithm since the action space has many tight clusters (and therefore high information) of similar size (so that smaller clusters are not overwhelmed). In this regard our sample is similar in nature to that used by B14, where the streams are all from a globular cluster model with the same mass and particle number. Furthermore we have not attempted to reproduce the proper number of stellar tracers or the expected observational errors, which will undoubtedly result in larger uncertainties and could also conceivably bias the fit results. However, our results do show that oversimplifying the model to be fit does not fundamentally produce a bias in the recovered mass profile; conversely, improving the model (for example, moving from a spherical to triaxial model) should reduce this contribution to the overall uncertainty. Our fitting method provides guidance on whether one model is a better representation than another: better models should be capable of producing a more clustered action distribution and hence a higher peak value of $D_{KL}^{I}$. We intend to test these two predictions in future work. Acknowledgements ================ RES is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST- 1400989. AH acknowledges financial support from ERC Starting Grant GALACTICA-240271 and a NWO-Vici grant. [^1]: <http://tinyurl.com/gaiachallenge>
--- abstract: 'We studied the influence of oxygen on the electronic trap states in a pentacene thin film. This was done by carrying out gated four-terminal measurements on thin-film transistors as a function of temperature and without ever exposing the samples to ambient air. Photooxidation of pentacene is shown to lead to a peak of trap states centered at 0.28eV from the mobility edge, with trap densities of the order of 10$^{18}$cm$^{-3}$. These trap states need to be occupied at first and cause a reduction in the number of free carriers, i.e. a consistent shift of the density of free holes as a function of gate voltage. Moreover, the exposure to oxygen reduces the mobility of the charge carriers above the mobility edge. We correlate the change of these transport parameters with the change of the essential device parameters, i.e. subthreshold performance and effective field-effect mobility. This study supports the assumption of a mobility edge for charge transport, and contributes to a detailed understanding of an important degradation mechanism of organic field-effect transistors. Deep traps in an organic field-effect transistor reduce the effective field-effect mobility by reducing the number of free carriers and their mobility above the mobility edge.' author: - 'Wolfgang L. Kalb' - Kurt Mattenberger - Bertram Batlogg title: | Oxygen-related traps in pentacene thin films:\ Energetic position and implications for transistor performance --- \[sec:level1\] Introduction =========================== Organic semiconductors are among the most promising candidates for the future’s flexible and low-cost electronics. Apart from the processability from solution or by thermal evaporation, a huge potential lies in tailoring the properties of the organic semiconductor by means of synthetic organic chemistry. Charge carrier mobilities in organic field-effect transistors are comparable to the mobilities in hydrogenated amorphous silicon transistors and are adequate for many applications.[@LinYY19972; @GundlachDJ2008] Critical issues, however, are electrical stability and environmental stability. The electrical stability of both n- and p-type organic semiconductor transistors has recently been shown to be very high if suitable gate dielectrics are used.[@KalbWL2007; @ZhangXH2007] The environmental stability of the organic semiconductor, thus, is an urgent issue to be addressed.[@deLeeuwDM1997; @AnthopoulosTD2007] Studies of the degradation of organic field-effect transistors are rare and indicate that, in the case of p-type organic semiconductors, atmospheric oxygen or ozone is a major cause.[@PannemannC2004; @MaliakalA2004; @DeAngelisF2006; @ChabinycML2007; @KlaukH2007] It is crucial to understand in detail the way in which an oxidation of the organic semiconductor impedes the charge transport and thus degrades the transistor characteristics. The charge transport in crystalline organic semiconductors such as pentacene may be described by assuming a mobility edge which separates extended and localized states. The charge carriers are transported in the extended states above the mobility edge, but are trapped by and thermally released from localized trap states below the mobility edge.[@AndersonPW1972; @WartaW1985; @HorowitzG1995; @PernstichKP2008] The mobility edge may be identified with the valence or conduction band edge. 
However, the parameters dominating charge transport are the trap densities as a function of energy relative to the mobility edge, the number of delocalized charge carriers above the mobility edge and the mobility of the latter charge carriers. In highly disordered organic semiconductors, a description of the charge transport by variable range hopping may be more appropriate. Importantly, this situation can be described by trap-controlled transport in a transport level with a distribution of localized states below the transport level and is thus very similar to the mobility edge picture.[@ArkhipovVI2003] Organic field-effect transistors are excellent tools to study the charge transport in organic semiconductors since the position of the quasi-Fermi level at the dielectric-semiconductor interface can be fine-tuned by applying a gate voltage.[@HorowitzG1995; @SchauerF1999; @LangDV2004; @SalleoA2004; @CalhounMF2007; @KawasakiN2007; @PernstichKP2008] Here we report on an extended study of pentacene thin-film transistors. Organic field-effect transistors were characterized by temperature dependent measurements without ever exposing the samples to ambient air and after controlled exposure to oxygen and light. The effect of oxygen can only be clarified with pristine samples. Moreover, since the field-effect conductivity depends (approximately) exponentially on temperature, temperature is a very sensitive parameter. Another distinct feature of our study is the use of gated four-terminal measurements instead of the commonly employed gated two-terminal measurements. This approach has previously been used to estimate basic parameters, such as the effective field-effect mobility and the contact resistance.[@TakeyaJ2003; @PesaventoPV2004] The contact resistance can significantly affect the field-effect mobility. Consequently, contact effects may also lead to errors, e.g., in the trap densities as extracted from transistor characteristics. In order to assess the fundamental transport parameters, we have developed a scheme for organic field-effect transistors that is easy to use. The approach readily reveals all the key parameters with high accuracy in a straightforward and unambiguous fashion. The scheme is based on a method developed by Grünewald et al. for amorphous silicon field-effect transistors.[@GrunewaldM1980] Instead of estimating the interface potential from the transistor characteristic measured at a single temperature as in the original scheme, we extract the interface potential from the temperature-dependence of the field-effect conductivity. Experimental ============ The system used in this study allows for the fabrication and characterization of organic thin-film transistors without breaking the high vacuum of order 10$^{-8}$mbar between the fabrication and measurement steps. It consists of a prober station connected to an evaporation chamber. The prober station contains a cryostat for temperature-dependent measurements. A schematic drawing of the system can be found in Ref. . Heavily doped Si wafers with a 260nm thick SiO$_{2}$ layer were cut, cleaned with hot solvents, fixed on a sample holder and were then introduced into the cryo-pumped evaporation chamber of the device fabrication and characterization system. After approximately 24h, pentacene purified twice by sublimation was evaporated onto the samples through a shadow mask at a base pressure of the order of 10$^{-8}$mbar. The substrates were kept at room temperature during the evaporation and the final film thickness was 50nm.
The sample holder with the samples was then placed on a shadow mask for the gold evaporation without breaking the high vacuum and gold electrodes were evaporated onto the pentacene. The completed thin-film transistors had a channel length of $L=450$$\mu$m and a channel width of $W=1000$$\mu$m. Moreover, two voltage sensing electrodes were situated at $(1/6)L$ and $(5/6)L$ and had little overlap with the “masked” pentacene film, as schematically shown in Fig. \[device\]. The alignment was achieved by means of a high precision mask positioning mechanism. The sample holder was then transferred to and clamped on the cryostat in the turbo-pumped prober station of the device fabrication and characterization system without breaking the high vacuum (10$^{-8}$mbar). ![\[device\] (Color online) The transistors for the gated four-terminal measurements consisted of a well-defined stripe of pentacene with a width of $W=1000$$\mu$m and gold electrodes. The distance between source and drain was $L=450$$\mu$m. The voltage sensing electrodes were used to measure the potentials $V_{1}$ and $V_{2}$ with respect to the grounded source and were separated by $L'=300$$\mu$m.](KalbW-figure1.ps){width="0.90\linewidth"} A previous study revealed that the device performance improves with time when pentacene thin-film transistors are kept in high vacuum.[@KalbWL20072] Therefore, the devices were kept in the prober station at a pressure of $\approx2\times10^{-8}$mbar for approximately three weeks before starting the study. After that time, the device characteristics were found to be stable on the timescale of several days. The prober station is equipped with five micro-probers. The prober arms are connected to the cryostat with thick copper braids and are thus cooled when the cryostat and the sample holder with the samples is cooled down. For the temperature-dependent gated four-terminal measurements, the source, the drain and the voltage sensing electrodes were contacted with thin gold wires attached to four of the micro-probers. By means of an electrical feedthrough to the cryostat, a gate bias could be applied. In order to measure the temperature on the surface of the samples, an AuFe/Chromel thermocouple was attached on the 5th micro-prober and was carefully pressed against the surface of the sample at each temperature. The electrical measurements were carried out with an HP 4155A semiconductor parameter analyzer. Transfer characteristics were measured with a drain voltage of $V_{d}=-2$V in steps of 0.2V with an integration time of 20ms and a delay time of 100ms. For each gate voltage $V_{g}$, the drain current $I_{d}$ and the potentials $V_{1}$ and $V_{2}$ between the grounded source electrode and the respective voltage sensing electrode were measured (Fig. \[device\]). All electrical measurements were done in the dark. The devices were exposed to oxygen by introducing 1bar of oxygen (purity $\geq99.9999$vol$\%$) into the prober station through a leak valve. The pressure of the oxygen in the prober station was measured with a mechanical pressure gauge. In addition, the samples were also exposed to a combination of 1bar of oxygen and white light from a cold light source (color temperature 3200K). Charge transport parameters =========================== Our approach consists of measuring the transfer characteristics at a low drain voltage ($V_{d}=-2$V). 
This may be understood as staying as close as possible to the “unperturbed” situation where charge is accumulated by a gate voltage in a metal-insulator-semiconductor (MIS) structure but no drain voltage is applied.[@HorowitzG2004] We begin by specifying the basic parameters to be extracted from the transfer characteristics. Field-effect conductivity, field-effect mobility and contact resistance ----------------------------------------------------------------------- As described in Ref.  in more detail, the field-effect conductivity $\sigma$ is given by $$\label{sigma2P} \sigma(V_{g})=\frac{L}{W}\frac{I_{d}}{V_{d}}$$ and is free from contact effects when calculated with $$\label{sigma4P} \sigma(V_{g})=\frac{L'}{W}\frac{I_{d}}{(V_{1}-V_{2})}.$$ The field-effect mobility $\mu_{eff}$ in the linear regime is generally estimated with $$\label{mu2P} \mu_{eff}(V_{g})=\frac{L}{WV_{d}C_{i}}\bigg(\frac{\partial I_{d}}{\partial V_{g}}\bigg)_{V_{d}}.$$ This mobility is not influenced by the contact resistance if calculated with $$\label{mu4P} \mu_{eff}(V_{g})=\frac{L'}{W(V_{1}-V_{2})C_{i}}\left(\frac{\partial I_{d}}{\partial V_{g}}\right)_{V_{d}}.$$ We use the terms “two-terminal conductivity” and “four-terminal conductivity” for the expressions defined respectively in Eq. \[sigma2P\] and Eq. \[sigma4P\]. Eq. \[mu2P\] is the “two-terminal mobility” and Eq. \[mu4P\] is the “four-terminal mobility”. $\mu_{eff}$, as calculated with Eq. \[mu2P\] or Eq. \[mu4P\], is an effective mobility. For a p-type semiconductor such as pentacene, it is a rough estimate of the ratio of the free surface hole density $P_{free}$ to the total surface hole density $P_{total}$ multiplied by the mobility $\mu_{0}$ of the holes in the valence band, i.e. $$\label{allgmu} \mu_{eff}\approx\frac{P_{free}}{P_{total}}\mu_{0}.$$ Assuming a linear voltage drop all along the transistor channel, the total contact resistance can be approximated as $$R_{contact}(V_{g})=\frac{V_{d}-(V_{1}-V_{2})L/L'}{I_{d}}$$ and should be compared to the channel resistance of the device as given by $$R_{channel}(V_{g})=\frac{(V_{1}-V_{2})L/L'}{I_{d}}.$$ In order to gain a deeper insight into the physics of the organic semiconductor in a field-effect transistor, a more sophisticated description is developed in the following. Spectral density of trap states and free hole density {#demo1} ----------------------------------------------------- The transistor characteristics critically depend on trap states. We treat the polycrystalline pentacene layer as uniform and assume that trap states on the surface of the gate dielectric only contribute to a non-zero flatband voltage. Consequently, the trap densities to be determined are an average over in-grain and grain boundary regions and may also be influenced, to some extent, by trap states on the surface of the gate dielectric. In the following, we outline how a straightforward conversion of Poisson’s equation along with boundary conditions eventually leads to the key equations. We assume that the electrical potential $V(x)$ at the surface of the pentacene film of thickness $d$ vanishes under all biasing conditions, i.e. $$V(x=d)=0.$$ The electric field $F$ at this position is also assumed to drop to zero: $$F=-\frac{d}{dx}V(x=d)=0.$$ This is reasonable as long as the pentacene film is thicker than the decay length of the potential.[@GrunewaldM1980] The situation is depicted in Fig. \[transistor\]. The dielectric displacement at the insulator-semiconductor interface must be continuous, i.e.
for a zero flatband voltage $$\label{dielectricstrength} \epsilon_{i}\frac{V_{g}-V_{0}}{l}=-\epsilon_{s}\frac{d}{dx}V(x=0).$$ $l$ is the thickness of the gate dielectric, $\epsilon_{i}$ and $\epsilon_{s}$ are the dielectric constants of the gate insulator and the semiconductor and $V(x=0)=V_{0}$ is the interface potential. As detailed in Ref. , a conversion of Poisson’s equation with these boundary conditions eventually leads to an expression for the total hole density $p$ as a function of the interface potential $V_{0}$: $$\label{holes} p(V_{0})=\frac{\epsilon_{0}\epsilon_{i}^{2}}{\epsilon_{s}l^{2}e}U_{g}\left(\frac{dV_{0}}{dU_{g}}\right)^{-1}.$$ $p$ (lower case) denotes a volume density of holes. The volume density depends on the distance $x$ from the insulator-semiconductor interface, i.e. on the electrical potential $V(x)$ in the semiconductor. The volume density $p$ and the surface hole density $P$ are related by integrating over the depth of the whole film, i.e. $P=\int_{0}^{d} p(x)dx$. Eq. \[holes\] yields the functional dependence of the volume density of holes on the potential $V_{0}$. Moreover, $$U_{g}=|V_{g}-V_{FB}|$$ in Eq. \[holes\] is the gate voltage above the flatband voltage $V_{FB}$. Since the total hole density $p$ can be written as $$p(V)=\int_{-\infty}^{+\infty}N(E)\left[f(E-eV)-f(E)\right]dE,$$ its derivative is given by $$\label{holedensity} \frac{1}{e}\frac{dp(V)}{dV}=\int_{-\infty}^{+\infty}N(E)\left|\frac{df(E-eV)}{d(E-eV)}\right|dE.$$ Eq. \[holedensity\] is a convolution of the density of states function $N(E)$ with the derivative of the Fermi function. We approximate the Fermi function with a step function according to the common zero-temperature approximation.[@HorowitzG1995; @KalbWL20072; @BragaD2008] Its derivative then is a delta function and we eventually have $$\label{DOS} \frac{1}{e}\frac{dp(V_{0})}{dV_{0}}\approx N(E_{F}+eV_{0}).$$ ![\[transistor\] (Color online) Potential drop across the gate insulator (thickness l, dielectric constant $\epsilon_{i}$) and the pentacene thin film (thickness d, dielectric constant $\epsilon_{s}$). Most of the gate voltage drops across the gate dielectric. At the insulator-semiconductor interface the potential is $V_{0}$.](KalbW-figure2.ps){width="0.60\linewidth"} From Eq. \[holes\] and Eq. \[DOS\] we can see that the interface potential $V_{0}$ as function of the gate voltage is the key to the density of states function (DOS). Since the change of the interface potential and the change of the drain current with gate voltage are linked, it is possible to extract the interface potential from the transfer characteristic measured at a single temperature.[@GrunewaldM1980] Once the interface potential is known, the trap DOS in the mobility gap can be estimated with Eq. \[holes\] and Eq. \[DOS\]. Thus, the trap densities can be plotted as a function of the band shift $eV_{0}$ at the interface, which is the energy of the traps relative to the Fermi energy of the sample (Fig. \[bandalt\]). In a previous study we have applied this method to four-terminal conductivity data.[@KalbWL20072] For the present study we have advanced the extraction scheme. As described in the following, we used gated four-terminal measurements at various temperatures to estimate the interface potential. We then used Eq. \[holes\] and Eq. \[DOS\] to calculate the DOS. The consistency of the assumption of charge transport above a mobility edge with the temperature-dependent measurements provides a greater degree of confidence to any conclusion. 
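As an illustration of how Eq. \[holes\] and Eq. \[DOS\] can be evaluated in practice, the sketch below computes the trap DOS numerically once the interface potential $V_{0}$ is known on a grid of gate voltages. The array names, the finite-difference scheme and the oxide permittivity ($\epsilon_{i}\approx3.9$ for SiO$_{2}$) are illustrative assumptions on our part, not values prescribed by the text.

```python
import numpy as np

E_CHARGE = 1.602e-19   # elementary charge [C]
EPS0 = 8.854e-12       # vacuum permittivity [F/m]

def trap_dos_from_V0(U_g, V_0, eps_i=3.9, eps_s=3.0, l_ins=260e-9):
    """Evaluate Eq. [holes] and Eq. [DOS] by finite differences.

    U_g : gate voltage above flatband [V], monotonically increasing array
    V_0 : interface potential at each U_g [V]
    Returns (E, N): energies E = e*V_0 relative to E_F [eV] and the trap DOS
    N [m^-3 eV^-1] (multiply by 1e-6 for cm^-3 eV^-1).
    """
    dV0_dUg = np.gradient(V_0, U_g)
    # Eq. [holes]: total volume hole density p(V_0) in m^-3
    p = EPS0 * eps_i**2 / (eps_s * l_ins**2 * E_CHARGE) * U_g / dV0_dUg
    # Eq. [DOS]: N(E_F + e*V_0) ~ (1/e) dp/dV_0; with energies quoted in eV
    # this is numerically just dp/dV_0 (a band shift of 1 V corresponds to 1 eV).
    return V_0.copy(), np.gradient(p, V_0)
```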
Moreover, the latter approach has the advantage of eventually giving the DOS as a function of energy relative to the mobility edge. ![\[bandalt\] (Color online) Gate voltage induced band bending at the insulator-semiconductor interface. At a given gate voltage, the band shift at the interface is $eV_{0}$. The energy of trap states with an energy $E'$ is raised at the interface and now coincides with the Fermi energy $E_{F}$ of the sample. The energy of these trap states relative to the energy of the mobility edge $E_{V}$ is $E_{V}-E_{F}-eV_{0}$ and is approximated by the experimentally available activation energy $E_{a}$ of the field-effect conductivity.](KalbW-figure3.ps){width="0.55\linewidth"} We now show that the activation energy $E_{a}(V_{g})$ of the field-effect conductivity as defined by $$\label{sigmaacti} \sigma(V_{g})=A\exp\left(-\frac{E_{a}}{kT}\right)$$ and as determined with Arrhenius plots is related to the band shift $eV_{0}$ at the insulator-semiconductor interface. Following Boltzmann’s approximation, the field-effect conductivity $$\label{sigmanorm} \sigma(V_{g})=e\mu_{0}P_{free}$$ may be written as $$\begin{aligned} \label{sigmaboltz} \sigma(V_{g})=e\mu_{0}\int_{0}^{d}p_{free}dx= \nonumber \\ =e\mu_{0}N_{V}\exp\left(-\frac{E_{V}-E_{F}}{kT}\right)\times\int_{0}^{d}\exp\left(\frac{eV(x)}{kT}\right)dx.\end{aligned}$$ $N_{V}$ is the effective (volume) density of extended states, $E_{V}$ is the energetic position of the mobility edge and $E_{F}$ is the Fermi energy. As an example we consider an exponential trap DOS $$N(E)=N_{0}\exp\left(\frac{E}{kT_{0}}\right)$$ with a characteristic slope of $kT_{0}$. If we assume that all the gate-induced charge is trapped, integration of the exponential trap DOS leads to a simple exponential dependence of the total hole density $p$ on the potential $V(x)$, which is $$\label{revision1} p\propto\exp\left(\frac{eV}{kT_{0}}\right).$$ This approximation is not expected to lead to serious errors as long as the majority of the gate-induced charge is trapped.[@SpearWE1972] With Eq. \[revision1\] it can be shown that, except for small values of $V$, also the electric field $F$ perpendicular to the insulator-semiconductor interface exponentially depends on $V$, as $$\label{revision2} F\propto\exp\left(\frac{eV}{2kT_{0}}\right).$$ Ref.  shows in detail how the electric field as in Eq. \[revision2\] results in an expression for the free surface hole density $P_{free}$ in Eq. \[sigmanorm\]. This expression is $$\begin{aligned} \label{pfree} P_{free}\approx\frac{2kT\epsilon_{0}\epsilon_{s}}{eC_{i}U_{g}}\frac{l}{2l-1}N_{V}\exp\left(-\frac{E_{V}-E_{F}}{kT}\right)\times\nonumber \\ \times\left[\exp\left(\frac{eV_{0}}{kT}\right)-\exp\left(\frac{eV_{0}}{2lkT}\right)\right]\end{aligned}$$ where $$l=\frac{T_{0}}{T}.$$ The second exponential term in Eq. \[pfree\] can safely be neglected and so we have $$\label{pfreeapprox} P_{free}\approx L_{a}\frac{l}{2l-1}N_{V}\exp\left(-\frac{E_{V}-E_{F}-eV_{0}}{kT}\right),$$ with $$L_{a}=\frac{2kT\epsilon_{0}\epsilon_{s}}{eC_{i}U_{g}}.$$ The factor $l/(2l-1)$ is expected to be close to unity and $L_{a}$ may be understood as the effective thickness of the accumulation layer.[@HorowitzG2000] Small deviations from an exponential trap DOS might be considered with a variable parameter $l$. However, the variation of $l$ can be ignored when compared to the exponential term in Eq. \[pfreeapprox\]. If we compare Eq. \[pfreeapprox\] with Eq. \[sigmaacti\] and Eq. 
\[sigmanorm\], we eventually have $$\label{energy} E_{a}\approx E_{V}-E_{F}-eV_{0}=E_{V}-E'_{F}.$$ The measured activation energy of the field-effect conductivity $E_{a}(V_{g})$ is approximately equal to the energetic difference between the Fermi level $E_{F}$ and the mobility edge at the interface, as indicated in Fig. \[bandalt\]. $E'_{F}$, as defined in Eq. \[energy\], is the quasi-Fermi level at the insulator-semiconductor interface. By substituting $dV_{0}=-dE_{a}/e$ in Eq. \[holes\] and Eq. \[DOS\], we finally have the DOS $$\label{DOSfinal} N(E)\approx\frac{d}{dE_{a}}\left[\frac{\epsilon_{0}\epsilon_{i}^{2}}{\epsilon_{s}l^{2}}U_{g}\left(\frac{dE_{a}}{dU_{g}}\right)^{-1}\right]$$ as a function of the energy $E=E_{V}-E'_{F}\approx E_{a}(V_{g})$ relative to the mobility edge. Fraction of free holes and band mobility {#demo3} ---------------------------------------- The fraction of free holes $P_{free}/P_{total}$ is of crucial importance since it is proportional to the effective field-effect mobility as described by Eq. \[allgmu\]. It can readily be extracted from temperature dependent measurements. From Eq. \[pfreeapprox\] and Eq. \[energy\] and with the total surface hole density $$\label{ptotal} P_{total}=C_{i}U_{g}/e,$$ we eventually have $$\label{rat} \frac{P_{free}}{P_{total}}=\frac{L_{a}e}{C_{i}U_{g}}\frac{l}{2l-1}N_{V}\exp\left(-\frac{E_{a}}{kT}\right).$$ Finally, from Eq. \[sigmanorm\] we see that the band mobility $\mu_{0}$ can be estimated with $$\label{bandband} \mu_{0}=\sigma/(eP_{free}).$$ Results ======= Extraction method and the influence of the contact resistance {#demo2} ------------------------------------------------------------- In this section, we demonstrate the extraction of the DOS and the hole densities from a set of gated four-terminal measurements. Moreover, we analyze the influence of the contact resistance on these functions. In a first step, we derived the activation energy $E_{a}(V_{g})$ of the four-terminal conductivity as a function of the gate voltage, according to Eq. \[sigmaacti\]. Fig. \[arrhenius\] shows Arrhenius plots and the corresponding linear regression lines. We found that only currents equal to or above $\approx1$nA are usable in the sense that the corresponding four-terminal conductivity follows a straight line in an Arrhenius plot. Therefore, we used all currents above 1nA for the extraction of the activation energy. At low gate voltages, only the measurements at the highest temperatures were considered as a consequence of the 1nA limit. The final result is shown in Fig. \[activation\]. The activation energy $E_{a}(V_{g})$ was then represented by a smooth fit (red/gray line in Fig. \[activation\]) in order to suppress the noise in the data. ![\[arrhenius\] (Color online) Arrhenius plots of the four-terminal conductivity at various gate voltages $V_{g}$. The activation energy $E_{a}(V_{g})$ was derived from the slope of the linear regression lines.](KalbW-figure4.ps){width="0.90\linewidth"} ![\[activation\] (Color online) Activation energy $E_{a}(V_{g})$ as determined with linear regressions according to Eq. \[sigmaacti\] and as verified with Arrhenius plots (Fig. \[arrhenius\]). The graph also shows a smooth fit of the activation energy (red/gray line).](KalbW-figure5.ps){width="0.90\linewidth"} Finally, the DOS was obtained with Eq. \[DOSfinal\] and is plotted as a function of the energetic distance to the mobility edge $E_{a}(V_{g})\approx E_{V}-E'_{F}$. 
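The two steps just described (an Arrhenius regression for $E_{a}(V_{g})$ followed by Eq. \[DOSfinal\]) could be implemented along the following lines. This is a schematic sketch only: the array names are illustrative, the 1nA cutoff and the smoothing of $E_{a}(V_{g})$ are omitted, and the oxide permittivity is again our assumption.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant [eV/K]

def activation_energy(T, sigma):
    """Eq. [sigmaacti]: fit sigma(T) = A*exp(-E_a/kT) at one gate voltage.

    T     : temperatures [K]
    sigma : four-terminal conductivities measured at those temperatures
    Returns E_a [eV] from the slope of ln(sigma) versus 1/T.
    """
    slope, _intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
    return -slope * K_B

def trap_dos_from_Ea(U_g, E_a, eps_i=3.9, eps_s=3.0, l_ins=260e-9):
    """Eq. [DOSfinal]: trap DOS from the activation energy E_a(U_g).

    U_g in volts above flatband, E_a in eV. Returns N in m^-3 eV^-1 (multiply
    by 1e-6 for cm^-3 eV^-1). With E_a expressed in eV, the 1/e factor below
    puts the bracketed term on a per-eV energy scale so that the outer
    derivative can be taken directly with respect to E_a.
    """
    EPS0, E_CHARGE = 8.854e-12, 1.602e-19
    dEa_dUg = np.gradient(E_a, U_g)
    bracket = EPS0 * eps_i**2 / (eps_s * l_ins**2 * E_CHARGE) * U_g / dEa_dUg
    # N(E) ~ d(bracket)/dE_a, evaluated at E = E_V - E'_F ~ E_a(V_g)
    return np.gradient(bracket, E_a)
```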
For the calculations, the dielectric constant of pentacene was assumed to be $\epsilon_{s}=3$. In order to determine $U_{g}=|V_{g}-V_{FB}|$ in Eq. \[DOSfinal\], the flatband voltage $V_{FB}$ was assumed to be equal to the device onset voltage at room temperature. The onset voltage is the gate voltage where the drain current sharply rises when plotted on a logarithmic scale, i.e. where the drain current becomes measurable. The flatband voltage marks the onset of the accumulation regime and a small difference between the flatband voltage and the onset voltage may thus exist. A scheme to extract the flatband voltage was developed for amorphous silicon-based transistors and this scheme involves the temperature-dependence of the device off-current.[@WeisfieldRL1981] The scheme can, however, not be applied to our devices because the off-currents are due to experimental limitations and are not related to the conductivity of the pentacene thin film. Fig. \[DOScomp\] (circles) shows the DOS as derived from the activation energy in Fig. \[activation\]. The procedure was also applied to the same data without correcting for the contact resistance, i.e. to the two-terminal conductivity. The dashed line in Fig. \[DOScomp\] is the result, highlighting the necessity to correct for the contacts. Even for long channel devices ($L=450$$\mu$m), neglecting the contact resistance leads to significant errors in the shape and magnitude of the DOS, even more so closer to the mobility edge. ![\[DOScomp\] Density of traps as a function of energy relative to the mobility edge (circles). The dashed line was extracted from the same set of temperature dependent measurements, but the contact resistance was neglected. The contact resistance can lead to significant errors in the density of states function, particularly closer to the mobility edge.](KalbW-figure6.ps){width="0.90\linewidth"} The free hole density, the total hole density and the fraction of free holes was obtained with Eq. \[pfreeapprox\], Eq. \[ptotal\] and Eq. \[rat\], respectively. We have assumed that the effective density of extended states $N_{V}$ is equal to the density of the pentacene molecules, i.e. $N_{V}=3\times10^{21}$cm$^{-3}$ and Fig. \[ratiometh\] is the result. For the given sample at high gate voltages, $\approx10$% of the holes that are induced by the gate are free, i.e. only this fraction actually contributes to the drain current. ![\[ratiometh\] Total hole density $P_{total}=C_{i}U_{g}/e$, free hole density $P_{free}$ and fraction of free holes $P_{free}/P_{total}$ at room temperature as derived from the temperature dependence of the four-terminal conductivity. A significant fraction of the total gate-induced charge is trapped even at high gate voltages. The dashed line is the ratio $P_{free}/P_{total}$ if the contact resistance is neglected.](KalbW-figure7.ps){width="1.00\linewidth"} Oxygen-related device degradation --------------------------------- This study correlates the oxygen-related degradation of the transistor characteristics with the change of the fundamental transport parameters. We begin by presenting the characteristic effects of oxygen on the pentacene transistor characteristics. The blue (gray) line in Fig. \[compcurrents\] is a transfer characteristic measured as grown (after a high vacuum storage time of approximately 3 weeks). The sample was then exposed to 1bar of oxygen for 19h and, additionally, to 1bar of oxygen and white light for 4h. The red (gray) curve in Fig. 
\[compcurrents\] is a measurement of the same device after the oxidation process and after an evacuation time of 22h at a base pressure of the order of $10^{-8}$mbar. Fig. \[compcurrents\] contains the forward and the reverse sweeps in both cases. Fig. \[compmob\] shows the corresponding four-terminal mobilities and, for comparison, the respective two-terminal mobilities. The degradation effects are the following: a significant degradation of the subthreshold performance, a decrease in on-current, a decrease in effective mobility and a shift of the transfer characteristic towards more positive voltages. Also the contact resistance is increased after the oxygen exposure. At room temperature and $V_{g}=-50$V it increases from the as grown value of $R_{contact}W=2.8\times10^{4}$$\Omega$cm to $9.2\times10^{4}$$\Omega$cm after the oxygen exposure, i.e. it increases by a factor of 3.3. The increase in contact resistance may be due to the fact that the contact resistance is dominated by the film resistance. In a top-contact device, the holes must pass from the electrodes through the pentacene film to the channel at the insulator-semiconductor interface.[@PesaventoPV2006; @KalbWL20072] Importantly, before and after the oxygen exposure we have a device that is not limited by the contact resistance. At $V_{g}=-50$V for example, the contact resistance $R_{contact}$ is $\approx14$ times smaller than the channel resistance $R_{channel}$ prior to the oxygen exposure and $\approx9$ times smaller than the channel resistance after the oxygen exposure. ![\[compcurrents\] (Color online) Linear regime transfer characteristic of a pentacene thin-film transistor measured as grown (blue/gray line) and after oxidation (red/gray line). The graph shows the forward and the reverse sweeps in both cases. The characteristic oxygen-related degradation effects are a decrease in subthreshold performance, a decrease in on-current and a shift of the transfer characteristic to more positive voltages. The current hysteresis is essentially unaffected.](KalbW-figure8.ps){width="0.90\linewidth"} ![\[compmob\] (Color online) Four-terminal effective mobility (4-t.) from an as grown sample (blue/gray pentagons) and after the oxygen exposure (red/gray circles). At $V_{g}=-50$V for example, the contact-corrected field-effect mobility decreases from $\mu_{eff}=0.35$cm$^{2}$/Vs to 0.17cm$^{2}$/Vs, i.e. it is reduced by a factor of 2.1. The dashed lines represent the respective two-terminal mobility (2-t.) where the contact resistance is neglected.](KalbW-figure9.ps){width="0.90\linewidth"} The degradation effects can be observed when the transistor is subjected to oxygen in the dark. It is, however, much accelerated when the oxygen exposure is carried out in the presence of light, i.e. in the presence of activated oxygen and oxygen radicals. The degradation of the subthreshold performance and the field-effect mobility is due to oxygen-related defects that cause electrically active trap states within the mobility gap. The shift of the transfer characteristic is due to a change of the flatband voltage. It is known that oxygen can cause changes of the flatband voltage in an organic semiconductor device.[@MeijerEJ2004; @WangA2006] Oxygen-related traps -------------------- We now turn to the determination of the trap densities prior to and after the oxygen exposure. For the temperature-dependent gated four-terminal measurements, we used a low cooling rate ($0.2-0.25$$^{\circ}$/min) in order not to damage the sample. In Fig. 
\[together\] we show the temperature dependence of the transfer characteristics prior to and after the oxygen exposure. The temperature dependent measurements in Fig. \[together\] are from the same sample as the measurements in Fig. \[compcurrents\] and Fig. \[compmob\] and were carried out shortly after these measurements. The main difference after the oxygen exposure is that the temperature dependence in the subthreshold regime (drain currents on a logarithmic scale) is much more pronounced. ![image](KalbW-figure10.ps){width="1.0\linewidth"} The DOS was extracted for both sets of measurements as described in Secs. \[demo1\] and \[demo2\]. The main panel of Fig. \[DOSgraph\] shows the final result on a logarithmic scale. The difference between two adjacent data points in Fig. \[DOSgraph\] corresponds to a change of 0.2V in the gate voltage. Some gate voltages $U_{g}$ are indicated in the graph. The spacing between the data points decreases as the gate voltage is increased since at high gate voltages it is increasingly difficult to shift the quasi-Fermi level $E'_{F}$ towards the mobility edge due to the increased trap density. ![\[DOSgraph\] (Color online) Main panel: DOS as a function of energy relative to the mobility edge on a logarithmic scale. The blue (gray) pentagons are trap densities measured prior to the oxygen exposure and the red (gray) circles are trap densities measured after the oxygen process. The oxidation of the pentacene film leads to a significant increase in traps that are somewhat deeper in energy. The corresponding gate voltage $U_{g}$ above flatband is indicated. The inset shows the deeper traps on a linear scale.](KalbW-figure11.ps){width="0.90\linewidth"} We keep in mind that even in an ideal (trap-free) MIS structure, the interface potential increases with gate voltage more rapidly at low gate voltages, than at high gate voltages. This is a screening effect. The screening depends on the total charge in the device and it increases with gate voltage. The oxygen exposure leads to a significant increase in the density of traps that are somewhat deeper in energy (Fig. \[DOSgraph\]). The inset in Fig. \[DOSgraph\] shows the deeper traps on a linear scale. In Fig. \[peak\] we show the difference of the trap densities prior to and after the oxygen exposure on a linear scale (black line). We assume that our method allows for a determination of the DOS to an accuracy of $5$% and this is indicated by the error bars in Fig. \[peak\]. At energies $\leq0.25$eV from the mobility edge the difference in the DOS is comparable to or smaller than the estimated error and so for energies $\leq0.25$eV, the DOS is essentially unaffected by the oxygen exposure. At larger energies, however, the oxygen exposure leads to a broad peak of trap states. The dotted green (gray) line in Fig. \[peak\] is a Gaussian fit of the experimental data for energies $\geq0.25$eV and the dashed red (gray) line is a Lorentzian fit. In both cases good agreement is achieved. The Lorentzian fitting function is centered at $E_{C}=0.28$eV and has a width of 0.16eV and a height of $2.2\times10^{19}$cm$^{-3}$eV$^{-1}$. The area under the peak gives a volume trap density of $\approx4\times10^{18}$cm$^{-3}$. With a density of the pentacene molecules of $3\times10^{21}$cm$^{-3}$ this gives an oxygen-related impurity concentration of $\approx0.1$% provided that each impurity results in one trap. ![\[peak\] (Color online) Difference of the DOS prior to and after the oxygen exposure (black line). 
A relative error of 5% is assumed for the determination of a trap DOS and this is indicated by the error bars. The oxygen exposure leads to a broad peak of trap states. A Gaussian fit for energies $\geq0.25$eV (dotted green/gray line) gives good agreement with the measured curve and so does a Lorentzian fit (dashed red/gray line). The peak is centered at $E_{C}=0.28$eV and the width and height of the peak are respectively 0.16eV and $2.2\times10^{19}$cm$^{-3}$eV$^{-1}$. At energies $\leq0.25$eV the difference in the DOS is comparable to or smaller than the estimated error and the DOS is essentially unaffected by the oxygen exposure.](KalbW-figure12.ps){width="0.90\linewidth"} The DOS close to the mobility edge is well described by a single exponential function. A fit for energies $\leq0.22$eV gives essentially identical characteristic slopes prior to and after the oxygen exposure, i.e. respectively $kT_{0}=47$meV and $kT_{0}=48$meV. These values are in good agreement with characteristic slopes from pentacene-based field-effect transistors in the literature. A characteristic slope of $kT_{0}=40$meV is reported as determined by simulating the measured transfer characteristics of pentacene thin-film transistors.[@VoelkelAR2002] Characteristic slopes of $kT_{0}=32-37$meV were derived from pentacene thin-film transistors with another device simulation program.[@OberhoffD2007; @PernstichKP2008] Yet another program gives a slope of $kT_{0}=100$meV for pentacene thin-film transistors.[@ScheinertS2007] However, the band mobility $\mu_{0}$ needs to be fixed for the simulations and, depending on the choice of the band mobility, slopes of up to 400meV are also used in Ref. . With the initial scheme by Grünewald et al. and from similar pentacene thin-film transistors as in the present study, we have extracted characteristic slopes of $kT_{0}=32$meV shortly after the evaporation of the pentacene and of $kT_{0}=37$meV in the aged thin film with reduced trap density.[@KalbWL20072] For a pentacene single-crystal device, a characteristic slope of $kT_{0}=109$meV is reported.[@LangDV2004] Trap-induced changes in the free hole density --------------------------------------------- The upper panel in Fig. \[ratiocomp\] shows the free surface hole density (Eq. \[pfreeapprox\]) prior to and after the oxygen exposure from the two sets of temperature dependent measurements in Fig. \[together\]. The parameter $l=T_{0}/T$ was calculated with the characteristic slopes mentioned above: for room temperature $l=1.9$ ($l/(2l-1)=0.68$) prior to and after the oxygen exposure. At sufficiently high gate voltages, the free hole density as a function of gate voltage is shifted by 9V towards higher gate voltages as a consequence of the oxygen exposure. ![\[ratiocomp\] (Color online) Upper panel: free hole density $P_{free}$ prior to and after the oxygen exposure as derived from the temperature-dependent gated four-terminal measurements. After the oxygen exposure the curve is shifted by 9V. The magnitude of the shift is closely linked to the density of the additional traps. Additional trapped holes with a density of $5\times10^{18}$cm$^{-3}$ can be estimated from the shift, which is highly consistent with the trap density estimated from an integration of the peak in Fig. \[peak\] ($4\times10^{18}$cm$^{-3}$). Lower panel: corresponding fraction of free holes $P_{free}/P_{total}$ prior to and after the oxygen exposure.
At a given gate voltage $U_{g}$, the fraction of free holes is significantly reduced after the oxygen exposure. At $U_{g}=40$V for example, the fraction of free holes drops from 15% to 8%, i.e. it is reduced by a factor of 1.9.](KalbW-figure13.ps){width="1.00\linewidth"} The corresponding fractions of the free holes $P_{free}/P_{total}$ were extracted according to Eq. \[rat\] and are shown in the lower panel of Fig. \[ratiocomp\]. The fraction of the free holes changes significantly due to the oxygen exposure. At $U_{g}=40$V for example, 15% of all the induced holes are free prior to the oxygen exposure and this fraction drops to 8% after the oxygen exposure. It is reduced by a factor of 1.9. A fraction of free holes of $8-15$% at 40V is in good agreement with values for pentacene thin-film transistors found in the literature. In Ref. , a fraction of free holes around 10% is specified for comparable total gate-induced charge densities. Stability of the oxygen-related defects --------------------------------------- The DOS after the oxygen exposure was measured after a re-evacuation time of $\approx22$h. In order to elucidate the stability of the oxygen-related traps, the sample was kept in the prober station at 10$^{-8}$mbar for an additional 7days period. After that time, temperature dependent gated four-terminal measurements were carried out and the DOS was extracted. After these measurements, the sample was kept at 10$^{-8}$mbar for another 10 days. The pentacene films were then slowly heated to 50$^{\circ}$C at a rate of $0.2$$^{\circ}$/min with an electrical heating element at the cryostat. The temperature was held for 2h and the sample was then left to cool down. The same procedure was repeated with a final temperature of 70$^{\circ}$C. Due to the low heating and even lower cooling rates, the whole process took 3 days and the effective heating time was very long. Fig. \[dosheating\] shows the DOS after a re-evacuation time of $\approx1$day (same as in Fig. \[DOSgraph\]), 8days and 22days, the latter time including the heating procedure. The DOS functions are very similar and we can conclude that the oxygen-related trap states are very stable. ![\[dosheating\] (Color online) Trap densities after oxygen exposure. The re-evacuation time after the oxygen exposure is 1day (full red/gray line), 8days (dashed black line) and 22days (dash-dotted green/gray line). The oxygen-related traps are very stable, i.e. the DOS functions coincide. Prior to the last characterization after 22days, the sample was slowly heated to temperatures up to 70$^{\circ}$C.](KalbW-figure14.ps){width="0.90\linewidth"} Discussion ========== Effect of oxygen on the trap DOS -------------------------------- The DOS as extracted from the measurements of the as grown sample is of particular interest since the sample was kept at a pressure of the order of 10$^{-8}$mbar all along. The trap densities are relatively high ($10^{18}-10^{21}$cm$^{-3}$eV$^{-1}$) with a rather smooth dependence on energy. In the case of the measurements on the as grown samples, the effect of ambient gases can be excluded. It should be kept in mind that we used pentacene powder that was re-crystallized in high vacuum twice. We conclude that the “amorphouslike” trap DOS measured with an as grown sample is mainly due to structural defects within the pentacene. Trap states on the surface of the gate dielectric, caused by certain chemical groups for example, may also contribute to the states that are deeper in energy. 
When pentacene is exposed to oxygen, the gas migrates into the pentacene film and interacts with the pentacene molecules. This effect is expected to be accelerated if, in the presence of light, oxygen is activated and its dissociation is aided. We observe significant and irreversible changes in the transfer characteristics and in the DOS caused by the oxygen exposure. It should be kept in mind, however, that several hours of exposure to 1bar of oxygen are necessary in order to observe these changes. Consequently, pentacene thin-films are not very sensitive towards oxidation. The oxygen exposure leads to a broad peak of trap states centered at 0.28eV, as shown in Fig. \[peak\]. This suggests the degradation mechanism to be dominated by the creation of a specific oxygen-related defect. The large width of the peak (0.16eV) is thought to result from local structural disorder that modifies the on-site energy of the oxygen-affected molecules. As a matter of fact, very similar arguments are used to explain the smooth distribution of trap states in hydrogenated amorphous silicon. Even small deviations in the local structure of a defect lead to a different electronic state.[@StreetRA1991] Theoretical studies predict various types of oxygen-related defects in pentacene.[@NorthrupJE2003; @TsetserisL2007] In Ref.  oxygen defects are discussed in which a H atom of a pentacene molecule is replaced by an oxygen atom to form a C$_{22}$H$_{13}$O molecule. The oxygen forms a double bond with the respective C atom and the p$_{\mathrm{z}}$ orbital at this atom no longer participates in the $\pi$-electron system of the pentacene molecule. The oxidation at the middle ring is shown to be energetically most favorable. These oxygen defects are expected to lead to trap states in the mobility gap.[@NorthrupJE2003] In Ref.  other oxygen defects are described. An example is a single oxygen intermolecular bridge where a single oxygen atom is covalently bound to the carbon atoms on the center rings of two neighboring pentacene molecules. This defect, for instance, is calculated to lead to electrically active traps at 0.33 and 0.4eV above the valence band maximum.[@TsetserisL2007] Influence of oxygen-related traps on the field-effect mobility -------------------------------------------------------------- It is immediately plausible that the oxygen-related traps which are somewhat deeper in energy lead to a degradation of the subthreshold performance of the thin-film transistors. We do, however, also observe a significantly decreased field-effect mobility after oxygen exposure. This can be understood as follows. The deep traps that are created by the oxidation need to be filled at first and the position of the quasi-Fermi level lags behind the position of the quasi-Fermi level before the oxygen exposure. This is indicated in Fig. \[DOSgraph\] by labeling the corresponding gate voltages $U_{g}$. At the same gate voltage (which is proportional to the total gate-induced hole density), the quasi-Fermi level is further away from the mobility edge. The fraction of free holes, however, depends exponentially on the position of the quasi-Fermi level (Eq. \[rat\]). The field-effect mobility as described by Eq. \[allgmu\] is proportional to $P_{free}/P_{total}$ and so a reduction in the fraction of free holes readily affects the field-effect mobility. At $U_{g}=40$V, for example, the fraction of free holes is reduced by a factor of 1.9 after the oxygen exposure (main panel of Fig. \[ratiocomp\]). 
In addition, it is quite possible that the mobility $\mu_{0}$ of the delocalized charge above the mobility edge is changed after the oxygen exposure. With Eq. \[bandband\], this mobility is estimated to be $\mu_{0}=1.2$cm$^{2}$/Vs prior to the oxygen exposure and $\mu_{0}=0.95$cm$^{2}$/Vs after the oxygen exposure. We have a reduction by a factor of 1.3 and a change of the “intrinsic” charge transport. In summary, the major cause of the reduction in the effective field-effect mobility is occupancy statistics, although a reduction of the mobility above the mobility edge also plays a role. The reduction in the mobility $\mu_{0}$ might be explained by a scattering of charge carriers at the oxygen-related defects. Another indication that scattering plays an important role in organic field-effect transistors is the fact that the mobilities $\mu_{0}$ that we extract are lower than the best field-effect mobilities (up to 5cm$^{2}$/Vs) from pentacene thin-film transistors.[@KelleyTW2003] In addition, repeated purification of pentacene has been shown to lead to very high mobilities in pentacene single crystals.[@JurchescuOD2004] This effect is attributed to reducing the concentration of the oxidized pentacene species 6,13-pentacenequinone, which degrades the transport properties by scattering the charge carriers.[@JurchescuOD2004] Trapped holes vs. traps ----------------------- The upper panel in Fig. \[ratiocomp\] shows that the oxidation causes a shift of the curve for the free hole density $P_{free}$ by $\Delta U_{g}=9$V. The same free hole density $P_{free}$ is realized for different total hole densities $P_{total}$. Clearly, for an identical number of free holes the difference in the number of total holes must be attributed to a difference in the number of the trapped holes. Consequently, due to the oxygen-related traps we have additional holes that are trapped with a density of $C_{i}\Delta U_{g}/e=7.5\times10^{11}$cm$^{-2}$. Except at very low gate voltages above the flatband voltage, the charge in an organic field-effect transistor is concentrated at the insulator-semiconductor interface. As explained above, our extraction scheme only considers currents above 1nA. Therefore, it is reasonable to assume that the holes are trapped in a region at the insulator-semiconductor interface with a thickness of the order of one molecular layer ($\approx1.5$nm for pentacene). This gives a volume density of trapped holes of $\approx5\times10^{18}$cm$^{-3}$. By integrating the peak in the trap DOS we have derived a trap density of $\approx4\times10^{18}$cm$^{-3}$, which is in very good agreement with the density of the trapped holes. Deep traps and device performance --------------------------------- This study reveals how an increase in the density of deeper traps can significantly affect the field-effect mobility. The influence of deep traps on the device characteristics is of general concern because deep traps can have various origins. Trap states due to the surface of the gate dielectric, for example, are expected to be electronically deep traps.
Modifying the gate dielectric with a self-assembled monolayer or using a polymeric gate dielectric not only leads to an improved subthreshold swing but can also result in improved mobility.[@LinYY19972; @VeresJ2004] This also holds in the case of organic single crystal transistors where the semiconductor is grown separately and growth-related effects can be excluded.[@PodzorovV2003; @GoldmannC20062; @KalbWL2007] In the light of the present study, it seems possible that these effects can be solely understood with transport in extended states above a mobility edge and a distribution of trap states: a reduced number of deep traps leads to an increased number of free carriers above the mobility edge and to a higher mobility of that charge. Summary and conclusions ======================= Pentacene-based thin film transistors were characterized without exposing the samples to ambient air (as grown) and after exposure to oxygen in combination with white light. The exposure of the pentacene to the oxidizing agent leads to a degradation of the subthreshold performance, a decrease in field-effect mobility, a shift of the flatband voltage and to an increased contact resistance. Contact-corrected trap state functions were extracted from temperature dependent gated four-terminal measurements. We show that the exposure to oxygen leads to a broad peak of trap states centered at 0.28eV, with a width of 0.16eV and a height of the order of 10$^{19}$cm$^{-3}$eV$^{-1}$. The emergence of a peak indicates the process to be dominated by the creation of a specific oxygen-related defect. The large width of the peak is due to the energetically different surroundings induced by structural disorder. The oxygen defects are very stable and are likely to be caused by pentacene molecules with covalently bound oxygen. The decrease in field-effect mobility is caused by the oxygen-related deep traps. These states are filled upon increasing the gate voltage and the quasi-Fermi level at the interface lags behind the position it has in as-deposited samples. This leads to a significantly smaller fraction of free holes. The magnitude of the shift in the free hole function is highly consistent with the density of the oxygen-related traps ($\approx4\times10^{18}$cm$^{-3}$) as estimated from the difference in the trap DOS prior to and after the oxygen exposure. In addition, the oxygen exposure leads to a decrease of the mobility of the charge carriers above the mobility edge. This latter effect may be due to scattering of the charge carriers at the oxygen defects. The results can be seen from a more general point of view. At first, the temperature dependent measurements are self-consistent with the assumption of a mobility edge, thus contributing to an understanding of charge transport in organic semiconductors. Moreover, they are an example for the way in which deeper traps can influence the effective field-effect mobility. Theoretical studies may help to identify the oxygen defect and organic synthetic chemistry may soon find a way to tailor organic semiconductors where the creation of defects by oxidation is completely inhibited. The authors thank Kurt Pernstich, David Gundlach and Simon Haas for support in the early stages of the study and Thomas Mathis and Matthias Walser for stimulating discussions.
--- abstract: 'The success of generative modeling in continuous domains has led to a surge of interest in generating discrete data such as molecules, source code, and graphs. However, construction histories for these discrete objects are typically not unique and so generative models must reason about intractably large spaces in order to learn. Additionally, structured discrete domains are often characterized by strict constraints on what constitutes a valid object and generative models must respect these requirements in order to produce useful novel samples. Here, we present a generative model for discrete objects employing a Markov chain where transitions are restricted to a set of local operations that preserve validity. Building off of generative interpretations of denoising autoencoders, the Markov chain alternates between producing 1) a sequence of corrupted objects that are valid but not from the data distribution, and 2) a learned reconstruction distribution that attempts to fix the corruptions while also preserving validity. This approach constrains the generative model to only produce valid objects, requires the learner to only discover local modifications to the objects, and avoids marginalization over an unknown and potentially large space of construction histories. We evaluate the proposed approach on two highly structured discrete domains, molecules and Laman graphs, and find that it compares favorably to alternative methods at capturing distributional statistics for a host of semantically relevant metrics.' author: - | Ari Seff\ Princeton University\ Princeton, NJ\ `[email protected]`\ Wenda Zhou\ Columbia University\ New York, NY\ `[email protected]`\ Farhan Damani\ Princeton University\ Princeton, NJ\ `[email protected]`\ Abigail Doyle\ Princeton University\ Princeton, NJ\ `[email protected]`\ Ryan P. Adams\ Princeton University\ Princeton, NJ\ `[email protected]`\ bibliography: - 'references.bib' title: | Discrete Object Generation\ with Reversible Inductive Construction --- Introduction ============ Many applied domains of optimization and design would benefit from accurate generative modeling of structured discrete objects. For example, a generative model of molecular structures may aid drug or material discovery by enabling an inexpensive search for stable molecules with desired properties. Similarly, in computer-aided design (CAD), generative models may allow an engineer to sample new parts or conditionally complete partially-specified geometry. Indeed, recent work has aimed to extend the success of learned generative models in continuous domains, such as images and audio, to discrete data including graphs [@GraphRNN; @DeepGAR], molecules [@GomezChemDesign; @GrammarVAE], and program source code [@SyntacticCodeGen; @NeurConditionProg]. However, discrete domains present particular challenges to generative modeling. Discrete data structures often exhibit non-unique representations, e.g., up to $n!$ equivalent adjacency matrix representations for a graph with $n$ nodes. Models that perform additive construction—incrementally building a graph from scratch [@GraphRNN; @DeepGAR]—are flexible but face the prospect of reasoning over an intractable number of possible construction paths. 
For example, @GraphRNN leverage a breadth-first-search (BFS) to reduce the number of possible construction sequences, while @GraphVAE avoid additive construction and instead directly decode an adjacency matrix from a latent space, at the cost of requiring approximate graph matching to compute reconstruction error. In addition, discrete domains are often accompanied by prespecified hard constraints denoting what constitutes a *valid* object. For example, molecular structures represented as SMILES strings [@SMILES] must follow strict syntactic and semantic rules in order to be decoded to a real compound. Recent work has aimed to improve the validity of generated samples by leveraging the SMILES grammar [@GrammarVAE; @SyntaxDirectedVAE] or encouraging validity via reinforcement learning [@DiscreteValidity]. Operating directly on chemical graphs, @JunctionVAE leverage chemical substructures encountered during training to build valid molecular graphs and @MolGAN encourage validity for small molecules via adversarial training. In other graph-structured domains, strict topological constraints may be encountered. For example, Laman graphs [@Laman], a class of geometric constraint graphs, require the relative number of nodes and edges in each subgraph to meet certain conditions in order to represent well-constrained geometry. In this work we take the broad view that graphs provide a universal abstraction for reasoning about structure and constraints on discrete spaces. This is not a new take on discrete spaces: graph-based representations such as factor graphs [@kschischang2001factor], error-correcting codes [@gallager1962low], constraint graphs [@montanari1974networks], and conditional random fields [@crf] are all examples of ways that hard and soft constraints are regularly imposed on structured prediction tasks, satisfiability problems, and sets of random variables. ![Reconstruction model processing given an input molecule. Location-specific representations computed via message passing are passed through fully-connected layers outputting probabilities for each legal operation.[]{data-label="fig:rec_model"}](figs/method_figure.pdf){width="80.00000%"} We propose to model discrete objects by constructing a Markov chain where each possible state corresponds to a valid object. Learned transitions are restricted to a set of local *inductive* moves, defined as minimal insert and delete operations that maintain validity. Leveraging the generative model interpretation of denoising autoencoders [@GeneralizedDAEs], the chain employed here alternatingly samples from two conditional distributions: a fixed distribution over corrupting sequences and a learned distribution over reconstruction sequences. The equilibrium distribution of the chain serves as the generative model, approximating the target data-generating distribution. This simple framework allows the learned component—the reconstruction model—to be treated as a standard supervised learning problem for multi-class classification. Each reconstruction step is parameterized as a categorical distribution over adjacent objects, those that are one inductive move away from the input object. Given a local corrupter, the target reconstruction distribution is also local, containing fewer modes and potentially being easier to learn than the full data-generating distribution [@GeneralizedDAEs]. In addition, various hard constraints, such as validity rules or requiring the inclusion of a specific substructure, are incorporated naturally. 
One limitation of the proposed approach is its expensive sampling procedure, requiring Gibbs sampling at deployment time. Nevertheless, in many areas of engineering and design, it is the downstream tasks following initial proposals that serve as the time bottleneck. For example, in drug design, wet lab experiments and controlled clinical trials are far more time intensive than empirically adequate mixing for the proposed method’s Markov chain. In addition, as an implicit generative model, the proposed approach is not equipped to explicitly provide access to predictive probabilities. We compare statistics for a host of semantically meaningful features from sets of generated samples with the corresponding empirical distributions in order to evaluate the model’s generative capabilities. We test the proposed approach on two complex discrete domains: molecules and Laman graphs [@Laman], a class of geometric constraint graphs applied in CAD, robotics, and polymer physics. Quantitative evaluation indicates that the proposed method can effectively model highly structured discrete distributions while adhering to strict validity constraints. ![Corruption and subsequent reconstruction of a molecular graph. Our method generates discrete objects by running a Markov chain that alternates between sampling from fixed corruption and learned reconstruction distributions that respect validity constraints.](figs/teaser_new_new.pdf){width="\textwidth"} Reversible Inductive Construction ================================= Let $p(x)$ be an unknown probability mass function over a discrete domain, $D$, from which we have observed data. We assume there are constraints on what constitutes a valid object, where $V \subseteq D$ is the subset of valid objects in $D$, and $\forall x \notin V, p(x) = 0$. For example, in the case of molecular graphs, an invalid object may violate atom-specific valence rules. Our goal is to learn a generative model $p_\theta(x)$, approximating $p(x)$, with support restricted to the valid subset. We formulate our approach, generative reversible inductive construction (GenRIC)[^1], as the equilibrium distribution of a Markov chain that only visits valid objects, without a need for inefficient rejection sampling. The chain’s transitions are restricted to legal inductive moves. Here, an inductive move is a local insert or delete operation that, when executed on a valid object, results in another valid object. The Markov kernel then needs to be learned such that its equilibrium distribution approximates $p(x)$ over the valid subspace. Learning the Markov kernel -------------------------- The desired Markov kernel is formulated as successive sampling between two conditional distributions, one fixed and one learned, a setup originally proposed to extract the generative model implicit in denoising autoencoders [@GeneralizedDAEs]. A single transition of the Markov chain involves first sampling from a fixed corrupting distribution $c(\tilde{x} \mid x)$ and then sampling from a learned reconstruction distribution $p_\theta(x \mid \tilde{x})$. While the corrupter is free to damage $x$, validity constraints are built into both conditional distributions. The joint data-generating distribution over original and corrupted samples is defined as $p(x, \tilde{x}) = c(\tilde{x} \mid x)p(x)$, which is also uniquely defined by the corrupting distribution and the target reconstruction distribution, $p(x \mid \tilde{x})$. 
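To make this alternating procedure concrete, a minimal sketch of the resulting Gibbs sampler is shown below. The functions `corrupt` and `reconstruct` stand in for the fixed corrupter $c(\tilde{x} \mid x)$ and the learned reconstruction model $p_\theta(x \mid \tilde{x})$; they are placeholders for exposition, not the released implementation.

```python
def sample_chain(x0, corrupt, reconstruct, n_steps, burn_in=0):
    """Generate objects by alternating corruption and learned reconstruction.

    x0          : a valid seed object (e.g., a molecule from the training set)
    corrupt     : draws x_tilde ~ c(x_tilde | x) via legal inductive moves
    reconstruct : draws x ~ p_theta(x | x_tilde), applying predicted moves
                  until a stop token (or revisit criterion) is sampled
    Every visited state is a valid object because both conditionals are
    restricted to validity-preserving insert/delete operations.
    """
    x, samples = x0, []
    for t in range(n_steps):
        x_tilde = corrupt(x)        # fixed corruption distribution
        x = reconstruct(x_tilde)    # learned reconstruction distribution
        if t >= burn_in:
            samples.append(x)
    return samples
```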
We use supervised learning to train a reconstruction distribution model $p_\theta(x \mid \tilde{x})$ to approximate $p(x \mid \tilde{x})$. Together, the corruption and learned reconstruction distributions define a Gibbs sampling procedure that asymptotically samples from marginal $p_\theta(x)$, approximating the data marginal $p(x)$. Given a reasonable set of conditions on the support of these two conditionals and the consistency of the employed learning algorithm, the learned joint distribution can be shown to be asymptotically consistent over the Markov chain, converging to the true data-generating distribution in the limit of infinite training data and modeling capacity [@GeneralizedDAEs]. However, in the more realistic case of estimation with finite training data and capacity, a valid concern arises regarding the effect of an imperfect reconstruction model on the chain’s equilibrium distribution. To this end, @GSNs adapts a result from perturbation theory [@PerturbationMarkov] for finite state Markov chains to show that as the learned transition matrix becomes arbitrarily close to the target transition matrix, the equilibrium distribution also becomes arbitrarily close to the target joint distribution. For the discrete domains of interest here, we can enforce a finite state space by simply setting a maximum object size. Sampling training sequences --------------------------- Let $c(s \,|\, x)$ be a fixed conditional distribution over a sequence of corrupting operations ${s = [s_1, s_2, ..., s_k]}$ where $k$ is a random variable representing the total number of steps and each ${s_i \in \text{Ind}(\tilde{x}_i)}$ where $\text{Ind}(\tilde{x}_i)$ is a set of legal inductive moves for a given $\tilde{x}_i$. The probability of arriving at corrupted sample $\tilde{x}$ from $x$ is $$c(\tilde{x} \mid x) = \sum_s c(\tilde{x},s \mid x) = \sum_{s \in S(x,\tilde{x})} c(s \mid x),$$ where $S(x,\tilde{x})$ denotes the set of all corrupting sequences from $x$ to $\tilde{x}$. Thus, the joint data-generating distribution is $$p(x, s, \tilde{x}) = c(\tilde{x},s \mid x)p(x)$$ where $c(\tilde{x},s \mid x) = 0$ if $s \notin S(x,\tilde{x})$. Given a corrupted sample, we aim to train a reconstruction distribution model $p_\theta(x \mid \tilde{x})$ to maximize the expected conditional probability of recovering the original, uncorrupted sample. Thus, we wish to find the parameters $\theta^*$ that minimize the expected KL divergence between the true $p(x,s \mid \tilde{x})$ and learned $p_\theta(x, s \mid \tilde{x})$, $$\theta^* = \operatorname*{argmin}_\theta \operatorname{\mathbb{E}}_{p(x, s, \tilde{x})} \left[ D_{\text{KL}}(p(s,x \mid \tilde{x}) \mathbin{\|} p_\theta(s,x \mid \tilde{x})) \right],$$ which amounts to maximum likelihood estimation of $p_\theta(s,x \mid \tilde{x})$ and likewise $p_\theta(x \mid \tilde{x})$. The above is an expectation over the joint data-generating distribution, $p(x,s,\tilde{x})$, which we can sample from by drawing a data sample and then conditionally drawing a corruption sequence: $$x \sim p(x), \quad \tilde{x}, s \sim c(\tilde{x},s \mid x).$$ Fixed corrupter {#corrupter} --------------- In general, we are afforded flexibility when selecting a corruption distribution, given certain conditions for ergodicity are met. We implement a simple fixed distribution over corrupting sequences approximately following these steps: 1) Sample a number of moves $k$ from a geometric distribution. 2) For each move, sample a move type from $\{$Insert, Delete$\}$. 
3) Sample from among the legal operations available for the given move type. We make minor adjustments to the weighting of available operations for specific domains. See \[sec:appendix:corrupter\] for full details. The geometric distribution over corruption sequence length ensures exponentially decreasing support with edit distance, and likewise the support of the target reconstruction distribution is local to the conditioned corrupted object. The globally non-zero (yet exponentially decreasing) support of both the corruption and reconstruction distributions trivially satisfies the conditions required in Corollary A2 from @GSNs for the chain defined by the corresponding Gibbs sampler to be ergodic. Alternatively, one could employ conditional distributions with truncated support after some edit distance and still satisfy ergodicity conditions via the stronger Corollary A3 from @GSNs. Unless otherwise stated, the results reported in \[app\_mols,app\_laman\] use a geometric distribution with five expected steps for the corruption sequence length. In general, we observe that shorter corruption lengths lead to better samples, though we did not seek to specially optimize this hyperparameter for generation quality. See \[appendix:geom\_param\] for some results with other step lengths. Reconstruction distribution --------------------------- A sequence of corrupting operations ${s = [s_1, s_2, ..., s_k]}$ corresponds to a sequence of visited corrupted objects $[\tilde{x}_1, \tilde{x}_2, ..., \tilde{x}_k]$ after execution on an initial sample $x$. We enforce the corrupter to be Markov, so that its distribution over the next corruption operation to perform depends only on the current object. Likewise, the target reconstruction distribution is then also Markov, and we factorize the learned reconstruction sequence model as the product of memoryless transitions culminating with a stop token: $$p_\theta(s_{\text{rev}}\mid \tilde{x}) = p_\theta(\text{stop}\mid x) p_\theta(x \mid \tilde{x}_1) \prod_{i=1}^{k-1} p_\theta(\tilde{x}_i \mid \tilde{x}_{i+1})$$ where $s_\text{rev} = [s_{k_\text{rev}}, s_{{k-1}_\text{rev}}, ..., s_{1_\text{rev}}, \text{stop}]$, the reverse of the corrupting operation sequence. If a stop token is sampled from the model, reconstruction ceases and the next corruption sequence begins. For the molecule model, an additional “revisit” stop criterion is used: the reconstruction ceases when a molecule is revisited (see \[sec:appendix:revisit\] for details). For each individual step, the reconstruction model outputs a large conditional categorical distribution over $\mathrm{Ind}(\tilde{x})$, the set of legal modification operations that can be performed on an input $\tilde{x}$. We describe the general architecture employed and include domain-specific details in \[app\_mols,app\_laman\]. Any operation in $\mathrm{Ind}(\tilde{x})$ may be defined in a general sense by a *location* on the object $\tilde{x}$ where the operation is performed and a *vocabulary* element describing which vocabulary item (if any) is involved (\[fig:rec\_model\]). The prespecified vocabulary consists of domain-specific substructures, a subset of which may be legally inserted or deleted from a given object. The model induces a distribution over all legal operations (which may be described as a subset of the Cartesian product of the locations and vocabulary elements) by computing location embeddings for an object and comparing those to learned embeddings for each vocabulary element.
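The scoring itself can be summarized by the following simplified sketch (our own simplification, with NumPy standing in for the neural network components): legal operations are scored by comparing location representations with vocabulary embeddings, and only the legal subset enters the normalization.

```python
import numpy as np

def legal_operation_distribution(location_emb, vocab_emb, legal_pairs):
    """Simplified sketch of the location x vocabulary factorization.

    location_emb: (n_locations, d) representations computed from the input object.
    vocab_emb:    (n_vocab, d) learned embeddings, one per vocabulary element.
    legal_pairs:  list of (location_index, vocab_index) pairs describing Ind(x_tilde).

    Returns a categorical distribution over the legal operations only; illegal
    (location, vocabulary) combinations never enter the normalization.
    """
    logits = np.array([location_emb[i] @ vocab_emb[j] for i, j in legal_pairs])
    logits -= logits.max()                # for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Toy usage: 3 locations, 4 vocabulary elements, 5 legal operations.
rng = np.random.default_rng(0)
probs = legal_operation_distribution(rng.normal(size=(3, 8)),
                                     rng.normal(size=(4, 8)),
                                     [(0, 1), (0, 3), (1, 0), (2, 2), (2, 3)])
```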
For the graph-structured domains explored here, location embeddings are generated using a message passing neural network structure similar to @ConvNetMol [@MPN] (see \[sec:appendix:training\_details\]). In parallel, the set of vocabulary elements is also given a learned embedding vector. The unnormalized log-probability for a given modification is then obtained by computing the dot product of the embedding of the location where the modification is performed and the embedding of the vocabulary element involved. For most objects from the molecule and Laman graph domains, this defines a distribution over a discrete set of operations with cardinality in the tens of thousands. We note that although our model induces a distribution over a large discrete set, it does not do so through a traditional fully-connected softmax layer. Indeed, the action space of the model is heavily factorized, ensuring that the computation is efficient. The factorization is present at two levels: the actions are separated into broad categories (e.g., insert at atom, insert at bond, delete, for molecules), that do not interact except through the normalization. Additionally, actions are further factorized through a location component and vocabulary component, that only interact through a dot product, further simplifying the model. Application: Molecules {#app_mols} ====================== Molecular structures can be defined by graphs where nodes represent individual atoms and edges represent bonds. In order for such graphs to be considered valid molecular structures by standard chemical informatics toolkits (e.g., RDKit [@RDKit]), certain constraints must be satisfied. For example, aromatic bonds can only exist within aromatic rings, and an atom can only engage in as many bonds as permitted by its valence. By restricting the corruption and reconstruction operations to a set of modifications that respect these rules, we ensure that the resulting Markov chain will only visit valid molecular graphs. Legal operations ---------------- When altering one valid molecular graph into another, we restrict the set of possible modifications to the insertion and deletion of valid substructures. The vocabulary of substructures consists of non-ring bonds, simple rings, and bridged compounds (simple rings with more than two shared atoms) present in training data. This is the same type of vocabulary proposed in @JunctionVAE. The legal insertion and deletion operations are set as follows: **Insertion** For each atom and bond of a molecular graph, we determine the subset of the vocabulary that would be chemically compatible for attachment. Then, for each compatible vocabulary substructure, the possible assemblies of it with the atom or bond of interest are enumerated (keeping its already-connected neighbors fixed). For example, when inserting a ring from the vocabulary via one of its bonds, there is often more than one valid bond to select from. Here, we only specify the 2D configuration of the molecular graph and do not account for stereochemistry. **Deletion** We define the leaves of a molecule to be those substructures that can be removed while the rest of the molecular graph remains connected. Here, the set of leaves consists of either non-ring bonds, rings, or bridged compounds whose neighbors have a non-zero atom intersection. The set of possible deletions is fully specified by the set of leaf substructures. 
To perform a deletion, a leaf is selected and the atoms whose bonds are fully contained within the leaf node substructure are removed from the graph. These two minimal operations provide enough support for the resulting Markov chain to be ergodic within the set of all valid molecular graphs constructible via the extracted vocabulary. As @JunctionVAE find, although an arbitrary molecule may not be reachable, empirically the finite vocabulary provides broad coverage over organic molecules. Further details on the location and vocabulary representations for each possible operation are given in the appendix. Data ---- For molecules we test the proposed approach on the ZINC dataset, which contains about 250K drug-like molecules from the ZINC database [@ZINC]. The model is trained on 220K molecules according to the same train/test split as in @JunctionVAE [@GrammarVAE]. Distributional statistics {#mol_dist_stat} ------------------------- While predictive probabilities are not available from the implicit generative model, we can perform posterior predictive checks on various semantically relevant metrics to compare our model’s learned distribution to the data distribution. Here, we leverage three commonly used quantities when assessing drug molecules: the *quantitative estimate of drug-likeness* (QED) score (between 0 and 1) [@QED], the *synthetic accessibility* (SA) score (between 1 and 10) [@SAS], and the *log octanol-water partition coefficient* (logP) [@logP]. For QED, a higher value indicates a molecule is more likely to be drug-like, while for SA, a lower value indicates a molecule is more likely to be easily synthesizable. logP measures the hydrophobicity of a molecule, with a higher value indicating a more hydrophobic molecule. Together these metrics take into account a wide array of molecular features (ring count, charge, etc.), allowing for an aggregate comparison of distributional statistics. Our goal is not to optimize these statistics but to evaluate the quality of our generative model by comparing the distribution that our model implies over these quantities to those in the original data. A good generative model should produce novel molecules whose aggregate statistics nevertheless match those of real compounds. In \[fig:mol\_property\_hist\], we display Gaussian kernel density estimates (KDE) of the above metrics for generated sets of molecules from seven baseline methods, in addition to our own (see \[sec:appendix:sampling\_details\] for chain sampling details). A normalized histogram of the ZINC training distribution is shown for visual comparison. For each method, we obtain 20K samples either by running pre-trained models [@JunctionVAE; @GomezChemDesign; @GrammarVAE], by accessing pre-sampled sets [@CG-VAE; @GraphVAE; @DeepGAR], or by training models from scratch [@SmilesLSTM][^2]. Only novel molecules (those not appearing in the ZINC training set) are included in the metric computation, to avoid rewarding memorization of the training data. In addition, \[table:mol\_post\_pred\] displays bootstrapped Kolmogorov–Smirnov (KS) distances between the samples for each method and the ZINC training set. Our method is capable of generating novel molecules whose statistics closely match the empirical QED and logP distributions. The SA distribution appears to be more challenging, although we still report a lower mean KS distance than some recent methods.
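The property comparison just described can be reproduced, in simplified form, with standard toolkits; the sketch below computes QED and logP with RDKit and a (non-bootstrapped) KS statistic with SciPy. The SA score requires RDKit's contrib `sascorer` module and is omitted here.

```python
from rdkit import Chem
from rdkit.Chem import QED, Crippen
from scipy.stats import ks_2samp

def qed_logp(smiles_list):
    """Return QED and logP values for the molecules RDKit can parse."""
    qed_vals, logp_vals = [], []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:        # unparsable SMILES; our samples are valid by construction
            continue
        qed_vals.append(QED.qed(mol))
        logp_vals.append(Crippen.MolLogP(mol))
    return qed_vals, logp_vals

def ks_distances(generated_smiles, training_smiles):
    """KS statistics between generated and training property distributions."""
    gen_qed, gen_logp = qed_logp(generated_smiles)
    ref_qed, ref_logp = qed_logp(training_smiles)
    return {"QED": ks_2samp(gen_qed, ref_qed).statistic,
            "logP": ks_2samp(gen_logp, ref_logp).statistic}
```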
Because we allow the corrupter to uniformly select from the vocabulary, even if a particular vocabulary element occurs very rarely in training data, it can sometimes introduce molecules without an accessible synthetic route that the reconstructor does not immediately recover from. One could alter the corrupter and have it favor commonly appearing vocabulary items to mitigate this. We also note that our approach lends itself to Markov chain transitions reflecting known (or learned) chemical reactions. Interestingly, the SMILES-based LSTM model [@SmilesLSTM] is effective at matching the ZINC dataset statistics, producing a substantially better-matched SA distribution than the other methods. However, as noted in [@CG-VAE], by operating on the linear SMILES representation, the LSTM has limited ability to incorporate structural constraints, e.g., enforcing the presence of a particular substructure. In addition to the above metrics, we report a validity score (the percentage of samples that are chemically valid) for each method in \[table:mol\_post\_pred\]. A sample is considered to be valid if it can be successfully parsed by RDKit [@RDKit]. The validity scores displayed are the self-reported values from each method. Our method, like @JunctionVAE [@CG-VAE], enforces valid molecular samples, and the model does not have to learn these constraints. See \[sec:appendix:guacamol\_benchmarks\] for additional evaluation using the GuacaMol distribution-learning benchmarks [@GuacaMol]. We might also inquire how the reconstructed samples of the Markov chain compare to the corrupted samples. See \[fig:corrupt\_mol\_property\_hist\] in the supplementary material for a comparison. On average, we observe corrupted samples that are less druglike and less synthesizable than their reconstructed counterparts. In particular, the output reconstructed molecule has a 21% higher QED relative to the input corrupted molecule on average. Running the corrupter repeatedly (with no reconstruction) leads to samples that severely diverge from the data distribution. ![Distributions of QED (left), SA (middle), and logP (right) for sampled molecules and ZINC.[]{data-label="fig:mol_property_hist"}](figs/mol_metrics_across_methods.pdf){width="\textwidth"} Application: Laman Graphs {#app_laman} ========================= Geometric constraint graphs are widely employed in CAD, molecular modeling, and robotics. They consist of nodes that represent geometric primitives (e.g., points, lines) and edges that represent geometric constraints between primitives (e.g., specifying perpendicularity between two lines). To allow for easy editing and change propagation, best practices in parametric CAD encourage keeping a part well-constrained at all stages of design [@GeomConstraintCAD]. A useful generative model over CAD models should ideally be restricted to sampling well-constrained geometry. Laman graphs describe two-dimensional geometry where the primitives have two degrees of freedom and the edges restrict one degree of freedom (e.g., a system of rods and joints) [@Laman]. Representing minimally rigid systems, Laman graphs have the property that if any single edge is removed, the system becomes under-constrained. For a graph with $n$ nodes to be a valid Laman graph, the following two simple conditions are necessary and sufficient: 1) the graph must have exactly ${2n-3}$ edges, and 2) each node-induced subgraph of $k$ nodes can have no more than ${2k-3}$ edges. 
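These two conditions translate directly into a brute-force validity check; the sketch below (using `networkx`) enumerates all node subsets and is therefore exponential in $n$, suitable only for very small graphs. The pebble-game test discussed below is the practical alternative.

```python
from itertools import combinations
import networkx as nx

def is_laman(G: nx.Graph) -> bool:
    """Naive check of the two Laman counting conditions (exponential in n)."""
    n = G.number_of_nodes()
    if n < 2 or G.number_of_edges() != 2 * n - 3:
        return False                           # condition 1: exactly 2n - 3 edges
    for k in range(2, n):                      # condition 2: every proper subgraph is sparse
        for nodes in combinations(G.nodes, k):
            if G.subgraph(nodes).number_of_edges() > 2 * k - 3:
                return False
    return True

# The triangle K3 is the smallest Laman graph.
assert is_laman(nx.complete_graph(3))
```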
Together, these conditions ensure that all structural degrees of freedom are removed (given that the constraints are all independent), leaving one rotational and two translational degrees of freedom. In 3D, although the corresponding Laman conditions are no longer sufficient, they remain necessary for well-constrained geometry. Legal operations ---------------- @Henneberg describes two types of node-insertion operations, known as *Henneberg moves*, that can be used to inductively construct any Laman graph (\[henneberg\_moves\]). We make these moves and their inverses (the delete versions) available to both the corrupter and reconstruction model. While moves \#1 and \#2 can always be reversed for any nodes of degree 2 and 3 respectively, a check has to be performed to determine where the missing edge can be inserted for reverse move \#2 [@MinimallyRigidGraphs]. Here, we use the $O(n^2)$ Laman satisfaction check described in [@PebbleGame] to determine the set of legal neighbors. At the rigidity transition, it runs in only $O(n^{1.15})$. ![image](figs/henneberg.pdf){width="\textwidth"} \[henneberg\_moves\] Data ---- For Laman graphs, we generate synthetic graphs randomly via Algorithm 7 from @MoussaouiThesis, originally proposed for evaluating geometric constraint solvers embedded within CAD programs. This synthetic generator allows us to approximately control a produced graph’s degree of decomposability (DoD), a metric which indicates to what extent a Laman graph is composed of well-constrained subgraphs. Such subsystems are encountered in various applications, e.g., they correspond to individual components in a CAD model or rigid substructures in a protein. The degree of decomposability is defined as ${\text{DoD} = g/n}$, where $g$ is the number of well-constrained, node-induced subgraphs and $n$ is the total number of nodes. We generate 100K graphs each for a low and high decomposability setting (see \[sec:dataset:laman\] for full details). Distributional statistics {#distributional-statistics} ------------------------- \[table:laman\_post\_pred\] displays statistics for Laman graphs generated by our model as well as by two baseline methods all trained on the low decomposability dataset (we observe similar results in the high decomposability setting). For each method, 20K graphs are sampled. The validity metric is defined the same as for molecules (\[mol\_dist\_stat\]). In addition, bootstrapped KS distance between the sampled graphs and training data for DoD distribution is shown for each method. While it is unsurprising that the simple Erdős–Rényi model [@ErdosRandomGraphs] fails to meet validity requirements ($< 0.1$% valid), we see that the recently proposed GraphRNN [@GraphRNN] fails to do much better. While deep graph generative models have proven to be very effective at reproducing a host of graph statistics, Laman graphs represent a particularly strict topological constraint, imposing necessary conditions on every subgraph. Today’s flexible graph generative models, while effective at matching local statistics, are ill-equipped to handle this kind of global constraint. By leveraging domain-specific inductive moves, the proposed method does not have to learn what a valid Laman graph is, and instead learns to match the distributional DoD statistics within the set of valid graphs. 
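For reference, the two Henneberg insertion moves used above (both as inductive moves and, within Algorithm 7, to generate the synthetic data) can be sketched as follows; these are the standard definitions of the moves, with function names of our own choosing, and the validity check needed for the reverse of move #2 is not shown.

```python
import random
import networkx as nx

def henneberg_type1(G: nx.Graph, rng=random):
    """Move #1: add a new node and connect it to two existing nodes."""
    v = max(G.nodes) + 1
    a, b = rng.sample(sorted(G.nodes), 2)
    G.add_edges_from([(v, a), (v, b)])
    return G

def henneberg_type2(G: nx.Graph, rng=random):
    """Move #2: pick an existing edge (a, b) and a third node c, remove (a, b),
    then connect a new node to a, b and c."""
    v = max(G.nodes) + 1
    a, b = rng.choice(sorted(G.edges))
    c = rng.choice([u for u in G.nodes if u not in (a, b)])
    G.remove_edge(a, b)
    G.add_edges_from([(v, a), (v, b), (v, c)])
    return G

# Growing K3 by any mix of these moves preserves the count of 2n - 3 edges.
G = nx.complete_graph(3)
for _ in range(5):
    (henneberg_type1 if random.random() < 0.5 else henneberg_type2)(G, random)
assert G.number_of_edges() == 2 * G.number_of_nodes() - 3
```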
Conclusion and Future Work ========================== In this work we have proposed a new method for modeling distributions of discrete objects, which consists of training a model to undo a series of local corrupting operations. The key to this method is to build both the corruption and reconstruction steps with support for reversible inductive moves that preserve possibly-complicated validity constraints. Experimental evaluation demonstrates that this simple approach can effectively capture relevant distributional statistics over complex and highly structured domains, including molecules and Laman graphs, while always producing valid structures. One weakness of this approach, however, is that the inductive moves must be identified and specified for each new domain; one direction of future work is to learn these moves from data. In the case of molecules, restricting the Markov chain’s transitions to learned chemical reactions could improve the synthesizability of generated samples. Future work can also explore enforcing additional hard constraints besides structural validity. For example, if a particular core structure or scaffold with some desired baseline functionality (e.g., benzodiazepines) should be included in a molecule, chain transitions can be masked to respect this. Coupled with other techniques such as virtual screening, conditional generation may enable efficient searching of candidate drug compounds. Acknowledgements {#acknowledgements .unnumbered} ---------------- We would like to thank Wengong Jin, Michael Galvin, Dieterich Lawson, and members of the Princeton Laboratory for Intelligent Probabilistic Systems for valuable discussion and feedback. This work was partially funded by the Alfred P. Sloan Foundation, NSF IIS-1421780, and the DataX Program at Princeton University through support from the Schmidt Futures Foundation. AS was supported by the Department of Defense through the National Defense Science and Engineering Graduate Fellowship (NDSEG) Program. We acknowledge computing resources from Columbia University’s Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010. Geometric Distribution for Corrupter {#appendix:geom_param} ==================================== \[fig:geom\_hist\] displays distributions for molecular samples from models trained with varying geometric distributions for the corruption sequence length. As the sequence length increases (and the corruptions become less local), the models produce worse samples. A short average corruption sequence length of one step seems to lead to a better-matched SA distribution, albeit with slower observed mixing for the Markov chain. ![Distributions of QED (left), SA (middle), and logP (right) for the ZINC dataset and models trained with varying expected-length corruption sequences.[]{data-label="fig:geom_hist"}](figs/geom_param.pdf){width="\textwidth"} Molecular Reconstruction Model Operations {#sec:appendix:molecular_operations} ========================================= We describe the representation assigned to each inductive operation. As described in \[app\_mols\], each modification is associated with a location (molecule dependent) and an operation type (molecule independent). 1. Stop: a global operation, naturally associated with the entire molecule.
The location embedding is produced by embedding the entire molecule. 2. Delete atom leaf: a deletion operation where the deletion target is a single atom. The vocabulary is unique, and the location is associated with a single atom. 3. Delete ring leaf: a deletion operation where the deletion target is a ring or bridged compound. The vocabulary is unique, and the location is associated with a ring. 4. Insert via atom fusion: an insertion operation where the insertion is performed by attaching at an existing atom. The vocabulary is given by all atoms in each molecule of the vocabulary, and the location is associated with a single atom. 5. Insert via bond fusion: an insertion operation where the insertion is performed by attaching at an existing bond. The vocabulary is given by all bonds belonging to rings in each molecule of the vocabulary, and the location is associated with a single bond in a ring. Embeddings for locations are computed in the following fashion. We follow a message passing architecture similar to @ConvNetMol [@MPN], which produces a message for each bond and for each atom. The atom messages are transformed and pooled to produce the molecule embedding (used for `stop` prediction). Messages for each leaf atom are also transformed to produce embeddings for `delete leaf atom` actions. Messages for each bond in a leaf ring are transformed and pooled to produce embeddings for `delete leaf ring`. Messages for atoms and bonds are transformed to produce embeddings for `insert via atom fusion` and `insert via bond fusion`. Training Details {#sec:appendix:training_details} ================ In this section we give a brief description of the parameter choices used in training. We refer the reader to the source code[^3] for a full description of the model architecture and parameters. Molecule model -------------- Molecules are converted into graphs in a manner identical to the representation used by [@JunctionVAE]. The message passing model runs five steps of message passing. An embedding for the molecule is produced by transforming atom-level messages through a two-layer fully connected network, and aggregating the result through an average-pooling and a max-pooling operation (concatenated). For each task-relevant location, an embedding is produced by transforming and pooling the relevant messages, concatenating those with the molecule representation, and transforming with a two-layer fully-connected network. All messages and hidden layers have size 384. We train each model for 50 epochs, with the Adamax optimizer and a base learning rate of $2 \times 10^{-3}$ at batch size $128$. The base learning rate is scaled linearly with the batch size. We also apply a learning rate schedule, dividing the learning rate by 10 after epochs $12$, $24$ and $36$. Additionally, we apply learning rate warm-up by linearly scaling the learning rate from 0 to its base value during the first five epochs. The training is performed with a batch size of 1024, although we did not see any difference with smaller batch sizes (we did notice some issues with larger batches). Laman model ----------- Laman graphs are encoded for the message passing model with a single node degree feature. That feature is encoded with a Fourier encoding of the node degree. The message passing model runs five steps of message passing. An embedding for the graph is produced by transforming the node messages with a two-layer fully-connected network, and aggregating the result using average and max pooling.
Location-specific embeddings are produced in the same fashion as in the molecule model. We train each model for 30 epochs, with the same optimizer settings as in the molecule model. We use a batch size of 256. All models are trained using an Nvidia Titan X Pascal (12 GB) graphics card. Sampling Details {#sec:appendix:sampling_details} ================ Our proposed models require a Markov sampling step. We describe the details below. For both the molecule and Laman models, we sample from the chain defined by the trained reconstructor by starting from a random object in the training dataset. The chain then alternately samples sequences from the corrupter and the reconstructor. In both cases, the results reported in the main text use a corrupter that performs an average of 5 moves (with a geometric distribution). As we sample from a Markov chain, we do not gather i.i.d. samples. In fact, sometimes the reconstructor returns to the same molecule on adjacent transitions due to perfect reconstruction. The results reported here use every sample from the Markov chain without thinning. Although validity is maintained through the inductive moves, for both the molecule and Laman models, we in fact encode an action space slightly larger than the true set of valid inductive moves (to make the space more regular). When such an invalid action is sampled by the reconstructor, it is ignored, and another sample is taken. In some very rare instances, the reconstructor repeatedly samples invalid actions, in which case the entire transition (including the corruption) is resampled. For both the molecule and Laman models, a minimum size is set (one leaf for molecules, and three nodes for Laman) to prevent the chain from deleting the entire object (which would cause problems for the representation). For molecules, we also set a maximum size (in terms of the number of atoms) at 25 atoms, although we found values between 25 and 35 to have little effect on the results. Revisit Stop Criterion {#sec:appendix:revisit} ---------------------- In the molecule setting, we make use of an additional stop criterion, which is necessary because our model exhibits high precision and does not have access to any recurrent state that would enable it to increase the probability of stopping as the length of the reconstruction sequence increases. At each reconstruction step, we keep a history of all the molecules visited by the reconstructor so far, and stop the reconstruction process when the output of the reconstruction model already exists in its history. We interpret this “revisit” as the model implicitly indicating that the obtained molecule is realistic enough (on average) that it is willing to return to it, despite not indicating stop itself due to the high precision of the model. Dataset Details =============== Laman {#sec:dataset:laman} ----- As we did not find high-quality real-world datasets for Laman graphs, we considered synthetic datasets generated by inductively sampling Henneberg moves in a random fashion. More explicitly, each graph in the dataset is generated using Algorithm 7 of [@MoussaouiThesis], reproduced here as \[alg:gen\_laman\], where the size $n$ and the probability of selecting Henneberg type I moves $p$ are sampled randomly. For all datasets, we sample $n$ from a normal distribution with mean 30 and standard deviation 5. The distribution of $p$ determines the distribution of the degree of decomposability of the graphs in the dataset.
We choose the following distributions of $p$ for each dataset: $p \sim \mathcal{U}(0, 0.1)$ for low decomposability and $p \sim \mathcal{U}(0.9, 1)$ for high decomposability. *Initialize from the complete graph on three elements*, $G \leftarrow K_3$; the remaining steps of \[alg:gen\_laman\] grow the graph by applying randomly selected Henneberg moves (type I with probability $p$) until it has $n$ nodes. Corrupter Details {#sec:appendix:corrupter} ================= Molecule Corrupter ------------------ We use a single fixed corrupter for all molecule models. To corrupt a molecule, we sample a number of corruption steps from a geometric distribution with the given mean, and iteratively apply \[alg:molecule\_corrupter\] to the molecule. We made no attempt to optimize the corrupter to produce better samples from the generative model or to ease the learning process. Laman Corrupter --------------- We use a single corrupter for all our Laman models. For a single corruption sub-step, this corrupter first chooses among the four action types (Henneberg type I and type II, and their inverses) uniformly at random, and then uniformly samples among the valid actions of the chosen type. As with the molecule corrupter, the number of sub-steps is sampled from a geometric distribution with a given mean. GuacaMol Benchmarks {#sec:appendix:guacamol_benchmarks} =================== We also evaluate our model on the new GuacaMol distribution-learning benchmarks [@GuacaMol] after training on the ChEMBL dataset [@ChEMBL]. Using the same hyperparameters as for the ZINC model, we obtain validity: 1.0, uniqueness: 0.933, novelty: 0.942, KL divergence: 0.771, and FCD: 0.058 (see [@GuacaMol] for a description of each metric and comparisons with a few other methods). Note that the FCD score [@FCD] is not directly applicable to our model’s samples due to inherent autocorrelation in the generated chains. In part, the FCD score assesses diversity of samples compared to the training set by computing the activation covariance of the penultimate layer of ChemNet. The autocorrelation limits the sample diversity but may be addressed by standard techniques for Markov chains such as thinning. Here, we report results using the same sampling framework as for the ZINC model. Reconstructed vs. Corrupted Samples {#sec:appendix:recon_vs_corrupt} =================================== In \[fig:corrupt\_mol\_property\_hist\], we display QED and SA score distributions for the reconstructed molecules ($x$) and the corrupted molecules ($\tilde{x}$) visited during Gibbs sampling, as well as molecules generated by solely running the corrupter (with no reconstruction). The corruption-only samples severely diverge from the data distribution. Example Chains ============== Below, we display three example chains for the molecular model. ![Example chain. The molecule is displayed after each transition of the Markov chain.](figs/chains/chain_10.pdf){width="\textwidth"} ![Example chain. The molecule is displayed every five transitions.](figs/chains/chain_11.pdf){width="\textwidth"} ![Example chain. The molecule is displayed every ten transitions.](figs/chains/chain_12.pdf){width="\textwidth"} [^1]: <https://github.com/PrincetonLIPS/reversible-inductive-construction> [^2]: We use the implementation provided by [@GuacaMol] for the SMILES LSTM [@SmilesLSTM]. [^3]: <https://github.com/PrincetonLIPS/reversible-inductive-construction>
--- abstract: 'In this talk we present the basis of the protocol for radiation monitoring of the ATLAS Pixel Sensors. The monitoring is based on a current measurement system, HVPP4. The status of the ATLAS HVPP4 system development is also presented.' author: - 'Igor Gorelov, Martin Hoeferkamp, Sally Seidel, Konstantin Toms' title: ATLAS Pixel Radiation Monitoring with HVPP4 System ---
--- abstract: | Saari’s homographic conjecture in the $N$-body problem under Newton gravity is the following: the configurational measure $\mu=\sqrt{I}\, U$, which is the product of the square root of the moment of inertia $I=(\sum m_k)^{-1}\sum m_i m_j r_{ij}^2$ and the potential function $U=\sum m_i m_j/r_{ij}$, is constant if and only if the motion is homographic. Here, $m_k$ represents the mass of body $k$ and $r_{ij}$ the distance between bodies $i$ and $j$. We prove this conjecture for the planar equal-mass three-body problem. In this work, we use three sets of shape variables. In the first step, we use $\zeta=3q_3/(2(q_2-q_1))$, where $q_k \in \mathbb{C}$ represents the position of body $k$. Using $r_1=r_{23}/r_{12}$ and $r_2=r_{31}/r_{12}$ in an intermediate step, we finally use $\mu$ itself and $\rho=I^{3/2}/(r_{12}r_{23}r_{31})$. The shape variables $\mu$ and $\rho$ make our proof simple. address: - ' $^{1}$ $^{2}$ $^{4}$College of Liberal Arts and Sciences, Kitasato University, 1-15-1 Kitasato, Sagamihara, Kanagawa 252-0329, Japan ' - ' $^3$General Education Program Center, Tokai University, Shimizu Campus, 3-20-1, Orido, Shimizu, Shizuoka 424-8610, Japan ' author: - | Toshiaki Fujiwara$^1$, Hiroshi Fukuda$^2$, Hiroshi Ozaki$^3$\ and Tetsuya Taniguchi$^4$ title: 'Saari’s homographic conjecture for planar equal-mass three-body problem in Newton gravity' --- Saari’s homographic conjecture ============================== In 2005, Donald Saari formulated his conjecture in the following form [@SaariCollisions; @SaariSaarifest]: in the $N$-body problem under the potential function $$\label{eq:U} U=\sum_{1\le i<j\le N} \frac{m_i m_j}{r_{ij}^\alpha}, \quad \alpha>0,$$ a motion has a constant configurational measure $$\label{eq:mu} \mu = I^{\alpha/2}\, U$$ if and only if the motion is homographic. Here, $r_{ij}$ represents the mutual distance between the bodies $i$ and $j$, and $I$ represents the moment of inertia $$\label{eq:I} I=(\sum_{1\le k \le N} m_k)^{-1} \sum_{1\le i<j\le N} m_i m_j r_{ij}^2.$$ Florin Diacu, Toshiaki Fujiwara, Ernesto Perez-Chavela and Manuele Santoprete called this conjecture the “Saari’s homographic conjecture” and partly proved it for some cases [@DiacHomographic]. Recently, the present authors proved this conjecture for the planar equal-mass three-body problem for $\alpha=2$ [@Fujiwara2011]. In this paper, we extend our proof to $\alpha=1$, the Newton gravity. In section \[sec:eqOfMotion\], we derive the equations of motion for the size change, rotation and shape change. To do this, we use the shape variable $\zeta$, $$\label{eq:zeta} \zeta = \frac{3}{2}\frac{q_3}{q_2-q_1},$$ introduced by Richard Moeckel and Richard Montgomery. Here, $q_k \in \mathbb{C}$, $k=1,2,3$, represents the position of body $k$. Then, in section \[sec:nesCond\], we investigate motions that have $\mu=$ constant and are non-homographic, and we derive a necessary condition that must be satisfied by such a motion. The contents of sections \[sec:eqOfMotion\] and \[sec:nesCond\] are a review of our previous paper [@Fujiwara2011], although we have changed a few notations. To prove Saari’s conjecture, we will show that no finite orbit satisfies the necessary condition. For this purpose, however, the expression of the necessary condition in terms of $\zeta$ is too complex.
To simplify the expression, we will use other set of shape variables, $$\label{def:r1andr2} r_1=|\zeta-1/2|=r_{23}/r_{12},\quad r_2=|\zeta+1/2|=r_{31}/r_{12}.$$ Then, using the invariance of the system under the permutations of $\{q_1, q_2, q_3\}$, we rewrite the necessary condition in another set of shape variables $\mu$ itself and $\rho$, $$\mu=I^{1/2}\left(\frac{1}{r_{12}}+\frac{1}{r_{23}}+\frac{1}{r_{31}}\right), \quad \rho=\frac{I^{3/2}}{r_{12}r_{23}r_{31}},$$ that are manifestly invariant under the permutations. Since, we are considering $\mu=$ constant orbits, variables $\mu$ and $\rho$ make our proof easy. This expression is given in section \[sec:invariance\]. The proof of the Saari’s conjecture is given in the section \[sec:proof\]. In section \[sec:discussions\], we give discussions. Equations of motion {#sec:eqOfMotion} =================== In this section, we summarize the equations of motion for $\alpha=1$ in terms of size, rotation and shape. We don’t assume $\mu=$ constant in this section. Let $q_k \in \mathbb{C}$ be the position and mass $m_k=1$ for $k=1,2,3$. We take the center of mass frame, $\sum q_k=0$. The Lagrangian is given by, $$L=\frac{1}{2}\sum \left| \frac{dq_k}{dt} \right|^2 +U.$$ We take the shape variable $\zeta \in \mathbb{C}$ in (\[eq:zeta\]). This variable is invariant under the scaling and rotation, $q_k \to \lambda e^{i\theta}q_k$ with $\lambda, \theta \in \mathbb{R}$. Thus, $\zeta$ depends only on shape. Let us define $\xi_k=q_k/(q_2-q_1)$. Then, we have, $$\label{eq:xi} \xi_1=-\frac{1}{2}-\frac{\zeta}{3},\quad \xi_2=\frac{1}{2}-\frac{\zeta}{3},\quad \xi_3=\frac{2\zeta}{3}.$$ Since, the triangle $q_1 q_2 q_3$ and $\xi_1 \xi_2 \xi_3$ are similar and have the same orientation, we have two variables $I \ge 0$ and $\theta \in \mathbb{R}$, such that $$q_k = \sqrt{I}\, e^{i\theta} \frac{\xi_k}{\sqrt{\sum |\xi_\ell|^2}}.$$ We take $I$, $\theta$ and $\zeta$ for dynamical variables. In the following, we identify $\zeta=x+iy$ and $\mathbf{x}=(x,y) \in \mathbb{R}^2$. By direct calculations, we obtain the Lagrangian $$L =\frac{\dot{I}^2}{8I} +\frac{I}{2} \left( \dot{\theta} +\frac{\frac{4}{3}\mathbf{x} \wedge \dot{\mathbf{x}}} {1+\frac{4}{3}|\mathbf{x}|^2} \right)^2 +\frac{I}{2} \frac{\frac{4}{3}|\dot{\mathbf{x}}|^2} {(1+\frac{4}{3}|\mathbf{x}|^2)^2} +\frac{\mu(\mathbf{x})}{\sqrt{I}}.$$ Here, $\,\dot{}\,$ represents time derivative, $\mathbf{x} \wedge \dot{\mathbf{x}}=x\dot{y}-y\dot{x}$ and $$\mu(\mathbf{x}) =\sqrt{\frac{1}{2}+\frac{2}{3}|\mathbf{x}|^2} \left( 1 +\frac{1}{\sqrt{(x-1/2)^2+y^2}} +\frac{1}{\sqrt{(x+1/2)^2+y^2}} \right).$$ Since, $\theta$ is cyclic, the angular momentum $C$ is constant of motion, $$C =\frac{\partial L}{\partial \dot{\theta}} =I \left( \dot{\theta} +\frac{\frac{4}{3}\mathbf{x} \wedge \dot{\mathbf{x}}} {1+\frac{4}{3}|\mathbf{x}|^2} \right).$$ Therefore, the total energy $E$ is given by $$E=\frac{\dot{I}^2}{8I} +\frac{C^2}{2I} +\frac{I}{2} \frac{\frac{4}{3}|\dot{\mathbf{x}}|^2} {(1+\frac{4}{3}|\mathbf{x}|^2)^2} -\frac{\mu(\mathbf{x})}{\sqrt{I}}.$$ The three terms in the kinetic energy are kinetic energy for the size change, for the rotation and for the shape change respectively. The equation of motion for $I$ yields Lagrange-Jacobi identity, $\ddot{I}=4E+2U$. 
From this equation, we get the following “Saari’s relation” [@SaariCollisions], $$\frac{d}{dt}\left( \frac{I^2}{2} \frac{\frac{4}{3}|\dot{\mathbf{x}}|^2} {(1+\frac{4}{3}|\mathbf{x}|^2)^2} \right) =\sqrt{I}\,\frac{d\mu}{dt}.$$ Using the ‘time’ variable $s$ defined by $$\label{defS} \frac{ds}{dt} =\frac{1}{2I}\left( 1+\frac{4}{3}|\mathbf{x}|^2\right),$$ the Saari’s relation is written as $$\label{Saari} \frac{d}{ds}\left(\frac{1}{6}\left|\frac{d\mathbf{x}}{ds}\right|^2 \right) =\sqrt{I}\,\frac{d\mu}{ds}.$$ The equation of motion for $\mathbf{x}$ in terms of $s$ is $$\label{eqOfMotion} \frac{d^2 \mathbf{x}}{ds^2} =\frac{\displaystyle 4C-\frac{8}{3}\mathbf{x}\wedge \frac{d\mathbf{x}}{ds}} {1+\frac{4}{3}|\mathbf{x}|^2} \left( \frac{dy}{ds}, -\frac{dx}{ds}\right) + 3\sqrt{I}\,\frac{\partial \mu}{\partial \mathbf{x}}.$$ Up to here, we didn’t assume $\mu=$ constant. Necessary condition {#sec:nesCond} =================== Now, we consider a motion with $\mu=$ constant. By the Saari’s relation (\[Saari\]), we have $$\left|\frac{d\mathbf{x}}{ds}\right| = v$$ with constant $v \ge 0$. For the case $v=0$, $d\mathbf{x}/ds=0$ then $d^2\mathbf{x}/ds^2=0$. The equation of motion (\[eqOfMotion\]) yields $\partial \mu/\partial \mathbf{x}=0$. Namely, the motion is homographic and the system stays one of the central configurations. Let us examine the case $v>0$. In this case, the point $\mathbf{x}(s)$ moves on the curve $\mu(\mathbf{x})$ with finite speed $v$. Since the number of points $\partial \mu/\partial \mathbf{x}=0$ are five, we can always take a finite arc on which $\partial \mu/\partial \mathbf{x} \ne 0$. To keep satisfy $d\mu/ds=0$, the velocity $d\mathbf{x}/ds$ must be orthogonal to $\partial \mu/\partial \mathbf{x}$, so we have $$\label{velocity} \frac{d\mathbf{x}}{ds}=\frac{\epsilon v}{|\partial \mu/\partial \mathbf{x}|} \left(-\frac{\partial \mu}{\partial y}, \frac{\partial \mu}{\partial x} \right).$$ Here, $\epsilon=\pm 1$ determines the direction of the motion. 
Then, the acceleration (\[eqOfMotion\]) is given by $$\label{acceleration} %\fl \frac{d^2\mathbf{x}}{ds^2} =\Bigg( \frac{\epsilon v}{(1+4|\mathbf{x}|^2/3)|\partial \mu/\partial \mathbf{x}|} \left( 4C -\frac{8\epsilon v}{3|\partial\mu/\partial \mathbf{x}|} \mathbf{x}\cdot\frac{\partial\mu}{\partial\mathbf{x}} \right) +3\sqrt{I} \Bigg) \frac{\partial\mu}{\partial\mathbf{x}}.$$ Thus, the velocity (\[velocity\]) and the acceleration (\[acceleration\]) determine the curvature of this orbit $$%\fl \kappa =\frac{1}{1+4|\mathbf{x}|^2 /3} \left( -\frac{4C}{v} +\frac{8\epsilon}{3|\partial \mu/\partial \mathbf{x}|} \left(\mathbf{x}\cdot \frac{\partial \mu}{\partial \mathbf{x}} \right) \right) -\frac{3\epsilon\sqrt{I}}{v^2} \left| \frac{\partial \mu}{\partial \mathbf{x}} \right|.$$ On the other hand, the curve $\mu(\mathbf{x})=$ constant has its own curvature, $$%\fl \kappa =\frac{\epsilon}{|\partial \mu/\partial \mathbf{x}|^3} \left( \left(\frac{\partial \mu}{\partial y}\right)^2 \frac{\partial^2 \mu}{\partial x^2} - 2\frac{\partial \mu}{\partial x}\frac{\partial \mu}{\partial y} \frac{\partial^2 \mu}{\partial x \partial y} + \left(\frac{\partial \mu}{\partial x}\right)^2 \frac{\partial^2 \mu}{\partial y^2} \right).$$ Equate the two expressions for $\kappa$, we have a necessary condition for the motion, $$\begin{aligned} \fl \sqrt{I} =-\frac{4\epsilon C v}{3(1+4|\mathbf{x}|^2 /3)|\partial \mu/\partial \mathbf{x}|} + \frac{8v^2}{9(1+4|\mathbf{x}|^2 /3)|\partial \mu/\partial \mathbf{x}|^2} \left( \mathbf{x}\cdot \frac{\partial \mu}{\partial \mathbf{x}} \right) \nonumber\\ -\frac{v^2}{3|\partial \mu/\partial \mathbf{x}|^4} \left( \left(\frac{\partial \mu}{\partial y}\right)^2 \frac{\partial^2 \mu}{\partial x^2} - 2\frac{\partial \mu}{\partial x}\frac{\partial \mu}{\partial y} \frac{\partial^2 \mu}{\partial x \partial y} + \left(\frac{\partial \mu}{\partial x}\right)^2 \frac{\partial^2 \mu}{\partial y^2} \right).\end{aligned}$$ This is the condition that any motion with $\mu=$ constant and $d\mathbf{x}/dt \ne 0$ must satisfy. The equation of motion is invariant under the scale transformation $q_k \to \lambda q_k$ and $t \to \lambda^{3/2} t$. This transformation makes $\sqrt{I} \to \lambda \sqrt{I}$, $C \to \lambda^{1/2} C$, $\mathbf{x} \to \mathbf{x}$, $s \to \lambda^{-1/2}s$, and $v \to \lambda^{1/2} v$. Using this invariance, we can take $v=\sqrt{3}$ without loosing generality. We write $C$ for $\epsilon C$. Then, the necessary condition is $$\begin{aligned} \label{nesCondition} \fl \sqrt{I} =-\frac{4C}{\sqrt{3}\,(1+4|\mathbf{x}|^2 /3)|\partial \mu/\partial \mathbf{x}|} + \frac{8}{3(1+4|\mathbf{x}|^2 /3)|\partial \mu/\partial \mathbf{x}|^2} \left( \mathbf{x}\cdot \frac{\partial \mu}{\partial \mathbf{x}} \right) \nonumber\\ -\frac{1}{|\partial \mu/\partial \mathbf{x}|^4} \left( \left(\frac{\partial \mu}{\partial y}\right)^2 \frac{\partial^2 \mu}{\partial x^2} - 2\frac{\partial \mu}{\partial x}\frac{\partial \mu}{\partial y} \frac{\partial^2 \mu}{\partial x \partial y} + \left(\frac{\partial \mu}{\partial x}\right)^2 \frac{\partial^2 \mu}{\partial y^2} \right),\end{aligned}$$ and the energy is given by $$\label{EbyI} E= \frac{1}{2}\left(\frac{d\sqrt{I}}{dt}\right)^2+\frac{C^2+1}{2I}-\frac{\mu}{\sqrt{I}}.$$ Substituting $d\sqrt{I}/dt =(\partial \sqrt{I}/\partial \mathbf{x}) \cdot(d\mathbf{x}/ds) (ds/dt)$, $d\mathbf{x}/ds$ in (\[velocity\]) and the condition (\[nesCondition\]) into this expression for the energy, we will obtain the necessary condition expressed only by the shape variable $\mathbf{x}$. 
However, the condition (\[nesCondition\]) in $\mathbf{x}$ turns out to be so complex to treat. In the next section, we will rewrite the condition (\[nesCondition\]) in a concise form. Invariance of the necessary condition {#sec:invariance} ===================================== Since we are considering equal mass case, the theory is invariant under the permutations of positions $\{q_i\}$. The exchange of $q_1$ and $q_2$ makes $\zeta \to -\zeta$ and $\mathbf{x} \to -\mathbf{x}$. The invariance of the necessary condition (\[nesCondition\]) is manifest. On the other hand, the cyclic permutation $q_1 \to q_2 \to q_3 \to q_1$ makes $$\label{theMap} \zeta \to \zeta'=\frac{3}{2}\, \frac{q_1}{q_3-q_2} =\frac{1}{2}\frac{3/2+\zeta}{1/2-\zeta}.$$ The invariance of (\[nesCondition\]) under this transformation is not manifest. In this section, we will rewrite the necessary condition in a manifestly invariant form. Invariants ---------- Under the map (\[theMap\]), the Lagrange points $\zeta=\pm i \sqrt{3}/2$ are fixed and the Euler points $\zeta=-3/2, 0, 3/2$ are cyclically permuted. Let us define $\mu_k=I^{1/2}/r_{ij}$ for $(i,j,k)=(1,2,3)$, $(2,3,1)$ and $(3,1,2)$. Expressions by $\zeta$ are, $$\label{def:muk} \fl \mu_1=\frac{1}{|\zeta-1/2|}\sqrt{\frac{1}{2}+\frac{2}{3}|\zeta|^2},\quad \mu_2=\frac{1}{|\zeta+1/2|}\sqrt{\frac{1}{2}+\frac{2}{3}|\zeta|^2},\quad \mu_3=\sqrt{\frac{1}{2}+\frac{2}{3}|\zeta|^2}.$$ These three $\mu_k$ are also cyclically permuted by (\[theMap\]). Note that the exchange $q_i \leftrightarrow q_j$ makes the exchange $\mu_i \leftrightarrow \mu_j$. Therefore, $\mu=\mu_1+\mu_2+\mu_3$ is invariant under the permutations of $q_i$. The kinetic energy for the shape change must be invariant. Actually, we can easily check the invariance of $$\label{metric1} \frac{4}{3}\frac{|d\zeta|^2}{(1+\frac{4}{3}|\zeta|^2)^2}.$$ So, it is natural to treat the space of $\zeta$ as a metric space whose distance is given by the equation (\[metric1\]), and the map (\[theMap\]) is the isometric transformation. Actually Wu-Yi Hsiang and Eldar Straume [@Hsiang1995; @Hsiang2006], Alain Chenciner and R. Montgomery [@CMfigure8], R. Montgomery [@Montgomery2002], and R. Mockel [@MoeckelShooting] showed that this metric space is the “shape sphere” and the distance (\[metric1\]) is the distance on the shape sphere. Kenji Hiro Kuwabara and Kiyotaka Tanikawa also noticed that the shape sphere is useful to investigate the equal-mass free-fall problem[@KuwabaraTanikawa; @TanikawaKuwabara]. The map (\[theMap\]) makes the shape sphere $2\pi/3$ rotation around the axis that connects the two Lagrange points. The map $\zeta \to -\zeta$ makes $\pi$ rotation around the axis that connects one of the Euler point (corresponds to $\mathbf{x}=0$) and one of two-body collision (corresponds to $\mathbf{x}=\infty$). Let us use the notations in the tensor analysis. We write $\zeta=x^1+ix^2$, $\mathbf{x}=(x,y)=(x^1, x^2)$ and $\partial_i = \partial/\partial x^i$. The metric tensor $g_{ij}$ and its inverse are $$g_{ij}=\frac{4}{3}\frac{\delta_{ij}}{(1+\frac{4}{3}|\mathbf{x}|^2)^2}, \quad \left(g_{ij}\right)^{-1}=g^{ij}=\frac{3}{4}\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^2 \delta^{ij},$$ where $\delta_{ij}=\delta^{ij}$ are the Kronecker’s delta, $$\delta_{ij}=\delta^{ij}= \cases{ 1 &for $i=j$,\\ 0 &for $i\ne j$. }$$ Let $|g|$ be the determinant of $g_{ij}$, $$|g|=\textrm{det}(g_{ij})=\frac{16}{9}\frac{1}{(1+\frac{4}{3}|\mathbf{x}|^2)^4}.$$ As mentioned above, the configurational measure $\mu$ is invariant. 
One obvious invariant is the magnitude of the gradient vector of $\mu$. We write $$|\nabla \mu|^2 =\sum_{i,j} g^{ij}(\partial_i \mu) (\partial_j \mu) =\frac{3}{4}\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^2 \left| \frac{\partial \mu}{\partial \mathbf{x}} \right|^2. %\left( % \left(\frac{\partial \mu}{\partial x}\right)^2 % + % \left(\frac{\partial \mu}{\partial y}\right)^2 %\right)$$ Therefore, the first term of the right hand side of the necessary condition (\[nesCondition\]) is simply $-2C/|\nabla \mu|$. The other obvious invariant is the Laplacian of $\mu$, $$\Delta \mu =\sum_{ij}\frac{1}{\sqrt{|g|}}\partial_i\left(g^{ij}\sqrt{|g|}\partial_j \mu\right) =\frac{3}{4}\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^2 %\sum_{i} (\partial_i \partial_i \mu). %\left(\frac{\partial^2}{\partial x^2} % +\frac{\partial^2}{\partial y^2} %\right)\mu. \frac{\partial}{\partial \mathbf{x}} \cdot \frac{\partial \mu}{\partial \mathbf{x}}$$ Now, let us consider the following invariant, $$%\fl \lambda= \sum_{ij}g^{ij} (\partial_i \mu)(\partial_j |\nabla \mu|^2) =\frac{3}{4}\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^2 %\left( % \frac{\partial \mu}{\partial x} % \frac{\partial}{\partial x} % + % \frac{\partial \mu}{\partial y} % \frac{\partial }{\partial y} %\right) \frac{\partial \mu}{\partial \mathbf{x}} \cdot \frac{\partial}{\partial \mathbf{x}} |\nabla\mu|^2.$$ Explicitly performing the differentials, it yields $$%hf \fl \lambda %=\frac{3}{4}\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^2 % \sum_i % (\partial_i \mu) % \partial_i \left( % \frac{3}{4}\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^2 % \left| \frac{\partial \mu}{\partial \mathbf{x}} \right|^2 % \right) =3\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^3 \left(\mathbf{x}\cdot\frac{\partial \mu}{\partial \mathbf{x}}\right) \left|\frac{\partial \mu}{\partial \mathbf{x}}\right|^2 + \frac{9}{16} \left(1+\frac{4}{3}|\mathbf{x}|^2\right)^4 %hf \sum_i (\partial_i \mu) %hf \partial_i %hf \left( \sum_\ell (\partial_\ell \mu)^2 \right). \frac{\partial \mu}{\partial \mathbf{x}} \cdot \frac{\partial}{\partial \mathbf{x}} \left| \frac{\partial \mu}{\partial \mathbf{x}} \right|^2$$ Using this expression, the second and the third terms in the necessary condition (\[nesCondition\]) is simply expressed as, $\lambda/(2|\nabla \mu|^4) -\Delta \mu/|\nabla \mu|^2$. Thus, the necessary condition is expressed in the following invariant form, $$\label{theNesCondInvariantForm} \sqrt{I} =-\frac{2C}{|\nabla \mu|} +\frac{\lambda}{2|\nabla \mu|^4} - \frac{\Delta \mu}{|\nabla \mu|^2}.$$ The last obvious invariant what we will use is $$\label{defD} %\fl D\phi =\frac{1}{\sqrt{|g|}}\sum_{i,j} \epsilon^{ij} (\partial_i \mu) (\partial_j \phi) =\frac{3}{4}\left(1+\frac{4}{3}|\mathbf{x}|^2\right)^2 % \left( % (\partial_1 \mu)(\partial_2 \phi) % -(\partial_2 \mu)(\partial_1 \phi) % \frac{\partial \mu}{\partial x}\frac{\partial}{\partial y} % - % \frac{\partial \mu}{\partial y}\frac{\partial}{\partial x} % \right) \phi, \frac{\partial \mu}{\partial \mathbf{x}} \wedge \frac{\partial \phi}{\partial \mathbf{x}}$$ for any invariant $\phi$. Where, $\epsilon^{ij}$ is the Levi-Civita’s anti-symmetric symbol, $$\epsilon^{ij}= \cases{ 1 &for $(i,j)=(1,2)$,\\ -1 &for $(i,j)=(2,1)$,\\ 0 &for $i=j$. }$$ Then, using equations (\[defS\]), (\[velocity\]) and (\[defD\]), we have $$\label{dandD} \frac{d\phi}{dt} =\frac{\epsilon}{I}\frac{D\phi}{|\nabla \mu|}.$$ Invariant variables ------------------- For the Newton potential, it is natural to use the variables $r_1$ and $r_2$ defined by (\[def:r1andr2\]). 
Relations between $\mu_k$ defined in (\[def:muk\]) and $r_1$, $r_2$ are $$\label{eq:defOfMus} \mu_1=r_1^{-1}\mu_3,\quad \mu_2=r_2^{-1}\mu_3,\quad \mu_3=\sqrt{(1+r_1^2+r_2^2)/3}.$$ Now, consider the expression for the above invariants $|\nabla\mu|^2$, $\Delta\mu$, $\lambda$ in terms of $r_1$ and $r_2$. Let us write one of them $\psi(r_1, r_2)$. It is composed by differentials of $\mu$ by $r_1$ or $r_2$ and products of $r_1$ and $r_2$. Then, the result is composed of terms of rational function of $\sqrt{(1+r_1^2+r_2^2)/3}$, $r_1$ and $r_2$, namely $\mu_3$, $\mu_3/\mu_1$ and $\mu_3/\mu_2$. Then, $\psi$ has the following form $$\fl \psi =f(r_1,r_2)+g(r_1,r_2)\sqrt{\frac{1+r_1^2+r_2^2}{3}} =f\left(\frac{\mu_3}{\mu_1},\frac{\mu_3}{\mu_2}\right) +g\left(\frac{\mu_3}{\mu_1},\frac{\mu_3}{\mu_2}\right)\mu_3.$$ Here, $f$ and $g$ represent some rational functions. The function $\psi$ is invariant under the permutation of $q_i$, namely the permutation of $\mu_i$. So, it must be a ratio of some symmetric polynomials of $\mu_i$. Therefore, it must have the following expression $$\label{invariantsInMuandRho} \psi=h(\mu,\nu,\rho),$$ where $h$ is a rational function of elementary symmetric polynomials $$\mu=\mu_1+\mu_2+\mu_3,\quad \nu=\mu_1\mu_2+\mu_2\mu_3+\mu_3\mu_1,\quad \rho=\mu_1\mu_2\mu_3.$$ Expression in terms of $\mu_k$ or in terms of $\mu$, $\nu$, $\rho$ is not unique, since, by the relation (\[eq:defOfMus\]), there is an identity $\mu_1^{-2}+ \mu_2^{-2}+\mu_3^{-2}=3$. Namely, $$\label{identity} \mu_1^2\mu_2^2+\mu_2^2\mu_3^2+\mu_3^2\mu_1^2 =3\mu_1^2\mu_2^2\mu_3^2.$$ Therefore, we can eliminate $\nu$, using $$\nu =\sqrt{2\mu\rho+3\rho^2}.$$ The expression of $\psi=h(\mu,\sqrt{2\mu\rho+3\rho^2},\rho)$ is unique. Thus, the necessary condition will be expressed by a function of invariant shape variables $\mu$ and $\rho$. Let us express $|\nabla \mu|^2$ by $\mu$ and $\rho$. In terms of $r_i$, it is $$\fl |\nabla \mu|^2 =\frac{(1+r_1^2+r_2^2)^2}{3} \left( \left(\frac{\partial \mu}{\partial r_1}\right)^2 +\left(\frac{\partial \mu}{\partial r_2}\right)^2 +\frac{r_1^2+r_2^2-1}{r_1 r_2} \frac{\partial \mu}{\partial r_1} \frac{\partial \mu}{\partial r_2} \right).$$ By a direct calculation, we get $$\begin{aligned} \fl |\nabla \mu|^2 =\frac{1+r_1^2+r_2^2}{9r_1^4 r_2^4} \Bigg( 2r_1^4r_2^4(r_1^2+r_2^2)\nonumber\\ +r_1^4r_2^4(r_1+r_2) -r_1r_2(r_1^7+r_2^7) -r_1^4r_2^4 % -4r_1^3r_2^3(r_1+r_2)\nonumber\\ +(2r_1^6+r_1^5r_2-r_1^4r_2^2 -4r_1^3r_2^3 -r_1^2r_2^4 +r_1r_2^5 +2r_2^6)\nonumber\\ +r_1r_2(r_1^3+r_2^3) +2(r_1^4+r_2^4) -r_1r_2 \Bigg).\end{aligned}$$ Substituting $r_1=\mu_3/\mu_1$ and $r_2=\mu_3/\mu_2$, we obtain, $$\begin{aligned} \label{eq:nablaMuSquare1} \fl |\nabla \mu|^2 =\frac{(\mu_1^2\mu_2^2+\mu_2^2\mu_3^2+\mu_3^2\mu_1^2)} {9\mu_1^6 \mu_2^6 \mu_3^6} \Bigg( -(\mu_1^7\mu_2^7+\mu_2^7\mu_3^7+\mu_3^7\mu_1^7) \nonumber\\ -\mu_1^4\mu_2^4\mu_3^4(\mu_1^2+\mu_2^2+\mu_3^2) -4\mu_1^4\mu_2^4\mu_3^4 (\mu_1\mu_2+\mu_2\mu_3+\mu_3\mu_1) \nonumber\\ +2(\mu_1^8\mu_2^4\mu_3^2+\dots) +(\mu_1^7\mu_2^4\mu_3^3+\dots) \Bigg).\end{aligned}$$ In the last line, dots in parentheses represent similar 5 terms of permutation of $\mu_1, \mu_2, \mu_3$. 
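The identity above, and the resulting elimination of $\nu$, can be verified symbolically; the following is a small check using sympy (not part of the derivation, merely a sanity check of the algebra).

```python
import sympy as sp

# mu_1 = mu_3/r_1, mu_2 = mu_3/r_2, mu_3 = sqrt((1 + r_1^2 + r_2^2)/3)
r1, r2 = sp.symbols('r1 r2', positive=True)
mu3 = sp.sqrt((1 + r1**2 + r2**2) / 3)
mu1, mu2 = mu3 / r1, mu3 / r2

# Identity: mu1^2 mu2^2 + mu2^2 mu3^2 + mu3^2 mu1^2 = 3 mu1^2 mu2^2 mu3^2
lhs = mu1**2 * mu2**2 + mu2**2 * mu3**2 + mu3**2 * mu1**2
assert sp.simplify(lhs - 3 * mu1**2 * mu2**2 * mu3**2) == 0

# Consequence: nu = sqrt(2*mu*rho + 3*rho^2) for the elementary symmetric polynomials
mu = mu1 + mu2 + mu3
nu = mu1 * mu2 + mu2 * mu3 + mu3 * mu1
rho = mu1 * mu2 * mu3
assert sp.simplify(nu**2 - (2 * mu * rho + 3 * rho**2)) == 0
```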
Then expressing by $\mu$, $\nu$, $\rho$, we obtain a expression, $$\begin{aligned} \label{eq:nablaMuSquare2} \fl |\nabla \mu|^2 =\frac{\nu^2-2\mu\rho}{9\rho^6} \Bigg( -\nu ^7+7 \mu \nu ^5 \rho +2 \mu ^4 \nu ^2 \rho ^2 \nonumber\\ -22 \mu ^2 \nu ^3 \rho ^2-3 \nu ^4 \rho ^2-4 \mu ^5 \rho ^3 +24 \mu ^3 \nu \rho ^3+18 \mu \nu ^2 \rho ^3-27 \mu ^2 \rho ^4 \Bigg).\end{aligned}$$ As mentioned above, the expressions (\[eq:nablaMuSquare1\]) and (\[eq:nablaMuSquare2\]) are not unique due to the identity (\[identity\]). Eliminating $\nu$, we finally get the following unique expression $$\fl |\nabla \mu|^2 =-\mu^2+2\mu^4+6\mu\rho-9\rho^2 -3(2\mu^2-\mu\rho+3\rho^2) \sqrt{2\mu\rho+3\rho^2}. \label{nablamu2}$$ Thus, we get the expression for $|\nabla \mu|^2$ in manifestly invariant variables $\mu$ and $\rho$. By a similar way, $\Delta \mu$ in $(r_1, r_2)$ and $(\mu, \rho)$ are $$\fl \Delta \mu =\frac{(1+r_1^2+r_2^2)^2}{3} \left( \frac{1}{r_1}\frac{\partial}{\partial r_1} \left(r_1\frac{\partial\mu}{\partial r_1}\right) + \frac{1}{r_2}\frac{\partial}{\partial r_2} \left(r_2\frac{\partial\mu}{\partial r_2}\right) + \frac{r_1^2+r_2^2-1}{r_1 r_2} \frac{\partial^2 \mu}{\partial r_1\partial r_2} \right),$$ $$\fl \Delta \mu =\mu+2\mu^3+6\rho -6\mu\sqrt{2\mu\rho+3\rho^2}. \label{Deltamu}$$ Similarly, the expressions for $\lambda$ are $$\fl \lambda =\frac{(1+r_1^2+r_2^2)^2}{3} \left( \frac{\partial\mu}{\partial r_1}\frac{\partial}{\partial r_1} + \frac{\partial\mu}{\partial r_2}\frac{\partial}{\partial r_2} + \frac{r_1^2+r_2^2-1}{2r_1 r_2} \left( \frac{\partial\mu}{\partial r_1}\frac{\partial}{\partial r_2} + \frac{\partial\mu}{\partial r_2}\frac{\partial}{\partial r_1} \right) \right)|\nabla \mu|^2,$$ $$\begin{aligned} \fl \lambda=\frac{1}{2}\Bigg( 4\mu^3-24\mu^5+32\mu^7-72\mu^2\rho+660\mu^4\rho+324\mu\rho^2 \nonumber\\ +36\mu^3\rho^2-432\rho^3+891\mu^2\rho^3+2349\mu\rho^4-243\rho^5 \nonumber\\ +3\Big(24\mu^3-60\mu^5-156\mu^2\rho+28\mu^4\rho+324\mu\rho^2 \nonumber\\ -93\mu^3\rho^2 -216\rho^3-27\mu^2\rho^3+81\mu\rho^4\Big)\sqrt{2\mu\rho+3\rho^2} \Bigg). \label{lambda}\end{aligned}$$ Finally, $(D\rho)^2$ is also invariant under the exchange of $q_i$, therefore, it has an expression by $\mu$ and $\rho$, $$\fl (D\rho)^2 =\frac{(1+r_1^2+r_2^2)^4 \Big(2(r_1^2+r_2^2)-(r_1^2-r_2^2)^2-1\Big)}{36r_1^2r_2^2} \left( \frac{\partial \mu}{\partial r_1}\frac{\partial \rho}{\partial r_2} - \frac{\partial \mu}{\partial r_2}\frac{\partial \rho}{\partial r_1} \right)^2,$$ $$\begin{aligned} \fl (D\rho)^2 =\frac{\rho^2(2\mu+3\rho)}{4} \Bigg( -(2\mu+3\rho) (4\mu^4+134\mu\rho-12\mu^3\rho-177\rho^2+9\mu^2\rho^2) \nonumber\\ +2(28\mu^3+108\rho-36\mu^2\rho-45\mu\rho^2+54\rho^3) \sqrt{2\mu\rho+3\rho^2} \Bigg).\end{aligned}$$ Proof of the Saari’s conjecture {#sec:proof} =============================== In the previous section, we find the expression for the necessary condition (\[theNesCondInvariantForm\]) in terms of $\mu$ and $\rho$ by (\[nablamu2\]), (\[Deltamu\]) and (\[lambda\]). Since, we are assuming $\mu=$ constant, time dependent variable is only $\rho$. Therefore, $d \sqrt{I}/dt = (\partial \sqrt{I}/\partial \rho)(d\rho/dt)$. 
Using (\[dandD\]),
$$\left(\frac{d \sqrt{I}}{dt}\right)^2
= \frac{1}{I^2} \frac{(D\rho)^2}{|\nabla \mu|^2}
\left(\frac{\partial \sqrt{I}}{\partial \rho}\right)^2.$$
Substituting this expression and the necessary condition (\[theNesCondInvariantForm\]) into the expression of the energy (\[EbyI\]), we obtain the necessary condition for $\rho$ with three parameters $E$, $C$ and $\mu$,
$$\label{nesCond2}
E=\frac{1}{2I^2} \frac{(D\rho)^2}{|\nabla \mu|^2}
\left(\frac{\partial \sqrt{I}}{\partial \rho}\right)^2
+\frac{C^2+1}{2I}-\frac{\mu}{\sqrt{I}}.$$
If there is some finite motion with $\mu=$ constant that is non-homographic, this condition must be satisfied over some finite range of $\rho$. However, since the right hand side of (\[nesCond2\]) is an analytic function of $\rho$, the condition (\[nesCond2\]) must then be satisfied for the whole range of $\rho$.

In the vicinity of $\rho=0$, we have the expansion of (\[theNesCondInvariantForm\])
$$\sqrt{I}=a_0+a_{1/2}\sqrt{\rho}+a_1 \rho + O(\rho^{3/2}),$$
with
$$\begin{aligned}
a_0=\frac{2(1-\mu^2+C\sqrt{-1+2\mu^2})}{\mu(1-2\mu^2)},\\
a_{1/2}=\frac{3\sqrt{2}\big((-2+\mu^2)\sqrt{-1+2\mu^2}-2C(-1+2\mu^2)\big)}
{(1-2\mu^2)^2\sqrt{\mu(-1+2\mu^2)}},\\
a_1=\frac{3\big((-2+\mu^2)(1+6\mu^2)-2C(1+7\mu^2)\sqrt{-1+2\mu^2}\big)}
{\mu^2(-1+2\mu^2)^3},\end{aligned}$$
and
$$\begin{aligned}
|\nabla \mu|^2&=\mu^2(-1+2\mu^2)-6\sqrt{2}\mu^{5/2}+6\mu\rho+O(\rho^{3/2}),\\
(D\rho)^2&=-4\mu^6\rho^2+O(\rho^{5/2}).\end{aligned}$$
We then expand (\[nesCond2\]) as a power series in $\sqrt{\rho}$ around $\rho=0$, up to order $\rho$. The term of order $\rho^0$ in (\[nesCond2\]) determines $E$; therefore, this order gives no information about $C$ and $\mu$. The coefficient of order $\sqrt{\rho}$ is
$$0=\frac{-3\mu^{5/2}
\Big(1+C\sqrt{-1+2\mu^2}\Big)^2
\Big( (-2+\mu^2)-2C\sqrt{-1+2\mu^2} \Big)}
{ 4\sqrt{2} \Big( -1+\mu^2-C\sqrt{-1+2\mu^2} \Big)^3 }.$$
The solutions of this equation for $C$ are
$$C=-\frac{1}{\sqrt{-1+2\mu^2}},\quad\frac{-2+\mu^2}{2\sqrt{-1+2\mu^2}}.$$
For the case $C=-1/\sqrt{-1+2\mu^2}$,
$$\sqrt{I}
=\frac{2\mu}{-1+2\mu^2}
+\frac{3\sqrt{2}\mu^{3/2}}{(-1+2\mu^2)^2}\sqrt{\rho}
+\frac{9(1+2\mu^2)}{(-1+2\mu^2)^3}\rho
+O(\rho^{3/2}),$$
and the order $\rho^1$ coefficient in the equation (\[nesCond2\]) is
$$0
=-\frac{9\mu(-2+\mu^2)}{16(-1+2\mu^2)}.$$
However, the right hand side is always negative, since $\mu=\sqrt{(1+r_1^2+r_2^2)/3}\,(1+1/r_1+1/r_2)\ge 3$, so this case leads to a contradiction. For the case $C=(-2+\mu^2)/(2\sqrt{-1+2\mu^2})$, the coefficient $a_{1/2}$ vanishes,
$$\sqrt{I}
=\frac{\mu}{4(-1+2\mu^2)}
-\frac{3(-2+\mu^2)}{4(-1+2\mu^2)^3}\rho
+O(\rho^{3/2}),$$
and the coefficient of order $\rho^1$ in the equation (\[nesCond2\]) is
$$0=\frac{3\mu(-2+\mu^2)}{4(-1+2\mu^2)}.$$
However, the right hand side is always positive for $\mu\ge 3$, which is again a contradiction. Thus, there are no parameters $C$ and $\mu$ that satisfy the necessary condition (\[nesCond2\]). This completes the proof of Saari's homographic conjecture.

Discussions {#sec:discussions}
===========

We have proved Saari's homographic conjecture for the equal-mass planar three-body problem under Newtonian gravity. The symmetry under the permutation of the positions $\{q_1, q_2, q_3\}$ plays a crucial role in our method. For the equal-mass, Newtonian-potential case, the necessary condition (\[theNesCondInvariantForm\]) is a symmetric rational function of $\mu_1$, $\mu_2$ and $\mu_3$. Thus, it is a function of $\mu$ and $\rho$ as in equation (\[invariantsInMuandRho\]). This makes our proof simple.
The next step will be the case with general mass ratios and a general homogeneous potential $U=\sum m_i m_j /r_{ij}^\alpha$, $\alpha>0$. For this case, however, an invariant function under permutations of the body indices will not have a simple form in manifestly invariant variables such as $\mu$ and $\rho$. We hope that, someday, someone will find a proof of the conjecture for general mass ratios under the Newton potential by some extension of our method. On the other hand, we are afraid that it will be hard to extend our method to general $\alpha$; a completely new method would have to be found for that case.

This research by one of the authors (T. Fujiwara) has been supported by the JSPS Grant-in-Aid for Scientific Research 23540249.

A. Chenciner and R. Montgomery, *A remarkable periodic solution of the three-body problem in the case of equal masses*, Annals of Mathematics, **152**, 881–901, 2000.

F. Diacu, T. Fujiwara, E. Pérez-Chavela and M. Santoprete, *Saari's homographic conjecture of the three-body problem*, Transactions of the American Mathematical Society, **360**, 12, 6447–6473, 2008.

T. Fujiwara, H. Fukuda, H. Ozaki and T. Taniguchi, *Saari's homographic conjecture for planar equal-mass three-body problem under a strong force potential*, J. Phys. A: Math. Theor. **45** (2012) 045208.

W. Y. Hsiang and E. Straume, *Kinematic geometry of triangles with given mass distribution*, PAM-636 (1995), Univ. of Calif., Berkeley.

W. Y. Hsiang and E. Straume, *Kinematic geometry of triangles and the study of the three-body problem*, arXiv:math-ph/0608060, 2006.

K. H. Kuwabara and K. Tanikawa, *A new set of variables in the three-body problem*, Publications of the Astronomical Society of Japan, **62**, 1–7, 2010.

R. Moeckel, *Shooting for the eight – a topological existence proof for a figure-eight orbit of the three-body problem*, http://www.math.umn.edu/~rmoeckel/research/FigureEight12.pdf, 2007.

R. Moeckel and R. Montgomery, private communications.

R. Montgomery, *Infinitely Many Syzygies*, Archive for Rational Mechanics and Analysis, **164**, 4, 311–340, 2002.

D. Saari, *Collisions, rings, and other Newtonian N-body problems*, CBMS Regional Conference Series in Mathematics, number 104, American Mathematical Society, 2005.

D. Saari, *Some ideas about the future of Celestial Mechanics*, Conf. Saarifest (Guanajuato, Mexico, 8 April), 2005. See also Donald Saari, *Reflections on my conjecture, and several new ones*, http://math.uci.edu/~dsaari/conjecture-revisited.pdf

K. Tanikawa and K. Kuwabara, *The planar three-body problem with angular momentum*, in 'Resonances, Stabilization, and Stable Chaos in Hierarchical Triple Systems', pp. 71–76, Eds. V. V. Orlov and A. V. Rubinov, Proceedings of a workshop held in Saint Petersburg, Russia, 26–29 August 2007, Saint Petersburg University, Saint Petersburg, 2008.
--- abstract: | Health economic evaluations face the issues of non-compliance and missing data. Here, non-compliance is defined as non-adherence to a specific treatment, and occurs within randomised controlled trials (RCTs) when participants depart from their random assignment. Missing data arises if, for example, there is loss to follow-up, survey non-response, or the information available from routine data sources is incomplete. Appropriate statistical methods for handling non-compliance and missing data have been developed, but they have rarely been applied in health economics studies. Here, we illustrate the issues and outline some of the appropriate methods to handle these with an application to a health economic evaluation that uses data from an RCT. In an RCT the random assignment can be used as an instrument for treatment receipt, to obtain consistent estimates of the complier average causal effect, provided the underlying assumptions are met. Instrumental variable methods can accommodate essential features of the health economic context such as the correlation between individuals’ costs and outcomes in cost-effectiveness studies. Methodological guidance for handling missing data encourages approaches such as multiple imputation or inverse probability weighting, that assume the data are Missing At Random, but also sensitivity analyses that recognise the data may be missing according to the true, unobserved values, that is, Missing Not at Random. Future studies should subject the assumptions behind methods for handling non-compliance and missing data to thorough sensitivity analyses. Modern machine learning methods can help reduce reliance on correct model specification. Further research is required to develop flexible methods for handling more complex forms of non-compliance and missing data. author: - | Karla  DiazOrdaz, Richard  Grieve\ London School of Hygiene and Tropical Medicine title: 'Non-compliance and missing data in health economic evaluation' --- Key words: non-compliance; missing data; instrumental variables; multiple imputation; inverse probability weighting; sensitivity analyses Introduction ============ Health economic studies that evaluate technologies, health policies or public health interventions are required to provide unbiased, efficient estimates of the causal effects of interest. Health economic evaluations are recommended to use data from well-designed randomised controlled trials (RCTs). However, a common problem in RCTs is non-compliance, in that trial participants may depart from their randomised treatment. @Brilleman2015 highlight that when faced with non-compliance, health economic evaluations have resorted to ‘per protocol’ (PP) analyses. A PP analysis excludes those patients who depart from their randomised allocation, and as the decision to switch treatment is likely to be influenced by prognosis, is liable to provide biased estimates of the causal effect of treatment receipt [@Ye2014]. Another concern is that the required data on outcomes, resource use or covariates may be missing for a substantial proportion of patients. Missing data can arise when patients are lost to follow-up, fail to complete the requisite questionnaires, or when information from routine data sources is incomplete. Most published health economic studies use methods that assume the data are Missing Completely at Random [@Noble2012; @Leurent2017]. 
A related concept is ‘censoring’ which refers specifically to when some patients are follow-up for less than the full period of follow-up. For example, in an RCT, those participants who are recruited later to the study, may have their survival data censored at the end of the study period, and it may be reasonable to assume that these data are ‘censored completely at random’ also known as non-informative censoring [@Willan2006book]. More generally in health economic studies, neither non-compliance, nor missing data are likely to be completely at random, and could be associated with the outcome of interest. Unless the underlying mechanisms for the non-compliance or missing data are recognised in the analytical methods, health economic studies may provide biased, inefficient estimates of the causal effects of interest. Appropriate methods for handling non-compliance and missing data have been developed in the wider biostatistics and econometrics literatures (see for example @Angrist1996 [@Robins1994a; @Bang2005; @lr02; @mk07; @Heckman1979]), but reviews report low uptake of these methods in applied health economics studies [@Noble2012; @Latimer2014; @Brilleman2015; @Leurent2017]. Several authors have proposed approaches for handling non-compliance and missing data, suitable for the applied health economics context. In particular, @DiazOrdaz2017, propose methods for handling non-compliance in cost-effectiveness studies that use RCTs, @Latimer2014 and @White2015 exemplify methods for handling non-compliance with survival (time to event) outcomes, while @Blough2009 [@Briggs2003; @Burton2007; @DiazOrdaz2014; @DiazOrdaz2016; @Faria2014] and @Gomes2013 exemplify methods for handling missing data in health economic evaluations. The objective of this chapter is to describe and critique alternative methods for handling non-compliance and missing data in health economics studies. The methods are illustrated with a health economic evaluation that uses data from a single RCT, but we also consider these methods in health economic studies more widely, and identify future research priorities. The chapter proceeds as follows. First, we exemplify the problems of both non-compliance and missing data with the REFLUX case study. Second, we define the necessary assumptions for identifying the parameters of interest, and propose estimation strategies for addressing non-compliance, and then missing data. Third, we contrast some alternative methods in the context of trial-based health economic evaluation. Fourth, we briefly review developments from the wider methodological literature, and identify key research priorities for future health economics studies. Motivating example: Cost-effectiveness analysis using the REFLUX study {#Sec:Reflux} ====================================================================== The REFLUX study was a UK multicentre RCT with a parallel design, in which patients with moderately severe gastro-oesophageal reflux disease (GORD), were randomly assigned to medical management or laparoscopic surgery [@Grant2008; @Grant2013]. Resource use and health-related quality of life (QoL), assessed by the EQ-5D (3 levels), were recorded annually for up to five years. Table 1 reports the main characteristics of the study. The original cost-effectiveness analysis (CEA) estimated the linear additive treatment effect on mean costs, $Y_{1i}$ and QALYs, $Y_{2i}$ with a seemingly unrelated regression (SUR) model [@Zellner1962; @Willan2004]. 
This model allows for correlation between individual QALYs and costs, and can adjust for different baseline covariates in each of the equations. For example, below we have a system of equations in which both outcomes are adjusted for the EQ-5D utility score at baseline, denoted by $\mbox{EQ5D}_{0}$:
$$\label{sur}
\begin{array}{c}
Y_{1i} = \beta_{0,1}+ \beta_{1,1} \mbox{treat}_i + \beta_{2,1} \mbox{$\mbox{EQ5D}_{0}$}_{i} + \epsilon_{1i}\\
Y_{2i}= \beta_{0,2}+\beta_{1,2} \mbox{treat}_i + \beta_{2,2} \mbox{$\mbox{EQ5D}_{0}$}_{i} + \epsilon_{2i}
\end{array}$$
Here $\beta_{1,1}$ and $\beta_{1,2}$ represent the incremental costs and QALYs respectively. The error terms are required to satisfy $E[\epsilon_{1i}]=E[\epsilon_{2i}]=0, \ E[\epsilon_{ \ell i} \epsilon_{\ell'i}] =\sigma_{\ell \ell'},\ E[\epsilon_{ \ell i} \epsilon_{\ell'j}]= 0,\ \mbox{\ for\ }\ell,\ \ell'\in\{1,2\}, \mbox{\ and for\ }i\neq j.$ This is an example of a special type of SUR model in which the same set of covariates is used in all the equations, the so-called multivariate regression model.

Since each equation in the SUR system is a regression, consistent estimation could be achieved by OLS, but this would be inefficient. Efficient estimation is by generalised least squares (GLS), which requires the covariance matrix to be known[^1], or by the feasible version (FGLS), which uses a consistent estimate of the covariance matrix when estimating the coefficients [@Zellner1962; @Zellner1962a]. There are some situations where GLS (and FGLS) will not be any more efficient than OLS estimation: (1) if the regressors in a block of equations in the system are a subset of those in another block of the same system, then GLS and OLS are identical when estimating the parameters of the smaller set of equations, and (2) in the special case of multivariate regression, where the SUR equations have identical explanatory variables, OLS estimation is identical to GLS [@Davidson2004].

The SUR approach can be used to estimate incremental costs and health effects, which in turn can be used to produce incremental cost-effectiveness ratios (ICERs) and incremental net monetary benefits (INBs). Here we focus on the INB, defined as $\mbox{INB}(\lambda) = \lambda \beta_{1,2}-\beta_{1,1}$, where $\lambda$ is the willingness-to-pay threshold. The standard error of the INB can be calculated from the standard errors and correlations of the estimated incremental costs and QALYs, $\hat{\beta}_{1,1}$ and $\hat{\beta}_{1,2}$, following the usual rules for the variance of a linear combination of two random variables.

In the REFLUX trial a sizeable minority of the patients crossed over from their randomised treatment assignment (see Table 1), and the proportions of patients who switched in the RCT were higher than in routine clinical practice [@Grant2008; @Grant2013]. In the RCT, cost and QoL data were missing for a high proportion of the patients randomly assigned to either strategy.

![Table 1: The REFLUX study: descriptive statistics and levels of missing data over the five year follow-up period](Table1RefluxChapter.pdf){width="100.00000%"}

The original study reported a PP analysis [@Grant2013], but this is liable to provide a biased estimate of the causal effect of the treatment received. Faced with missing data, the REFLUX study used multiple imputation (MI) and complete case analysis (CCA).
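To make the mechanics of this special case concrete, the following minimal sketch (Python, entirely simulated data and illustrative parameter values, not the REFLUX data) fits the two equations by OLS, which coincides with GLS here because the regressors are identical, and then forms the INB and its standard error from the estimated coefficients and the residual covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 300, 20000                        # illustrative sample size and willingness-to-pay threshold

# Simulated data: randomised arm, baseline EQ-5D, total costs and QALYs
treat = rng.integers(0, 2, n)
eq5d0 = rng.uniform(0.3, 1.0, n)
cost = 4000 + 2000 * treat - 1000 * eq5d0 + rng.normal(0, 1500, n)
qaly = 2.0 + 0.3 * treat + 1.5 * eq5d0 + rng.normal(0, 0.5, n)

# Multivariate regression: identical regressors in both equations, so OLS = GLS
X = np.column_stack([np.ones(n), treat, eq5d0])
Y = np.column_stack([cost, qaly])
XtX_inv = np.linalg.inv(X.T @ X)
B = XtX_inv @ X.T @ Y                      # columns: cost equation, QALY equation
resid = Y - X @ B
Sigma = resid.T @ resid / (n - X.shape[1]) # residual covariance across the two equations

# Incremental cost, incremental QALYs, and INB(lambda) with its standard error
d_cost, d_qaly = B[1, 0], B[1, 1]
v_treat = XtX_inv[1, 1]
var_cost, var_qaly = Sigma[0, 0] * v_treat, Sigma[1, 1] * v_treat
cov_cq = Sigma[0, 1] * v_treat             # cross-equation covariance of the two coefficients
inb = lam * d_qaly - d_cost
se_inb = np.sqrt(lam**2 * var_qaly + var_cost - 2 * lam * cov_cq)
print(f"INB({lam}) = {inb:.0f} (SE {se_inb:.0f})")
```

The final two lines are the linear-combination rule referred to above; ignoring the covariance term between incremental costs and QALYs would misstate the uncertainty around the INB.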
In Section \[Sec:ResultsReflux\], we re-analyse the REFLUX study with the methods described in the next sections to report a causal effect of the receipt of surgery versus medical management that uses all the available data, and recognises the joint distribution of costs and QALYs. Next, we consider approaches taken to the issue of non-compliance in health economic evaluation more widely, and provide a framework for estimating causal effects in CEA. Non-compliance in health economic evaluation ============================================ The intention-to-treat (ITT) estimand can provide an unbiased estimate of the intention to receive a particular treatment, but not of the causal effect of the treatment actually received. Instrumental variable (IV) methods can identify the complier average causal effect (CACE), also known as the local average treatment effect (LATE). In RCTs with non-compliance, random assignment can act as the instrument for treatment receipt, provided it meets the IV criteria for identification [@Angrist1996]. An established approach to IV estimation is two-stage least squares (2sls), which provides consistent estimates of the CACE when the outcome model is linear, and non-compliance is binary [@Baiocchi2014]. In CEA that use RCT data, there is interest in estimands, such as the relative cost-effectiveness for compliers. An estimate of the causal effect of the treatment received can help with the interpretation of results from RCTs with different levels of non-compliance to those in the target population. The causal effect of treatment receipt is also of interest in RCTs with encouragement designs, for example of behavioural interventions to encourage uptake of new vaccines [@Duflo2008], and for trial designs, common in oncology, which allow treatment switching according to a variable time period, such as after disease progression [@Latimer2014] (see Section \[subsec:survival\]). In CEA, methods that report the causal effect of the treatment have received little attention. @Brilleman2015 found that most studies that acknowledged the problem of non-compliance reported PP analyses, while @Hughes2016 suggested that further methodological development was required. The context of trial-based CEA raises important complexities for estimating CACEs that arise more generally in studies with multivariate outcomes. Here to provide accurate measures of the uncertainty surrounding a composite measure of interest, for example the INB, it is necessary to recognise the correlation between the endpoints, in this case, cost and health outcomes [@Willan2003; @Willan2006a]. We now provide a framework for identifying and estimating the causal effects of the treatment received. First, we present the three stage least squares (3sls) method [@Zellner1963], which allows the estimation of a system of simultaneous equations with endogenous regressors. Next, we consider a bivariate Bayesian approach, whereby the outcome variables and the treatment received are jointly modelled as dependent on random assignment. This extends IV unrestricted reduced form [@Kleibergen2003], to the setting with multivariate outcomes. Complier Average Causal effects with bivariate outcomes {#Sec:CACE} ======================================================= We begin by defining more formally our estimand and assumptions. Let $\Yc$ and $\Ye$ be the continuous bivariate outcomes, and $Z_i$ and $D_i$ the binary random treatment allocation and treatment received respectively, corresponding to the $i$-th individual. 
The bivariate endpoints $Y_{1i}$ and $Y_{2i}$ belong to the same individual $i$, and thus are correlated. We assume that there is an unobserved confounder $U$, which is associated with the treatment received and either or both of the outcomes. We assume that the **(i) Stable Unit Treatment Value Assumption (SUTVA)** holds: the potential outcomes of the $i$-th individual are unrelated to the treatment status of all other individuals (known as *no interference*), and that for those who actually received treatment level $z$, their observed outcome is the potential outcome corresponding to that level of treatment. Under SUTVA, we can write the potential treatment received by the $i$-th subject under the random assignment at level $z_i \in \{0, 1\}$ as $D_i\left(z_i\right)$. Similarly, $Y_{\ell i}\left(z_i,d_i\right)$ with $\ell\in\{1,2\}$ denotes the corresponding potential outcome for endpoint $\ell$, if the $i$-th subject were allocated to level $z_i$ of the treatment and received level $d_i$. There are four potential outcomes. Since each subject is randomised to one level of treatment, only one of the potential outcomes per endpoint $\ell$, is observed, [*i.e.* ]{}$Y_{\ell i}=Y_{\ell i}(z_i,D_i(z_i))=Y_i(z_i)$. The CACE for outcome $\ell$ can now be defined as $$\label{cace} \theta_{\ell}=E\left[\{Y_{\ell i}(1)-Y_{\ell i}(0)\}\big|\{D_i(1)-D_i(0)=1\}\right].$$ In addition to SUTVA, the following assumptions are sufficient for identification of the CACE, [@Angrist1996]: 1ex **(ii) Ignorability of the treatment assignment**: $Z_i$ is independent of unmeasured confounders (conditional on measured covariates) and the potential outcomes $Z_{i} \ci U_i,D_{i}(0),D_{i}(1),Y_{i}(0),Y_{i}(1).$\ **(iii) The random assignment predicts treatment received**: $Pr\{D_i(1)=1\}\neq Pr\{D_i(0)=1\}$.\ **(iv) Exclusion restriction**: The effect of $Z$ on $Y_\ell$ must be via an effect of $Z$ on $D$; $Z$ cannot affect $Y_\ell$ directly.\ **(v) Monotonicity:** $D_i(1)\geq D_i(0)$. The CACE can now be identified from equation without any further assumptions about the unobserved confounder; in fact, $U$ can be an effect modifier of the relationship between $D$ and $Y$ [@Didelez2010]. In the REFLUX study, the assumptions concerning the random assignment, (ii) and (iii), are justified by design. The exclusion restriction assumption is plausible for the cost endpoint, as the costs of surgery are only incurred if the patient actually has the procedure, and it seems unlikely that assignment rather than receipt of surgery would have a direct effect on QALYs. The monotonicity assumption rules out the presence of defiers, and it seems reasonable to assume that there are no trial participants who would only receive surgery if they were randomised to medical management, and vice versa. Equation implicitly assumes that receiving the intervention has the same average effect on the linear scale, regardless of the level of $Z$ and $U$. Since random allocation, $Z$, satisfies assumptions (ii)-(iv), we say it is an instrument for $D$. For a binary instrument, the simplest approach to estimate equation within the IV framework is the Wald estimand [@Angrist1996]: $${\beta}_{\ell} = \frac{E(Y|Z=1) - E(Y|Z=0)}{E(D|Z=1) - E(D|Z=0)}$$ Typically, estimation of these conditional expectations proceeds via the so-called two-stage least squares (2sls). The first stage is a linear regression model that estimates the effect of the instrument on the exposure of interest, here treatment received on treatment assigned. 
The second stage is the outcome regression model, but fitted on the predicted treatment received from the stage one regression: $$\begin{aligned} \label{2sls} \nonumber D_i &= &\alpha_0 + \alpha_1Z_i + \epsilon_{0,i}\\ Y_{\ell i} &=& \beta_0 + \beta_{1, \ell} \widehat{D_i} + \epsilon_{\ell,i}\end{aligned}$$ where $\widehat{\beta}_{1, \ell} $ is an estimator for $\beta_{\ell}$. Covariates can be included in both model stages. For 2sls to be consistent, the first stage model must be the parametric linear regression implied by the second stage, that is, it must include all the covariates and interactions that appear in the second stage model [@Wooldridge2010]. The asymptotic standard error for the 2sls estimate of the CACE is given in @Imbens1994, and is available in commonly used software packages. However, 2sls can only be readily applied to univariate outcomes, which raises an issue for CEA as ignoring the correlation between the two endpoints would provide inaccurate measures of uncertainty. A simple way to address this problem would be to apply 2sls directly within a net benefit regression approach [@Hoch2006]. However, it is known that net benefit regression is very sensitive to outliers, and to distributional assumptions [@Willan2004], [@Mantopoulos2016]. Instead, we focus on strategies for jointly estimating the CACE on QALYs and costs. The first approach combines SURs (equation \[sur\]) with 2sls (equation \[2sls\]) to obtain CACEs for both outcomes accounting for their correlation, and is known as three-stage least squares (3sls). The second is a Bayesian estimation method for the system of simultaneous equations. Three-stage least squares ------------------------- Three-stage least squares (3sls) was developed as a generalisation of 2sls for systems of equations with endogenous regressors, [*i.e.* ]{}any explanatory variables which are correlated with the error term in its corresponding equation [@Zellner1963]. $$\begin{aligned} \label{3sls} \nonumber D_i &=& \alpha_0 + \alpha_1Z_i + e_{0i}\\ Y_{1i} &=& \beta_{01} + \beta_{IV,1} {D_i} + e_{1i} \\ \label{3sls2}Y_{2i} &=& \beta_{02} + \beta_{IV,2} {D_i} + e_{2i}\end{aligned}$$ As with 2sls, the models can be extended to include baseline covariates. All the parameters appearing in the system of equations and are estimated jointly. Firstly, the IV model is estimated “outcome by outcome”, for example by applying 2sls. This will be consistent but inefficient. The residuals from this 2sls models, that is $e_{1i}$ and $e_{2i}$, can be now used to estimate the covariance matrix that relates the outcome models. This is similar to the step used on a SUR with exogenous regressors (equation ) for estimating the covariance matrix of the error terms from the two equations and . This estimated covariance matrix is used when solving the estimating equations formed by stacking the equations vertically [@Davidson2004]. Provided that the identification assumptions (i)-(v) are satisfied, Z is independent of the residuals at each stage, i.e. $Z \ci e_{0i}$, $Z \ci e_{1i}$, and $Z \ci e_{2i}$, the estimating equations can be solved by FGLS which avoids distributional assumptions, and is also robust to heteroscedasticity of the errors across the different linear models for the outcomes [@Greene2002]. As the 3sls method uses an estimated covariance matrix, it is only asymptotically efficient [@Greene2002]. 
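As a concrete illustration of these steps, the sketch below (Python, simulated data with one-way non-compliance; variable names and effect sizes are purely illustrative) carries out the outcome-by-outcome part of the procedure: a first-stage regression of treatment received on random assignment, a second-stage regression of each endpoint on the fitted values (i.e. 2sls applied twice), and the structural residual covariance that the FGLS step of 3sls would then use to estimate the two outcome equations jointly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
z = rng.integers(0, 2, n)                      # random assignment (the instrument)
u = rng.normal(0, 1, n)                        # unobserved confounder of D and the outcomes
# one-way non-compliance: some patients assigned to surgery do not receive it
d = np.where(z == 1, (rng.uniform(size=n) < 0.7 + 0.1 * (u > 0)).astype(int), 0)
cost = 3000 + 2500 * d + 800 * u + rng.normal(0, 1000, n)
qaly = 2.0 + 0.25 * d - 0.10 * u + rng.normal(0, 0.4, n)

Z = np.column_stack([np.ones(n), z])
D = np.column_stack([np.ones(n), d])

# First stage: treatment received regressed on assignment; keep fitted values
alpha = np.linalg.lstsq(Z, d, rcond=None)[0]
Dhat = np.column_stack([np.ones(n), Z @ alpha])

# Second stage, outcome by outcome (2sls applied to each endpoint)
beta_cost = np.linalg.lstsq(Dhat, cost, rcond=None)[0]
beta_qaly = np.linalg.lstsq(Dhat, qaly, rcond=None)[0]
print("CACE on costs :", beta_cost[1])
print("CACE on QALYs :", beta_qaly[1])

# Structural residuals (evaluated at the observed D, not Dhat); their covariance
# is what the FGLS step of 3sls uses to estimate both equations jointly
e_cost = cost - D @ beta_cost
e_qaly = qaly - D @ beta_qaly
print("residual correlation:", np.corrcoef(e_cost, e_qaly)[0, 1])
```

With a single binary instrument and a binary treatment, the second-stage coefficients reproduce the Wald estimand given earlier; the joint FGLS step matters mainly for capturing the covariance between the two coefficients when forming composite quantities such as the INB.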
If the error terms in each equation of the system and the instrument are not independent, the 3sls estimator based on FGLS is not consistent, and other estimation approaches, such as generalised methods of moments (GMM) warrant consideration [@Schmidt1990]. In the just-identified case, that is when there are as many endogenous regressors as there are instruments, classical theory shows that the GMM and the FGLS estimators coincide [@Greene2002]. Bayesian estimators ------------------- @Nixon2005 propose Bayesian bivariate models for the expectations of the two outcomes (e.g. costs and QALYs) in CEA, which have a natural appeal for this context as each endpoint can be specified as having a different distribution. The parameters in the models are simultaneously estimated, allowing for proper Bayesian feedback and propagation of uncertainty. On the other hand, univariate instrumental variables models within the Bayesian framework have been previously developed [@Burgess2012; @Kleibergen2003; @Lancaster2004]. This method recasts the first and second stage equations familiar from 2sls, eqs. , in terms of a recursive equation model, which can be “solved” by substituting the parameters of the first into the second. Such solved system of equations is called the *reduced form*, and it expresses explicitly how the endogenous variable $D$ and the outcome $Y_{\ell }$ jointly depend on the instrument. $$\begin{aligned} \label{reduced} \nonumber D &= &\alpha_0 + \alpha_1Z+ \nu_{0}\\ Y_{\ell } &=& \beta_0^* + \beta_{1, \ell} \alpha_1Z + \nu_{\ell}\end{aligned}$$ where $\beta_0^*=\beta_0+ \beta_{1, \ell}\alpha_0$, $\nu_{0}=\epsilon_0$ and $\nu_{\ell}= \epsilon_\ell + \beta\epsilon_0$. The parameter of interest $\beta_{1, \ell}$ is identified, since by the IV assumptions, $\alpha_1\neq 0$. The extension of this reduced form to multivariate outcomes proceeds as follows. Let $(D_{i}, Y_{1i}, Y_{2i})^\top$ be the transpose of the vector of outcomes, which now includes the endogeneous variable $D$ as well as the bivariate endpoints of interest. The reduced form can now be written in terms of the linear predictors of $D_{i}, Y_{1i}, Y_{2i}$ as : $$\begin{array}{l} \mu_{0i} =\beta_{0,0} + \beta_{1,0}Z_i \\ \mu_{1i} = \beta_{0,1}+ \beta_{1,1} \beta_{1,0}Z_i \\ \mu_{2i}= \beta_{0,2}+\beta_{1,2}\beta_{1,0}Z_i \end{array}$$ with $\beta_{0,0}=\alpha_0$, $\beta_{1,0}=\alpha_1$. We treat $D_{i}, Y_{1i}, Y_{2i}$ as multivariate normally distributed, so that: $$\begin{array}{l} \left(\!\!\begin{array}{c} D_{i}\\ Y_{1i} \\ Y_{2i}\end{array}\!\!\right) \sim N\left\{\!\!\left(\begin{array}{c} \mu_{0i}\\ \mu_{1i}\\ \mu_{2i}\end{array}\right), \Sigma= \left(\!\!\!\begin{array}{ccc} \s_{0} & s_{01} & s_{02} \\ s_{01}& \s_{1} & s_{12} \\ s_{02}& s_{12} & \s_{2} \end{array} \!\!\!\right) \!\!\right\}; \end{array}$$ where $s_{ij}=\mbox{cov}(Y_i,Y_j)$, and the causal treatment effect estimates are $\beta_{1,1}$ and $\beta_{1,2}$ respectively. For the implementation, we use vague normal priors for the regression coefficients, [*i.e.* ]{}$\beta_{m,j}\sim N(0, 10^2)$, for $j \in\{0,1,2\}, m\in\{0,1\}$, and a Wishart prior for the inverse of $\Sigma$ [@GelmanHill]. Comparison of Bayesian versus 3sls estimators for CEA with non-compliance ------------------------------------------------------------------------- The performance of 3sls and Bayesian methods for obtaining compliance-adjusted estimates was found to be similar in a simulation study [@DiazOrdaz2017]. 
Under the IV and monotonicity assumptions, both approaches performed well in terms of bias and confidence interval coverage, though the Bayesian estimator reported wide CIs around the estimated INB in small sample size settings. The 3sls estimator reported low levels of bias and good CI coverage throughout. These estimators rely on a valid IV, and in observational data settings the assumptions required for identification warrant particular scrutiny. While the use of IV estimators in health economics studies has not been extensive, the development of new large, linked observational datasets offers new opportunities for harnessing IVs to estimate the causal effects of treatments received. In particular, areas such as Mendelian randomisation offer the possibility of providing valid instruments, and have previously been used with the Bayesian estimators described above [@Burgess2012].

Methods are also available which do not rely on an IV, but still attempt to estimate a causal effect of treatment receipt. We briefly review one of these methods below.

Inverse probability weighting for non-compliance
------------------------------------------------

Inverse probability weighting (IPW) can also be used to obtain compliance-adjusted estimates. Under IPW, observations that deviate from protocol are censored, similar to a PP analysis. To avoid selection bias, the data from those participants who continue with the treatment protocol are weighted to represent the complete (uncensored) sample according to observed characteristics. The weights are given by the inverse of the probability of complying, conditional on the covariates included in the *non-compliance model*. The target estimand is the causal average treatment effect (ATE). For IPW to provide unbiased estimates of the ATE, the model must include all baseline and time-dependent variables that predict both treatment non-adherence and outcomes. This is often referred to as the "no unobserved confounder" assumption [@Robins2000]. @Latimer2014 illustrate IPW for health technology assessment. The IPW method cannot be used when there are covariate values that perfectly predict treatment non-adherence, that is, when there are covariate levels where the probability of non-adherence is equal to one [@Robins2000a; @Hernan2001; @Yamaguchi2004].

We consider further methods for handling non-compliance in health economic studies in Section \[Sec:FurtherTopics\]. We turn now to the problem of missing data.

Missing data {#Sec:missdata}
============

The approach to handling missing data should be in keeping with the general aim of providing consistent estimates of the causal effect of the interventions of interest. However, most published health economic evaluations simply discard the observations with missing data, and undertake complete case analyses (CCA) [@Noble2012; @Leurent2017]. This approach is valid, even if inefficient, where the probability of missingness is independent of the outcome of interest given the covariates in the analysis model, that is, where there is covariate-dependent missingness (CDM) [@White2010]. A CCA is also valid when the data are missing completely at random (MCAR) [@lr02], that is, when the missingness does not depend on any observed or unobserved values. Similarly, when data are censored for administrative reasons unrelated to the outcome, they may be assumed to be censored completely at random [@Willan2006book].
In many health economic evaluations it is more realistic to assume that the data are missing (or censored) at random (MAR), that is, that the probability that the data are observed only depends on observed data [@lr02]. For example, a likelihood-based approach that only uses the observed data can still provide valid inferences under the MAR assumption, if the analysis adjusts for all those variables associated with the probability of missing data [@mk07].

For SUR systems, it can often be the case that each equation has a different number of observations, either because of differential missingness in the outcomes, or because each equation includes different regressors, which in turn may have different missing data mechanisms. Estimation methods for SUR models with unequal numbers of observations have been considered by @Schmidt1977. Such estimates would then be valid under the assumptions of CDM or MCAR. In addition, if the estimation method is likelihood-based (MLE, Expectation-Maximisation or Bayesian), the estimates are also valid under MAR, again provided the models adjust for all the variables driving the missingness. Maximum-likelihood estimation of SUR systems with unequal numbers of observations was presented by @Foschi2002 within the frequentist literature. Bayesian likelihood estimation of SURs with unequal numbers of observations was developed by @Swamy1975.

In general, MI, IPW and full Bayesian methods are the recommended approaches if the data can be assumed to be MAR [@Rubin1987]. Methods that assume data are MAR have been proposed in health economics, for example to handle missing resource use and outcomes [@Briggs2003] or unit costs [@Grieve2010]. In the specific context of censored data, time to event parametric models have been adapted to the health economics context, and assume that the censoring is uninformative, that is, that the probability that the data are censored only depends on the observed data; see for example @Lin1997 [@Carides2000; @Raikou2004; @Raikou2006], and also section 7.3.3. An alternative assumption is that the missing values are associated with data that are not observed, that is, the data are missing not at random (MNAR). Methods guidance for the analysis of missing data recommends that, while the primary or base case analysis should present results assuming MAR, sensitivity analyses should be undertaken that allow for the data to be MNAR [@Sterne2009]. The review by @Leurent2017 highlights that there are very few examples in health economics studies that consider MNAR mechanisms (see section \[Sec:sensMAR\]).

Multiple Imputation (MI) {#sec:missing}
------------------------

MI is a principled tool for handling missing data [@Rubin1987]. MI requires the analyst to distinguish between two statistical models. The first model, called the *substantive model*, *model of interest* or *analysis model*, is the one that would have been used had the data been complete. The second model, called the [*imputation model*]{}, is used to describe the conditional distribution of the missing data given the observed data, and must include the outcome. Missing data are imputed with the imputation model, to produce several completed data sets. Each set is then analysed separately using the original analysis model, with the resultant parameter estimates and associated measures of precision combined by Rubin's formulae [@Rubin1987], to produce the MI estimators and their variances. Under the MAR assumption, this will produce consistent estimators [@lr02; @schafer97].
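To illustrate the combination step only, the short function below (Python; a generic sketch, not tied to any particular imputation software) applies Rubin's formulae to the point estimates and within-imputation variances obtained from $M$ completed data sets.

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool point estimates and their variances from M imputed data sets (Rubin's rules)."""
    q = np.asarray(estimates, dtype=float)   # point estimate from each imputed data set
    u = np.asarray(variances, dtype=float)   # corresponding within-imputation variances
    m = len(q)
    q_bar = q.mean()                         # pooled point estimate
    w = u.mean()                             # average within-imputation variance
    b = q.var(ddof=1)                        # between-imputation variance
    t = w + (1 + 1 / m) * b                  # total variance
    df = (m - 1) * (1 + w / ((1 + 1 / m) * b)) ** 2   # Rubin's degrees of freedom
    return q_bar, np.sqrt(t), df

# e.g. a pooled incremental QALY estimate from five imputations (illustrative numbers)
est, se, df = rubin_pool([0.21, 0.18, 0.25, 0.20, 0.23],
                         [0.05**2, 0.06**2, 0.05**2, 0.055**2, 0.05**2])
print(est, se, df)
```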
MI can increase precision, by incorporating partly observed data, and remove potential bias from undertaking a CCA when data are MAR. If only outcomes are missing, MI may not improve upon a conventional likelihood analysis [@White2010]. Nevertheless, MI can offer improvements if the imputation model includes variables that predict missingness and outcome, additional to those in the analysis model. The inclusion of these so-called auxiliary variables can make the assumption that the data are MAR more plausible. A popular approach to MI is full-conditional specification (FCS) or [*chained equations*]{} MI, where draws from the joint distribution are approximated using a sampler consisting of a set of univariate models for each incomplete variable, conditional on all the other variables [@vb12]. FCS has been adapted to include interactions and other non-linear terms in the imputation model, so that the imputation model contains the analysis model. This approach is known as substantive model compatible FCS (SMCFCS) [@Bartlett2014]. @Faria2014 present a guide to handling missing data in CEA conducted within trials, and show how MI can be implemented in this context. IPW for valid inferences with missing data ------------------------------------------ IPW can address missing data, by using weights to re-balance complete cases so that they represent the original sample. The use of IPW for missingness is similar to re-weighting different sampling fractions within a survey. The units with a low probability of being fully observed, are given a relatively high weight so that they represent units with similar (baseline) characteristics who were not fully observed, and would be excluded from a CCA. The weighting model $p(R_i=1\vert X_i, Y_i) $ is called the *probability of missingness model* (POM). From this model, we can estimate the probability of being observed, by for example, fitting a logistic model and obtaining fitted probabilities $\widehat{\pi_i}$. The weights $w_i$, are then the inverse of these estimated probabilities, i.e. $w_i=\frac{1}{\widehat{\pi_i}}$. The IPW approach incorporates this re-weighting in applying the substantive model to the complete cases. IPW provides consistent estimators when the data are MAR and the POM models are correctly specified. The variance of the IPW estimator is consistently estimated provided the weighting is taken into account, by for example using a sandwich estimator [@Robins1994]. IPW is simple to implement when there is only one variable with missing data, and the POM model only includes predictors that are fully-observed. It is still fairly straight-forward, when there are missing data for more variables but the missing data pattern or missingness is *monotone*, which is the case when patients drop out of the study follow-up after a particular timepoint. The tutorial by [@Seaman2011] provides further detail on using IPW for handling missing data. IPW has rarely been used to address missing data in health economic evaluation [@Leurent2017; @Noble2012]. Here a particular concern is that there may be poor overlap between those observations with fully- versus partially- observed data according to the variables that are in the POM model, leading to unstable weights. There are precedents for using IPW in health economics settings with larger datasets, and where lack of overlap is less of a problem (see @Jones2006). IPW and MI both provide consistent estimates under MAR, if either the POM or imputation models are correctly specified. 
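A minimal sketch of this weighting approach for a single partially observed outcome is given below (Python, simulated data; the variable names and the deliberately simple POM are purely illustrative). The response indicator is modelled by logistic regression, the fitted probabilities are inverted to give the weights, and the substantive model is then fitted to the complete cases by weighted least squares.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
treat = rng.integers(0, 2, n)
eq5d0 = rng.uniform(0.3, 1.0, n)
qaly = 2.0 + 0.3 * treat + 1.5 * eq5d0 + rng.normal(0, 0.5, n)

# Outcome observed (R = 1) with a probability depending only on observed covariates (MAR)
p_obs = 1 / (1 + np.exp(-(0.5 + 1.5 * eq5d0 - 0.5 * treat)))
r = (rng.uniform(size=n) < p_obs).astype(int)

# Probability-of-missingness model: logistic regression of R on fully observed covariates
pom_X = np.column_stack([treat, eq5d0])
pom = LogisticRegression(C=1e6).fit(pom_X, r)     # large C approximates an unpenalised fit
pi_hat = pom.predict_proba(pom_X)[:, 1]
w = 1 / pi_hat                                    # inverse probability weights

# Weighted least squares for the substantive model, using the complete cases only
cc = r == 1
X = np.column_stack([np.ones(n), treat, eq5d0])[cc]
y, w_cc = qaly[cc], w[cc]
beta = np.linalg.solve(X.T @ (w_cc[:, None] * X), X.T @ (w_cc * y))
print("IPW estimate of the treatment coefficient:", beta[1])
```

In practice the POM would contain all variables thought to predict both missingness and the outcome, and the variance of the weighted estimator would be obtained with a sandwich estimator, as noted above.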
However, MI is often preferred to IPW, as it is usually more efficient and is a more flexible approach for handling non-monotone patterns of missing data, for example when data are missing at intermittent time-points during the study follow-up.

Bayesian analyses
-----------------

Bayesian analyses naturally distinguish between observed data and unobserved quantities. All unobserved quantities are viewed as unknown "parameters" with an associated probability distribution. From this perspective, missing values simply become extra parameters to model and obtain posterior distributions for. Let $\mathbf{\theta}$ denote the parameters of the full data model, $p(y,r\vert x; \mathbf{\theta})$. This model can be factorised as the substantive model times the POM, i.e. the model for the missingness mechanism, $p(y\vert x; \gamma)p(r\vert y, x; \psi)$. For Bayesian ignorability to hold, we need to assume MAR, and in addition that $\gamma$ and $\psi$ are distinct sets of parameters, i.e. variation independent, with the prior distribution for $\mathbf{\theta}$ factorising into $p(\mathbf{\theta})= p(\gamma)p(\psi)$. This means that under Bayesian ignorability, if there are covariates in the analysis model which have missing data, additional models for the distribution of these covariates are required. These models often involve parametric assumptions about the full data. The Bayesian approach also uses priors to account for uncertainty about the POM. This is in contrast to IPW, for example, which ignores the uncertainty in the parametric POM, and relies on this model being correctly specified. It is possible to use auxiliary variables under Bayesian ignorability. Full details are provided by [@Daniels2008].

Application to estimating the CACE with non-response in the REFLUX study {#Sec:ResultsReflux}
========================================================================

We now apply the 3sls and Bayesian IV models to obtain the CACE of surgery versus medical management, using the REFLUX example. We compare CCA with MI, IPW and full Bayesian approaches that assume the missing data are MAR. Only 48% of trial participants had completely observed cost and QoL data, with a further 13 missing baseline $\mbox{EQ5D}_{0}$. For the CCA, SUR is applied to the cases with full information to report the INB according to the ITT and PP estimands. As Table 2 shows, the PP estimate is somewhat lower than the ITT and CACE estimates, reflecting that those patients who switch from surgery to medical management, and are excluded from this analysis, have a somewhat better prognosis than those who follow their assigned treatment. Assuming that randomisation is a valid IV and that monotonicity holds, either 2sls or Bayesian methods provide CACE estimates of the INB for compliers. However, these CCAs assume that the missingness is independent of the outcomes, given the treatment received and the baseline EQ5D, that is, that there is covariate-dependent missingness (CDM). This assumption may be implausible.

We now consider strategies valid under a MAR assumption: specifically, we use MI and IPW coupled with 3sls, and a full Bayesian analysis, to obtain valid inferences for the CACE that use all the available data. For the MI, we considered including auxiliary variables in the imputation model, but none of the additional covariates were associated with both the missingness and the values of the variables to be imputed.
We imputed total cost, QALYs and baseline $\mbox{EQ5D}_{0}$, 50 times by FCS, with predictive mean matching (PMM), taking the five nearest neighbours as donors [@White2011]. The imputation models must contain, all the variables in the analyses models, and so we included treatment receipt in the imputation model, and stratified by randomised arm. We then applied 3sls to each of the 50 MI sets, and combined the results with Rubin’s formulae [@Rubin1987]. The IPW required POM models for the baseline EQ5D, cost, and QALY respectively. Let $R_{0i}$, $R_{1i}$ and $R_{2i}$ be the respective missingness indicators. The missingness pattern is almost monotone, with 156 individuals with observed $\mbox{EQ5D}_{0}$, i.e. $R_{0i}=1$, then a further 16 with $R_{0i}=R_{1i}=1$, and 10 with all $R_{ji}=0$.[^2] With this monotone missing pattern we have POMs for: $R_{0i}$ on all other fully observed baseline covariates; $R_{1i}$ on fully observed baseline covariates, randomised arm treatment receipt, $R_0$ and $\mbox{EQ5D}_{0}$; and $R_{2i}$ on fully observed baseline covariates, randomised arm, treatment receipt, $R_0$, $\mbox{EQ5D}_{0}$, $R_1$, and cost. We fitted these models with logistic regression, used backward stepwise selection, and only kept those regressors with p-values less than 0.1. This resulted in POMs as follows: an empty model for $R_{0i}$; age, randomised group, treatment received, and $\mbox{EQ5D}_{0}$, for the $R_{1i}$, and only $\mbox{EQ5D}_{0}$ for $R_{2i}$. We obtained the predicted probabilities of being observed, and weighted the complete cases in the 3sls analysis by the inverse of the product of these three probabilities. In the Bayesian analyses, to provide a posterior distribution for the missing values, a model for the distribution of baseline $\mbox{EQ5D}_{0}$ was added to the treatment received and outcome models. Bayesian posterior distributions of the mean costs, QALYs and INB were then summarised by their median value and 95% credible intervals. Table 2 shows that the CACE estimates under MAR are broadly similar to those from the CCA, but in this example the MI and Bayesian approaches provided estimates with wider CI. In general, MI can result in more precise estimates than CCA, when the imputation model does include auxiliary variables. The general conclusion of the study is that surgery is relatively cost-effective in those patients with GORD who are compliers. The results are robust to a range of approaches for handling the non-compliance and missing data. ![Table 2: The REFLUX study: cost-effectiveness results according to estimand, estimator and approach taken to missing data[]{data-label="Table1"}](Table2Reflux.pdf){width="100.00000%"} Further topics {#Sec:FurtherTopics} ============== We now consider further methodological topics relevant to health economic studies faced with non-compliance and missing data. First, we discuss sensitivity analyses for departures from the IV assumptions (section \[Sec:sensIV\]), and the MAR assumption (section \[Sec:sensMAR\]). We also consider approaches for addressing non-compliance and missing data for non-continuous outcomes (section \[Sec:nonnormal\]), and clustered data (section \[Sec:clus\]) before offering conclusions and key topics for further research (section \[Sec:discussion\]). 
Sensitivity analyses to departures from the IV assumptions to adjust for non-compliance {#Sec:sensIV} --------------------------------------------------------------------------------------- The preceding sections detailed how an instrumental variable for the endogeneous exposure of interest can be used to provide consistent estimates of the CACE as long as particular assumptions are met. However, it is helpful to consider whether conclusions are robust to departures from these assumptions, in particular in those settings where these underlying assumptions might be less plausible. For example, sensitivity to the exclusion restriction assumption, can be explored by extending the Bayesian model described. The analyst can specify priors on the non-zero direct effect of the IV on the outcome [@Conley2012; @Hirano2000]. Here it must be recognised that as the models are only weakly identified, the results may be highly sensitive to the parametric choices for the likelihood and the prior distributions. Within the frequentist IV framework, sensitivity analyses can build on 3sls to consider potential violations of the exclusion restriction and monotonicity assumptions. See @Baiocchi2014 for a tutorial. Another important assumption is that the IV strongly predicts treatment receipt, and this might well be satisfied in most clinical trials [@Zhang2014]. In health economics studies that use RCTs with encouragement designs, or Mendelian randomisation observational studies where the IV is a gene, the IV may well be weak, and this can lead to biased estimates. Here, Bayesian IV methods have been shown to perform better than 2sls methods [@Burgess2012]. Sensitivity analyses to departures from MAR assumptions {#Sec:sensMAR} ------------------------------------------------------- There has been much progress in the general econometrics and biostatistics literature on developing methods for handling data that are assumed MNAR (see for example @Heckman1976 [@Rubin1987; @ck13]), but there are few examples in the applied health economics literature [@Noble2012; @Faria2014; @Leurent2017; @Jones2006] For settings when data are assumed to be MNAR, the two main groups of methods proposed are: selection models and pattern mixture models. Selection models postulate a mechanism by which the data are observed or ‘selected’, according to the true, underlying values of the outcome variable [@Heckman1979]. By contrast pattern-mixture models specify alternative distributions for the data according to whether or not they are observed [@lr02; @ck13]. Heckman selection models have been used in settings with missing survey responses particularly in HIV studies (see for example @Clark2014). Heckman’s original proposal was a two-step estimator, with a binary choice model to estimate the probability of observing the outcome in the first stage, with the second stage then using the estimates in a linear regression to model the outcomes [@Heckman1976]. Recent extensions allow for other forms of outcomes, such as binary measures, by using GLMs for the outcome equation [@Marra2013], and recognise the importance of appropriate model specification and distributional assumptions [@Vella1998; @Das2003; @Marchenko2012; @McGovern2015]. The Heckman selection model requires the selection model to include covariates not in the outcome model, to avoid collinearity issues. 
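To fix ideas, here is a minimal sketch of that two-step recipe (Python, simulated data; `contact` is a hypothetical selection-only variable invented purely for illustration, not a variable from REFLUX). Step one fits a probit model for whether the outcome is observed; step two adds the implied inverse Mills ratio as an extra regressor in the outcome model fitted to the responders.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1000
treat = rng.integers(0, 2, n)
contact = rng.normal(0, 1, n)             # hypothetical 'ease of contact' variable, excluded from the outcome model
u = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], n)   # correlated errors -> outcome-dependent selection

y = 1.5 + 0.3 * treat + u[:, 0]           # outcome (e.g. QALYs), observed only for responders
resp = (0.2 + 0.8 * contact - 0.5 * treat + u[:, 1] > 0)           # response indicator

# Step 1: probit selection model fitted on the full sample
W = sm.add_constant(np.column_stack([treat, contact]))
probit = sm.Probit(resp.astype(int), W).fit(disp=0)
lin_pred = W @ probit.params
imr = norm.pdf(lin_pred) / norm.cdf(lin_pred)   # inverse Mills ratio

# Step 2: outcome regression on responders, with the inverse Mills ratio added
X = sm.add_constant(np.column_stack([treat[resp], imr[resp]]))
ols = sm.OLS(y[resp], X).fit()
print(ols.params)                         # treatment coefficient corrected for non-random response
```

The selection equation in the sketch deliberately contains a covariate (`contact`) that is excluded from the outcome equation.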
This relies on an important untestable assumption which tends to remain hidden: these variables must meet the criteria for the exclusion restriction, that is, they must predict missingness, and also be conditionally independent of the outcome [@Puhani2000]. Pattern mixture models (PMM) have been advocated under MNAR as they make more accessible, transparent assumptions about the missing data mechanism [@Molenberghs2014; @ck13]. The essence of the pattern-mixture model approach under MNAR is that it recognises that the conditional distribution of partially observed variables, given the fully observed variables, may differ between units that do, and do not, have fully observed data [@ck13]. Hence, a PMM approach allows the statistics of interest to be calculated across the units with observed data (pattern 1), and then for those with missing data (pattern 2). For the units with missing data, offsets terms are added to the statistics of interest calculated from the observed data (pattern 2). The offset terms, also known as sensitivity parameters, can differ according to the treatment groups defined according to treatment assigned, or if there is non compliance according to the treatment received. MI is a convenient way to perform PMM sensitivity analyses. The offset method can be easily implemented for variables which have been imputed under linear regression models. The software package SAS has implemented PMM for other models within the MI procedure `MI`. As @Leurent2017 highlight in the context of health technology assessment, few applied studies have used PMM. One potential difficulty is that studies may have little rationale for choosing values for the sensitivity parameters [@Leurent2018]. The general methodological literature has advocated expert opinion to estimate the sensitivity parameters, which can then be used as informative priors within a fully Bayesian approach [@White2007]. Progress has been made by @Mason2017 who developed a tool for eliciting expert opinion about missing outcomes for patients with missing versus observed data, and then used these values in a fully Bayesian analysis. In a subsequent paper @Mason2018 extend this approach to propose a framework for sensitivity analyses in CEA for when data are liable to be MNAR. Other type of outcomes {#Sec:nonnormal} ---------------------- The methods described have direct application to health economics studies with continuous endpoints such as costs and health outcomes such as QALYs. We now briefly describe extensions to settings with other forms of outcomes, namely binary (e.g. admission to hospital) or time to event (duration or survival) endpoints. ### Binary outcomes Often, researchers are interested in estimating the causal treatment effect on a binary outcome. The standard 2sls estimator requires that both stages to be linear. This is often a good estimator for the risk difference, but may result in estimates that do not respect the bounds of the probabilities, which must lie between 0 and 1. Several alternative two-stage estimators have been proposed. When odd ratios are of interest, two IV methods based on plug-in estimators have been proposed [@Terza2008]. The first-stage of these approaches is the same, a linear model of the endogenous regressor $D$ on the instrument $Z$ (and the covariates $X$ if using any in the logistic model for the outcome). Where they differ is in the outcome model, the second stage. 
Other types of outcomes {#Sec:nonnormal}
----------------------

The methods described have direct application to health economics studies with continuous endpoints such as costs and health outcomes such as QALYs. We now briefly describe extensions to settings with other forms of outcomes, namely binary (e.g. admission to hospital) or time to event (duration or survival) endpoints.

### Binary outcomes

Often, researchers are interested in estimating the causal treatment effect on a binary outcome. The standard 2sls estimator requires both stages to be linear. This is often a good estimator for the risk difference, but may result in estimates that do not respect the bounds of the probabilities, which must lie between 0 and 1. Several alternative two-stage estimators have been proposed. When odds ratios are of interest, two IV methods based on plug-in estimators have been proposed [@Terza2008]. The first stage of these approaches is the same: a linear model of the endogenous regressor $D$ on the instrument $Z$ (and the covariates $X$, if any are used in the logistic model for the outcome). Where they differ is in the outcome model, the second stage. The first strategy, the so-called ‘standard’ IV estimator or two-stage predictor substitution (2SPS) regression, estimates the causal log odds ratio with the coefficient for the fitted $\hat{D}$. However, this will be biased for the conditional odds ratio of interest, with the bias increasing with the strength of the association between $D$ and $Y$ given $Z$ [@Vansteelandt2011]. The second strategy has been called the ‘adjusted two-stage estimate’ or two-stage residual inclusion (2SRI). The second-stage equation, i.e. the outcome model, fits a logistic regression of $Y$ on $D$ and the residual from the first-stage regression (as well as other exogenous baseline covariates $X$, if any are used). However, the non-collapsibility of logistic regression means that 2SRI only provides asymptotically unbiased estimates of the CACE if there is no unmeasured confounding, i.e. when the model includes all the covariates that predict non-compliance and the outcomes [@Cai2011]. This is because, when there is unobserved confounding, the estimate obtained by 2SRI is conditional on this unobserved confounding, and since it is unobserved, it cannot be compared in any useful way with the population odds ratio of interest. If we had measured those confounders, we could marginalise over their distribution to obtain the population (marginal) odds ratio [@Burgess2013]. See @Clarke2012 for a comprehensive review of IV methods for binary outcomes. @Chib2002 developed simultaneous probit models, involving latent normal variables for both the endogenous discrete regressor and the discrete dependent variable, from the Bayesian perspective, while @Stratmann1992 developed the full maximum likelihood version. The MI approaches previously described for handling missing data under MAR and MNAR can all be applied to binary outcomes, for example by using a logistic imputation model within FCS MI [@lr02; @ck13]. The IPW strategies proceed for binary outcomes in the same way as for continuous outcomes.
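As a concrete illustration of these two plug-in estimators, the following sketch simulates a binary outcome with an unmeasured confounder and fits both second stages. The simulated model and variable names are assumptions made for illustration, and the reported standard errors ignore the first-stage estimation, so in practice a bootstrap would be used.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
z = rng.integers(0, 2, n)                    # instrument, e.g. randomised arm
u = rng.normal(size=n)                       # unmeasured confounder
d = (0.5 * z + 0.6 * u + rng.normal(size=n) > 0.5).astype(int)   # treatment received
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * d + 0.8 * u)))
y = rng.binomial(1, p)                       # binary outcome

# Shared first stage: linear regression of D on Z
first = sm.OLS(d, sm.add_constant(z)).fit()
d_hat = first.fittedvalues
resid = d - d_hat

# 2SPS: substitute the fitted D-hat into the logistic outcome model
sps = sm.Logit(y, sm.add_constant(d_hat)).fit(disp=0)

# 2SRI: keep the observed D and add the first-stage residual
sri = sm.Logit(y, sm.add_constant(np.column_stack([d, resid]))).fit(disp=0)

print("2SPS log odds ratio:", sps.params[1])
print("2SRI log odds ratio:", sri.params[1])
```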
### Time to event data {#subsec:survival}

The effect of treatment on time to event outcomes is often of interest, for example in oncology trials, or in evaluations of behavioural interventions, where the time to stop a particular behaviour (e.g. time to smoking cessation) is often reported. A common IV method for estimating the causal effect of treatment receipt on survival outcomes is the so-called rank preserving structural failure time model (RPSFT) [@Robins1991], later extended to account for censoring by @White1999 and applied to health technology assessment by @Latimer2014a [@Latimer2014]. These models are called rank preserving because it is assumed that if subject $i$ has the event before subject $j$ when both received the treatment, then subject $i$ would also have a shorter failure time if neither received the treatment, i.e. $T_i(a)<T_j(a)$, for $a\in\{0,1\}$. That is, randomisation is assumed to be an IV, and therefore to meet the exclusion restriction, such that the counterfactual survival times are assumed independent of the randomised arm. The target estimand is the average treatment effect amongst those who follow the same treatment regime, i.e. the average treatment effect on the treated (ATT), as opposed to a CACE. In the simplest case, where treatment is a one-off, all-or-nothing exposure (e.g. surgery), let $\lambda_{T(1)}(t)$ be the hazard of the subjects who received the treatment. We can use the g-estimation procedure [@Robins2000b] to obtain the treatment-free hazard function $\lambda_{T(0)}(t)$, such that $\lambda_{T(1)}(t)= \lambda_{T(0)}(t)e^{-\beta}$. We refer to $\beta$ as the causal rate ratio, as treatment multiplies the treatment-free hazard by a factor of $e^{-\beta}$. As this is an IV method, it assumes that $Z$ is a valid instrument; however, instead of monotonicity, it requires that the treatment effect, expressed as $\beta$, is the same for both randomised arms, often referred to as *no effect modification by $Z$*, which may be plausible in RCTs that are double-blinded. This is a very strong assumption if, for example, the group randomised to the control regimen receives the treatment later in the disease pathway (e.g. post progression), compared to the group randomised to treatment. This method has also been adapted for time-updated treatment exposures.

### Missing data in time to event analyses

For survival analysis, the outcome consists of a variable $T$ representing time to the event of interest and an event indicator $Y$. If $Y_i=1$, then that individual had the event of interest at time $T_i$, but if $Y_i=0$, then $T_i$ records the censoring time, the last time at which a subject was seen and had still not experienced the event. This is a special form of missing data, as we know that for individual $i$, the survival time exceeds $T_i$. The censoring mechanism can be categorised as censored completely at random (CCAR), where censoring is completely independent of the survival mechanism, or censoring at random (CAR), where the censoring is independent of the survival time *conditional* on the covariates that appear in the substantive model, for example treatment. If, even after conditioning on covariates, censoring is dependent on the survival time, we say the censoring is not at random (CNAR). CCAR and CAR are usually referred to as ignorable or non-informative censoring. Under non-informative censoring, we distinguish between two situations. The first is when the outcome is fully observed, but we have missing values in the covariates that appear in the substantive model. The second is when the survival time has missing values. In some settings, covariates $X$ included in a Cox proportional hazards model have missing values. @White2009 showed that when imputing either a normally distributed or binary covariate $X$, we should use a linear (or logistic) regression imputation model, with $Y$ and the baseline cumulative hazard function as covariates. To implement this, we have to estimate the baseline cumulative hazard function. @White2009 suggested that when covariate effects are small, the baseline cumulative hazard can be approximated by the Nelson-Aalen (marginal) cumulative hazard estimator, which ignores covariates and thus can be estimated using all subjects. Otherwise, the baseline cumulative hazard function can be estimated within the FCS algorithm by fitting the Cox proportional hazards model to the current imputed dataset, and once we have this, we can proceed to impute $X$.
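A minimal sketch of this approach, for a single imputation, is given below: it computes a crude Nelson-Aalen cumulative hazard (ties handled naively) and uses it, together with the event indicator, as a predictor when imputing a partially observed covariate. The data-generating process and variable names are assumptions made for illustration; in practice this step would sit inside an FCS/MI routine (for example `mice` or `smcfcs` in R) rather than being run once.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)                        # covariate, made partly missing below
t_event = rng.exponential(1.0 / np.exp(0.5 * x))
t_cens = rng.exponential(2.0, n)
t = np.minimum(t_event, t_cens)               # observed follow-up time
y = (t_event <= t_cens).astype(int)           # event indicator

# Nelson-Aalen (marginal) cumulative hazard, ignoring covariates
order = np.argsort(t)
at_risk = n - np.arange(n)                    # number at risk at each ordered time
H_sorted = np.cumsum(y[order] / at_risk)
H = np.empty(n)
H[order] = H_sorted                           # H-hat evaluated at each subject's time

# Impute missing x from a linear model with the event indicator and H-hat(T)
miss = rng.random(n) < 0.3
X_imp = sm.add_constant(np.column_stack([y, H]))
fit = sm.OLS(x[~miss], X_imp[~miss]).fit()
beta = rng.multivariate_normal(fit.params, fit.cov_params())   # approximate parameter draw
sigma = np.sqrt(fit.scale)
x_completed = x.copy()
x_completed[miss] = X_imp[miss] @ beta + rng.normal(0, sigma, miss.sum())
```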
If the survival times are missing according to CAR, that is, conditionally on the covariates used in our analysis model, then the results will be valid. MI can be helpful in situations where, although the data are CAR, we are interested in estimating either marginal survival distributions, or where our analysis model of interest includes fewer variables than are required for the CAR assumption to be plausible. A parametric imputation model, for example the Weibull, log-logistic or lognormal, is then used to impute the missing survival times; see @ck13.

### Clustered data {#Sec:clus}

Clustered data may arise when the health economics study uses data from a multicentre RCT, a cluster randomised trial, or indeed an observational study where observations are drawn from high level units such as hospitals or schools. In these settings, but also in those with panel or longitudinal data, it is important to recognise the resultant dependencies within the data. Methods have been developed for handling clustering in health economics studies, including the use of multilevel (random effects) models [@Grieve2010; @Gomes2011], but these have not generally been used in settings with non-compliance. More generally, there are methods to obtain the CACE that accommodate clustering. These range from simply using robust SE estimation in an IV analysis [@Baiocchi2014], to using multilevel mixture models within the principal stratification framework [@Frangakis2002; @Jo2008]. For extensions of principal stratification approaches to handle non-compliance at the individual and cluster level, see @Schochet2011. Regarding missing data, the FCS approach is not well-suited to proper multilevel MI and so, when using MI, a joint modelling algorithm is used [@schafer97]. The R package `jomo` can be used to multiply impute binary as well as continuous variables. @DiazOrdaz2014 [@DiazOrdaz2016] illustrate multilevel MI for bivariate outcomes such as costs and QALYs, to obtain valid inferences for cost-effectiveness metrics, and demonstrate the consequences of ignoring clustering in the imputation models.

Discussion {#Sec:discussion}
==========

This chapter illustrates and critiques methods for handling non-compliance and missing data in health economics studies. Relevant methods have been developed for handling non-compliance in RCTs with bivariate [@DiazOrdaz2017] or time to event endpoints [@Latimer2014a]. Methods for addressing missing data under MAR have been exemplified in health economics [@Briggs2003], including settings with clustered data [@DiazOrdaz2014]. Future studies are required that adapt the methods presented to the range of settings typically seen in applied health economics studies, in particular to settings with binary and time to event (duration) outcomes, and within longitudinal or panel datasets where data are missing at intermittent time points. To improve the way that non-compliance and missing data are handled, researchers are required to define the estimand of interest, transparently state the underlying assumptions, and undertake sensitivity analyses to departures from the missing data and identification assumptions. Future health economics studies will have increased access to large-scale linked observational data, which will include measures of non-adherence. In such observational settings, there will be new possible sources of exogenous variation, such as genetic variants, resource constraints, or measures of physician or patient preference, that offer possible instrumental variables for treatment receipt [@vonHinke2016; @Brookhart2007]. Future studies will be required to carefully justify the requisite identification assumptions, but also to develop sensitivity analyses to violations of these assumptions. Here, the health economics context is likely to provoke requirements for additional methodological development.
For example, sensitivity analyses for the exclusion restriction [@Jo2002a; @Jo2002b; @Conley2012; @Hirano2000], may require refinement for settings where the exclusion restriction is satisfied for one endpoint (e.g. QALYs) but not for another (e.g. costs). Similarly, sensitivity analyses to the monotonicity assumption have been developed in the wider methodological literature [@Baiocchi2014; @Small2017], but they warrant careful consideration in the health economics context. Further methodological research is required to allow for the more complex forms of non-adherence that may be seen in applied health economic studies. Compliance may not always be all-or-nothing, or time invariant. For example, there may be interest in the causal effect of the dose received. Here, available methods may be highly dependent on parametric assumptions, such as the relationship between the level of compliance and the outcome; alternatively a further instrument is required in addition to randomisation [@Dunn2007; @Emsley2010]. A promising avenue of future research for providing less model-dependent IV estimates is to consider doubly robust estimators, such as targeted minimum loss estimation (TMLE) [@van2012targeted], paired with ensemble machine-learning approaches, for example the so-called Super Learner [@van2007super]. TMLE approaches for IV models have recently been proposed [@Toth2016]. In the health economics settings, TMLE has successfully been used for estimating the effects of continuous treatments [@Kreif2015]. Acknowledgements {#acknowledgements .unnumbered} ================ We thank Mark Sculpher, Rita Faria, David Epstein, Craig Ramsey and the REFLUX study team for access to the data.\ Karla DiazOrdaz was supported by UK Medical Research Council Career development award in Biostatistics MR/L011964/1. This report is independent research supported by the National Institute for Health Research (Senior Research Fellowship, Richard Grieve, SRF-2013-06-016). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health. Angrist, J. D., Imbens, G. W. and Rubin, D. B. (1996), ‘Identification of causal effects using instrumental variables’, [*Journal of the American Statistical Association*]{} [ **91**]{}(434), pp. 444–455. , J. D., [Pischke]{}, J. S. (2008), [*Mostly Harmless Econometrics: An Empiricist’s Companion*]{}. [Princeton University Press]{}. Baiocchi, M., Cheng, J. and Small, D. S. (2014), ‘Instrumental variable methods for causal inference’, [*Statistics in Medicine*]{} [**33**]{}(13), 2297–2340. Baker, S. (1998), ‘Analysis of survival data from a randomized trial with all-or-none compliance: estimating the cost-effectiveness of a cancer screening program.’ [*Journal of the American Statistical Association*]{} **93**:929–934. Bang, H.  and Robins, J. M. (2005) ‘Doubly robust estimation in missing data and causal inference models’, *Biometrics*, **61** (4), 962–973. Bartlett, J. W. Seaman, S. R.  White, I.W. and Carpenter, J.R. (2014), ‘Multiple Imputation of covariates by fully conditional specification: Accommodating the substantive model’, [*Statistical Methods in Medical Research*]{} [**24**]{}(4),  462 – 487. Blough DK, Ramsey S, Sullivan SD, Tusen R. ’The impact of using different imputation methods for missing quality of life scores on the estimation of the cost-effectiveness of lung-volume reduction surgery.’ [*Health Economics*]{} **1**8(1), 91–101. Briggs, A. 
Clark T, Wolstenholme J, Clarke P. (2003), ’Missing....presumed at random: Cost-analysis of incomplete data. Health Economics’, [**12**]{}(5),  pp.377–392. Brilleman, S., Metcalfe, C., Peters, T. and Hollingsworth, W. (2015), ‘The reporting of treatment non-adherence and its associated impact on economic evaluations conducted alongside randomised trials: a systematic review.’, [*Value in Health*]{} [**19** ]{}(1), pp., 99 – 108. Brookhart, M. A. and Schneeweiss, S. (2007), ‘Preference-based instrumental variable methods for the estimation of treatment effects: assessing validity and interpreting results,’ *The international journal of biostatistics*, **3**(1). Burgess, S. and Thompson, S. G. (2012), ‘Improving bias and coverage in instrumental variable analysis with weak instruments for continuous and binary outcomes’, [ *Statistics in Medicine*]{} [**31**]{}(15), 1582–1600. Burgess, S. (2013). Identifying the odds ratio estimated by a two-stage instrumental variable analysis with a logistic regression model. , 32(27):4726–4747. Burton, A. Billingham, L, Bryan S. (2007), Cost-effectiveness in clinical trials: using multiple imputation to deal with incomplete cost data. [*Clinical Trials*]{} **4**(2), pp.154–161. Cai B, Small D, Ten Have T. (2011), ‘Two-stage instrumental variable methods for estimating the causal odds ratio: analysis of bias.’ *Statistics in Medicine* **30**:1809–1824. Carides, G. W., Heyse, J. F., and Iglewicz, B. (2000). A regression-based method for estimating mean treatment cost in the presence of right-censoring. , 1(3):299–313. Carpenter, J.R., Goldstein, H. and Kenward, M.G. (2011) REALCOM-IMPUTE software for multilevel Multiple Imputation with mixed response types. *Journal of Statistical Software*, **45**, 1–14. Carpenter, J.R. and Kenward, M.G. (2013) [*Multiple Imputation and its Application.*]{} Chichester: Wiley. Chib, S. and Hamilton, B. H. (2002). Semiparametric bayes analysis of longitudinal data treatment models. , 110(1):67–89. Clark SJ, Houle B (2014). ‘Validation, replication and sensitivity testing of Heckman-type selection models to adjust estimates of HIV prevalence.’ *Plos One***9**(11):e112563 Clarke, P. S. and Windmeijer, F. (2012), ‘Instrumental Variable Estimators for Binary Outcomes.’ [*Journal of the American Statistical Association*]{} 107(500):1638–1652. Conley, T. G., Hansen, C. B. and Rossi, P. E. (2012), ‘Plausibly exogenous’, [*Review of Economics and Statistics*]{} [**94**]{}(1), 260–272. Cox DR, Oakes D. (1984) [*Analysis of Survival Data.*]{} London: Chapman and Hall. Daniel, R. M., Kenward, M. G., Cousens, S. N. and De Stavola, B. L. (2012), ‘Using causal diagrams to guide analysis in missing data problems’, [*Statistical Methods in Medical Research*]{}: 21(3), 243–256. Daniels, M. J.  and Hogan, J. W. (2008). [*[Missing Data in Longitudinal Studies]{},*]{} Chapman & Hall / CRC, Das M, Newey MK, Vella F (2003). ‘Nonparametric estimation of sample selection models.’ *Review of Economic studies* **70**(1), 33–58. Davidson, R.  and MacKinnon, J. G. (2004). [*Economic theory and methods*]{}, New York: Oxford UniversityPress. DiazOrdaz K, Kenward M, Grieve R. (2014), ’Handling missing values in cost-effectiveness analyses that use data from cluster randomised trials’, [*Journal of the Royal Statistical Society Series A*]{}: 177(2):457-474. DiazOrdaz K, Kenward M, Gomes M, Grieve R (2016). ’Multiple Imputation methods for bivariate outcomes in cluster randomised trials’, [*Statistics in Medicine*]{}: 10.1002/sim.6935. 
DiazOrdaz, K., Francini A, and Grieve, R. (2017) ’Methods for estimating complier-average causal effects for cost-effectiveness analysis’. *Journal of the Royal Statistical Society (series A)*, DOI 10.1111/rssa.12294 Didelez, V., Meng, S. and Sheehan, N. (2010) ‘ Assumptions of IV Methods for Observational Epidemiology’. [*Statistical Science*]{},[**25**]{}(1), 22–40. Diggle, P.J. and Kenward, M.G. (1994) ‘Informative dropout in longitudinal data analysis (with discussion)’, *Journal of the Royal Statistical Society Series C (Applied statistics)*, **43**, 49–94. Dodd, S., White, I. and Williamson, P. (2012), ‘Nonadherence to treatment protocol in published randomised controlled trials: a review’, [*Trials*]{} [**13**]{}(1), 84. Duflo E., Glennerster, R and Kermer, M. (2008), ‘Randomization in Development Economics Research: A Toolkit’, *Handbook of Development Economics*, **1**Vol 4 (5) Dunn, G. and Bentall, R. (2007). Modelling treatment-effect heterogeneity in randomized controlled trials of complex interventions (psychological treatments). , 26(26):4719–4745. Emsley R, Dunn G, White IR. (2010). ‘Mediation and moderation of treatment effects in randomised controlled trials of complex interventions.’ *Statistical Methods in Medical Research* **19**(3):237–270. Faria, R. Gomes M, Epstein D, White IR. (2014). A Guide to Handling Missing Data in Cost-Effectiveness Analysis Conducted Within Randomised Controlled Trials. [*PharmacoEconomics*]{} [**32**]{}(12), 1157–1170. Fiebig, D. G., McAleer, M., and Bartels, R. (1992). Properties of ordinary least squares estimators in regression models with nonspherical disturbances. , 54(1-3):321–334. Foschi, P. and Kontoghiorghes, E. J. (2002). Seemingly unrelated regression model with unequal size observations: computational aspects. , 41(1):211 – 229. Matrix Computations and Statistics. Frangakis CE, Rubin DB. (2002) ‘Principal stratification in causal inference.’ *Biometrics* **58** (1):21–29. Gelman, A. and Hill, J. (2006), [*Data Analysis Using Regression and Multilevel/Hierarchical Models*]{}, Analytical Methods for Social Research, Cambridge University Press. Goldstein, H., Carpenter, J.R., Kenward, M.G., and Levin, K. (2009) Multilevel models with multivariate mixed response types. *Statistical Modelling*, [**9**]{}, 173–197. omes, M., [N]{}g,E. S., [G]{}rieve, R., [N]{}ixon, R., [C]{}arpenter, J. R. and [T]{}hompson, S. G. (2012). eveloping appropriate analytical methods for cost-effectiveness analyses that use cluster randomized trials.  [**32**]{} (2), 350–361. Gomes M, Diaz-Ordaz K, Grieve R, Kenward M (2013). ‘Multiple Imputation methods for handling missing data in cost-effectiveness analyses that use data from hierarchical studies: an application to cluster randomized trials.’ *Medical Decision Making*,[**33**]{}, 1051–63. Grant, A. M., Boachie, C., Cotton, S. C., Faria, R., Bojke, L. and Epstein, D. ( 2013) ‘Clinical and economic evaluation of laparoscopic surgery compared with medical management for gastro-oesophageal reflux disease: a 5-year follow-up of multicentre randomised trial (the REFLUX trial).’, [*Health Technology Assessment*]{} [**17(22)**]{}. Grant, A., Wileman, S., Ramsay, C., Boyke, L., Epstein, D. and Sculpher, M. (2008), ‘The effectiveness and cost-effectiveness of minimal access surgery amongst people with gastro-oesophageal reflux disease- a uk collaborative study. the REFLUX trial.’ [*Health Technology Assessment*]{} [**12 (31)**]{}. Greene, W. (2002). 
[*Econometric Analysis*]{}, Prentice-Hall international editions, Prentice Hall. Grieve R, Cairns J, Thompson SG (2010). Improving costing methods in multicentre economic evaluation: the use of Multiple Imputation for unit costs. Health Economics 19(8): 939-954. Heckman, J. (1976) ‘The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models’, [*Annals of Economic and Social Measurement*]{}, **5**, 475–492. Heckman, J. (1977) ‘Dummy endogenous variables in a simultaneous equation system.’ *Econometrica***46**(6), 931. Heckman, J. (1979). ‘Sample Selection Bias as a Specification Error’. [*Econometrica*]{}, **47**(1), 153–161. Hernán, M. A. and Robins, J. M. ‘Instruments for causal inference: an epidemiologist’s dream?’, [*Epidemiology*]{} [**17**]{} (4), 360–372. Hernán, M. A., Brumback, B.,and Robins, J. M. (2001) ‘ Marginal structural models to estimate the joint causal effect of nonrandomized treatments.’ [ *Journal of the American Statistical Association* ]{} **96**(454):440–448. Hirano, K. , Imbens, G. W. , Rubin, D. B.  and Zhou, X. H. (2000) ‘Assessing the effect of an influenza vaccine in an encouragement design’, [*Biostatistics*]{} [**1**]{}, 69 –88. Hoch, J. S., Briggs, A. H. and Willan, A. R., (2002). ‘Something old, something new, something borrowed, something blue: A framework for the marriage of health econometrics and cost-effectiveness analysis’. [*Health economics*]{}, [**11**]{} (5) 415–430. Hughes, D., Charles, J., Dawoud, D., Edwards, R. T., Holmes,E. , Jones, C. , Parham, P. , Plumpton, C. , Ridyard, C., Lloyd-Williams, H., Wood, E., and Yeo,S. T. (2016). ‘Conducting economic evaluations alongside randomised trials: Current methodological issues and novel approaches.’ [*PharmacoEconomics*]{}, 1–15. Imbens, G. W. and Angrist, J. D. (1994), ‘Identification and estimation of local average treatment effects’, [*Econometrica*]{} [**62**]{}(2), 467–475. Imbens, G. W. and Rubin, D. B. (1997), ‘Bayesian inference for causal effects in randomized experiments with noncompliance’, [*The Annals of Statistics*]{} [**25**]{}(1), 305–327. Jo B. (2002a) ‘Estimating intervention effects with noncompliance: Alternative model specifications’. [*Journal of Educational and Behavioral Statistics.*]{} [**27**]{}:385 –420. Jo B. (2002b) ‘Model misspecification sensitivity analysis in estimating causal effects of interventions with noncompliance.’ [*Statistics in Medicine.*]{} [**21**]{}: 3161– 3181. Jo B. and Muthén B. O. (2001) ‘Modeling of intervention effects with noncompliance: a latent variable approach for randomised trials,’ [*In Marcoulides GA, Schumacker RE, eds. New developments and techniques in structural equation modeling.*]{} Lawrence Erlbaum Associates, Mahwah, New Jersey: 57–87. Jo B, Asparouhov T, Muth[é]{}n BO, Ialongo NS, Brown CH. Cluster randomized trials with treatment noncompliance. Psychological Methods. 2008;13(1):1. Jones AM, Koolman X, Rice N (2006) ‘Health-related non-response in the British Household panel survey and European community household panel: using inverse- probability weighted estimators in non-linear models. ’ *Journal of the Royal statistical Society Series A, Statistics in Society* **169**, 543-569. Kenward, M.G. and Carpenter, J.R. (2007) Multiple Imputation: current perspectives. [*Statistical Methods in Medical Research*]{}, [**16**]{}, 199–218. Kleibergen, F. and Zivot, E. 
(2003), ‘Bayesian and classical approaches to instrumental variable regression’, [*Journal of Econometrics*]{} [**114**]{}(1), 29 – 72. Kreif N, Grieve R, Dí­az I and Harrison D (2015). ‘Evaluation of the Effect of a Continuous Treatment: A Machine Learning Approach with an Application to Treatment for Traumatic Brain Injury.’ *Health Econ* **24**(9):1213–1228. Lancaster, T. (2004), [*Introduction to Modern Bayesian Econometrics*]{}, Wiley. Latimer, N. R., Abrams, K., Lambert, P., Crowther, M., Wailoo, A., Morden, J., Akehurst, R. and Campbell, M. (2014), ‘Adjusting for treatment switching in randomised controlled trials - a simulation study and a simplified two-stage method’, [*Statistical Methods in Medical Research*]{}: 0962280214557578. Latimer, N. R., Abrams, K., Lambert, P., Crowther, M., Wailoo, A., Morden, J., Akehurst, R. and Campbell, M. (2014), ‘Adjusting survival time estimates to account for treatment switching in randomised controlled trials ? an economic evaluation context: methods, limitations and recommendations.’ *Medical Decision Making* **33**(6) 743 – 754. Leurent, B., Gomes M, Carpenter J. (2017). ’Missing data in trial-based cost-effectiveness analysis: an incomplete journey.’ , 27(6):1024–1040. Leurent, B; Gomes, M; Faria, R; Morris, S; Grieve, R; Carpenter, JR; (2018) Sensitivity Analysis for Not-at-Random Missing Data in Trial-Based Cost-Effectiveness Analysis: A Tutorial. PharmacoEconomics. ISSN 1170-7690 DOI: https://doi.org/10.1007/s40273-018-0650-5 Lin, D., Feuer, E., Etzioni, R., and Wax, Y. (1997). Estimating medical costs from incomplete follow-up data. , pages 419–434. Little, R.J.A. and Rubin, D.B. (2002) [*Statistical Analysis with Missing Data. Second Edition.*]{} Hoboken: Wiley. Mantopoulos, T., Mitchell, P.M., Welton, N.J. McManus, R. and Andronis, L. (2016) ‘Choice of statistical model for cost-effectiveness analysis and covariate adjustment: empirical application of prominent models and assessment of their results’, [*The European Journal of Health Economics* ]{}[**17**]{}:(8) 927–938. Marchenko YV, and Genton MG (2012). ‘A heckman selection model. ’ *Journal of the American Statistical Association* **107**(497), 304-317. Marra G and Radice R (2013). ‘A penalized likelihood estimation approach to semiparametric sample selection binary response modelling.’ *Electronic Journal of Statistics* **7**:1432–1455. Mason AJ, Gomes M, Grieve R, Ulug P, Powell JT, and Carpenter J. (2017) ‘Development of a practical approach to expert elicitation for randomised controlled trials with missing health outcomes: Application to the IMPROVE trial.’ *Clin Trials.* **14**(4):357–367 Mason, AJ; Gomes, M; Grieve, R; Carpenter, JR. (2018) ’A Bayesian framework for health economic evaluation in studies with missing data.’ *Health Economics* **DOI**: https://doi.org/10.1002/hec.3793 McGovern ME, Barnighaousen T, Marra G and Radice R (2015). ‘On the assumption of bivariate normality in selection models a copula approach applied to estimating HIV prevalence.’ *Epidemiology* **26**(2), 229–237. Molenberghs, G. and Kenward, M. G. (2007) *Missing Data in Clinical Studies.* Chichester: Wiley. Molenberghs, G., Fitzmaurice, G., Kenward, M. G., Tsiatis, A., and Verbeke, G. (Eds.). (2014). *Handbook of missing data methodology.* CRC Press. \(2013) , [*[Guide to the Methods of Technology Appraisal]{}*]{}, National Institute for Health and Care Excellence, London, UK. ixon, R. M. and [T]{}hompson, S. G. 
(2005), ‘[Methods for incorporating covariate adjustment, subgroup analysis and between-centre differences into cost-effectiveness evaluations.]{}’ [*Health economics*]{} [**14**]{}(12), 1217–29. Noble, S. M., Hollingworth, W. and Tilling, K. (2012), ‘Missing data in trial-based cost-effectiveness analysis: the current state of play’. *Health Economics*, **21** 187–200. Puhani P. A. (2000). ‘The Heckman correction for sample selection and its critique. ’ *Journal of economic surveys* **14**(1):53–68. Raikou, M. and McGuire, A. (2004). Estimating medical care costs under conditions of censoring. , 23(3):443–470. Raikou, M. and McGuire, A. (2006). , page 429. Edward Elgar Publishing. Robins, J. M. (1994). Correcting for non-compliance in randomized trials using structural nested mean models. , 23(8):2379–2412. Robins J.M, (2000) ‘Marginal structural models versus structural nested models as tools for causal inference.’ *Statistical models in epidemiology, the environment, and clinical trials.* Springer; 95–133. Robins, J. M., Hernan, M. A., and Brumback, B. (2000). Marginal structural models and causal inference in epidemiology. , 11(5):550–560. Robins J.M, Finkelstein D.M. (2000) ‘Correcting for noncompliance and dependent censoring in an AIDS clinical trial with inverse probability of censoring weighted (IPCW) log-rank tests’. [*Biometrics*]{} 56: 779–788. Robins, J.M.  Rotnitzky, A. and Zhao, L.  P.  (1994), ‘Estimation of regression coefficients when some regressors are not always observed’ [*JASA*]{} [**89**]{},  846–866. Robins, J. M. and Tsiatis, A. A. (1991) ‘Correcting for non-compliance in randomized trials using rank preserving structural failure time models’, *Communications in Statistics ? Theory and Methods*, **20**(8), 2609–2631 . Rossi, P., Allenby, G. and McCulloch, R. (2012), [*Bayesian Statistics and Marketing*]{}, Wiley Series in Probability and Statistics, Wiley. ubin, D. (1987). [*[Multiple Imputation for Nonresponse in Surveys.]{}*]{} [Chichester: Wiley]{}. Schafer, J.L (1997) [*[A]{}nalysis of [I]{}ncomplete [M]{}ultivariate [D]{}ata.*]{} London: Chapman and Hall. Schafer, J.L. (2001) Multiple Imputation with PAN. In: *New methods for the analysis of change. Decade of Behavior.* L. M. Collins and A. G. Sayer, Eds, Washington: American Psychological Association. pp. 355–377. chafer, [J]{}.,L. and [Y]{}ucel, [R]{}. (2002) [C]{}omputational strategies for multivariate linear mixed-effects models with missing values. *Journal of Computational and Graphical Statistics*, **11**, 421–442. Schmidt, P. (1977). Estimation of seemingly unrelated regressions with unequal numbers of observations. , 5(3):365 – 377. Schmidt, P. (1990). ‘Three-stage least squares with different instruments for different equations’, [*Journal of Econometrics*]{} [**43**]{}(3), 389 – 394. Schochet PZ, Chiang HS. Estimation and identification of the complier average causal effect parameter in education RCTs. Journal of Educational and Behavioral Statistics. 2011;36(3):307–345. (2011). ‘Review of inverse probability weighting for dealing with missing data’, [*Statistical Methods in Medical Research*]{} [**22**]{}, 278–295. Small, DS, Tan Z, Ramsahai RR, Lorch S and Brookhart AM (2017). ‘Instrumental Variable Estimation with a Stochastic Monotonicity Assumption.’ Submitted to *Statistical Science*; previous version available arXiv:1407.7308. Sterne, J. A., White, I. R., Carlin, J. B., Spratt, M., Royston, P., Kenward, M. G., and Carpenter, J. R. (2009). 
‘Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls.’ *BMJ*, **338**, b2393. Stratmann, T. (1992). The effects of logrolling on congressional voting. , pages 1162–1176. Swamy, P. and Mehta, J. (1975). On bayesian estimation of seemingly unrelated regressions when some observations are missing. , 3(2):157 – 169. Tchetgen Tchetgen, E. J. , Walter, S. , Vansteelandt, S., Martinussen, T., Glymour, M. (2015) ‘Instrumental variable estimation in a survival context.’ *Epidemiology.* **26**(3):402–410 Terza, J. V., Basu, A. and Rathouz, P. J. (2008). ‘Two-stage residual inclusion estimation: Addressing endogeneity in health econometric modeling’, [*Journal of Health Economics*]{} [**27**]{}(3), 531 – 543. Tóth, B. and van der Laan, M. J. (2016). for marginal structural models based on an instrument. van [B]{}uuren, [S]{}. and [G]{}roothuis-[O]{}udshoorn, [K]{} (2011) mice: [M]{}ultivariate [I]{}mputation by [C]{}hained [E]{}quations in [R]{}. *Journal of Statistical Software*, **45**, 1–67. van Buuren, S. (2012) *Flexible Imputation of Missing Data*. London: Chapman & Hall/CRC. van der Laan, M. J. and Gruber, S. (2012). Targeted minimum loss based estimation of causal effects of multiple time point interventions. , 8(1). van der Laan, M. J., Polley, E. C., and Hubbard, A. E. (2007). Super learner. , 6(1):1–21. Vella F (1998). ‘Estimating models with sample selection bias: a survey’ *Journal of Human Resources* **33**(1),127–169 Vansteelandt, S., Bowden, J., Babanezhad, M., Goetghebeur, E., et al. (2011). On instrumental variables estimation of causal odds ratios. , 26(3):403–422. von Hinke S, Davey Smith G, Lawlor D. A. , Propper C., and Windmeijer F. (2016). ‘Genetic Markers as Instrumental Variables.’ *Journal of Health Economics* **45** 131–148. White, I. R., Babiker, A. G., Walker, S. and Darbyshire, J. H. (1999 ‘Randomization-based methods for correcting for treatment changes: examples from the Concorde trial.’ [*Statistics in Medicine*]{}, [**18**]{}: 2617–2634 White, I. R. and Carlin, J. B. (2010). ‘Bias and efficiency of Multiple Imputation compared with complete-case analysis for missing covariate values’. [*Statistics in Medicine*]{}, [**28**]{}: 2920–2931. White, I. R., Carpenter, J., Evans, S. and Schroter, S. (2007) ‘Eliciting and using expert opinions about non-response bias in randomised controlled trials.’ *Clinical Trials*, **4**, 125–139. White, I. R., Royston, P.  and Wood, A. M. (2011). ‘Multiple Imputation using chained equations: issues and guidance for practice’, [*Statistics in Medicine*]{}, [**30**]{} (4), 377–399. White, I. R. and Royston, P. (2009). ‘Imputing missing covariate values for the Cox model’. [*Statistics in Medicine*]{}, [**28**]{}: 1982–1998. White IR.(2015). ’Uses and limitations of randomization-based efficacy estimators, Statistical Methods in Medical Research’. [*, Statistical Methods in Medical Research*]{}, [**14**]{}: 327–347. illan,A. R. (2006). ‘[S]{}tatistical [A]{}nalysis of cost-effectiveness data from randomised clinical trials’. [*[Expert Revision Pharmacoeconomics Outcomes Research]{}*]{} [**6**]{}, 337–346. illan, A. and [B]{}riggs, A. (2006). John Wiley & Sons Ltd. illan, A. R., [B]{}riggs, A. and [H]{}och,J. (2004). ‘[R]{}egression methods for covariate adjustment and subgroup analysis for non-censored cost-effectiveness data’. [*Health Econ.*]{} [*13(5)*]{}, 461–475. illan, A. R., [C]{}hen, E., [C]{}ook, R. and [L]{}in, D. (2003). 
‘[I]{}ncremental net benefit in randomized clinical trials with qualify-adjusted survival’. [*Statistics in Medicine*]{} [**22**]{}, 353–362. Wooldridge, J. M. (2010). . MIT Press. Yamaguchi, T., Ohashi, Y. (2004) ‘Adjusting for differential proportions of second-line treatment in cancer clinical trials. Part I: Structural nested models and marginal structural models to test and estimate treatment arm effects.’ *Statistics in Medicine* **23**(13):1991–2003. Ye, C., Beyene, J. and Browne G. (2014). Estimating treatment effects in randomised controlled trials with non-compliance: a simulation study. *BMJ Open* **4**e005362. doi: 10.1136/bmjopen-2014-005362 ucel, [R]{}. and [D]{}ermitas, [H]{} (2010) [I]{}mpact of non-normal random effects on inference by Multiple Imputation: [A]{} simulation assessment. *Computational Statistics and Data Analysis*, **54**, 790–801. Zellner, A. (1962). ‘An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias’. [*Journal of the American Statistical Association*]{} [**57**]{}(298), pp. 348–368. Zellner, A. and Huang D. S. (1962). ‘Further Properties of Efficient Estimators for Seemingly Unrelated Regression Equations’. [*International Economic Review*]{} [**3**]{}(3), pp. 300–313. Zellner, A. and Theil, H. (1962), ‘Three-stage least squares: Simultaneous estimation of simultaneous equations’. [*Econometrica*]{} [**30**]{}(1), pp. 54–78. Zhang, Z., Peluso, M. J., Gross, C. P., Viscoli, C. M. and Kernan, W. N. (2014). ‘Adherence reporting in randomized controlled trials’. [*Clinical Trials*]{} [**11**]{}(2), 195–204. [^1]: If we are prepared to assume the errors are bivariate normal, estimation can proceed by maximum likelihood. [^2]: For simplicity we enforced a monotone pattern of missingness by excluding the three individuals with missing $\mbox{EQ5D}_{0}$ but observed costs, i.e. $R_{0i}=0$ but $R_{1i}=1$.
--- abstract: 'A famous result of Fre[ĭ]{}man describes the sets $A$, of integers, for which $|A+A| \leq K|A|$. In this short note we address the analogous question for subsets of vector spaces over $\mathbb{F}_2$. Specifically we show that if $A$ is a subset of a vector space over $\mathbb{F}_2$ with $|A+A| \leq K|A|$ then $A$ is contained in a coset of size at most $2^{O(K^{3/2}\log K)}|A|$, which improves upon the previous best, due to Green and Ruzsa, of $2^{O(K^2)}|A|$. A simple example shows that the size may need to be at least $2^{\Omega(K)}|A|$.' address: | Department of Pure Mathematics and Mathematical Statistics\ University of Cambridge\ Wilberforce Road\ Cambridge CB3 0WA\ England author: - Tom Sanders bibliography: - 'master.bib' title: 'A note on Fre[ĭ]{}man’s theorem in vector spaces' ---

Introduction
============

If $A$ and $B$ are subsets of an abelian group $G$ then we define the *sumset*, $A+B$, to be the set of all elements formed by adding an element of $A$ to an element of $B$, i.e. $A+B=\{a+b:a \in A \textrm{ and }b \in B\}$. There is a famous result of Fre[ĭ]{}man [@GAF] which in some sense describes the sets $A \subset \mathbb{Z}$ for which $|A+A|\leq K|A|$. This note concerns what in modern parlance would be called the finite field analogue of Fre[ĭ]{}man’s result. Specifically it concerns the following theorem. \[FfF\]*(Finite Field Fre[ĭ]{}man)* Suppose that $G$ is a vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ is a finite set with $|A+A| \leq K|A|$. Then $A$ is contained in a coset of size at most $f(K)|A|$. While finite field models are an important tool for understanding problems in general abelian groups, this result has independent significance in coding theory and has been pursued by a number of authors. We do not attempt a comprehensive survey here, but mention a few papers which are important from our standpoint. The paper [@JMDFHAP] of Deshouillers, Hennecart and Plagne provides an overview of the problem and records a quantitative version of Theorem \[FfF\] due to Ruzsa; it is a relatively simple argument which shows that one may take $f(K) \leq K2^{\lfloor K^3\rfloor -1}$. The bulk of their paper concerns refined estimates for the case when $K$ is small; by contrast our interest lies in the asymptotics. In a recent paper of Green and Ruzsa [@BJGIZR1] the authors improve Ruzsa’s bound from [@JMDFHAP] by showing that one may take $f(K) \leq K^22^{\lfloor 2K^2-2\rfloor}$. In this note we refine this further by proving that one may take $f(K) \leq 2^{O(K^{3/2}\log K)}$. Formally, then, we shall prove the following theorem. \[maintheorem\] Suppose that $G$ is a vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ is a finite set with $|A+A| \leq K|A|$. Then $A$ is contained in a coset of size at most $2^{O(K^{3/2}\log K)}|A|$. For comparison we record the following well known example. Let $H$ be a finite subgroup of $G$ and $g_1+H,...,g_{K-1}+H$ be $K-1$ linearly independent cosets of $H$ in the quotient space $G/H$. Let $A$ be the union of $H$ and the representatives $g_1,...,g_{K-1}$. Then $|A|=|H| + K-1 \sim |H|$ and $$|A+A|=K(|H|+(K-1)/2) \sim K|H| \lesssim K|A|.$$ However $A$ contains a linearly independent set of size $\dim H + K-1$ and so $A$ is not contained in a coset of dimension less than $\dim H + K-2$, hence if $H'$ is a coset containing $A$ then $$|H'| \geq 2^{\dim H + K-2} = 2^{K-2}|H| \gtrsim 2^{K-2}|A|.$$ This example perhaps suggests that one could take $f(K) \leq 2^{O(K)}$ in Theorem \[FfF\].
In fact there are other more compelling reasons to believe this; however, it does not seem to reflect the underlying situation. In [@BJGPFR] Green addresses this concern by introducing (a special case of) the Polynomial Fre[ĭ]{}man-Ruzsa conjecture (attributed to Marton in [@IZRArb]) which, if true, seems to have some very important applications. For a detailed discussion of this see either the paper of Green or Chapter 5 of the book [@TCTVHV] of Tao and Vu.

Proof of Theorem \[maintheorem\]
================================

In [@BJGIZR], Green and Ruzsa extended Ruzsa’s proof of Fre[ĭ]{}man’s theorem from [@IZRF] to arbitrary abelian groups; for an exposition see [@BJGEdin]. Our proof is a refinement of theirs. Their method becomes significantly simpler in the vector space setting, and would immediately give us the following weak version of the main theorem. Suppose that $G$ is a vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ is a finite set with $|A+A| \leq K|A|$. Then $A$ is contained in a coset of size at most $2^{O(K^2\log K)}|A|$. The proof involves three main steps.

- *(Finding a good model)* First we use the fact that $|A+A| \leq K|A|$ to show that $A$ can be embedded as a *dense* subset of $\mathbb{F}_2^n$ in a way which preserves much of its additive structure.

- *(Bogolyubov’s argument)* Next we show that if $A$ is a dense subset of a compact vector space over $\mathbb{F}_2$ and $A+A$ is not much bigger than $A$ then $2A-2A$ contains a large subspace.

- *(Pullback and covering)* Finally we use our embedding to pull back this subspace to a coset in the original setting. A covering argument then gives us the result.

In the remainder of the note we follow through this programme with our refinement occurring at the second stage.

Finding a good model
--------------------

The appropriate notion of structure preservation was introduced by Fre[ĭ]{}man in [@GAF]; we record the definition now. If $G$ and $G'$ are two abelian groups containing the sets $A$ and $A'$ respectively then we say that $\phi:A\rightarrow A'$ is a *Fre[ĭ]{}man $s$-homomorphism* if whenever $a_1,...,a_s,b_1,...,b_s \in A$ satisfy $$a_1+...+a_s = b_1+...+b_s$$ we have $$\phi(a_1)+...+\phi(a_s) = \phi(b_1)+...+\phi(b_s).$$ If $\phi$ has an inverse which is also an $s$-homomorphism then we say that $\phi$ is a *Fre[ĭ]{}man $s$-isomorphism*. A simple but elegant argument of Green and Ruzsa establishes the existence of a small vector space into which we can embed our set via a Fre[ĭ]{}man isomorphism. Specifically they prove the following proposition. \[goodmodel\] *(Proposition 6.1, [@BJGIZR])* Suppose that $A$ is a subset of a vector space over $\mathbb{F}_2$. Suppose that $|A+A| \leq K|A|$. Then there is a vector space $G'$ over $\mathbb{F}_2$ with $|G'| \leq K^{2s}|A|$, a set $A' \subset G'$, and a Fre[ĭ]{}man $s$-isomorphism $\phi:A \rightarrow A'$.

Bogolyubov’s argument
---------------------

In this section we show that if $A$ is a subset of a compact vector space over $\mathbb{F}_2$ then $2A-2A$ contains a large subspace. Originally (in [@IZRF]) Ruzsa employed an argument of Bogolyubov with the Fourier transform. This was refined by Chang in [@MCC], and the improvement of this note rests on a further refinement. We shall need some notation for the Fourier transform and we record this now; Rudin, [@WR], includes all the results which we require. Suppose that $G$ is a compact vector space over $\mathbb{F}_2$.
Write ${\widehat}{G}$ for the dual group, that is the discrete vector space over $\mathbb{F}_2$ of continuous homomorphisms $\gamma:G \rightarrow S^1$, where $S^1:=\{z \in \mathbb{C}:|z|=1\}$. $G$ may be endowed with Haar measure $\mu_G$ normalised so that $\mu_G(G)=1$ and as a consequence we may define the Fourier transform ${\widehat}{.}:L^1(G) \rightarrow \ell^\infty({\widehat}{G})$ which takes $f \in L^1(G)$ to $${\widehat}{f}: {\widehat}{G} \rightarrow \mathbb{C}; \gamma \mapsto \int_{x \in G}{f(x)\gamma(-x)d\mu_G(x)}.$$ In [@MCC] Chang proved the following result. (Although in [@MCC] it is stated for $\mathbb{Z}/N\mathbb{Z}$, the same proof applies to any compact abelian group and in particular to compact vector spaces over $\mathbb{F}_2$.) Suppose that $G$ is a compact vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ has density $\alpha$ and $\mu_G(A+A) \leq K\mu_G(A)$. Then $2A-2A$ contains (up to a null set) a subspace of codimension $O(K \log \alpha^{-1})$. We prove the following refinement of this. \[newchang\] Suppose that $G$ is a compact vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ has density $\alpha$ and $\mu_G(A+A) \leq K \mu_G(A)$. Then $2A-2A$ contains (up to a null set) a subspace of codimension $O(K^{1/2}\log \alpha^{-1})$. To prove this we require the following *pure density* version of the proposition. \[aaa\] *(Theorem 2.4, [@TSASS])* Suppose that $G$ is a compact vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ has density $\alpha$. Then $2A-2A$ contains (up to a null set) a subspace of codimension $O(\alpha^{-1/2})$. The proof in [@TSASS] is significantly simpler in the vector space setting. Since the ideas are important we include the proof here; the basic technique is iterative. *(Iteration lemma)*\[itlem2\] Suppose that $G$ is a compact vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ has density $\alpha$. Then at least one of the following is true. 1. $2A-2A$ contains all of $G$ (up to a null set). 2. There is a subspace $V$ of ${\widehat}{G}$ with dimension 1, an element $x \in G$ and a set $A' \subset V^\perp$ with the following properties. - $x+A' \subset A$; - $\mu_{V^\perp}(A') \geq \alpha(1+2^{-1}\alpha^{1/2})$. As usual with problems of this type studying the sumset $2A-2A$ is difficult so we turn instead to $g:=\chi_{A} \ast \chi_A \ast \chi_{-A} \ast \chi_{-A}$ which has support equal to $2A-2A$. One can easily compute the Fourier transform of $g$ in terms of that of $\chi_{A}$: $${\widehat}{g}(\gamma)=|{\widehat}{\chi_{A}}(\gamma)|^4 \textrm{ for all } \gamma \in {\widehat}{G},$$ from which it follows that $g$ is very smooth. Specifically ${\widehat}{g} \in \ell^{\frac{1}{2}}({\widehat}{G})$ since $$\begin{aligned} \nonumber \sum_{\gamma \in {\widehat}{G}}{|{\widehat}{g}(\gamma)|^{\frac{1}{2}}} & = & \sum_{\gamma \in {\widehat}{G}}{|{\widehat}{\chi_{A}}(\gamma)|^2}\\ \label{smoothness}& = & \alpha \textrm{ by Parseval's theorem.}\end{aligned}$$ We may assume that $\mu_G(2A-2A)<1$ since otherwise we are in the first case of the lemma, so $S:= (2A-2A)^c$ has positive density, say $\sigma$. 
Plancherel’s theorem gives $$0 = \langle \chi_S, g \rangle = \sum_{\gamma \in {\widehat}{G}}{\overline{{\widehat}{\chi_S}(\gamma)}{\widehat}{g}(\gamma)} \Rightarrow | {\widehat}{\chi_S}(0_{{\widehat}{G}}){\widehat}{g}(0_{{\widehat}{G}})| \leq \sum_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_S}(\gamma){\widehat}{g}(\gamma)|}.$$ ${\widehat}{g}(0_{{\widehat}{G}})=\alpha^4$, ${\widehat}{\chi_S}(0_{{\widehat}{G}})=\sigma$ and $|{\widehat}{\chi_S}(\gamma)| \leq \|\chi_S\|_1=\sigma$, so the above yields $$\sigma\alpha^4 \leq \sigma\sum_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{g}(\gamma)|} \Rightarrow \alpha^4 \leq \sum_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{g}(\gamma)|} \textrm{ since }\sigma>0.$$ Finding a non-trivial character at which ${\widehat}{g}$ is large is now simple since ${\widehat}{g} \in \ell^{\frac{1}{2}}({\widehat}{G})$. $$\alpha^4 \leq \sup_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{g}(\gamma)|^{\frac{1}{2}}}\left(\sum_{\gamma \in {\widehat}{G}}{|{\widehat}{g}(\gamma)|^{\frac{1}{2}}}\right) \leq \sup_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_A}(\gamma)|^2}. \alpha$$ by (\[smoothness\]). Rearranging this we have $$\label{usefullinearbias} \sup_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_A}(\gamma)|} \geq \alpha^{\frac{3}{2}}.$$ The set $\Gamma:=\{\gamma \in {\widehat}{G}: |{\widehat}{\chi_A}(\gamma)| \geq \alpha^{\frac{3}{2}} \}$ has size at most $\alpha^{-2}$ since $$|\Gamma|.(\alpha^{\frac{3}{2}})^{2} \leq \sum_{\gamma \in {\widehat}{G}}{|{\widehat}{\chi_A}(\gamma)|^{2}} = \alpha \textrm{ by Parseval's theorem.}$$ It follows that the supremum in (\[usefullinearbias\]) is really a maximum and we may pick a character $\gamma$ which attains this maximum. We now proceed with a standard $L^\infty$-density-increment argument. Let $V:=\{0_{{\widehat}{G}},\gamma\}$ and $f:=\chi_{A} - \alpha$. Then $$\int{f \ast \mu_{V^\perp} d\mu_G}=0 \textrm{ and } \|f \ast \mu_{V^\perp}\|_1 \geq \|{\widehat}{f}{\widehat}{\mu_{V^\perp}}\|_\infty = |{\widehat}{\chi_{A}}(\gamma)|.$$ Adding these we conclude that $$\begin{aligned} |{\widehat}{\chi_{A}}(\gamma)| & \leq &2\int{(f \ast \mu_{V^\perp})_+d\mu_G}\\ & = & 2\int{(\chi_{A} \ast \mu_{V^\perp} - \alpha)_+d\mu_G}\\ & \leq & 2(\|\chi_{A} \ast \mu_{V^\perp}\|_\infty - \alpha).\end{aligned}$$ $\chi_A \ast \mu_{V^\perp}$ is continuous so there is some $x \in G$ with $$\chi_A \ast \mu_{V^\perp}(x) = \|\chi_A \ast \mu_{V^\perp}\|_\infty \geq \alpha(1+2^{-1}\alpha^{1/2}).$$ The result follows on taking $A'=x+A$. We define a nested sequence of finite dimensional subspaces $V_0 \leq V_1 \leq ... \leq {\widehat}{G}$, elements $x_k \in V_k^\perp$ and subsets $A_k$ of $V_k^\perp$ with density $\alpha_k$, such that $x_k+ A_{k} \subset A_{k-1}$. We begin the iteration with $V_0:=\{0_{{\widehat}{G}}\}$, $A_0:=A$ and $x_0=0_{G}$. Suppose that we are at stage $k$ of the iteration. If $\mu_{V_k^\perp}(2A_k-2A_k)<1$ then we apply Lemma \[itlem2\] to $A_k$ considered as a subset of $V_k^\perp$. We get a vector space $V_{k+1}$ with $\dim V_{k+1} = 1+ \dim V_k$, an element $x_{k+1} \in G$ and a set $A_{k+1}$ such that $$x_{k+1} + A_{k+1} \subset A_k \textrm{ and } \alpha_{k+1} \geq \alpha_k(1+2^{-1}\alpha_k^{1/2}).$$ It follows from the density increment that if $m_k=2\alpha_k^{-1/2}$ then $\alpha_{k+m_k} \geq 2\alpha_k$. Define the sequence $(N_{l})_l$ recursively by $N_0=0$ and $N_{l+1}=m_{N_l}+N_l$. 
The density $\alpha_{N_l}$ is easily estimated: $$\alpha_{N_l} \geq 2^l\alpha \textrm{ and } N_l \leq \sum_{s=0}^l{2\alpha_{N_s}^{-1/2}} \leq 2 \alpha^{-1/2}\sum_{s=0}^l{2^{-s/2}} = O(\alpha^{-1/2}).$$ Since density cannot be greater than 1 there is some stage $k$ with $k = O(\alpha^{-1/2})$ when the iteration cannot proceed i.e. for which $2A_k-2A_k$ contains all of $V_k^\perp$ (except for a null set). By construction of the $A_k$s there is a translate of $A_k$ which is contained in $A_0=A$ and hence $2A_k-2A_k$ is contained in $2A-2A$. It follows that $2A-2A$ contains (up to a null set) a subspace of $G$ of codimension $k=O(\alpha^{-1/2})$. The key ingredient in the proof of Proposition \[newchang\] is the following iteration lemma, which has a number of similarities with Lemma \[itlem2\]. \[itl\] Suppose that $G$ is a compact vector space over $\mathbb{F}_2$. Suppose that $A, B \subset G$ have $\mu_G(A+B) \leq K \mu_G(B)$. Write $\alpha$ for the density of $A$. Then at least one of the following is true. 1. $B$ contains (up to a null set) a subspace of codimension $O(K^{1/2})$. 2. There is a subspace $V$ of ${\widehat}{G}$ with dimension 1, elements $x,y \in G$ and sets $A',B' \subset V^\perp$ with the following properties. - $x+A' \subset A \textrm{ and } y+B' \subset B$; - $\mu_{V^\perp}(A') \geq \alpha(1+2^{-3/2}K^{-1/2})$; - $\mu_{V^\perp}(A'+B') \leq K\mu_{V^\perp}(B')$. If $\mu_G(B) \geq (2K)^{-1}$ then we apply Proposition \[aaa\] to get that $B$ contains (up to a null set) a subspace of codimension $O(K^{1/2})$ and we are in the first case of the lemma. Hence we assume that $\mu_G(B) \leq (2K)^{-1}$. Write $\beta$ for the density of $B$. We have $$\begin{aligned} \nonumber (\alpha\beta)^2 & = & \left(\int{\chi_A \ast \chi_Bd\mu_G}\right)^2\\ \nonumber & \leq & \mu_G(A+B)\int{(\chi_A \ast \chi_B)^2d\mu_G} \textrm{ by Cauchy-Schwarz,}\\ \nonumber & \leq & K \beta \int{(\chi_A \ast \chi_B)^2d\mu_G} \textrm{ by hypothesis,}\\ \label{ytu} & = & K\beta \sum_{\gamma \in {\widehat}{G}}{|{\widehat}{\chi_A}(\gamma)|^2|{\widehat}{\chi_B}(\gamma)|^2} \textrm{ by Parseval's theorem.}\end{aligned}$$ The main term in the sum on the right is the contribution from the trivial character, in particular $$|{\widehat}{\chi_A}(0_{{\widehat}{G}})|^2|{\widehat}{\chi_B}(0_{{\widehat}{G}})|^2=\alpha^2\beta^2,$$ while $$\begin{aligned} \sum_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_A}(\gamma)|^2|{\widehat}{\chi_B}(\gamma)|^2} & \leq & \sup_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_A}(\gamma)|^2} \sum_{\gamma \in {\widehat}{G}}{|{\widehat}{\chi_B}(\gamma)|^2}\\ & = & \beta \sup_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_A}(\gamma)|^2}\textrm{ by Parseval's theorem for $\chi_B$}.\end{aligned}$$ Putting these last two observations in (\[ytu\]) gives $$\alpha^2\beta^2 \leq K\beta^3\alpha^2 + K\beta^2\sup_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_A}(\gamma)|^2}.$$ Since $K\beta \leq 2^{-1}$ we can rearrange this to conclude that $$\label{usefullinearbias2} \sup_{\gamma \neq 0_{{\widehat}{G}}}{|{\widehat}{\chi_A}(\gamma)|} \geq (2K)^{-1/2}\alpha.$$ The set $\Gamma:=\{\gamma \in {\widehat}{G}: |{\widehat}{\chi_A}(\gamma)| \geq (2K)^{-1/2}\alpha \}$ has size at most $2K\alpha^{-1}$ since $$|\Gamma|.((2K)^{-1/2}\alpha)^{2} \leq \sum_{\gamma \in {\widehat}{G}}{|{\widehat}{\chi_A}(\gamma)|^{2}} = \alpha \textrm{ by Parseval's theorem.}$$ It follows that the supremum in (\[usefullinearbias2\]) is really a maximum and we may pick a character $\gamma$ which attains this maximum. 
We now proceed with a standard $L^\infty$-density-increment argument. Let $V:=\{0_{{\widehat}{G}},\gamma\}$ and $f:=\chi_{A} - \alpha$. Then $$\int{f \ast \mu_{V^\perp} d\mu_G}=0 \textrm{ and } \|f \ast \mu_{V^\perp}\|_1 \geq \|{\widehat}{f}{\widehat}{\mu_{V^\perp}}\|_\infty = |{\widehat}{\chi_{A}}(\gamma)|.$$ Adding these we conclude that $$\begin{aligned} |{\widehat}{\chi_{A}}(\gamma)| & \leq &2\int{(f \ast \mu_{V^\perp})_+d\mu_G}\\ & = & 2\int{(\chi_{A} \ast \mu_{V^\perp} - \alpha)_+d\mu_G}\\ & \leq & 2(\|\chi_{A} \ast \mu_{V^\perp}\|_\infty - \alpha).\end{aligned}$$ Since $\chi_A \ast \mu_{V^\perp}$ is continuous it follows that there is some $x$ for which $$\chi_A \ast \mu_{V^\perp}(x) \geq \alpha (1+2^{-3/2}K^{-1/2}).$$ Let $x'+V^\perp:=G \setminus (x+V^\perp)$ be the *other* coset of $V^\perp$ in $G$. Write $A_1=A \cap (x+V^\perp)$,$B_1=B \cap (x+V^\perp)$ and $B_2=B \cap (x'+V^\perp)$. Now $A_1 \subset A$ so $$(A_1+B_1) \cup (A_1+B_2) \subset A + B_1 \cup B_2,$$ and $A_1+B_1 \subset V^\perp$ while $A_1+B_2 \subset x+x'+V^\perp$ so these two sets are disjoint and we conclude that $$\begin{aligned} \mu_G(A_1+B_1) + \mu_G(A_1+B_2) & = & \mu_G( (A_1+B_1) \cup (A_1+B_2)) \\ & \leq & \mu_G( A + B_1 \cup B_2)\\ & \leq & K \mu_G(B_1 \cup B_2)\textrm{ by hypothesis }\\ & \leq & K(\mu_G(B_1) + \mu_G(B_2)).\end{aligned}$$ Hence, by averaging, there is some $i$ such that $$\mu_G(A_1+B_i) \leq K\mu_G(B_i).$$ We take $A'=x+A_1$ and, if $i=1$, $B'=x+B_1$ and $y=x$, while if $i=2$, $B'=x'+B_2$ and $y=x'$. The result follows. We define a nested sequence of finite dimensional subspaces $V_0 \leq V_1 \leq ... \leq {\widehat}{G}$, elements $x_k,y_k \in V_k^\perp$, and subsets $A_k$ and $B_k$ of $V_k^\perp$ such that $A_{k} + x_k \subset A_{k-1}$ and $B_k + y_k \subset B_{k-1}$ and $\mu_{V_k^\perp}(A_k+B_k) \leq K \mu_{V_k^\perp}(B_k)$. We write $\alpha_k$ for the density of $A_k$ in $V_k^\perp$. Begin the iteration with $V_0:=\{0_{{\widehat}{G}}\}$, $B_0=A_0:=A$ and $x_0=y_0=0_{G}$. Suppose that we are at stage $k$ of the iteration. We apply Lemma \[itl\] to $A_k$ and $B_k$ inside $V_k^\perp$ (which we can do since $\mu_{V_k^\perp}(A_k+B_k) \leq K \mu_{V_k^\perp}(B_k)$). It follows that either $2B_k-2B_k$ contains (up to a null set) a subspace of codimension $O(K^{1/2})$ in $V_k^\perp$ or we get a subspace $V_{k+1} \leq {\widehat}{V_{k}}$ with $\dim V_{k+1}=1+\dim V_k$, elements $x_{k+1},y_{k+1} \in V_k^\perp$ and sets $A_{k+1}$ and $B_{k+1}$ with the following properties. - $x_{k+1}+A_{k+1} \subset A_k \textrm{ and } y_{k+1}+B_{k+1} \subset B_k$; - $\mu_{V^\perp}(A_{k+1}) \geq \alpha_{k}(1+2^{-3/2}K^{-1/2})$; - $\mu_{V^\perp}(A_{k+1}+B_{k+1}) \leq K\mu_{V^\perp}(B_{k+1})$. It follows from the density increment that if $m=2^{3/2}K^{1/2}$ then $\alpha_{k+m} \geq 2\alpha_k$, and hence the iteration must terminate (because density can be at most 1) at some stage $k$ with $k=O(K^{1/2}\log \alpha^{-1})$. The iteration terminates if $2B_k-2B_k$ contains (up to a null set) a subspace of codimension $O(K^{1/2})$ in $V_k^\perp$, from which it follows that $2A-2A \supset 2B_k -2B_k$ contains (up to a null set) a subspace of codimension $k+O(K^{1/2}) = O(K^{1/2}\log \alpha^{-1})$. Pullback and covering --------------------- We now complete the proof of the main theorem using a covering argument. We are given $A \subset G$ finite with $|A+A| \leq K|A|$. By Proposition \[goodmodel\] there is a finite vector space $G'$ with $|G'| \leq K^{16}|A|$ and a subset $A'$ with $A'$ Fre[ĭ]{}man 8-isomorphic to $A$. 
It follows that $$\mu_{G'}(A') \geq K^{-16} \textrm{ and } \mu_{G'}(A'+A') \leq K\mu_{G'}(A').$$ We apply Proposition \[newchang\] to conclude that $2A'-2A'$ contains a subspace of codimension $O(K^{1/2}\log K)$. However, $A$ is 8-isomorphic to $A'$ so $2A-2A$ is 2-isomorphic to $2A'-2A'$ and it is easy to check that the 2-isomorphic pullback of a subspace is a coset so $2A-2A$ contains a coset of size $$2^{-O(K^{1/2}\log K)}|G'| \geq 2^{-O(K^{1/2}\log K)}|A|.$$ The following covering result of Chang [@MCC] converts this large coset contained in $2A-2A$ into a small coset containing $A$. It is true in more generality than we state; we only require the version below. Suppose that $G$ is a vector space over $\mathbb{F}_2$. Suppose that $A \subset G$ is a finite set with $|A+A| \leq K|A|$. Suppose that $2A-2A$ contains a coset of size $\eta|A|$. Then $A$ is contained in a coset of size at most $2^{O(K\log K\eta^{-1})}|A|$. Theorem \[maintheorem\] follows immediately from this proposition and the argument preceding it. Concluding remarks ================== It is worth making a couple of concluding remarks. First, all the implicit constants in the work are effective however they are not particularly neat or significant so it does not seem to be important to calculate them. Secondly, and more importantly, it seems likely that one could modify Proposition \[newchang\] to fall within the more general framework of approximate groups pioneered by Bourgain in [@JB]. It does not seem that this would lead to any improvement in Fre[ĭ]{}man’s theorem for $\mathbb{Z}$, essentially because of the need to narrow the Bohr sets at each stage of the iteration. Acknowledgements {#acknowledgements .unnumbered} ================ I should like to thank Tim Gowers and Ben Green for encouragement and supervision.
--- abstract: '[The oscillation spectrum of a perturbed neutron star is intimately related to the physical properties of the star, such as the equation of state. Observing pulsating neutron stars therefore allows one to place constraints on these physical properties. However, it is not obvious exactly how much can be learnt from such measurements. If we observe for long enough, and precisely enough, is it possible to learn everything about the star? A classical result in the theory of spectral geometry states that one cannot uniquely ‘hear the shape of a drum’. More formally, it is known that an eigenfrequency spectrum may not uniquely correspond to a particular geometry; some ‘drums’ may be indistinguishable from a normal-mode perspective. In contrast, we show that the drum result does not extend to perturbations of simple neutron stars within general relativity – in the case of axial (toroidal) perturbations of static, perfect fluid stars, a quasi-normal mode spectrum uniquely corresponds to a stellar profile. We show in this paper that it is not possible for two neutron stars, with distinct fluid profiles, to oscillate in an identical manner. This result has the information-theoretic consequence that gravitational waves completely encode the properties of any given oscillating star: unique identifications are possible in the limit of perfect measurement.]{}' author: - | A. G. Suvorov[^1]\ School of Physics, University of Melbourne, Parkville VIC 3010, Australia date: 'Accepted ?. Received ?; in original form ?' title: Astroseismology of neutron stars from gravitational waves in the limit of perfect measurement --- \[firstpage\] stars: neutron – stars: oscillations – gravitational waves. Introduction ============ Studying pulsations of neutron stars allows us to get a glimpse into the properties of matter in extreme, astrophysical environments [@unno79]. For example, a young neutron star, formed due to a compact object merger or core collapse supernova, tends to oscillate and emit gravitational radiation (‘ring-down’) as it attempts to attain an equilibrium state [@nsring1; @nsring2]. In general, fluid or local spacetime perturbations cause the host star to ring with a discrete set of oscillation frequencies for a certain period of time [@thorne1]. Because oscillation modes often couple to gravitational waves (GWs), the associated eigenfrequencies are complex [@vish70]. The real part gives the oscillation frequency of the mode, and the imaginary part gives the inverse of the damping time due to the emission of gravitational radiation [@schmidtrev]. These resonant frequencies can be studied through quasi-normal modes (QNMs), defined as the eigenmodes of oscillation, which can be used to describe the pulsations of any given star [@nollert99; @sterg03]. The QNM spectrum is sensitive to the properties of the host star, such as the equation of state (EOS) [@e0s1; @e0s2], the magnetic field strength [@magfield1; @magfield2], or the presence of superfluidity [@superfluid1]. It is remarkable then that certain universal relations have been found to exist amongst QNMs for different kinds of neutron stars [@ilq1; @ilq2]; though see [@ilq3]. If there are in fact universal relations between QNMs of neutron stars, it is unclear precisely how much information can be gleaned from GW and other observations [@and96; @kokkapo]. In the extreme case that two stars have the same spectrum, one faces a distinguishably problem of sorts. Similar issues arise in non-astrophysics contexts. 
One such problem is discussed by [@kac], who poses the question of “can one hear the shape of a drum?”. The problem effectively asks whether or not knowledge of the spectrum of the Laplace operator (‘hearing’ the frequencies) allows one to uniquely identify the geometry of a space (‘drum’). The answer turns out to be no, meaning that one cannot always uniquely determine the geometry from the spectrum [@drum1]; several drums may be *isospectral* [@drum2; @gordon]. An example pair of isospectral drums in two dimensions are pictured in Fig. \[drums\] [@gordon; @gordon2]. [Furthermore, [@srid94] used differently shaped microwave cavities to experimentally demonstrate that isospectral domains exist.]{} The drum problem has appeared with various generalisations in the literature, such as considering drums on Riemannian manifolds [@riemdrum], studying theoretical ‘fractal drums’ to analyse Zeta functions [@zeta2; @zeta1], and hypothesising ‘quantum drums’ whose spectra give insights into the nature of entanglement entropy [@ent4]. In this paper we consider a problem analogous to the one of [@kac], but for oscillating neutron stars in general relativity: given the QNM spectrum of a ringing neutron star, can one uniquely determine its fluid properties? We show that, in contrast to the drum problem, the answer turns out to be yes for some simple models, meaning that two ringing neutron stars with different fluid properties are always distinguishable from a QNM perspective (Sec. 3). The implication of this result is that GW measurements can, in the limit of ‘perfect’[^2] measurement, uniquely identify all physical characteristics of any given perturbed star. We focus primarily on axial (sometimes called toroidal) perturbations of static, spherically symmetric stars because they are in many respects the simplest class (in a sense that is made precise in Sec. 2) of stellar pulsations [@chand75; @wmode1]. Some discussion on astrophysical implications is offered in Sec. 4. ![An example pair of geometrical ‘drums’ in two dimensions, first reported by [@gordon], which admit the same spectrum for the (Dirichlet) Laplace operator; they are isospectral. An observer listening only to sound waves produced by perturbing either of these drums would not be able to distinguish between them. \[drums\]](GordonDrums.png){width="47.30000%"} Stellar structure ================= In general, a static, spherically-symmetric compact object can be described through the line element[^3] $$\label{eq:sphlinel} \begin{aligned} ds^2 &= g_{\alpha \beta} dx^{\alpha} dx^{\beta} \\ &= -e^{2\nu} dt^2 + e^{2\lambda} dr^2 + r^2 d \theta^2 + r^2 \sin^2\theta d \phi^2, \end{aligned}$$ where $(t,r,\theta,\phi)$ are the usual Schwarzschild coordinates, and $\nu$ and $\lambda$ are functions of $r$ only. The Einstein equations $$\label{eq:einstein} G_{\mu \nu} = 8 \pi T_{\mu \nu},$$ for the stress-energy tensor associated with a single, perfect fluid, viz. $$\label{eq:stresstensor} T^{\mu \nu} = \left( \rho + p \right) u^{\mu} u^{\nu} - p g^{\mu \nu},$$ where $\rho$ is the energy-density, $p$ is the stellar pressure, $\boldsymbol{g}$ is the metric tensor defined in , and $\boldsymbol{u}$ is the 4-velocity of a generic fluid element, describe the structure of a static, non-rotating star [@tov1; @tov2; @unno79]. 
In particular, the metric function $\lambda$ is related to the mass distribution function $m(r)$, defined as the mass inside the circumferential radius $r$, through $$e^{-2 \lambda} = 1 - \frac {2 m(r)} {r}.$$ The functions $\nu(r)$, $\rho(r)$, and $p(r)$ are related through the equations of [@tov1; @tov2], which form the following system of differential equations $$\label{eq:tov1} \frac {d \nu} { d r} = \frac {1} {p(r) + \rho(r)} \frac {d p} {d r},$$ $$\label{eq:tov2} \frac {d m} {d r} = 4 \pi r^2 \rho(r),$$ and $$\label{eq:tov3} \frac {d p} {d r} = - \frac {\left[ \rho(r) + p(r) \right] \left[ m(r) + 4 \pi r^3 p(r) \right]} { r^2 \left[ 1 - \frac {2 m(r)} {r} \right]}.$$ Supplemented by an EOS of the form $p = p(\rho)$, equations – uniquely determine the equilibrium stellar structure. The stellar radius $R_{\star}$ is defined by the vanishing of the stellar pressure, $p(R_{\star}) = 0$, and the total stellar mass $M_{\star}$ is given by $M_{\star} = m(R_{\star})$. Outside of the star $(r > R_{\star})$, where $p = \rho = 0$, the metric reduces to the Schwarzschild metric of mass $M_{\star}$ [@wald]. Axial perturbations ------------------- In general, neutron star oscillations of small amplitude can be studied by simultaneously perturbing the Einstein equations and the stress-energy tensor given some background metric [@wald; @schmidtrev]. One can introduce perturbations into quantities $q$ (e.g. $\rho, p, \boldsymbol{g}, \boldsymbol{u}$) by writing $q \rightarrow q^{(0)} + \delta q$, where $|\delta q|/ |q^{(0)}| \ll 1$ for equilibrium values $q^{(0)}$. Fourier-expanding each of the perturbed hydrodynamical and metric functions into angular and azimuthal harmonics of order $\ell,k$ \[see e.g. [@ferrari08] for details\] leads to a decoupled set[^4] of differential equations, one governing *polar* (poloidal) perturbations and the other governing *axial* (toroidal) perturbations, i.e. the total spacetime metric tensor may be written as $g_{\mu \nu} = g_{\mu \nu}^{(0)} + \delta g_{\mu \nu}^{\text{axial}} + \delta g_{\mu \nu}^{\text{polar}}$, and the linearised Einstein equations for $\delta g_{\mu \nu}^{\text{axial}}$ and $\delta g_{\mu \nu}^{\text{polar}}$ decouple [@thorne1; @moncrief; @allen98]. The axial and polar modes are defined by how they transform under parity; under the transformation $\theta \rightarrow \pi - \theta$ and $\phi \rightarrow \pi + \phi$, axial modes of order $\ell, k$ transform as $(-1)^{\ell+1}$ while polar modes of the same order transform as $(-1)^{\ell}$ [@regwhel57; @zerilli70]. Polar perturbations correspond to regional compressions of the star (e.g. $f$-modes), whereas axial perturbation induce continuous differential rotations in the fluid (e.g. $r$-modes) [@wmode1; @and96]. In this paper we concentrate on axial perturbations, since, as it turns out, axial oscillations do not modify the energy density or pressure of the fluid for static stars [@thorne1; @thorne2]. Hence accounting for fluid back-reaction effects is trivial, which makes for a simple treatment of ‘axial-isospectrality’ (see Sec. 3). After performing the aforementioned harmonic expansions, an axial perturbation of a static, spherically symmetric spacetime reduces to a single, Schr[ö]{}dinger-like wave equation for functions $\tilde{Z}_{\ell}^{-}(t,r)$ defined as specific combinations of the tensor components of the metric perturbation $\delta g_{\mu \nu}^{\text{axial}}$ \[see equation (20) of [@ferrari08]; similar equations arise for polar perturbations $\tilde{Z}_{\ell}^{+}(t,r)$\]. 
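As a brief numerical aside before passing to the frequency-domain form of the perturbation equation: the background functions $m(r)$, $p(r)$ and $\rho(r)$ entering the interior potential below are obtained in practice by integrating the TOV system above. The following is a minimal sketch of such an integration, assuming (purely for illustration, and not an equation of state adopted in this paper) a $\Gamma = 2$ polytrope $p = K_{\rm poly}\rho^{2}$ in geometric units; all numerical values are placeholders with no physical significance here.

```python
# Minimal sketch: integrate the TOV system above for an assumed
# Gamma = 2 polytrope p = K_poly * rho**2, in geometric units G = c = 1.
# All numerical values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

K_poly = 100.0       # polytropic constant (assumed)
rho_c = 1.28e-3      # central density (assumed)

def rho_of_p(p):
    """Invert the assumed EOS p = K_poly * rho**2 for the density."""
    return np.sqrt(max(p, 0.0) / K_poly)

def tov_rhs(r, y):
    p, m = y
    rho = rho_of_p(p)
    dm_dr = 4.0 * np.pi * r**2 * rho
    dp_dr = -(rho + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return [dp_dr, dm_dr]

def surface(r, y):               # the radius R_star is where p vanishes
    return y[0] - 1e-10
surface.terminal = True

p_c = K_poly * rho_c**2
sol = solve_ivp(tov_rhs, (1e-6, 100.0), [p_c, 0.0],
                events=surface, rtol=1e-8, atol=1e-12)

R_star = sol.t[-1]               # p(R_star) = 0
M_star = sol.y[1, -1]            # M_star = m(R_star)
print(f"R_star = {R_star:.3f}, M_star = {M_star:.3f}  (geometric units)")
```

Only $p$ and $m$ are evolved here; $\nu(r)$ then follows from the first structure equation together with the Schwarzschild matching at $r = R_{\star}$.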
Writing $$\label{eq:qnmexpansion} \tilde{Z}_{\ell}^{-}(t,r) = e^{i \omega t} Z^{-}_{\ell}(r),$$ where $\omega$ is related to the angular velocity of the perturbed star and plays the role of the complex eigenfrequency of the system [@schmidtrev], we obtain [@kokk94] $$\label{eq:perteqn} \frac {d^2 Z^{-}_{\ell}} {dr_{\star}^2} + \left[ \omega^2 - V^{-}_{\ell}(r) \right] Z^{-}_{\ell} = 0,$$ where $r_{\star}(r) = \int^{r}_{0} dr e^{\lambda - \nu}$ is the tortoise coordinate, and $$\label{eq:potential} V^{-}_{\ell}(r) = \frac {e^{2 \nu}} {r^3} \left\{ \ell \left( \ell +1 \right) r + 4 \pi r^3 \left[ \rho(r) + p(r)\right] - 6 m(r) \right\},$$ is the interior potential function. Outside of the star, the spacetime looks like a non-linear superposition of a Schwarzschild spacetime and GWs [@schmidtrev], and the potential reduces to the usual Regge-Wheeler form [@regwhel57], $$\label{eq:regwhel} V^{-}_{\ell}(r) = \frac {1} {r^3} \left(1 - \frac{ 2 M_{\star}}{ r} \right) \left[ \ell \left( \ell + 1 \right) r - 6 M_{\star} \right].$$ The axial QNM spectrum is therefore determined by solving equation , using continuous ‘total’ potential functions $V_{\ell}(r)$ given by inside the star $(r \leq R_{\star})$ and outside of the star $(r > R_{\star})$. The boundary conditions on are chosen such that we have pure outgoing radiation at infinity, i.e. that at $r_{\star} \rightarrow \infty$ we have [@chand75; @kokk94] $$\label{eq:atinfinity} Z^{-}_{\ell}(r) \sim e^{-i \omega r_{\star}(r)},$$ while at the center of the star $(r=0)$ the perturbation functions $Z^{-}_{\ell}$ are assumed to be regular, $$\label{eq:atcentre} Z^{-}_{\ell}(r) \sim r^{\ell +1}.$$ Equations –, subject to the boundary conditions and , constitute an eigenvalue problem [@thorne1; @thorne2; @nondec2]. Since we are concerned with certain uniqueness properties of axial perturbations (Sec. 3), it is important to discuss whether an expansion of the form is always permitted over the background . Spacetimes which allow for such a decomposition are said to be *complete* with respect to QNMs; in this context, completeness means that it is possible to express the evolution of a wavefunction as a sum over eigenfunctions [@price92; @nollert99]. In fact, this is largely still an open problem within general relativity [@and93; @ching95; @beyer99; @pal15]. For example, it has been argued that if the potential $V$ has a significant tail at large radii, the GWs emitted due to the perturbation may only have power-law decays at late times, which means that the signal cannot be represented through QNMs, which decay exponentially [@price92; @ching96]. However, [@ching96] developed a set of criteria (e.g. the integral $\int_{0}^{\infty} d r_{\star} r_{\star} |V|$ is finite) which, if satisfied for a given spacetime \[e.g. \], imply that the QNM spectrum is complete; see also [@newt60; @chand75; @ho98]. We expect the conditions of [@ching96] to hold for any physically reasonable star [@chand75; @price92], though our analysis would need to be performed at the GW level (see Sec. 2.2) if this were not the case since we cannot apply . Gravitational waves ------------------- While axial perturbations of neutron stars do not modify the stellar pressure or density, a time varying quadrupole moment is generated by the inducement of a continuous, non-varying differential rotation [@kokk94; @and96]. This leads to the release of gravitational radiation [@thorne2; @thorne80]. 
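Before describing the emitted wave forms, we record a quick consistency check on the potentials above (under the additional assumption, stated here as a hypothesis and valid e.g. for polytropes, that $\rho(R_{\star}) = 0$): using $m(R_{\star}) = M_{\star}$ and the Schwarzschild matching $e^{2\nu(R_{\star})} = 1 - 2M_{\star}/R_{\star}$, the interior expression evaluated at the surface reads $$\frac{e^{2\nu(R_{\star})}}{R_{\star}^{3}} \left[ \ell\left(\ell+1\right) R_{\star} + 4\pi R_{\star}^{3} \left(\rho + p\right)\big|_{R_{\star}} - 6 M_{\star} \right] = \frac{1}{R_{\star}^{3}} \left( 1 - \frac{2M_{\star}}{R_{\star}} \right) \left[ \ell\left(\ell+1\right) R_{\star} - 6 M_{\star} \right],$$ which is precisely the Regge–Wheeler form, so the total potential $V_{\ell}$ entering the eigenvalue problem is indeed continuous at $r = R_{\star}$.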
The properties of the emitted GWs are encapsulated in the QNM spectrum, which is used to describe the properties of $\delta g_{\mu \nu}^{\text{axial}}$ [@tominaga; @detweiler]. The plus $h^{+}$ and cross $h^{\times}$ GW polarisations of the perturbed system are given by [@thorne2; @tominaga] \[see also equations (1) through (5) of [@ferrari08]\] $$\label{eq:plus} h^{+}(t,r,\theta,\phi) = -\frac {1} {2 \pi} \int d \omega \frac {e^{i \omega \left( t - r_{\star}\right)}} {r} \sum_{\ell m} \left[ \frac {Z^{-}_{\ell m}(r, \omega)} {i \omega} \frac {X^{\ell m}(\theta,\phi)} {\sin \theta} \right],$$ and $$\label{eq:times} h^{\times}(t,r,\theta,\phi) = \frac {1} {2 \pi} \int d \omega \frac {e^{i \omega \left( t - r_{\star}\right)}} {r} \sum_{\ell m} \left[ \frac {Z^{-}_{\ell m}(r, \omega)} {i \omega} \frac {W^{\ell m}(\theta,\phi)} {\sin \theta} \right],$$ respectively, where $$\label{eq:xharmonic} X^{\ell m}(\theta,\phi) = 2 \left( \frac {\partial^2} {\partial \theta \partial \phi} - \cot \theta \frac {\partial} {\partial \phi} \right) Y^{\ell m}(\theta,\phi),$$ and $$\label{eq:wharmonic} W^{\ell m}(\theta,\phi) = \left( \frac {\partial^2} {\partial \theta^2} - \cot \theta \frac {\partial} {\partial \theta} - \csc^2\theta \frac {\partial^2} {\partial \phi^2} \right) Y^{\ell m}(\theta,\phi),$$ are angular functions defined in terms of the spherical harmonics $Y^{\ell m}(\theta, \phi)$. Some combination of expressions and can, in principle, be observed with an interferometer such as the Laser Interferometer Gravitational-Wave Observatory (LIGO) [@jaran98], which then allows one to determine the stellar properties. For our purposes, the important point is the following: in a QNM-complete spacetime, a given potential $V^{-}_{\ell}$ uniquely \[due to standard theorems in Sturm-Liouville theory [@sturm1]\] determines a function $Z^{-}_{\ell}$ from , which, through expressions and , determines the resulting GW signal. GW analysis can therefore allow us to, in the limit of perfect measurement, uniquely reconstruct $V^{-}_{\ell}$ experimentally, in-turn uniquely identifying all stellar properties from if no two stars can ever be axially-isospectral. Axial-isospectrality ==================== The key ingredient for determining the properties of axial perturbations are the variables $Z_{\ell}^{-}$, which solve the scalar eigenvalue problem described in Sec. 2.1. However, the $Z_{\ell}^{-}$ are themselves uniquely determined by the potential functions and , which depend on the metric and hydrodynamical variables, which are in-turn constrained by the Tolman-Oppenheimer-Volkoff (TOV) equations. In fact, the structure of a static, spherically symmetric, perfect fluid neutron star is ultimately determined by three functions (noting that $\lambda$, $m$, and $\rho$ are not independent): $\nu, m,$ and $p$. We say that a pair of stars are ‘axially-isospectral’ if their associated axial potentials $V_{\ell}^{-}$ and Regge-Wheeler potentials are identical; such stars will have isomorphic axial QNM spectra. No axially-isospectral stars ---------------------------- [Here we show that there does not exist distinct solutions to the TOV equations – which also satisfy the axial-isospectrality conditions, i.e. which have matching interior and exterior potentials. To this end, we show that a given total potential $V_{\ell}$ uniquely corresponds to a set of metric and hydrodynamical variables. 
This implies that distinct stars with the same functions $V_{\ell}$ cannot exist, which allows us to conclude that non-trivially axially-isospectral, static, perfect fluid stars cannot exist.]{} [From expression , we see that $V_{\ell}$ can be decomposed into a part which is dependent on $\ell$ and a part which is independent from $\ell$. Given $V_{\ell}$, one can therefore identify both of these two components independently. The key point is now that these two pieces of information together uniquely determine an EOS, thereby yielding a unique solution to the TOV equations – [@shap82]. ]{} [To see this explicitly, suppose we are ‘given’ some (set of) $V_{\ell}$, which, inside the star, can be deconstructed as (say) $V_{\ell}^{-}(r) = A(r) + \ell \left(\ell + 1\right) B(r)$, where $A$ and $B$ are known functions which do not depend on $\ell$. Under these circumstances, we may treat as a first-order, linear, inhomogeneous differential equation for $p$, viz. $$\label{eq:pint2} 0 = \frac {d p} {d r} - \left[ p(r) + \rho(r) \right] \frac {2 B(r) + r \frac {d B}{ d r}} {2 r B(r)},$$ where we have made use of the relation $B(r) = e^{2\nu(r)}/r^2 $ from . Expression alone is not enough to determine the EOS because the relationship between $\rho$ and $r$ is unconstrained. However, the term $m(r)$ can be expressed in terms of $p$, $\rho$, and the known functions $A$ and $B$. As such, equation becomes $$\label{eq:pint3} \begin{aligned} 0 =& 4 \pi r^2 B(r) \left\{ 3 \left[ p(r) - \rho(r)\right] + r \left( \frac {d p} {d r} + \frac {d \rho} { dr} \right) \right\} \\ &+ A(r) \left[\frac {2 B(r) + r \frac {d B}{ d r}} {B(r)} - 3 \right] - r \frac {d A} {d r}, \end{aligned}$$ where we have made the identification $A(r) = 4 \pi e^{2 \nu(r)} \left[ \rho(r) + p(r) \right] - 6 e^{2\nu(r)} m(r)/r^3$. Equation acts as a second, linear, first-order differential equation for the variables $p$ and $\rho$. Combining the above, we have that knowledge of $V_{\ell}$ translates into two linear equations in two unknowns $\rho$ and $p$, i.e. and . Note that and must necessarily be consistent with , else the given functions $V_{\ell}$ could not correspond to *any* star. ]{} [The boundary conditions for and are the standard ones applied to the TOV equations, namely [@shap82] $$\label{eq:bc1} m(0) = 0,$$ and $$\label{eq:bc2} \nu(R_{\star}) = \frac {1} {2} \ln \left( 1 - \frac {2 M_{\star}} {R_{\star}} \right).$$ The former condition ensures that the circumferential mass $m(r)$ is well defined at the origin and acts as an initial condition for $\rho$, while the latter is the Schwarzschild matching condition which, although satisfied by $\nu(r)$, necessarily constrains the relationship between $\rho$ and $p$ near the boundary through by noting that $p(R_{\star}) = 0$ and $m(R_{\star}) = M_{\star}$ by definition. One therefore has a well posed system and can, in principle, solve for $p$ and $\rho$ in terms of the known functions $A$ and $B$ uniquely due to standard theorems on ordinary differential equations [@sturm1].]{} [In summary, given $V_{\ell}$, one can uniquely identify $\nu$, $p$, and $\rho$ through the TOV equations and subjected to the standard boundary conditions and . As noted in Sec. 
3, this implies that the stellar structure is completely determined from $V_{\ell}$, and we can therefore conclude that no two stars can admit the same axial QNM spectrum.]{} Discussion ========== In this paper, inspired by the drum problem popularised by [@kac], we explore the possibility that neutron stars, with distinct fluid profiles, might exhibit the same QNM spectrum and thereby ‘ring’ in an identical way. It is known, for example, that two stars which have different magnetic field topologies can admit the same mass quadrupole moments [@mastr1]. We consider the case of axial perturbations and show that ‘axially-isospectral’ neutron stars cannot exist. This result suggests that one can, in principle, uniquely determine the nuclear EOS (and all other observables of interest) from GW and QNM measurements[^5]. From an information-theoretic standpoint, there is a one-to-one relationship between ‘perfect’ GW measurements and stellar structure. This is in contrast to the drum problem, where it is known that it may be impossible to uniquely determine the geometry from the normal mode spectrum [@drum2; @gordon; @drum1]; see also Fig. \[drums\]. Eventually, a measurement of continuous GWs from a ringing neutron star, using e.g. LIGO [@ligo1], will provide insights into the behaviour of nuclear matter at very high $(\sim 10^{18} \text{ kg m}^{-3})$ densities [@unno79; @and96]. In this paper we consider axial perturbations, which generate a time varying quadrupole moment by inducing a continuous, non-varying differential rotation [@thorne1; @kokk94]. Axial modes of rotating neutron stars are known to be prone to various instabilities wherein the canonical energy becomes negative, such as the instability discussed by [@fm98] \[see also [@chand70; @fs78]\]. These instabilities can cause axial (e.g. $r-$) mode amplitudes to grow \[see [@hask15] for a review\], which could allow for most of the rotational energy and angular momentum of the star to be carried away by the GWs [@fm98; @koj99]. There is a well-known observational discrepancy that all known neutron stars spin much slower than the break-up limit, which is particularly puzzling for low-mass X-ray binaries (LMXBs) where one expects accretion to significantly spinup the star [@chak03]. Depending on the importance of other dissipative processes that may compete with GW emission in removing energy from the star (e.g. neutrino diffusion or viscosity), low LMXB spins may potentially be explained by unstable axial modes [@bildstenpaper]. If axial mode amplitudes are large enough to account for the discrepancy, it is possible that the associated GWs will be detected in the near future. While the result presented here cannot be used to assist in detection efforts, and applies only to the idealised situation of ‘perfect’ measurement, it implies that GW spectra provide complete information about their hosts which further supports the importance of GW astronomy. Aside from (axial-)isospectrality, one might ask whether two stars could be ‘nearly-isospectral’ in some appropriate sense. In reality, a combination of instrumental \[e.g. thermal noise [@lev98]\] and systematic \[e.g. use of response functions [@respfun]\] error prevents one from ever perfectly knowing a QNM spectrum. There may also be issues of sampling since, for example, the Nyquist bound [@nyquist] prevents an exact waveform identification for all $\omega$ from discrete measurements. 
Therefore, if two stars were to admit QNM spectra which deviate, in some as-of-yet undefined way, by a small, non-zero amount, they may still be experimentally indistinguishable [@bai17]. These two stars would appear as effectively isospectral, even if there is not an exact mapping between the two spectra. Questions of this sort have interesting implications about the information content of GW signals. The details of ‘near-isospectrality’ will be investigated in future work. It would also be worthwhile considering whether isospectral stars in modified theories of gravity can exist, since the properties of GWs can vary [@gwmod1; @gwmod2]. The solution generating techniques discussed by [@suvmel2] may be useful in this direction. Acknowledgements ================ We thank Prof. Bill Moran for discussions. [We thank the anonymous referee for providing useful feedback which improved the quality of the manuscript.]{} Abadie, J., Abbott, B. P., Abbott, R., et al. 2010, Nuclear Instruments and Methods in Physics Research A, 624, 223 Abbott, B. P., Abbott, R., Adhikari, R., et al. 2009, Reports on Progress in Physics, 72, 076901 Allen, G., Andersson, N., Kokkotas, K. D., & Schutz, B. F. 1998, , 58, 124012 Amore, P. 2012, Journal of Mathematical Physics, 53, 123519 Andersson, N. 1993, Classical and Quantum Gravity, 10, L61 Andersson, N., & Kokkotas, K. D. 1996, Physical Review Letters, 77, 4134 Baibhav, V., Berti, E., Cardoso, V., & Khanna, G. 2017, arXiv:1710.02156 Balian, R., & Bloch, C. 1970, Annals of Physics, 60, 401 Bauswein, A., Janka, H.-T., Hebeler, K., & Schwenk, A. 2012, , 86, 063001 Benhar, O., Berti, E., & Ferrari, V. 1999, , 310, 797 Beyer, H. R. 1999, Communications in Mathematical Physics, 204, 397 Bildsten, L. 1998, , 501, L89 Chakrabarty, D., Morgan, E. H., Muno, M. P., et al. 2003, , 424, 42 Chandrasekhar, S. 1970, Physical Review Letters, 24, 611 Chandrasekhar, S., & Detweiler, S. 1975, Proceedings of the Royal Society of London Series A, 344, 441 Chandrasekhar, S., & Ferrari, V. 1991, Proceedings of the Royal Society of London Series A, 432, 247 Ching, E. S. C., Leung, P. T., Suen, W. M., & Young, K. 1995, Physical Review Letters, 74, 4588 Ching, E. S. C., Leung, P. T., Suen, W. M., & Young, K. 1996, , 54, 3778 Chirenti, C., de Souza, G. H., & Kastaun, W. 2015, , 91, 044034 Cornelissen, G., & Marcolli, M. 2008, Journal of Geometry and Physics, 58, 619 Ferrari, V., & Gualtieri, L. 2008, General Relativity and Gravitation, 40, 945 Fradkin, E., & Moore, J. E. 2006, Physical Review Letters, 97, 050404 Friedman, J. L., & Morsink, S. M. 1998, , 502, 714 Friedman, J. L., & Schutz, B. F. 1978, , 222, 281 Giraud, O., & Thas, K. 2010, Reviews of Modern Physics, 82, 2213 Gordon, C., Webb, D. L., & Wolpert, S. 1992, Bulletin of the American Mathematical Society, 27, 134 Gordon, C., & Webb, D. L. 1996, American Scientist, 84, 46 Haskell, B. 2015, International Journal of Modern Physics E, 24, 1541007 Ho, K. C., Leung, P. T., Maassen van den Brink, A., & Young, K. 1998, , 58, 2965 Jaranowski, P., Kr[ó]{}lak, A., & Schutz, B. F. 1998, , 58, 063001 Kac, M. 1966, Amer. Math. Monthly, 73, 1 Kojima, Y. 1992, , 46, 4289 Kojima, Y., & Hosonuma, M. 1999, , 520, 788 Kokkotas, K. D. 1994, , 268, 1015 Kokkotas, K. D., Apostolatos, T. A., & Andersson, N. 2001, , 320, 307 Kokkotas, K. D., & Schmidt, B. G. 1999, Living Reviews in Relativity, 2, 2 Kokkotas, K. D., & Schutz, B. F. 1992, , 255, 119 Lau, H. K., Leung, P. T., & Lin, L. M. 2010, , 714, 1234 Lee, U. 2007, , 374, 1015 Lee, U. 
2008, , 385, 2069 Levin, Y. 1998, , 57, 659 Levitan, B. M., & Sargsjan, I. S. 1991, Kluwer, Dordrecht, 1991 Lindblom, L., & Detweiler, S. L. 1983, , 53, 73 Mastrano, A., Suvorov, A. G., & Melatos, A. 2015, , 447, 3475 Moncrief, V. 1974, Annals of Physics, 88, 343 Morsink, S. M., Stergioulas, N., & Blattnig, S. R. 1999, , 510, 854 Newton, R. G. 1960, Journal of Mathematical Physics, 1, 319 Nollert, H. P. 1999, Classical and Quantum Gravity, 16, R159 Nyquist, H., 1928, Transactions of the American Institute of Electrical Engineers, Volume 47, Issue 2, pp. 617-624, 47, 617 Oppenheimer, J. R., & Volkoff, G. M. 1939, Physical Review, 55, 374 Pal, S., Rajeev, K., & Shankaranarayanan, S. 2015, International Journal of Modern Physics D, 24, 1550083-253 Price, R., & Thorne, K. S. 1969, , 155, 163 Price, R. H., & Husain, V. 1992, Physical Review Letters, 68, 1973 Regge, T., & Wheeler, J. A. 1957, Physical Review, 108, 1063 Samuelsson, L., & Andersson, N. 2009, Classical and Quantum Gravity, 26, 155016 Shapiro, S. L., & Teukolsky, S. A. 1983, Research supported by the National Science Foundation. New York, Wiley-Interscience, 1983, 663 p., Sridhar, S., & Kudrolli, A. 1994, Physical review letters, 72, 2175 Stergioulas, N. 2003, Living Reviews in Relativity, 6, 3 Sunada., T. 195, Annals of Mathematics, 121, 169 Suvorov, A. G., & Melatos, A.  2016, , 94, 044045 Suvorov, A. G., & Melatos, A. 2017, , 96, 064032 Thorne, K. S. 1980, Reviews of Modern Physics, 52, 299 Thorne, K. S., & Campolattaro, A. 1967, , 149, 591 Tolman, R. C. 1939, Physical Review, 55, 364 Tominaga, K., Saijo, M., & Maeda, K.-I. 1999, , 60, 024004 Tsui, L. K., & Leung, P. T. 2005, Physical Review Letters, 95, 151101 Unno, W., Osaki, Y., Ando, H., & Shibahashi, H. 1979, Tokyo, University of Tokyo Press; Forest Grove, Ore., ISBS, Inc., 1979. 330 p., Vishveshwara, C. V. 1970, , 1, 2870 Wald, R. M. 1984, Chicago, University of Chicago Press, 1984, 504 p., Will, C. M. 1993, Theory and Experiment in Gravitational Physics, by Clifford M. Will, pp. 396. ISBN 0521439736. Cambridge, UK: Cambridge University Press, March 1993., 396 Zerilli, F. J. 1970, Physical Review Letters, 24, 737 Zwerger, T., & Mueller, E. 1997, , 320, 209 \[lastpage\] [^1]: E-mail: [email protected] [^2]: In this sense, *perfect* means having a set of observations from which one can construct the full spectrum of oscillation eigenmodes. [^3]: Throughout this work we adopt natural units with $G = c =1$, where $G$ is Newton’s constant and $c$ is the speed of light. [^4]: For rotating stars, the perturbation equations will, in general, not decouple in a simple way [@nondec2; @nondec1]. [^5]: Assuming that the spacetime is QNM-complete in the sense discussed in Sec. 2.1.
--- abstract: 'Refining previous work in [@Z.3; @MaZ.3; @Ra; @HZ; @HR], we derive sharp pointwise bounds on the behavior of perturbed viscous shock profiles for large-amplitude Lax or overcompressive type shocks and physical viscosity. These extend well-known results of Liu [@Liu97] obtained by somewhat different techniques for small-amplitude Lax type shocks and artificial viscosity, completing a program set out in [@ZH]. As pointed out in [@Liu91; @Liu97], the key to obtaining sharp bounds is to take account of cancellation associated with the property that the solution decays faster along characteristics than in other directions. Thus, we must here estimate characteristic derivatives for the entire nonlinear perturbation, rather than judiciously chosen parts as in [@Ra; @HR], a requirement that greatly complicates the analysis.' author: - 'Peter Howard[^1], Mohammadreza Raoofi[^2], and Kevin Zumbrun[^3]' title: Sharp pointwise bounds for perturbed viscous shock waves --- Introduction ============ In the landmark paper [@Liu97], Liu established sharp pointwise bounds on the asymptotic behavior of a perturbed viscous shock profile, for small-amplitude, Lax type shocks and artificial viscosity. A long-standing program of the authors, set out in [@ZH], has been to extend to large amplitudes, general type profiles, and physical (partially parabolic) viscosities results obtained by Liu and others for small-amplitude Lax type profiles and artificial viscosity. For various results in this direction, see, e.g., [@Z.3; @MaZ.3; @Ra; @HZ; @HR]. In this paper, we achieve a definitive result recovering the full bounds of [@Liu97] for general Lax or overcompressive type shocks and physical or artificial viscosity, thus completing the program of [@ZH]. The analysis involves an interesting blend of the spectral techniques introduced by the authors to deal with large amplitudes, described in Sections \[preliminaries\] and \[nonlinear\], a delicate type of cancellation estimate introduced by Liu in the small-amplitude context, described in Section \[liu\], and sharp convolution estimates of the type developed in [@H.1; @H.2; @H.3; @HZ; @HR], taking account of transversality but not cancellation of two interacting signals. Consider a (possibly) large-amplitude [*viscous shock profile*]{}, or traveling-wave solution $$\label{profile} \bar{u} (x - st); \quad \lim_{x \to \pm \infty} \bar{u} (x) = u_{\pm},$$ of systems of partially or fully parabolic conservation laws $$\label{main} \begin{aligned} u_t + F(u)_x &= (B(u)u_x)_x, \\ \end{aligned}$$ $x \in \mathbb{R}$, $u$, $F \in \mathbb{R}^n$, $B\in\mathbb{R}^{n \times n}$, where $$u = \begin{pmatrix} u^I \\ u^{II} \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ b_1 & b_2 \end{pmatrix},$$ $u^I \in \mathbb{R}^{n-r}$, $u^{II} \in \mathbb{R}^r$, $r$ some positive integer, possibly $n$ (full regularization), and $$\text{Re }\sigma(b_2) \ge \theta > 0.$$ Here and elsewhere, $\sigma$ denotes spectrum of a matrix or other linear operator. Working in a coordinate system moving along with the shock, we may without loss of generality consider a standing profile $\bar{u} (x)$, $s=0$.
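For orientation, we recall the simplest (scalar, fully parabolic) example, which is not itself treated in this paper: for Burgers’ equation, i.e. $F(u) = u^{2}/2$ and $B \equiv 1$ in (\[main\]), the standing Lax profile connecting $u_{\pm} = \mp a$, $a > 0$, is $$\bar{u}(x) = -a \tanh\left(\frac{a x}{2}\right),$$ obtained by integrating the profile ODE $\bar{u}' = F(\bar{u}) - F(u_-)$ once from $x = -\infty$. The waves studied below are system analogues of this example, with only partial parabolicity allowed.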
Following [@Z.2], we assume that, by some invertible change of coordinates $u\to w(u)$, followed if necessary by multiplication on the left by a nonsingular matrix function $S(w)$, equations (\[main\]) may be written in the [*quasilinear, partially symmetric hyperbolic-parabolic form*]{} $$\tilde A^0 w_t + \tilde A w_{x}= (\tilde B w_{x})_{x} + G, \quad w=\left(\begin{matrix} w^I \\w^{II}\end{matrix}\right), \label{symm}$$ $w^I\in \mathbb{R}^{n-r}$, $w^{II}\in \mathbb{R}^r$, $x\in \mathbb{R}$, $t\in \mathbb{R}_+$, where, defining $w_\pm:= w(u_\pm)$:\ (A1)$\tilde A(w_\pm)$, $\tilde A_{11}$, $\tilde A^0$ are symmetric, $\tilde A^0 >0$.\ (A2)Dissipativity: no eigenvector of $ dF(u_\pm)$ lies in the kernel of $ B(u_\pm)$. (Equivalently, no eigenvector of $ \tilde A (\tilde A^0)^{-1} (w_\pm)$ lies in the kernel of $\tilde B (\tilde A^0)^{-1}(w_\pm)$.)\ (A3) $ \tilde B= \left(\begin{matrix} 0 & 0 \\ 0 & \tilde b \end{matrix}\right) $, $ \tilde G= \left(\begin{matrix} 0 \\ \tilde g\end{matrix}\right) $, with $ Re \tilde b(w) \ge \theta $ for some $\theta>0$, for all $W$, and $\tilde g(w_x,w_x)=\mathbf{O}(|w_x|^2)$. Here, the coefficients of (\[symm\]) may be expressed in terms of the original equation (\[main\]), the coordinate change $u\to w(u)$, and the approximate symmetrizer $S(w)$, as $$\begin{aligned} \tilde A^0&:= S(w)(\partial u/\partial w),\quad \tilde A:= S(w)d F(u(w))(\partial u/\partial w),\\ \tilde B&:= S(w)B(u(w))(\partial u/\partial w), \quad G= -(dS w_{x}) B(u(w))(\partial u/\partial w) w_{x}. \end{aligned} \label{coeffs}$$ Alternatively, we assume, simply, (B1)Strict parabolicity: $n=r$, or, equivalently, $\Re \sigma(B)>0$.\ Along with the above structural assumptions, we make the technical hypotheses:\ (H0)$F$, $B$, $w$, $S\in C^{10}$.\ (H1)The eigenvalues of $\tilde A_*:=\tilde{A}_{11}(\tilde{A}^0_{11})^{-1}$ are (i) distinct from $0$; (ii) of common sign; and (iii) of constant multiplicity with respect to $u$.\ (H2)The eigenvalues of $dF(u_\pm)$ are real, distinct, and nonzero.\ ([H3]{})Nearby $\bar u$, the set of all solutions of – connecting the same values $u_\pm$ forms a smooth manifold $\{\bar u^\delta\}$, $\delta\in \mathcal{U}\subset \mathbb{R}^\ell$, $\bar u^0=\bar u$. \[profileegs\] \[type\] An ideal shock $$\label{shock} u(x,t) = \begin{cases} u_- &x < st, \\ u_+ &x > st, \end{cases}$$ is classified as [*undercompressive*]{}, [*Lax*]{}, or [*overcompressive*]{} type according as $i-n$ is less than, equal to, or greater than $1$, where $i$, denoting the sum of the dimensions $i_-$ and $i_+$ of the center–unstable subspace of $df(u_-)$ and the center–stable subspace of $df(u_+)$, represents the total number of characteristics incoming to the shock. A viscous profile is classified as [*pure undercompressive*]{} type if the associated ideal shock is undercompressive and $\ell=1$, [*pure Lax*]{} type if the corresponding ideal shock is Lax type and $\ell=i-n$, and [*pure overcompressive*]{} type if the corresponding ideal shock is overcompressive and $\ell=i-n$, $\ell$ as in (H3). Otherwise it is classified as [*mixed under–overcompressive*]{} type; see [@ZH]. Pure Lax type profiles are the most common type, and the only type arising in standard gas dynamics, while pure over- and undercompressive type profiles arise in magnetohydrodynamics (MHD) and phase-transitional models. In this paper, we restrict to the case of pure Lax or overcompressive type shocks. Finally, we assume that the profile satisfies a linearized stability criterion based on the [*Evans function*]{}. 
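Before turning to the Evans function, we note a concrete instance of the above count (a standard case, recalled only for orientation): for a Lax $2$-shock of a $3\times 3$ system with $s = 0$, so that $a_1^- < 0 < a_2^- < a_3^-$ and $a_1^+ < a_2^+ < 0 < a_3^+$, one finds $$i = i_- + i_+ = 2 + 2 = n + 1,$$ whence for a pure Lax profile $\ell = i - n = 1$, the single parameter in (H3) corresponding to translation of the profile.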
As described, e.g., in [@AGJ; @E; @GZ; @J; @KS; @MaZ.3; @Z.3; @ZH], the Evans function $D(\lambda)$, a Wronskian constructed from solutions of the associated eigenvalue equation, serves as a characteristic function for the linear operator $L$ that arises upon linearization of (\[main\]) about $\bar{u} (x)$. More precisely, away from essential spectrum, zeros of the Evans function correspond in location and multiplicity with eigenvalues of $L$ [@AGJ; @GZ; @ZH]. It was shown in [@ZH] and [@MaZ.3], respectively for the strictly parabolic and real viscosity cases, that $L^1 \cap L^p \to L^p$ linearized orbital stability of the profile, $p>1$, is equivalent to the Evans function condition ($\mathcal{D}$) There exist precisely $\ell$ zeroes of $D(\cdot)$ in the nonstable half-plane $\R \lambda \ge 0$, necessarily at $\lambda=0$, where $\ell$ as in (H3) is the dimension of the manifold connecting $u_-$ and $u_+$. Under assumptions (A0)–(A3) \[alt. (B1)\] and (H0)–(H3), condition ($\mathcal{D}$) is equivalent to (i) [*strong spectral stability*]{}, $\sigma(L)\subset \{\R \lambda \le 0\}\cup \{0\}$, (ii) [*hyperbolic stability*]{} of the associated ideal shock, and (iii) [*transversality*]{} of $\bar u$ as a solution of the connection problem in the associated traveling-wave ODE, where hyperbolic stability is defined for Lax and undercompressive shocks by the Lopatinski condition of [@M.1; @M.2; @M.3; @Fre] and for Lax and overcompressive shocks by the analogous long-wave stability condition ($\mathcal{D}$ii), below; see [@ZH; @MaZ.2; @ZS; @Z.2] for further explanation. \[satisfaction\] Setting $A_\pm:=df(u_\pm)$, $\Gamma_\pm:=d^2f(u_\pm)$, and $B_\pm:=B(u_\pm)$, denote by $$a_1^-< a_2^-<\dots < a_n^- \quad \text{\rm and } a_1^+< a_2^+<\dots < a_n^+ \label{a}$$ the eigenvalues of $A_-$ and $A_+$, and $l_j^\pm$, $r_j^\pm$ left and right eigenvectors associated with each $a_j^\pm$, normalized so that $(l_j^{T}r_k)_\pm=\delta^j_k$, where $\delta^j_k$ is the Kronecker delta function, returning $1$ for $j=k$ and $0$ for $j\ne k$. Define scalar diffusion coefficients $$\beta_j^\pm:= (l_j^T Br_j)_\pm \label{beta}$$ and scalar coupling coefficients $$\gamma_j^\pm:= (l_j^T \Gamma (r_j,r_j))_\pm. \label{gamma}$$ Under this notation, hyperbolic stability (a consequence of the assumed ($\mathcal{D})$) of a Lax or overcompressive shock profile $\bar u$ is the condition: ($\mathcal{D}$ii)The set $\{r_j^\pm: a_j^\pm \gtrless 0\} \cup \{\int_{-\infty}^{+\infty}\frac{\partial \bar{u}^\delta}{\partial \delta_i} dx; i=1, \cdots, \ell \}$ forms a basis for $\mathbb{R}^n$, with $\int_{-\infty}^{+\infty}\frac{\partial \bar{u}^\delta}{\partial \delta_i} dx$ computed at $\delta=0.$\ \[majda\] Following [@Liu85; @Liu97], define for a given mass $m_j^-$ the scalar diffusion waves $\varphi_j^-(x,t;m_j^-)$ as (self-similar) solutions of the Burgers equations $$\varphi_{j,t}^- + a^-_j\varphi^-_{j,x} - \beta^-_j \varphi_{j,xx}^- = -\gamma^-_j ((\varphi_{j}^-)^2)_x \label{diffusionwaves}$$ with point-source initial data $$\varphi_j^- (x, -1) = m_j^-\delta_0(x), \label{burgersdata}$$ and similarly for $\varphi_j^+(x,t;m_j^+)$. Given a collection of masses $m_j^\pm$ prescribed on outgoing characteristic modes $a_j^-<0$ and $a_j^+>0$, define $$\varphi(x,t) =\sum_{a_j^- <0}\varphi_j^-(x,t; m_j^-) r^-_j + \sum_{a_j^+>0}\varphi_j^+(x,t; m_j^+) r^+_j. 
\label{phi}$$ In the setting described above, we will determine estimates on perturbed viscous shock profiles in terms of $\phi$ and a refined collection of  template  functions (terminology following [@ZH]; notation following \[Liu97\]). Let $$\label{templates} \begin{aligned} \psi_1 (x, t) &= \sum_{a_k^\pm \gtrless 0} (1 + |x - a_k^\pm t| + t^{1/2})^{-3/2} \\ \psi_2 (x, t) &= (1+|x|)^{-1/2} (1 + |x| + t)^{-1/2} (1 + |x| + t^{1/2})^{-1/2} \chi \\ \psi_3 (x, t) &= (1 + |x| + t)^{-1} (1 + |x|)^{-1} \chi \\ \psi_4 (x, t) &= (1 + |x| + t)^{-7/4} \chi, \end{aligned}$$ where $\chi$ denotes an indicator function on $x \in [a_1^- t, a_n^+ t]$, and also $$\begin{aligned} \psi_1^{j, \pm} (x, t) &= (1 + |x - a_j^\pm t| + t^{1/2})^{-3/2} \\ \bar{\psi}_1^{j,\pm} (x, t) &= \psi_1 (x, t) - \psi_1^{j, \pm}. \end{aligned}$$ The goal of our analysis is to establish the following sharp pointwise description of asymptotic behavior, generalizing bounds obtained by Liu [@Liu97] for small-amplitude Lax type profiles with artificial viscosity $B=I$. \[pointwiseestimates\] Assume (A1)–(A3) \[alt. (B1)\], (H0)–(H3) and $(\mathcal{D})$ hold, and $\bar{u}$ is a pure Lax or overcompressive shock profile. Assume also that $\tilde{u}$ solves (\[main\]) with initial data $\tilde{u}_0$ and that, for initial perturbation $u_0:=\tilde{u}_0-\bar{u}$, we have $|u_0|_{H^5} \le E_0$ and $|u_0(x)|$, $|\partial_x u_0(x)|$, and $|\partial_x^2 u_0(x)|\le E_0(1+|x|)^{-\frac32}$, for $E_0$ sufficiently small. Then, the solution $\tilde u$ continues globally in time, with $\tilde u-\bar u\in L^1\cap H^5$. Moreover, there exist a choice of $|m_j^\pm|$, $|\delta_*|\le CE_0$ and a function $\delta(t)$ (determined, respectively, by (\[masses\] and (\[delta\])), such that, for $v:=\tilde{u}-\bar{u}^{\delta_*}- \varphi-\frac{\partial \bar{u}^\delta}{\partial \delta}(\delta_*)\delta$, $$\label{1stestimate} \begin{aligned} |v (x,t)| &\le C E_0 \Big[\psi_1 + \psi_2 \Big] (x, t) \\ |\partial_x v (x,t)| &\le C E_0 \Big[ t^{-1/2} (\psi_1+\psi_2) + \psi_3 + \psi_4 \Big] (x,t) \\ |\delta (t)|&\le C E_0(1+t)^{-\frac 12} \\ |\dot \delta (t)| &\le C E_0(1+t)^{-1} \end{aligned}$$ and, for all $a_j^\pm \gtrless 0$, $$\label{1stest2} |(\partial_t + a_j^\pm \partial_x) v(x,t)| \le CE_0 \Big[t^{-1} (1+t)^{1/4} \psi_1^{j,\pm} + t^{-1/2} (\bar{\psi}_1^{j,\pm} + \psi_2) + \psi_3 + \psi_4 \Big] (x, t), $$ for some constant $C$ (independent of $x,t$ and $E_0$). \[taylor\] \[modeests\] Liu’s analysis involved essentially two main ingredients, which were strongly coupled. The first was to obtain approximate Green function bounds taking advantage of the weakly coupled nature of the equations in the small-amplitude case to approximate by a superposition of solutions of scalar conservation laws; the second, by delicate pointwise interaction estimates between the approximate Green kernel and various algebraically decaying source terms, to close a nonlinear iteration and obtain the result. The “coupling” we mention refers to the fact that the Green function estimates blow up as amplitude goes to zero and the shock becomes more and more characteristic, whereas the source terms decay with amplitude; thus, the two effects must be delicately balanced to close the iteration and achieve a correct result. 
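(Returning briefly to the template functions (\[templates\]): a quick consistency check. Taking suprema in $x$, $$\sup_x \psi_1(x,t) \sim (1 + t^{1/2})^{-3/2}, \qquad \sup_x \psi_2(x,t) \sim (1+t)^{-1/2}(1+t^{1/2})^{-1/2},$$ both of order $(1+t)^{-3/4}$, while $|\psi_1(\cdot,t)|_{L^1(dx)} \sim (1+t)^{-1/4}$. These agree with the $L^\infty$ and $L^1$ rates recalled in Proposition \[propra\] below, so the pointwise description of Theorem \[pointwiseestimates\] is consistent with, and sharpens, the $L^p$ theory.)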
In the large-amplitude case, bounded away from the characteristic limit, the issues are somewhat different; namely, we do not have this “characteristic coupling” problem, but on the other hand we cannot, as in [@Liu97], obtain approximate Green function bounds by asymptotic development in the amplitude. Likewise, for physical, partially parabolic viscosity, there are new difficulties associated with regularity and the need to gain derivatives. These new difficulties have largely been surmounted in the authors’ previous work. In particular, (i) sharp Green function bounds have been obtained by Mascia–Zumbrun [@MaZ.2] in great generality using Laplace transform/stationary phase estimates, and (ii) global existence and sharp $H^s\cap L^p$ estimates on the nonlinear residual $v$ have been obtained by Raoofi [@Ra] using the linearized estimates of [@MaZ.3], a key cancellation estimate of Liu, and a nonstandard energy estimate to control higher derivatives. These results are described in more detail in Sections \[preliminaries\] and \[liu\], and we shall use them freely in our analysis. A major advantage of this “bootstrap” approach is that we need not close a nonlinear iteration, but only carry out a [*linear*]{} fixed-point argument to obtain our result. This was used in the previous work [@HR] to establish “nearly optimal” pointwise bounds in a relatively uncomplicated manner. As pointed out in [@ZH; @HR], however, to obtain the full bounds of Liu requires estimates not only on $v$ and $v_x$ as in [@HR], but also [*characteristic derivatives*]{} $(\partial_t +a_j^\pm\partial_x)v$ as in [@Liu97]. To carry out these bounds requires an immense amount of additional work, with consideration of numerous different cases, and accounts for most of the work of this paper. \[sp\] \[energysp\] [**Plan of the paper.**]{} In Section \[preliminaries\], we recall the results of [@MaZ.2; @HR] for our use. In Section \[nonlinear\], we give the brief argument establishing Theorem \[pointwiseestimates\], subject to certain integral estimates. In Section \[liu\], we motivate the analysis to follow by reviewing a key cancellation estimate of Liu [@Liu85] upon which our analysis, and that of [@Liu97], depends. In Sections \[integralestimatesL\], \[integralestimatesNLI\], and \[integralestimatesNLII\] we carry out the main work of the paper, establishing the deferred integral estimates. Preliminaries ============= We start by recalling some needed, known results. Profile estimates {#profileestimates} ----------------- We first recall the profile analysis carried out in [@MaZ.2], generalizing results of [@MP] in the strictly parabolic case. Profile $\bar u (x)$ satisfies the standing-wave ordinary differential equation (ODE) $$B(\bar u) \bar u'=F(\bar u)-F(u_-). \label{ODE}$$ Considering the block structure of $B$, this can be written as: $$F^I(u^I, u^{II})\equiv F^I(u_-^I, u_-^{II})\label{eq1}$$ and $$b_1(u^I)' + b_2(u^{II})'= F^{II}(u^I, u^{II}) - F^{II}(u_-^I, u_-^{II}). \label{eq2}$$ \[expdecay\]\[[@MaZ.2]\] Given (H1)–(H3), (\[eq1\]) determines a smooth $r$-dimensional manifold on which (\[eq2\]) determines a nondegenerate ODE. Moreover, endstates $u_\pm$ are hyperbolic restpoints of this ODE, i.e., the coefficients of the linearized equations about $u_\pm$, written in local coordinates, have no center subspace.
In particular, under regularity (H0), $$D_x^j D_\delta^i(\bar u^\delta(x)-u_{\pm})=\mathbf{O}(e^{-\alpha|x|}) \, \, \text{\rm as $x\rightarrow\pm\infty$}, \qquad \alpha>0, \, 0\le j\le 10, \,i=0,1.$$ Linearized equations and Green distribution bounds {#linearized} -------------------------------------------------- We next recall some linear theory from [@MaZ.2; @ZH; @HZ]. Linearizing (\[main\]) about ${\bar{u}}^{\delta_*}(\cdot)$, $\delta_*$ to be determined later, gives $$v_t=Lv:=-(Av)_x+(Bv_x)_x, \label{linearov}$$ with $$B(x):= B({\bar{u}}^{\delta_*}(x)), \quad A(x)v:= df({\bar{u}}^{\delta_*}(x))v-dB({\bar{u}}^{\delta_*}(x))v{\bar{u}}^{\delta_*}_x. \label{AandBov}$$ Denoting $A^\pm := A(\pm \infty)$, $B^\pm:= B(\pm \infty)$, and considering Lemma \[expdecay\], it follows that $$|A(x)-A^-|= \mathbf {O} (e^{-\eta |x|}), \quad |B(x)-B^-|= \mathbf {O} (e^{-\eta |x|}) \label{ABboundsov}$$ as $x\to -\infty,$ for some positive $\eta.$ Similarly for $A^+$ and $B^+,$ as $x\to +\infty.$ Also $|A(x)-A^\pm|$ and $|B(x)-B^\pm|$ are bounded for all $x$. Define the [*(scalar) characteristic speeds*]{} $a^\pm_1< \cdots < a_n^\pm$ (as above) to be the eigenvalues of $A^\pm$, and the left and right [*(scalar) characteristic modes*]{} $l_j^\pm$, $r_j^\pm$ to be corresponding left and right eigenvectors, respectively (i.e., $A^\pm r_j^\pm = a_j^\pm r_j^\pm,$ etc.), normalized so that $l^+_j \cdot r^+_k=\delta^j_k$ and $l^-_j \cdot r^-_k=\delta^j_k$. Following Kawashima [@Kaw], define associated [*effective scalar diffusion rates*]{} $\beta^\pm_j:j=1,\cdots,n$ by relation $$\left( \begin{matrix} \beta_1^\pm &&0\\ &\vdots &\\ 0&&\beta_n^\pm \end{matrix} \right) \quad = \hbox{diag}\ L^\pm B^\pm R^\pm, \label{betaov}$$ where $L^\pm:=(l_1^\pm,\dots,l_n^\pm)^t$, $R^\pm:=(r_1^\pm,\dots,r_n^\pm)$ diagonalize $A^\pm$. Assume for $A$ and $B$ the block structures: $$A=\left(\begin{matrix}A_{11}\quad A_{12}\\A_{21}\quad A_{22}\end{matrix}\right), B=\left(\begin{matrix}0& 0\\B_{21}& B_{22}\end{matrix}\right).$$ Also, let $a^{*}_j(x)$, $j=1,\dots,(n-r)$ denote the eigenvalues of $$A_{*}:= A_{11}- A_{12} B_{22}^{-1}B_{21}, \label{A*}$$ with $l^*_j(x)$, $r^*_j(x)\in {\mathbb{R}}^{n-r}$ associated left and right eigenvectors, normalized so that $l^{*t}_jr_j\equiv \delta^j_k$. More generally, for an $m_j^*$-fold eigenvalue, we choose $(n-r)\times m_j^* $ blocks $L_j^*$ and $R_j^*$ of eigenvectors satisfying the [*dynamical normalization*]{} $$L_j^{*t}\partial_x R_j^{*}\equiv 0,$$ along with the usual static normalization $L^{*t}_jR_j\equiv \delta^j_kI_{m_j^*}$; as shown in Lemma 4.9, [@MaZ.1], this may always be achieved with bounded $L_j^*$, $R_j^*$. Associated with $L_j^*$, $R_j^*$, define extended, $n\times m_j^*$ blocks $$\mathcal{L}_j^*:=\left(\begin{matrix} L_j^* \\ 0\end{matrix}\right), \quad \mathcal{R}_j^*:= \left(\begin{matrix} R_j^*\\ -B_{22}^{-1}B_{21} R_j^*\end{matrix}\right). \label{CalLR}$$ Eigenvalues $a_j^*$ and eigenmodes $\mathcal{L}_j^*$, $\mathcal{R}_j^*$ correspond, respectively, to short-time hyperbolic characteristic speeds and modes of propagation for the reduced, hyperbolic part of degenerate system (\[main\]). 
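In practice, the endstate quantities introduced at the beginning of this subsection are computed by routine linear algebra. The following toy sketch (with assumed $2\times 2$ endstate data respecting the block structure of $B$, with $b_2 > 0$; the numbers have no physical meaning) illustrates the computation of the characteristic speeds $a_j$ and the effective diffusion rates $\beta_j$ of (\[betaov\]):

```python
# Toy sketch (assumed data, no physical meaning): compute characteristic
# speeds a_j and effective diffusion rates beta_j = l_j^t B r_j at an
# endstate, as in the relation defining beta above.
import numpy as np

A_plus = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
B_plus = np.array([[0.0, 0.0],
                   [0.1, 0.2]])

a, R = np.linalg.eig(A_plus)      # columns of R are right eigenvectors r_j
idx = np.argsort(a)
a, R = a[idx], R[:, idx]
L = np.linalg.inv(R)              # rows of L are left eigenvectors l_j^t

beta = np.diag(L @ B_plus @ R)    # beta_j = l_j^t B_plus r_j
print("characteristic speeds a_j:", a)
print("effective diffusion rates beta_j:", beta)
```

Taking the rows of $R^{-1}$ as the left eigenvectors enforces the normalization $l_j^t r_k = \delta^j_k$ used throughout.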
Define local, $m_j\times m_j$ [*dissipation coefficients*]{} $$\eta_j^*(x):= -L_j^{*t} D_* R_j^* (x), \quad j=1,\dots,J\le n-r, \label{eta}$$ where $$\aligned &{D_*}(x):= \, A_{12}B_{22}^{-1} \Big[A_{21}-A_{22} B_{22}^{-1} B_{21}+ A_{*} B_{22}^{-1} B_{21} + B_{22}\partial_x (B_{22}^{-1} B_{21})\Big] \endaligned \label{D*}$$ is an effective dissipation analogous to the effective diffusion predicted by formal, Chapman–Enskog expansion in the (dual) relaxation case. The [*Green distribution*]{} (fundamental solution) associated with (\[linearov\]) is defined by $$G(x,t;y):= e^{Lt}\delta_y (x). \label{ov3.7}$$ Recalling the standard notation $ \textrm{errfn} (z) := \frac{1}{2\pi} \int_{-\infty}^z e^{-\xi^2} d\xi, $ we have the following pointwise description. \[greenbounds\][@MaZ.2] Under assumptions (A1)–(A3) \[alt. (B1)\], (H0)–(H3), and $(\mathcal D)$, the Green distribution $G(x,t;y)$ associated with the linearized evolution equations may be decomposed as $$G(x,t;y)= H + E+ S + R, \label{ourdecomp}$$ where, for $y\le 0$: $$\begin{aligned} H(x,t;y)&:= \sum_{j=1}^{J} a_j^{*-1}(x) a_j^{*}(y) \mathcal{R}_j^*(x) \zeta_j^*(y,t) \delta_{x-\bar a_j^* t}(-y) \mathcal{L}_j^{*t}(y)\\ &= \sum_{j=1}^{J} \mathcal{R}_j^*(x) \mathcal{O}(e^{-\eta_0 t}) \delta_{x-\bar a_j^* t}(-y) \mathcal{L}_j^{*t}(y), \end{aligned} \label{multH}$$ where the averaged convection rates $\bar a_j^*= \bar a_j^*(x,t)$ in (\[multH\]) denote the time-averages over $[0,t]$ of $a_j^*(x)$ along backward characteristic paths $z_j^*=z_j^*(x,t)$ defined by $$dz_j^*/dt= a_j^*(z_j^*), \quad z_j^*(t)=x,$$ and the dissipation matrix $\zeta_j^*=\zeta_j^*(x,t)\in \mathbb{R}^{m_j^*\times m_j^*}$ is defined by the [*dissipative flow*]{} $$d\zeta_j^*/dt= -\eta_j^*(z_j^*)\zeta_j^*, \quad \zeta_j^*(0)=I_{m_j};$$ and $\delta_{x-\bar a_j^* t}$ denotes Dirac distribution centered at $x-\bar a_j^* t$. $$\label{E} E(x,t;y):=\sum_{j=1}^\ell \frac{\partial \bar u^\delta(x)}{\partial \delta_j}_{|\delta=\delta_*}e_j(y,t),$$ $$\label{e} e_j(y,t):=\sum_{a_k^{-}>0} \left(\textrm{errfn }\left(\frac{y+a_k^{-}t}{\sqrt{4\beta_k^{-}t}}\right) -\textrm{errfn }\left(\frac{y-a_k^{-}t}{\sqrt{4\beta_k^{-}t}}\right)\right) l_{jk}^{-}$$ $$\begin{aligned} S(x,t;y)&:= \chi_{\{t\ge 1\}}\sum_{a_k^-<0}r_k^- {l_k^-}^t (4\pi \beta_k^-t)^{-1/2} e^{-(x-y-a_k^-t)^2 / 4\beta_k^-t} \\ &+\chi_{\{t\ge 1\}} \sum_{a_k^- > 0} r_k^- {l_k^-}^t (4\pi \beta_k^-t)^{-1/2} e^{-(x-y-a_k^-t)^2 / 4\beta_k^-t} \left({\frac {e^{-x}}{e^x+e^{-x}}}\right)\\ &+ \chi_{\{t\ge 1\}}\sum_{a_k^- > 0, \, a_j^- < 0} [c^{j,-}_{k,-}]r_j^- {l_k^-}^t (4\pi \bar\beta_{jk}^- t)^{-1/2} e^{-(x-z_{jk}^-)^2 / 4\bar\beta_{jk}^- t} \left({\frac{e^{ -x}}{e^x+e^{-x}}}\right),\\ &+\chi_{\{t\ge 1\}} \sum_{a_k^- > 0, \, a_j^+ > 0} [c^{j,+}_{k,-}]r_j^+ {l_k^-}^t (4\pi \bar\beta_{jk}^+ t)^{-1/2} e^{-(x-z_{jk}^+)^2 / 4\bar\beta_{jk}^+ t} \left({\frac{e^{ x}}{e^x+e^{-x}}}\right),\\ \end{aligned} \label{Sov}$$ with $$z_{jk}^\pm(y,t):=a_j^\pm\left(t-\frac{|y|}{|a_k^-|}\right) \label{zjkov}$$ and $$\bar \beta^\pm_{jk}(x,t;y):= \frac{x^\pm}{a_j^\pm t} \beta_j^\pm + \frac{|y|}{|a_k^- t|} \left( \frac{a_j^\pm}{a_k^-}\right)^2 \beta_k^-, \label{betaaverageov}$$ The remainder $R$ and its derivatives have the following bounds. 
$$\begin{aligned} R(x,t;y)&= \mathbf{O}(e^{-\eta(|x-y|+t)})\\ &+\sum_{k=1}^n \mathbf{O} \left( (t+1)^{-1/2} e^{-\eta x^+} +e^{-\eta|x|} \right) t^{-1/2}e^{-(x-y-a_k^{-} t)^2/Mt} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{-} < 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O} ((t+1)^{-1/2} t^{-1/2}) e^{-(x-a_j^{-}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^+}, \\ &+ \sum_{a_k^{-} > 0, \, a_j^{+}> 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O} ((t+1)^{-1/2} t^{-1/2}) e^{-(x-a_j^{+} (t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^-}, \\ \end{aligned} \label{Rbounds}$$ $$\begin{aligned} R_y(x,t;y)&= \sum_{j=1}^J \mathbf{O}(e^{-\eta t})\delta_{x-\bar a_j^* t}(-y) + \mathbf{O}(e^{-\eta(|x-y|+t)})\\ &+\sum_{k=1}^n \mathbf{O} \left( (t+1)^{-1/2} e^{-\eta x^+} +e^{-\eta|x|}+e^{-\eta|y|} \right) t^{-1} e^{-(x-y-a_k^{-} t)^2/Mt} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{-} < 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O} ((t+1)^{-1/2} t^{-1}) e^{-(x-a_j^{-}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^+} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{+} > 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O} ((t+1)^{-1/2} t^{-1}) e^{-(x-a_j^{+}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^-}, \\ \end{aligned} \label{Rybounds}$$ $$\begin{aligned} R_x(x,t;y)&= \sum_{j=1}^J \mathbf{O}(e^{-\eta t})\delta_{x-\bar a_j^* t}(-y) + \mathbf{O}(e^{-\eta(|x-y|+t)})\\ &+\sum_{k=1}^n \mathbf {O} \left( (t+1)^{-1} e^{-\eta x^+} +e^{-\eta|x|} \right) t^{-1} (t+1)^{1/2} e^{-(x-y-a_k^{-} t)^2/Mt} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{-} < 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O}((t+1)^{-1/2} t^{-1}) e^{-(x-a_j^{-}(t-|y/a_k^-|))^2/Mt} e^{-\eta x^+} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{+} > 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O}((t+1)^{-1/2} t^{-1}) e^{-(x-a_j^{+}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^-}. \\ \end{aligned} \label{Rxbounds}$$ Moreover, for $|x-y|/t$ sufficiently large, $|G|\le Ce^{-\eta t}e^{-|x-y|^2/Mt)}$ as in the strictly parabolic case. Setting $\tilde{G} := S+R$, so that $G = H + E + \tilde{G}$, we have the following useful alternative bounds for $\tilde{G}$. \[altGFbounds\] Under the assumptions of Proposition \[greenbounds\], $\tilde{G}$ has the following bounds. 
$$\label{Gbounds} \begin{aligned} |\partial_{x,y}^\alpha &\tilde G(x,t;y)|\le \sum_{j=1}^J\sum_{\beta=0}^{\max \{0, |\alpha|-1\}} \mathbf{O}(e^{-\eta t})\partial_y^\beta \delta_{x-\bar a_j^* t}(-y) + \mathbf{O}(e^{-\eta(|x-y|+t)})\\ &+ C(t^{-|\alpha|/2} + |\alpha_x| e^{-\eta|x|}) \Big( \sum_{k=1}^n t^{-1/2}e^{-(x-y-a_k^{-} t)^2/Mt} e^{-\eta x^+} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{-} < 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} t^{-1/2} e^{-(x-a_j^{-}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^+}, \\ &+ \sum_{a_k^{-} > 0, \, a_j^{+}> 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} t^{-1/2} e^{-(x-a_j^{+} (t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^-}\Big), \\ |(\partial_t + a_l^- \partial_x) &\tilde G(x,t;y)|\le \sum_{j=1}^J \mathbf{O}(e^{-\eta t})\delta_{x-\bar a_j^* t}(-y) + \mathbf{O}(e^{-\eta(|x-y|+t)})\\ &+C t^{-3/2} \Big( e^{-(x-y-a_l^{-} t)^2/Mt} e^{-\eta x^+} + \sum_{a_k^- > 0} e^{-(x-a_l^{-}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^+} \Big) \\ &+ Ct^{-1/2} \Big( e^{-(x-y-a_l^{-} t)^2/Mt} e^{-\eta |x|} I_{\{x \ge 0\}} + \sum_{a_k^- > 0} e^{-(x-a_l^{-}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta |x|} I_{\{x \ge 0\}} \Big) \\ & + C(t^{-1} + e^{-\eta|x|}) \Big( \sum_{k \ne l} t^{-1/2}e^{-(x-y-a_k^{-} t)^2/Mt} e^{-\eta x^+} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{-} < 0, j \ne l} \chi_{\{ |a_k^{-} t|\ge |y| \}} t^{-1/2} e^{-(x-a_j^{-}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^+}, \\ &+ \sum_{a_k^{-} > 0, \, a_j^{+}> 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} t^{-1/2} e^{-(x-a_j^{+} (t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^-}\Big), \\ \end{aligned}$$ $0\le |\alpha|\le 2$, for $y\le 0$, and symmetrically for $y\ge 0$, for some $\eta$, $C$, $M>0$, where $a_j^\pm$ are as in Proposition \[greenbounds\], $\beta_k^\pm>0$, $x^\pm$ denotes the positive/negative part of $x$, and indicator function $\chi_{\{ |a_k^{-}t|\ge |y| \}}$ is $1$ for $|a_k^{-}t|\ge |y|$ and $0$ otherwise. Moreover, all estimates are uniform in the supressed parameter $\delta_*$. \[eboundsrmk\] $L^p$ decay {#Lp} ----------- Finally, we recall the $H^s\cap L^p$ result of [@Ra] established using the linearized bounds of [@MaZ.2] and Hausdorff–Young’s inequality together with an $H^s$ energy estimate and the cancellation estimate described in Section \[liu\], below. \[propra\] Under the conditions of Theorem \[pointwiseestimates\], there exists a unique, global solution $\tilde u$ of (\[main\]), $\tilde u\in L^1\cap H^5$. Moreover, for $v:=\tilde{u}-\bar{u}^{\delta_*}- \varphi-\frac{\partial \bar{u}^\delta}{\partial \delta}(\delta_*)\delta$, with $|m_j^\pm|$, $|\delta_*|\le CE_0$ and $\delta(t)$ defined as in (\[masses\] and (\[delta\]), $$\label{vlp} |v(\cdot,t)|_{L^p}\le C E_0 (1+t)^{-\frac{1}{2}(1-\frac1p)-\frac 14}, \qquad 1 \leq p \leq \infty,$$ and $$\label{vhs} |v(\cdot,t)|_{H^5}\le C E_0 (1+t)^{-\frac{1}{2}},$$ for some constant $C>0$, Nonlinear analysis: Proof of Theorem \[pointwiseestimates\] {#nonlinear} =========================================================== We now carry out the proof of Theorem \[pointwiseestimates\] assuming certain integral estimates to be established in Sections \[integralestimatesL\], \[integralestimatesNLI\], and \[integralestimatesNLII\]. Let $\tilde{u}$ be the solution of (\[main\]) guaranteed by Proposition \[propra\]. 
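For orientation, we record the elementary evaluation of the exponent in (\[vlp\]) (included only for the reader's convenience): the rate interpolates between $(1+t)^{-1/4}$ in $L^1$ and $(1+t)^{-3/4}$ in $L^\infty$, and agrees at $p=2$ with the $(1+t)^{-1/2}$ rate of (\[vhs\]), $$\frac{1}{2}\Big(1-\frac{1}{p}\Big)+\frac{1}{4}= \begin{cases} \frac14, & p=1,\\[1mm] \frac12, & p=2,\\[1mm] \frac34, & p=\infty. \end{cases}$$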
Following [@Liu85; @Liu97; @Ra], let $|m_i|$, $|\delta_*|\le CE_0$ be the unique solutions guaranteed by ($\cal{D}$ii) and the Implicit Function Theorem of $$\label{masses} \int_{-\infty}^{+\infty} \big(\tilde{u} (x, 0) -\bar{u}^{\delta_*}(x)\big)\, dx = \sum_{a_j^- <0}m_j r_j^- + \sum_{a_j^+ >0}m_j r_j^+,$$ or, equivalently, $\int_{-\infty}^{+\infty} \tilde{u} (x, 0)\, dx =\int_{-\infty}^{+\infty} \big(\bar{u}^{\delta_*}(x)+\varphi(x,0)\big)\, dx. $ This determines the asymptotic state, by conservation of mass. Setting $u(x,t):=\tilde{u}(x,t)-\bar{u}^{\delta_*}(x)$ and Taylor expanding about $\bar{u}^{\delta_*}(x)$, we find $$u_t + (A(x)u)_x - (B(x)u_x)_x = -(\Gamma(x)(u,u))_x + Q(u, u_x)_x, \label{utaylor}$$ where $\Gamma(x)(u,u) = d^2f(\bar{u}^{\delta_*})(u,u)- d^2B(\bar{u}^{\delta_*})(u,u)\bar{u}^{\delta_*}_x$ and $$Q(u, u_x) = \mathbf {O} (|u||u_x|+|u|^3),$$ with $\Gamma_\pm = \Gamma(\pm \infty).$ Define constant coefficients $b^{\pm}_{ij}$ and $\Gamma_{ijk}^{\pm}$ to satisfy $$\Gamma_\pm (r^\pm_j, r^\pm_k) = \sum_{i=1}^n \Gamma_{ijk}^{\pm} r^\pm_i,\quad B_\pm r^\pm_j= \sum_{i=1}^n b^{\pm}_{ij} r^\pm_i. \label{coefshock}$$ Then, of course, $\beta^\pm_i = b^{\pm}_{ii}$ and $\gamma^\pm_i := \Gamma_{iii}^{\pm}.$ Continuing, set $v:= u -\varphi-\frac{\partial\bar{u}^\delta}{\partial\delta}\delta(t)$, with $\delta(t)$ to be defined later and $\delta(0)=0$. Notice that, by our choice of $\delta_*$ and of the diffusion waves $\varphi_i^\pm$, the initial mass of $v$ is zero, i.e., $$\label{initial0mass} \int_{-\infty}^{+\infty} v(x,0)dx = 0.$$ Replacing $u$ with $v + \varphi+\frac{\partial\bar{u}^\delta}{\partial\delta}\delta(t)$ in (\[utaylor\]) (with $\frac{\partial \bar{u}^\delta}{\partial \delta_i}$ computed at $\delta=\delta_*$), and using the fact that $\frac{\partial\bar{u}^\delta}{\partial \delta_i}$ satisfies the linear, time-independent equation $Lv=0$, we obtain $$v_t - Lv = \Phi(x,t)+ \mathcal{F}(\varphi, v, \frac{\partial\bar{u}^\delta}{\partial\delta}\delta(t))_x + \frac{\partial\bar{u}^\delta}{\partial\delta}\, \dot \delta (t), \label{khati}$$ where $$\label{fcalt} \begin{split} \mathcal{F}(\varphi, v, \frac{\partial\bar{u}^\delta}{\partial\delta}\delta &) = \mathbf {O} ( |v|^2 + |\varphi||v| + |v||\frac{\partial\bar{u}^\delta}{\partial\delta}\delta| + |\varphi|| \frac{\partial\bar{u}^\delta}{\partial\delta} \delta| +|\frac{\partial\bar{u}^\delta}{\partial\delta}\delta|^2 \\ &+ |(\varphi + v+\frac{\partial\bar{u}^\delta}{\partial\delta}\delta) (\varphi + v + \frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_x| + |\varphi +v+\frac{\partial\bar{u}^\delta}{\partial\delta}\delta|^3 ). \end{split}$$ Furthermore, $$\begin{aligned} \mathcal{F}(\varphi, v, \frac{\partial\bar{u}^\delta}{\partial\delta}\delta(t))_x& = \mathbf{O}\big(\mathcal{F}(\varphi,v,\frac{\partial\bar{u}^\delta}{\partial\delta}\delta)\\ &+ |(v+\varphi+\frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_x| |(v^{II}+\varphi+\frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_x|\\ &+|v+\varphi+\frac{\partial\bar{u}^\delta}{\partial\delta}\delta| |(v^{II}+\varphi+\frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_{xx}|\big), \label{fcalx} \end{aligned}$$ and $\Phi(x,t) := -\varphi_t - (A(x)\varphi)_x + (B(x)\varphi_x)_x -(\Gamma(x)(\varphi , \varphi))_x$. 
For $\Phi$ we write $$\begin{split} \Phi(x,t) = &-\big(\varphi_t +(A(x)\varphi)_x - (B(x)\varphi_x)_x + (\Gamma(x)(\varphi, \varphi))_x\big) \\ = &-\sum_{a_i^- < 0} \Big[\varphi_t^i r_i^- + (A(x)\varphi^i r_i^-)_x - (B(x)\varphi_x^i r_i^-)_x + (\Gamma(x)(\varphi^ir_i^-,\varphi^ir_i^-))_x\Big]\\ &- \sum_{a_i^+ > 0} \Big[\varphi_t^i r_i^+ + (A(x)\varphi^i r_i^+)_x - (B(x)\varphi_x^i r_i^+)_x + (\Gamma(x)(\varphi^ir_i^+,\varphi^ir_i^+))_x\Big]\\ & - \sum_{i\neq j}(\varphi_i \varphi_j \Gamma(x)(r_i^\pm, r_j^\pm))_x. \end{split} \label{ghati}$$ Let us write a typical term of the first summation ($a_i^- < 0$) in the following form: $$\begin{split} \varphi_t^i & r_i^- + (A(x)\varphi^i r_i^-)_x - (B(x)\varphi_x^i r_i^-)_x + (\Gamma(x)(\varphi^ir_i^-,\varphi^ir_i^-))_x\\ &= \left[(A(x)-A^-)\varphi^i r_i^- - (B(x)-B^-)\varphi_x^i r_i^- + (\Gamma(x)-\Gamma^-)(\varphi^ir_i^-,\varphi^ir_i^-)\right]_x\\ &+ \varphi_t^i r_i^- + (\varphi^i_xA^- r_i^-) - (\varphi_{xx}^i B^- r_i^-) + ((\varphi^{i})^2_x\Gamma^-(r_i^-,r_i^-)). \end{split} \label{ghatitar}$$ Now we use the definition of $\varphi^i$ in (\[diffusionwaves\]) and the definition of the coefficients $b_{ij}$ and $\Gamma_{ijk}$ in (\[coefshock\]) to write the last part of (\[ghatitar\]) in the following form: $$\begin{split} \varphi_t^i r_i^- + (\varphi^i_xA^- r_i^-) - &(\varphi_{xx}^i B^- r_i^-) + ((\varphi^i)^2_x\Gamma^-(r_i^-,r_i^-))\\ &= -\varphi^i_{xx}\sum_{j\ne i} b^-_{ij}r^-_j - (\varphi^i)^2_{x}\sum_{j\ne i} \Gamma^-_{jii}r^-_j. \end{split} \label{ajabaa}$$ Similar statements hold for $a_i^+ > 0$, with the superscripts $-$ replaced by $+$. Applying Duhamel’s principle, we obtain from (\[khati\]) $$\begin{aligned} v(x,t) &=\int^{+\infty}_{-\infty} G(x,t;y)v_0(y)dy\\ &+\int^t_0 \int^{+\infty}_{-\infty}G(x,t-s;y)\mathcal{F}(\varphi, v, \frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_y(y,s) dy \, ds\\ &+ \int^t_0\int^{+\infty}_{-\infty}G(x,t-s;y)\Phi(y,s) dy \, ds + \frac{\partial\bar{u}^\delta}{\partial\delta} \delta(t),\\ \label{shswgreen} \end{aligned}$$ where we have used the identity $\int_{-\infty}^{+\infty} G(x,t;y)\frac{\partial\bar{u}^\delta}{\partial \delta_i}(y)dy=e^{Lt}\frac{\partial\bar{u}^\delta}{\partial \delta_i}=\frac{\partial\bar{u}^\delta}{\partial \delta_i}$ and $\delta(0)=0$. Assuming $$\begin{aligned} \delta_i (t) &=-\int^\infty_{-\infty}e_{i}(y,t) v_0(y)dy\\ &-\int^t_0\int^{+\infty}_{-\infty} e_{i}(y,t-s)\mathcal{F}(\varphi, v,\frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_y(y,s) dy ds\\ &-\int^t_0\int^{+\infty}_{-\infty}e_i(y,t-s)\Phi(y,s)dy \, ds, \end{aligned} \label{delta}$$ and using (\[shswgreen\]), (\[delta\]) and $G=H+E+\tilde{G}$, we obtain $$\begin{aligned} v(x,t) &=\int^{+\infty}_{-\infty} (H+ \tilde{G})(x,t;y)v_0(y)dy\\ &+\int^t_0 \int^{+\infty}_{-\infty}(H+\tilde{G})(x,t-s;y)\mathcal{F}(\varphi, v,\frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_y(y,s) dy \, ds\\ &+ \int^t_0\int^{+\infty}_{-\infty}(H+\tilde{G})(x,t-s;y)\Phi(y,s) dy \, ds, \label{tilde{G}} \end{aligned}$$ which we can clearly augment with derivative estimates through differentiation on both sides. 
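For the reader's convenience, we expand the routine computation behind the identity invoked after (\[shswgreen\]). Since $\frac{\partial\bar{u}^\delta}{\partial\delta_i}$ is annihilated by $L$, it is fixed by the solution operator, $e^{L\tau}\frac{\partial\bar{u}^\delta}{\partial\delta_i}=\frac{\partial\bar{u}^\delta}{\partial\delta_i}$ for all $\tau\ge 0$, whence the contribution of the final term of (\[khati\]) to the Duhamel representation is $$\int_0^t e^{L(t-s)}\,\frac{\partial\bar{u}^\delta}{\partial\delta}\,\dot\delta(s)\, ds =\frac{\partial\bar{u}^\delta}{\partial\delta}\int_0^t \dot\delta(s)\, ds =\frac{\partial\bar{u}^\delta}{\partial\delta}\,\delta(t),$$ using $\delta(0)=0$; this is exactly the last term appearing in (\[shswgreen\]).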
In addition to $v(x, t)$ and $\delta (t)$, we will keep track in our argument of $v_x (x, t)$, $(v_t+a^\pm_jv_x)$, and $\dot \delta (t)$, the latter of which satisfies $$\begin{aligned} \dot \delta_i (t) &=-\int^\infty_{-\infty}\partial_t e_{i}(y,t) v_0(y)dy\\ &-\int^t_0\int^{+\infty}_{-\infty} \partial_t e_{i}(y,t-s)\mathcal{F}(\varphi, v,\frac{\partial\bar{u}^\delta}{\partial\delta}\delta)_y(y,s) dy ds\\ &-\int^t_0\int^{+\infty}_{-\infty} \partial_t e_i(y,t-s)\Phi(y,s)dy \, ds, \end{aligned} \label{dotdelta}$$ where we have taken advantage of the observation, apparent from (\[ebounds\]), that $e(y,0)=0$. Define $$\label{zetadefined} \begin{aligned} \zeta(t) &:= \sup_{y, 0 \le s \le t} \frac{|v (y, s)|}{\psi_1 (y, s) + \psi_2 (y,s)} \\ &+ \sup_{y, 0 \le s \le t} \frac{|v_y (y, s)|}{s^{-1/2} (\psi_1 (y, s) + \psi_2 (y,s)) + \psi_3 (y, s) + \psi_4 (y, s)} \\ &+ \sum_{a_j^- < 0, a_j^+ > 0} \sup_{y, 0 \le s \le t} \frac{|(\partial_s + a_j^\pm \partial_y) v (y, s)|} {s^{-1} (1+s)^{1/4} \psi_1^{j, \pm} (y, s) + s^{-1/2} (\bar{\psi}_1^{j,\pm} (y, s) + \psi_2 (y, s)) + \psi_3 (y, s) + \psi_4 (y, s)} \\ &+ \sup_{0 \le s \le t} \frac{|\delta (s)|}{(1+s)^{-1/2}} + \sup_{0 \le s \le t} \frac{|\dot{\delta} (s)|}{(1+s)^{-1}}. \end{aligned}$$ The desired estimates follow easily using the following integral estimates, to be established in Sections \[integralestimatesL\], \[integralestimatesNLI\], and \[integralestimatesNLII\]. \[linearintegralestimates\] If $\int_{-\infty}^{+\infty} v_0 (y) dy = 0$ and $|v_0 (y)| \le E_0 (1 + |y|)^{-3/2}$, $E_0 > 0$, then $$\label{Ginit} \begin{aligned} \Big|\int_{-\infty}^{+\infty} \tilde{G} (x, t; y) v_0 (y) dy \Big| &\le C E_0 \psi_1 (x, t) \\ \Big|\int_{-\infty}^{+\infty} \partial_x \tilde{G} (x, t; y) v_0 (y) dy \Big| &\le CE_0 (t^{-1/2} + e^{-\eta |x|}) \psi_1 (x, t) \\ \Big|\int_{-\infty}^{+\infty}(\partial_t + a_j^\pm \partial_x) \tilde{G} (x, t; y) v_0 (y) dy \Big| &\le CE_0 \Big(t^{-1} \psi_1^{j, \pm} (x,t) + t^{-1/2} \bar{\psi}_1^{j, \pm}(x,t) + e^{-\eta |x|} \psi_1 (x, t) \Big) \\ \Big|\int_{-\infty}^{+\infty} e_i (y,t) v_0 (y) dy \Big| &\le C E_0 (1 + t)^{-1/2} \\ \Big|\int_{-\infty}^{+\infty} \partial_t e_i(y, t) v_0 (y) dy \Big| &\le C E_0 (1+t)^{-3/2}. \end{aligned}$$ \[Hlinear\] If $|v_0(x)|$, $|\partial_x v_0(x)|$, $|\partial_x^2 v_0(x)|\le E_0 (1+|x|)^{-\frac32}$, $E_0>0$, then, for some $\theta>0$, $$\label{Hinit} \begin{aligned} \int_{-\infty}^{+\infty} H(x, t; y) v_0(y) dy&\le C E_0 e^{-\theta t}(1+|x|)^{-\frac32} \le CE_0\psi_1(x,t), \\ \int_{-\infty}^{+\infty} H_x(x, t; y) v_0(y) dy&\le C E_0 e^{-\theta t}(1+|x|)^{-\frac12} \le CE_0 (1+t)^{-1/2}\psi_1(x,t), \\ \int_{-\infty}^{+\infty}(\partial_t+a_j^\pm\partial_x) H_x(x, t; y) v_0(y) dy&\le C E_0 e^{-\theta t}(1+|x|)^{-\frac12} \le CE_0(1+t)^{-3/4}\psi_1(x,t). 
\\ \end{aligned}$$ \[Nonlinear estimates I.\] \[nonlinearintegralestimates\] If $\|v(y, s)\|_{L^\infty} \le C E_0 (1 + s)^{-3/4}$, $\|v(y, s)\|_{H^4} \le C E_0 (1 + s)^{-1/2}$, and $\zeta (t)<+\infty$ (see \[zetadefined\]), then $$\label{Ginteract} \begin{aligned} \Big| \int_0^t &\int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \le CE_0 \zeta(t) [\psi_1 (x, t) + \psi_2 (x, t)] \\ \Big|\int_0^t &\int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \Phi (y, s) dy ds \Big| \le CE_0 \psi_1 (x, t) \\ \Big|\int_0^{t} &\int_{-\infty}^{+\infty} \tilde{G}_{x} (x, t-s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \\ &\le CE_0 \zeta (t) \Big[t^{-1/2}(\psi_1 (x, t) + \psi_2 (x, t)) + \psi_3 (x, t) + \psi_4 (x, t)] \\ \Big|\int_0^{t} &\int_{-\infty}^{+\infty} \tilde{G}_{x} (x, t-s; y) \Phi (y, s) dy ds \le CE_0 t^{-1/2} \psi_1 (x, t) \\ \Big| (\partial_t + a_j^\pm \partial_x) \int_0^t &\int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \\ &\le CE_0 \zeta (t) \Big[t^{-1} (1+t)^{1/4} \psi_1^{j, \pm} (x, t) + t^{-1/2}(\bar{\psi}_1^{j, \pm} (x, t) + \psi_2 (x, t)) + \psi_3 (x, t) + \psi_4 (x, t)] \\ \Big|(\partial_t + a_j^\pm \partial_x) \int_0^t &\int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \Phi (y, s) dy ds \Big| \\ &\le CE_0 \Big[t^{-1} (1+t)^{1/4} \psi_1^{j,\pm} (x, t) + t^{-1/2} (\bar{\psi}_1^{j,\pm}(x,t) + \psi_2 (x,t)) \Big] \\ \Big|\int_0^t &\int_{-\infty}^{+\infty} \partial_y e_i (y, t-s) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \le C E_0 \zeta (t) (1 + t)^{-3/4} \\ \Big|\int_0^t &\int_{-\infty}^{+\infty} \partial_{yt} e_i (y, t-s) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \le C E_0 \zeta (t) (1 + t)^{-1} \\ \Big|\int_0^t &\int_{-\infty}^{+\infty} e_i (y, t-s) \Phi (y, s) dy ds \Big| \le C E_0 (1 + t)^{-1/2} \\ \Big|\int_0^t &\int_{-\infty}^{+\infty} \partial_t e_i (y, t-s) \Phi (y, s) dy ds \Big| \le C E_0 (1 + t)^{-1}. 
\\ \end{aligned}$$ \[Hnonlinear\] If $\|v(y, s)\|_{L^\infty} \le C E_0 (1 + s)^{-3/4}$, $\|v(y, s)\|_{H^5} \le C E_0 (1 + s)^{-1/2}$, and $\zeta (t)<+\infty$, then $$\label{Hinteract} \begin{aligned} \Big| \int_0^t &\int_{-\infty}^{+\infty} H (x, t-s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \le CE_0 \zeta(t) [\psi_1 (x, t) + \psi_2 (x, t)] \\ \Big|\int_0^t &\int_{-\infty}^{+\infty} H (x, t-s; y) \Phi (y, s) dy ds \Big| \le CE_0 \psi_1 (x, t) \\ \Big|\int_0^{t} &\int_{-\infty}^{+\infty} H_{x} (x, t-s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \\ &\le CE_0 \zeta (t) \Big[t^{-1/2}(\psi_1 (x, t) + \psi_2 (x, t)) + \psi_3 (x, t) + \psi_4 (x, t)] \\ \Big|\int_0^{t} &\int_{-\infty}^{+\infty} H_{x} (x, t-s; y) \Phi (y, s) dy ds \le CE_0 t^{-1/2} (\psi_1+ \psi_2) (x, t) \\ \Big| (\partial_t + a_j^\pm \partial_x) \int_0^t &\int_{-\infty}^{+\infty} {H} (x, t-s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y dy ds \Big| \\ &\le CE_0 \zeta (t) \Big[t^{-1} (1+t)^{1/4} \psi_1^{j, \pm} (x, t) + t^{-1/2}(\bar{\psi}_1^{j, \pm} (x, t) + \psi_2 (x, t)) + \psi_3 (x, t) + \psi_4 (x, t)] \\ \Big|(\partial_t + a_j^\pm \partial_x) \int_0^t &\int_{-\infty}^{+\infty} H (x, t-s; y) \Phi (y, s) dy ds \Big| \\ &\le CE_0 \Big[t^{-1} (1+t)^{1/4} \psi_1^{j,\pm} (x, t) + t^{-1/2} (\bar{\psi}_1^{j,\pm}(x,t) + \psi_2 (x,t)) \Big] .\\ \end{aligned}$$ [**Proof of Theorem \[pointwiseestimates\].**]{} We prove Theorem \[pointwiseestimates\] directly from the integral representations (\[delta\]) and (\[tilde[G]{}\]), augmented with similar representations for $\dot{\delta}_i (t)$, $v_x (x, t)$, and $(\partial_t + a_j^\pm \partial_x) v(x, t)$, obtained through direct differentiation of (\[delta\]) and (\[tilde[G]{}\]). Recalling the definition of our iteration variable $\zeta(t)$ in (\[zetadefined\]), our goal will be to employ the estimates of Lemmas \[linearintegralestimates\]–\[Hnonlinear\] to establish the inequality $$\label{continuousinduction1} \zeta(t) \le CE_0 (1 + \zeta (t)),$$ where $E_0$ is precisely as in Theorem \[pointwiseestimates\]. Choosing, then, $E_0 \le \frac{1}{2C}$, and noting that the possibility of $\zeta (t)$ jumping discontinuously from a finite value to an infinite value is precluded by short-time theory (see Remark 3.6 in [@HR])[^4], we will be able to conclude $$\label{continuousinduction2} \zeta(t) \le \frac{C E_0}{1 - C E_0} \le 1.$$ The estimates of Theorem \[pointwiseestimates\] follow immediately from (\[continuousinduction2\]) and (\[zetadefined\]). We begin with a careful consideration of the integral represention for $v(t, x)$, (\[tilde[G]{}\]), which we will separate for clarity into terms that arise in the case of strict parabolicity—involving $\tilde{G}$—and the additional terms arising from the relaxation of strict parabolicity—involving $H$. 
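We note in passing the elementary algebra behind (\[continuousinduction2\]): once (\[continuousinduction1\]) is established with $\zeta(t)$ finite, we may rearrange to obtain $$\zeta(t)\,(1-CE_0)\le CE_0, \qquad\text{hence}\qquad \zeta(t)\le\frac{CE_0}{1-CE_0}\le 1 \quad\text{for } CE_0\le \tfrac12.$$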
For the terms involving $\tilde{G}$, we estimate $$\begin{aligned} \Big|\int_{-\infty}^{+\infty} \tilde{G} (x, t; y) v_0 (y) dy \Big| & + \Big|\int_0^t \int_{-\infty}^{+\infty} \tilde{G} (x, t - s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y (y, s) dy ds \Big| \\ &+ \Big|\int_0^t \int_{-\infty}^{+\infty} \tilde{G} (x, t - s; y) \Phi (y, s) dy ds \Big| \\ &\le CE_0 \psi_1 (x, t) + CE_0 \zeta(t) [\psi_1 (x, t) + \psi_2(x, t)] + CE_0 \psi_1 (x, t), \end{aligned}$$ where the estimates follow respectively from the first estimate of Lemma \[linearintegralestimates\], the first estimate of Lemma \[nonlinearintegralestimates\], and the second estimate of Lemma \[nonlinearintegralestimates\] (though the first and last estimates are the same, we write both for clarity). For the terms in (\[tilde[G]{}\]) involving $H$, we have similarly $$\begin{aligned} \Big|\int_{-\infty}^{+\infty} H (x, t; y) v_0 (y) dy \Big| & + \Big|\int_0^t \int_{-\infty}^{+\infty} H (x, t - s; y) \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta} \delta)_y (y, s) dy ds \Big| \\ &+ \Big|\int_0^t \int_{-\infty}^{+\infty} H (x, t - s; y) \Phi (y, s) dy ds \Big| \\ &\le CE_0 \psi_1 (x, t) + CE_0 \zeta(t) [\psi_1 (x, t) + \psi_2(x, t)] + CE_0 \psi_1 (x, t), \end{aligned}$$ for which the estimates follow respectively from the first estimate of Lemma \[Hlinear\], the first estimate of Lemma \[Hnonlinear\], and the second estimate of Lemma \[Hnonlinear\]. Combining these estimates, we conclude $$|v(x, t)| \le CE_0 [\psi_1 (x, t) + \psi_2 (x, t)] + CE_0 \zeta (t) [\psi_1 (x, t) + \psi_2 (x, t)],$$ which can be rearranged as $$\frac{|v(x, t)|}{\psi_1 (x, t) + \psi_2 (x, t)} \le C E_0 (1 + \zeta (t)).$$ Keeping in mind that $\zeta (t)$ is a nondecreasing function of $t$, we have at last $$\label{quotientbound} \sup_{y, 0 \le s \le t} \frac{|v(y, s)|}{\psi_1 (y, s) + \psi_2 (y, s)} \le C E_0 (1 + \zeta (t)).$$ Proceeding similarly for each of the expressions $\delta_i (t)$, $\dot{\delta}_i (t)$, $v_x (x, t)$, and $(\partial_t + a_j^\pm \partial_x) v(x, t)$, we can bound each summand in the definition of $\zeta (t)$ in precisely the same way by $C E_0 (1 + \zeta (t))$. This yields the sought estimate $$\zeta (t) \le C E_0 (1 + \zeta (t)).$$ As discussed in (\[continuousinduction1\]) and (\[continuousinduction2\]), we conclude $\zeta (t) \le 1$, and from this the estimates of Theorem \[pointwiseestimates\]. $\square$ Liu’s cancellation estimate {#liu} =========================== Before carrying out the deferred integral estimates, we revisit a key estimate of Liu [@Liu85] that at once determines the ultimate rate of decay and motivates the analysis to follow. Consider the illustrative convolution $$\begin{aligned} \label{dovomi} u(x,t)&= \int_0^t \int_{-\infty}^{+\infty}g(x-y, t-s)(K(y-s, s)^2)_y \, dy \, ds\\ &= \int_0^t \int_{-\infty}^{+\infty}g_y(x-y, t-s) K(y-s, s)^2 \, dy \, ds, \end{aligned}$$ where $g(x,t)= K(x,t)=(4\pi t)^{-\frac 12}e^{-\frac{x^2}{4t}}$; this is similar to the quadratic interaction integrals arising through the integration of scattering terms against diffusion waves (see Remark \[charbds\]). 
If we replace the integrands in (\[dovomi\]) by their absolute value, we obtain (see [@HZ]) the sharp estimate $$\label{abs} |u(x,t)| \leq C \left( g(x, 4t) + g(x-t, 4t) \right)\\ + C \chi_{\{\sqrt{t} \leq x \leq t-\sqrt{t}\}} ( x^{-\frac 12} (t-x)^{-\frac12}).$$ By taking account of cancellation, however, we may obtain the following stronger bound (also sharp) pointed out by Liu [@Liu85]. We follow the notation of Raoofi [@Ra], and also the proof, based on integration by parts in the characteristic direction, which abstracts from the more concrete calculation of Liu the central ideas that will be of use here. \[sect1main\] For $u(x,t)$ defined in (\[dovomi\]) and $t \geq 1$, $$\begin{split} |u(x,t)| &\leq C {t^{-\frac 14}}\left( g(x, 8t) + g(x-t, 8t) \right) + C \chi_{\{\sqrt{t} \leq x \leq t-\sqrt{t}\}} \left(t^{-\frac 12} (t-x)^{-1}+ x^{- \frac 32} \right), \end{split}\label{liubounds}$$ where $\chi$ stands for the indicator function, and $C$ is a constant independent of $t$ and $x$.\ The same result holds if $K^2$ in (\[dovomi\]) is replaced with $K_x$. This was first pointed out in As $K^2 \sim K_x$, the proof will be stated only for $K^2$. It is straightforward to verify that the same argument works for $K_x$ at every step. We first state a simple lemma. \[stlem\] If $0\leq s\leq \sqrt{t}$, then $e^{\frac{-(x\pm s)^2}{4t}} \leq Ce^{\frac{-x^2}{8t}}$ with $C$ independent of $t,s$ and $x$. The statement of the lemma is equivalent to $$\frac{-(x\pm s)^2}{4t} \leq \frac{-x^2}{8t}+D \notag$$ for some $D$, which (after some calculation) in its turn is equivalent to $(x\pm 2s)^2-2s^2 \geq -8Dt$, which holds for $D > \frac 14$, since $s^2 < t$. The argument relies on the following simple properties of the heat kernel $g$. $$\begin{aligned} \int_{-\infty}^{+\infty}g(x-y, t)g(y, t') dy = g(x,t+t') \label{gg1}\\ |g_x(x,t)|\leq C{t^{-\frac 12}}g(x,2t) \label{gg2}\\ |g_t(x,t)|\leq C{t^{-1}}g(x,2t) \label{gg3}\\ |g(x,t)| \leq C\, {t^{-\frac 12}}. \label{gg4}\end{aligned}$$ Rewriting (\[dovomi\]), we have $$\begin{aligned} u(x,t)&= \int_0^t \int_{-\infty}^{+\infty}g(x-y, t-s)(g(y-s,s)^2)_y \, dy \, ds\\ &=\int_0^{\sqrt{t}} \int_{-\infty}^{+\infty}g(x-y, t-s)(g(y-s, s)^2)_y \, dy \, ds\\ &+ \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty}g(x-y, t-s)(g(y-s, s)^2)_y \, dy \, ds\\ &+ \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty}g(x-y, t-s)(g(y-s, s)^2)_y \, dy \, ds\\ &=: I + II + III. \end{aligned} \label{shekaf}$$ $(I)$ and $(III)$ are easy to estimate: $$\begin{aligned} |I| &= \left| \int_0^{\sqrt{t}} \int_{-\infty}^{+\infty}g(x-y, t-s)(g(y-s, s)^2)_y \, dy \, ds\right| \notag\\ &= \left| \int_0^{\sqrt{t}} \int_{-\infty}^{+\infty}g_y(x-y, t-s) g(y-s, s)^2 \, dy \, ds\right|.\notag\end{aligned}$$ By (\[gg2\]) and (\[gg4\]), the above is less than or equal to $$C\int_0^{\sqrt{t}} \int_{-\infty}^{+\infty}(t-s)^{-\frac12} s^{-\frac12} g(x-y, 2(t-s))\,g(y-s, 2s) \, dy \, ds$$ which, by (\[gg1\]), is less than or equal to $ C\int_0^{\sqrt{t}} (t-s)^{-\frac12} s^{-\frac12} g(x-s, 2t) \, ds. \notag $ Now, using Lemma \[stlem\], the above is $$\begin{aligned} \leq C\,g(x, 4t) \int_0^{\sqrt{t}} (t-s)^{-\frac12} s^{-\frac12} \, ds &\leq C{t^{-\frac 12}}g(x, 4t) \int_0^{\sqrt{t}} s^{-\frac12} \, ds\\ &\leq C{t^{-\frac 14}}g(x,4t). \end{aligned}$$ Part $(III)$ in (\[shekaf\]) can be handled similarly. 
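Before treating the remaining part $(II)$, it may help to quantify the gain of Proposition \[sect1main\] over the absolute-value bound (\[abs\]). At the midpoint between the two characteristics, $x=t/2$ with $t\ge 4$, the Gaussian terms in both bounds are exponentially small, and the remaining terms evaluate to $$x^{-\frac12}(t-x)^{-\frac12}\Big|_{x=t/2}=\frac 2t, \qquad \Big(t^{-\frac12}(t-x)^{-1}+x^{-\frac32}\Big)\Big|_{x=t/2} =\big(2+2^{\frac32}\big)\,t^{-\frac32},$$ so the cancellation estimate improves the decay between characteristics by a full factor $t^{-1/2}$.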
The more difficult part is part $(II)$ of (\[shekaf\]): $$II= \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty}g(x-y, t-s)(g(y-s, s)^2)_y \, dy \, ds.$$ In order to estimate $(II)$, let us write $g=g(x, \tau)$. Then we have $$\begin{aligned} g(x-y, t-s)(g(y-s, s)^2)_y \,&= (g(x-y, t-s)g(y-s, s)^2)_s \\ & - \,g_\tau (x-y, t-s)g(y-s, s)^2\\ &+\, g(x-y, t-s)(g^2)_\tau(y-s, s) . \end{aligned} \label{shekaf2}$$ We will do the estimates piece by piece. The first part of (\[shekaf2\]) can be decomposed as $$\begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (g(x-y, t-s)g(y-s, s)^2)_s\, dy\, ds\\ = \int_{-\infty}^{+\infty} g(x-y, \sqrt{t})\,g(y-t+\sqrt{t}, t-\sqrt{t})^2\,dy\\ - \int_{-\infty}^{+\infty} g(x-y, t-\sqrt{t})\,g(y-\sqrt{t}, \sqrt{t})^2\,dy.\end{aligned}$$ Using (\[gg1\]) and (\[gg4\]), it follows that $$\int_{-\infty}^{+\infty} g(x-y, \sqrt{t})\,g(y-t+\sqrt{t}, t-\sqrt{t})^2\,dy \leq C{t^{-\frac 12}}g(x-t+\sqrt{t}, t),$$ and $$\int_{-\infty}^{+\infty} g(x-y, t-\sqrt{t})\,g(y-\sqrt{t}, \sqrt{t})^2\,dy \leq C{t^{-\frac 14}}g(x-\sqrt{t}, t),$$ but, by Lemma \[stlem\], $g(x-\sqrt{t}, t) \leq g(x, 2t)$ and $g(x-t+\sqrt{t}, t) \leq g(x-t, 2t).$ These terms fit in the right hand side of (\[liubounds\]). For the other parts in (\[shekaf2\]), we use (\[gg3\]) to obtain $$\begin{aligned} &\left| \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g_\tau (x-y, t-s) g(y-s, s)^2 \, dy\, ds \right| \\ \leq &\int_{\sqrt{t}}^{t-\sqrt{t}} s^{-\frac 12} (t-s)^{-1} g(x-s, 2t) ds \\ =&\int_{\sqrt{t}}^{\frac t2} s^{-\frac 12} (t-s)^{-1} g(x-s, 2t) ds \\ + &\, \int_{\frac t2}^{t-\sqrt{t}} s^{-\frac 12} (t-s)^{-1} g(x-s, 2t) ds \\ =: \,&\mathcal{A} \,+\, \mathcal{B}. \end{aligned} \label{AB}$$ If $x \leq \sqrt{t}$, then $$\begin{aligned} \mathcal{A} &\leq C\,{t^{-1}}g(x-\sqrt{t}, 2t)\int_{\sqrt{t}}^{\frac t2} s^{-\frac 12}\, ds\\ &\leq C \, t^{-\frac 12} \, g(x-\sqrt{t}, 2t)\\ &\leq C {t^{-\frac 12}}g(x, 4t). \end{aligned}$$ Similarly when $x\ge t-\sqrt{t}$, we obtain $\mathcal A \le C{t^{-\frac 12}}g(x-t, 4t).$ Now for $\sqrt{t} \leq x \leq t-\sqrt t$ we have $$\begin{aligned} \mathcal{A} &= \int_{\sqrt{t}}^{\frac t2} s^{-\frac 12} (t-s)^{-1} g(x-s, 2t) ds \\ &\leq C t^{-1} \int_{\sqrt{t}}^{\frac t2} s^{-\frac 12} g(x-s, 2t) ds \\ &= C t^{-1} \int_{\sqrt{t}}^{\frac x2} s^{-\frac 12} g(x-s, 2t) ds + t^{-1} \int_{\frac x2}^{\frac t2} s^{-\frac 12} g(x-s, 2t) ds \\ &\le C t^{-1} g(\frac x2, 2t)\int_{\sqrt{t}}^{\frac x2} s^{-\frac 12} ds + t^{-1} x^{-\frac 12} \int_{\frac x2}^{\frac t2} g(x-s, 2t) ds\\ &\le C\left( t^{-\frac 12} g(x, 4t) + t^{-1} x^{-\frac 12}\right) \end{aligned}$$ also acceptable, as $ t^{-1} x^{-\frac 12}\le x^{-\frac32}$ for $\sqrt t \le x \le t-\sqrt t.$ Part $\mathcal{B} $ in (\[AB\]) can be estimated similarly. We carry it out briefly only for $\frac t2\le x\le t-\sqrt t.$ Let $\xi:= t-\sqrt t -x.$ Then $$\begin{aligned} \mathcal{B} &\le C t^{-\frac 12}\left(\int_{\frac t2}^{x+\xi}+\int_{x+\xi}^{t-\sqrt t}\right)(t-s)^{-1} g(x-s, 2t)ds\\ &\le C t^{-\frac 12}(t-x)^{-1} + C t^{-\frac 14}g(\frac {t-\sqrt t-x}2, 2t), \end{aligned}$$ to which we apply Lemma \[stlem\]. There remains the last part of (\[shekaf2\]), i.e., $$\left| \int_{\sqrt{t}}^{t-\sqrt{t}}\int_{-\infty}^{+\infty} g(x-y, t-s)(g^2)_\tau(y-s, s) \, dy\, ds\right|,\label{lastone}$$ which can easily be shown to be less than or equal to $$C \int_{\sqrt{t}}^{t-\sqrt{t}}s^{-\frac 32} g(x-s, 2t)\, ds. 
\label{aan}$$ If $x\,\leq\, \sqrt{t}$, then (\[aan\]) is of the order $$C\, g(x-\sqrt{t}, 2t)\int_{\sqrt{t}}^{t-\sqrt{t}}s^{-\frac 32}\, ds$$ $$\leq C\,{t^{-\frac 14}}\, g(x-\sqrt{t}, 2t)$$ $$\leq C\,{t^{-\frac 14}}\, g(x,4t).$$ For $x\, \geq \, t-\sqrt{t}$ we use a similar method.\ For $\sqrt{t} \, \leq \,x\, \leq \, t-\sqrt{t}$, we argue as in the previous cases to get $$(\ref{aan}) \, \leq \, C \left({t^{-\frac 14}}g(x,4t) + x^{-\frac 32}\right).$$\ This completes the proof. \[charbds\] \[refinedcharbds\] Linear integral estimates {#integralestimatesL} ========================= It remains to establish the deferred integral estimates used in Section \[nonlinear\]. We begin, in this section, with the linear integral estimates of Lemmas \[linearintegralestimates\] and \[Hlinear\]. [**Proof of Lemma \[linearintegralestimates\].**]{} The first, fourth, and fifth estimates of Lemma \[linearintegralestimates\] have been established in [@HR]. The second and third follow in a similar fashion from the estimates of Proposition \[altGFbounds\]. $\square$ [**Proof of Lemma \[Hlinear\].**]{} Looking at (\[multH\]), we notice that in order to estimate $\int_{-\infty}^{+\infty}H(x,t;y)v_0(y) dy$ it suffices to estimate $$\label{randomcalc} \begin{aligned} \int_{-\infty}^{+\infty}\mathcal{R}_j^*(x) \mathcal{O}(e^{-\eta_0 t}) \delta_{x-\bar a_j^* t}(-y) \mathcal{L}_j^{*t}(y)v_0(y) dy& \le C e^{-\eta_0 t}\,|v_0(\bar a_j^*t-x)|\\ &\le CE_0 e^{-\eta_0 t} (1+|\bar a_j^*t-x|)^{-\frac32} \\ &\le CE_0 e^{-\eta_0 t} (1+|x|)^{-\frac32}(1+|\bar a_j^*t|)^{\frac32}\\ &\le CE_0 e^{-\frac{\eta_0 t}2} (1+|x|)^{-\frac32}.\\ \end{aligned}$$ Here we used the crude inequality $$\label{abineq} \frac {1}{1+|a+b|}\le \frac{1+|b|}{1+|a|}$$ and the fact that $\bar a_j^*$, $\mathcal{R}_j^*$, and $\mathcal{L}_j^{*t}$ are bounded. Observing that $e^{- \frac {\eta_0 t}{4}}\le C(N)(1+t)^{-N}$ for any $N$, and $(1+|x-at|)\le C(1+t)(1+|x|)$, we can bound the right-hand side of (\[randomcalc\]) in turn by $CE_0 e^{-\frac{\eta_0 t}4} \psi_1(x,t)$, giving (\[Hinit\])(i). Estimates (\[Hinit\])(ii) and (\[Hinit\])(iii) are obtained similarly. $\square$ Nonlinear integral estimates I {#integralestimatesNLI} ============================== In this section, we carry out the main work of the paper, establishing the nonlinear integral estimates of Lemma \[nonlinearintegralestimates\]. [**Proof of Lemma \[nonlinearintegralestimates\].**]{} Under the assumption that $\zeta (t)$ is bounded, we have the estimates $$\begin{aligned} |v (x,t)| &\le \zeta(t) \Big[\psi_1 + \psi_2 \Big] (x, t), \\ |\partial_x v (x,t)| &\le \zeta(t) \Big[ t^{-1/2} (\psi_1+\psi_2) + \psi_3 + \psi_4 \Big] (x,t), \\ |(\partial_t + a_j^\pm \partial_x) v(x,t)| &\le \zeta(t) \Big[t^{-1} (1+t)^{1/4} \psi_1^{j,\pm} + t^{-1/2} (\bar{\psi}_1^{j,\pm} + \psi_2) + \psi_3 + \psi_4 \Big] (x, t). \end{aligned}$$ In the analysis that follows, we will in most cases omit the factor $\zeta(t)$ from our estimates and focus only on terms of a given form, the [*template*]{}. We observe at the outset that in proving estimates of the form $\psi_1 (x ,t)$, we will frequently make use of the inequality $$\label{kerneltopsi1} (1+t)^{-3/4} e^{-\frac{(x - a_j^\pm t)^2}{Lt}} \le C (1 + |x - a_j^\pm t| + t^{1/2})^{-3/2}.$$ In the case $|x - a_j^\pm t| \le K \sqrt{t}$, for some constant $K$, this inequality is immediate. 
On the other hand, for $|x - a_k^\pm t| \ge K \sqrt{t}$, we observe $$\begin{aligned} (1+t)^{-3/4} e^{-\frac{(x - a_j^\pm t)^2}{Lt}} &= (1+t)^{-3/4} |x - a_j^\pm t|^{-3/2} t^{3/4} \frac{|x - a_j^\pm t|^{3/2}}{t^{3/4}} e^{-\frac{(x - a_j^\pm t)^2}{Lt}} \\ &\le C_1 (1+t)^{-3/4} t^{3/4} |x - a_j^\pm t|^{-3/2} e^{-\frac{(x - a_j^\pm t)^2}{2Lt}} \le C (1 + |x - a_j^\pm t| + t^{1/2})^{-3/2}, \end{aligned}$$ where the seeming blow-up as $|x - a_j^\pm t| \to 0$ is controlled by the size of $\sqrt{t}$, which must be smaller than $|x - a_j^\pm t|$. For the first estimate in Lemma \[nonlinearintegralestimates\], we begin by considering the nonlinearities $(v(y,s)^2)_y$ and $(v(y,s)v_y(y,s))_y$. Here and in the remaining cases, the analyses of the convection, reflection, and transmission contributions to the Green’s kernel $\partial_y \tilde{G} (x, t; y)$ are similar, and we provide full details only for the case of convection. We observe that the contribution $$\sum_{j=1}^J\sum_{\beta=0}^{\max \{0, |\alpha|-1\}} \mathbf{O}(e^{-\eta t})\partial_y^\beta \delta_{x-\bar a_j^* t}(-y)$$ is similar to, though less singular than, terms arising in $\partial_y H(x, t; y)$ (see \[multH\]), and can be analyzed as in the proof of Lemma \[Hnonlinear\]. Finally, we remark that the contribution $$\mathbf{O}(e^{-\eta(|x-y|+t)})$$ has no effect on the iteration. [*(\[Ginteract\](i)), term one.*]{} We first consider integration of our convecting Green’s kernel against the nonlinearity $s^{-1/2} (1 + s)^{-1/4} \psi_1 (y, s)$. In this case, we have integrals of the form $$\label{gf1nl1} \int_0^t \int_{-\infty}^0 (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} s^{-1/2} (1+s)^{-1/4} dy ds,$$ with a similar integral for $y \ge 0$. Proceeding as in [@HR], we write $$\label{maindecomp} x - y - a_j^- (t - s) = (x - a_j^- (t-s) -a_k^- s) - (y - a_k^- s),$$ from which we have the estimate $$\label{gf1nl1balance1} \begin{aligned} &e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-\gamma} \\ &\le C\Big[ e^{-\epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-\gamma} \\ &+ e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y - a_k^- s| + |x - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-\gamma} \Big], \end{aligned}$$ where $\gamma = 3/2$. (Here and in future cases, we will state estimates in terms of a parameter $\gamma$ for general applicability.) For the first estimate in (\[gf1nl1balance1\]), we have integrals $$\label{gf1nl1balance1first} \int_0^t \int_{-\infty}^0 (t-s)^{-1} e^{-\epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} s^{-1/2} (1 + s)^{-1/4} dy ds.$$ We have three cases to consider: $a_k^- < 0 < a_j^-$, $a_k^- \le a_j^- < 0$, and $a_j^- < a_k^- < 0$. We will proceed by analyzing the second of these in detail and observing that no qualitatively new calculations arise in the remaining two. For this second case and for $|x| \ge |a_k^-| t$, we write $$\label{tminussdecomp} x - a_j^- (t-s) - a_k^- s = (x - a_k^- t) - (a_j^- - a_k^-) (t-s),$$ for which we observe that there is no cancellation between summands. 
We immediately obtain an estimate on (\[gf1nl1balance1first\]) by $$\label{gf1nl1nocanc1} \begin{aligned} C_1 &t^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_0^{t/2} (1 + s^{1/2})^{-1/2} s^{-1/2} (1 + s)^{-1/4} ds \\ &+ C_2 (1 + t^{1/2})^{-3/2} t^{-1/2} (1+t)^{-1/4} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{t/2}^t (t-s)^{-1/2} ds \\ &\le C (1+t)^{-1} \ln (e+t) e^{-\frac{(x - a_k^- t)^2}{Lt}}, \end{aligned}$$ which is sufficient by (\[kerneltopsi1\]). For $|x| \le |a_j^-| t$, we write $$\label{sdecomp} x - a_j^- (t-s) - a_k^- s = (x - a_j^- t) - (a_k^- - a_j^-) s,$$ for which we observe that there is no cancellation between summands, and proceeding as in (\[gf1nl1nocanc1\]), we obtain an estimate by $$C (1+t)^{-1} \ln (e+t) e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ which again is sufficient. We remark that this concludes the analysis in the case $a_j^- = a_k^-$, so from here on we may take $a_k^- < a_j^-$ and the case $|a_j^-| t \le |x| \le |a_k^-| t$. In this case, and for $s \in [0, t/2]$, we observe through (\[sdecomp\]) the inequality $$\label{gf1nl1balance2} \begin{aligned} &e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} (1 + s)^{-\gamma} \\ &\le C \Big[ e^{-\frac{(x - a_j^- t)^2}{Lt}} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} (1 + s)^{-\gamma} \\ &+ e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} (1 + |y - a_k^- s| + |x - a_j^- t|^{1/2})^{-3/2} (1 + |x-a_j^- t|)^{-\gamma} \Big], \end{aligned}$$ with $\gamma = 3/4$. For the first estimate in (\[gf1nl1balance2\]), we proceed similarly as in (\[gf1nl1nocanc1\]), while for the second, we have, upon integration of the $\epsilon$-kernel in $y$, an estimate by $$\begin{aligned} C_1 &t^{-1/2} (1+|x - a_j^- t|^{1/2})^{-3/2} (1 + |x - a_j^- t|)^{-3/4} \int_0^{t/2} s^{-1/2} (1+s)^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &\le C (1 + |x - a_j^- t|)^{-3/2}, \end{aligned}$$ where we have used in this last inequality that $a_j^- \ne a_k^-$. We observe that this is clearly sufficient in the case $|x - a_j^- t| \ge \sqrt{t}$, whereas in the case $|x - a_j^- t| \le \sqrt{t}$, decay of kernel type $\exp((x-a_j^-t)^2/(Lt))$ is immediate, for $L$ sufficiently large, and so we only require decay at rate $t^{-3/4}$, which is straightforward. For $s \in [t/2, t]$, we observe through (\[tminussdecomp\]) the inequality $$\label{gf1nl1balance3} \begin{aligned} (t-s)^{-\gamma} & e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} \\ &\le C \Big[ |x - a_k^- t|^{-\gamma} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} + (t - s)^{-\gamma} e^{-\frac{(x - a_j^- t)^2}{Lt}} \Big], \end{aligned}$$ where $\gamma = 1$. For the second estimate in (\[gf1nl1balance3\]), we proceed similarly as in (\[gf1nl1nocanc1\]), while for the first we have, upon integration in $y$ of the nonlinearity, an estimate by $$\begin{aligned} C_2 &(1+t^{1/2})^{-1/2} t^{-1/2} (1+t)^{-1/4} |x - a_k^- t|^{-1} \int_{t/2}^t e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &\le C (1 + t)^{-1/2} |x - a_k^- t|^{-1}. \end{aligned}$$ Recalling that we are currently working in the case $t \ge |x|/|a_k^-|$, we see that this final estimate is sufficient for the case $|x - a_k^- t| \ge \sqrt{t}$. 
For the second term in (\[gf1nl1balance1\]), we have integrals of the form $$\label{gf1nl1balance1second} \int_0^t \int_{-\infty}^0 (t - s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y - a_k^- s| + |x - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-3/2} s^{-1/2} (1+s)^{-1/4} ds.$$ We continue to focus on the case $a_k^- \le a_j^-$. For the cases $|x| \ge |a_k^-| t$ and $|x| \le |a_j^-| t$, respectively, we have no cancellation between the summands in (\[tminussdecomp\]) and (\[sdecomp\]) and consequently we obtain estimates $$\label{gf1nl1nocanc2} C (1+t)^{-1/4} \Big[(1+|x - a_k^- t|)^{-3/2} + (1+|x - a_j^- t|)^{-3/2} \Big].$$ For the case $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into sub-intervals $s \in [0, t/2]$ and $s \in [t/2, t]$. For $s \in [0, t/2]$, we observe through (\[sdecomp\]) the inequality $$\label{gf1nl1balance4} \begin{aligned} &\Big(1 + |y - a_k^- s| + |x - a_j^- (t-s) - a_k^- s| + s^{1/2}\Big)^{-3/2} (1+s)^{-\gamma} \\ &\le C\Big[ (1 + |x - a_j^- t| + s^{1/2})^{-3/2} (1+s)^{-\gamma} \\ &+ (1 + |y - a_k^- s| + |x - a_j^- (t-s) - a_k^- s| + |x - a_j^- t|^{1/2})^{-3/2} (1+|x-a_j^- t|)^{-\gamma} \Big], \end{aligned}$$ for $\gamma = 3/4$. For the first estimate in (\[gf1nl1balance4\]), we proceed as in (\[gf1nl1nocanc2\]), while for the second we have, upon integration of the kernel, an estimate on (\[gf1nl1balance1second\]) by $$\begin{aligned} C_1 &t^{-1/2} (1+|x-a_j^- t|)^{-3/4} \int_0^{t/2} s^{-1/2} (1+s)^{1/2} (1 + |x - a_j^- (t-s) - a_k^- s| + |x - a_j^- t|^{1/2})^{-3/2} ds \\ &\le C (1+t)^{-1/2} (1+|x-a_j^- t|)^{-1}. \end{aligned}$$ For $s \in [t/2, t]$, we observe through (\[tminussdecomp\]) the inequality $$\label{gf1nl1balance5} \begin{aligned} (t - s)^{-1} &(1 + |x - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-3/2} \\ &\le C \Big[|x - a_k^- t|^{-1} (1 + |x - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-3/2} \\ &+ (t - s)^{-1} (1 + |x - a_j^- (t-s) - a_k^- s| + |x - a_j^- t| + s^{1/2})^{-3/2} \Big]. \end{aligned}$$ For the second estimate in (\[gf1nl1balance5\]), we proceed similarly as in (\[gf1nl1nocanc2\]), while for the first we have, upon integration of the kernel, an estimate by $$\begin{aligned} C_2 &|x - a_k^- t|^{-1} t^{-1/2} (1+t)^{-1/4} \int_{t/2}^t (t-s)^{1/2} (1 + |x - a_j^- (t-s) - a_k^- s| + t^{1/2})^{-3/2} ds \\ &\le C |x - a_k^- t|^{-1} (1+t)^{-1/4} (1+t^{1/2})^{-1/2}, \end{aligned}$$ which is sufficient for $t \ge |x|/|a_j^-|$. [*(\[Ginteract\](i)), term two.*]{} We next consider integration against the nonlinearity $s^{-1/2} (1+s)^{-1/4} \psi_2 (y, t)$, for which we have integrals of the form $$\label{gf1nl2} \int_0^t \int_{-|a_1^-| s}^0 (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+|y|)^{-1/2} (|y| + s)^{-1/2} (1 + |y| + s)^{-3/4} (1 + |y| + s^{1/2})^{-1/2} dy ds,$$ wherein we have observed that for $y \in [-|a_1^-| s, 0]$, $s$ decay yields also decay in $y$. We first observe an immediate time decay estimate by $$\label{timedecayestimate2} \begin{aligned} C_1 &t^{-1} \int_0^{t/2} (1+s)^{-3/4} (1+s^{1/2})^{-1/2} ds + C_2 t^{-1/2} (1+t)^{-3/4} (1+t^{1/2})^{-1/2} \int_{t/2}^t (t-s)^{-1/2} ds \\ &\le C (1 + t)^{-1} \ln (e+t). 
\end{aligned}$$ In order to determine estimates in space as well, we observe the inequality $$\label{gf1nl2balance1} \begin{aligned} &e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+|y|)^{-1/2} (|y|+s)^{-1/2} (1 + |y| + s)^{-3/4} (1 + |y| + s^{1/2})^{-1/2} \\ &\le C \Big[e^{-\epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} (1+|y|)^{-1/2} (|y|+s)^{-1/2} (1 + |y| + s)^{-3/4} (1 + |y| + s^{1/2})^{-1/2} \\ &+ e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+|x-a_j^- (t-s)|)^{-1/2} (|x-a_j^- (t-s)| + s)^{-1/2} \\ &\times (1 + |x-a_j^- (t-s)| + s)^{-3/4} (1 + |x-a_j^- (t-s)| + s^{1/2})^{-1/2} \Big]. \end{aligned}$$ For the first estimate in (\[gf1nl2balance1\]), we have integrals $$\label{gf1nl2balance1first} \begin{aligned} \int_0^t &\int_{-|a_1^-| s}^0 (t-s)^{-1} e^{-\epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M} (t-s)}} \\ &\times (1+|y|)^{-1/2} (|y| + s)^{-1/2} (1 + |y| + s)^{-3/4} (1 + |y| + s^{1/2})^{-1/2} dy ds. \end{aligned}$$ We have two cases to consider here, $a_j^- < 0$ and $a_j^- > 0$, of which we focus on the former. In this case, for $|x| \ge |a_j^-| t$, there is no cancellation between $x - a_j^- t$ and $a_j^- s$, and so proceeding as in (\[timedecayestimate2\]), we obtain an estimate by $$C (1+t)^{-1} \ln (e+t) e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ For $|x| \le |a_j^-| t$, we divide the analysis into two cases, $s \in [0, t/2]$ and $s \in [t/2, t]$. For $s \in [0, t/2]$, we observe the inequality $$\label{gf1nl2balance2} \begin{aligned} &e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M} (t-s)}} (|y| + s)^{-1/2} (1 + |y| + s)^{-3/4} (1 + |y| + s^{1/2})^{-1/2} \\ &\le C \Big[e^{-\frac{(x - a_j^- t)^2}{L t}} (|y|+s)^{-1/2} (1 + |y| + s)^{-3/4} (1 + |y| + s^{1/2})^{-1/2} \\ &+ e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M} (t-s)}} (s + |x - a_j^- t|)^{-1/2} (1 + s + |x - a_j^- t|)^{-3/4} (1 + s^{1/2} + |x - a_j^- t|^{1/2})^{-1/2} \Big]. \end{aligned}$$ For the first estimate in (\[gf1nl2balance2\]), we proceed similarly as in (\[timedecayestimate2\]), while for the second we have, upon integration of the $\epsilon$ kernel, an estimate by $$\begin{aligned} C_1 &t^{-1/2} (|x - a_j^- t|)^{-1/2} (1 + |x - a_j^- t|)^{-1} \int_0^{t/2} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M} (t-s)}} ds \\ &\le C (|x - a_j^- t|)^{-1/2} (1 + |x - a_j^- t|)^{-1}, \end{aligned}$$ where the seeming blow-up as $|x - a_j^- t| \to 0$ can be eliminated by proceeding alternatively for $|x - a_j^- t|$ bounded. For $s \in [t/2, t]$, we have $$\label{gf1nl2balance3} (t-s)^{-\gamma} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M} (t-s)}} \le C |x|^{-\gamma} e^{-\frac{(x - a_j^- (t-s))^2}{2 \bar{M} (t-s)}},$$ $\gamma = 1/2$, for which we have an estimate by $$C_2 |x|^{-1/2} t^{-1/2} (1+t)^{-3/4} (1+t^{1/2})^{-1/2} \int_{t/2}^t e^{-\frac{(x - a_j^- (t-s))^2}{2 \bar{M} (t-s)}} ds \le C |x|^{-1/2} (1+t)^{-1},$$ where the seeming blow-up as $|x| \to 0$ can be eliminated by an alternative calculation in the case of $|x|$ bounded. In the current case of $|x| \le |a_j^-| t$, this final estimate is bounded by $\psi_2 (x, t)$. For the second estimate in (\[gf1nl2balance1\]), we have integrals $$\label{gf1nl2balance1second} \begin{aligned} \int_0^t &\int_{-|a_1^-| s}^0 (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+|x - a_j^- (t-s)|)^{-1/2} \\ &\times (|x - a_j^- (t-s)| + s)^{-1/2} (1 + |x - a_j^- (t-s)| + s)^{-3/4} (1 + |x - a_j^- (t-s)| + s^{1/2})^{-1/2} dy ds. 
\end{aligned}$$ Focusing again on the case $a_j^- < 0$, we observe that for $|x| \ge |a_j^-| t$, there is no cancellation between $x - a_j^- t$ and $a_j^- s$, and consequently that we have an estimate, upon integration of the kernel, by $$\label{gf1nl2nocanc2} \begin{aligned} C_1 &t^{-1/2} (1 + |x - a_j^- t|)^{-7/4} \int_0^{t/2} (|x - a_j^- (t-s)| + s)^{-1/2} ds \\ &+ C_2 (1 + |x - a_j^- t|)^{-7/4} t^{-1/2} \int_{t/2}^t (t-s)^{-1/2} ds \\ &\le C (1 + |x - a_j^- t|)^{-7/4}, \end{aligned}$$ which, for $|x - a_j^- t| \ge \sqrt{t}$ decays faster than the claimed estimates. For the case $|x - a_j^- t| \le \sqrt{t}$, we require only $t^{-3/4}$ decay, which is clear from (\[timedecayestimate2\]). For $|x| \le |a_j^-| t$, we divide the analysis into cases, $s \in [0, t/2]$ and $s \in [t/2, t]$. For $s \in [0, t/2]$, we observe the inequality $$\label{gf1nl2balance4} \begin{aligned} &(1+|x - a_j^- (t-s)|)^{-1/2} (|x - a_j^- (t-s)| + s)^{-1/2} \\ &\times \quad (1 + |x - a_j^- (t-s)| + s)^{-3/4} (1 + |x - a_j^- (t-s)| + s^{1/2})^{-1/2} \\ &\le C\Big[ (1+|x - a_j^- t|)^{-1/2} (|x - a_j^- t| + s)^{-1/2} (1 + |x - a_j^- t| + s)^{-3/4} (1 + |x - a_j^- t| + s^{1/2})^{-1/2} \\ &+ (1+|x - a_j^- (t-s)|)^{-1/2} (|x - a_j^- t| + s)^{-1/2} (1 + |x - a_j^- t| + s)^{-3/4} (1 + |x - a_j^- t|^{1/2} + s^{1/2})^{-1/2} \Big]. \end{aligned}$$ For the first estimate in (\[gf1nl2balance4\]), we obtain an estimate, upon integration of the kernel, by $$C_1 t^{-1/2} (1+|x - a_j^- t|)^{-1} \int_0^{t/2} (|x - a_j^- t| + s)^{-1/2} (1 + |x - a_j^- t| +s)^{-3/4} ds \le C t^{-1/2} (1 + |x - a_j^- t|)^{-5/4},$$ while for the second we obtain an estimate by $$C_1 t^{-1/2} (1 + |x - a_j^- t|)^{-3/2} \int_0^{t/2} (|x - a_j^- (t-s)|)^{-1/2} ds \le C (1 + |x - a_j^- t|)^{-3/2}.$$ For $s \in [t/2, t]$, we write $$\label{gf1nl2balance5} \begin{aligned} (t-s)^{-1/2} &(1+|x - a_j^- (t-s)|)^{-1/2} \\ &\le C\Big[|x|^{-1/2} (1+|x - a_j^- (t-s)|)^{-1/2} + (t-s)^{-1/2} (1+|x - a_j^- (t-s)| + |x|)^{-1/2} \Big]. \end{aligned}$$ For the first estimate in (\[gf1nl2balance5\]), we obtain an estimate by $$C_2 |x|^{-1/2} t^{-1/2} (1+t)^{-3/4} (1+t^{1/2})^{-1/2} \int_{t/2}^t (1 + |x - a_j^- (t-s)|)^{-1/2} ds \le C (1+t)^{-1} |x|^{-1/2},$$ which for $|x|$ bounded away from 0 is bounded by $\psi_2 (x, t)$, while for the second estimate in (\[gf1nl1balance5\]), we obtain an estimate by $$C_2 (1+|x|)^{-1/2} t^{-1/2} (1+t)^{-3/4} (1 + t^{1/2})^{-1/2} \int_{t/2}^t (t-s)^{-1/2} ds \le C (1 + |x|)^{-1/2} (1+t)^{-1}.$$ [*(\[Ginteract\](i)), term three.*]{} We next consider integration against the nonlinearity $(1+s)^{-1} e^{-\eta |y|}$ (which arises, for example, from the term $(\frac{\partial \bar{u}^\delta}{\partial \delta} \delta)^2$), for which, in the case of our convection Green’s kernel estimate, we have integrals of the form $$\label{gf1nl3} \int_0^t \int_{-\infty}^0 (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-1} e^{-\eta |y|} dy ds.$$ First, we observe a straightforward time decay estimate by $$\label{gf1nl3timedecay} \begin{aligned} C_1 &t^{-1} \int_0^{t/2} (1+s)^{-1} ds + C_2 (1 + t)^{-1} \int_{t/2}^{t-1} (t-s)^{-1} ds + C_3 (1 + t)^{-1} \int_{t-1}^t (t-s)^{-1/2} ds \\ &\le C (1+t)^{-1} \ln (e+t). 
\end{aligned}$$ In order to determine estimates in space as well, we observe the inequality $$\label{gf1nl3balance1} \begin{aligned} &e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\eta |y|} \\ &\le \Big[e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} e^{-\eta |y|} + e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\eta_1 |x - a_j^- (t-s)|} e^{-\eta_2 |y|} \Big]. \end{aligned}$$ For the first estimate in (\[gf1nl3balance1\]), we have integrals $$\label{gf1nl3balance1first} \int_0^t \int_{-\infty}^0 (t-s)^{-1} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} (1+s)^{-1} e^{-\eta |y|} dy ds.$$ We have two cases to consider, $a_j^- < 0$ and $a_j^- > 0$, of which we focus on the former. For $|x| \ge |a_j^-| t$, there is no cancellation between $x - a_j^- t$ and $a_j^- s$, and proceeding almost precisely as in (\[gf1nl3timedecay\]), we obtain an estimate by $$C (1+t)^{-1} \ln (e+t) e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ For $|x| \le |a_j^-| t$, we divide the analysis into cases, $s \in [0, t/2]$ and $s \in [t/2, t]$. For $s \in [0, t/2]$, we observe the estimate $$\label{gf1nl3balance2} \begin{aligned} &e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} (1+s)^{-1} \\ &\le C \Big[ e^{-\frac{(x - a_j^- t)^2}{Lt}} (1+s)^{-1} + e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} (1 + |x - a_j^- t|)^{-1} \Big]. \end{aligned}$$ For the first estimate in (\[gf1nl3balance2\]), we proceed similarly as in (\[gf1nl3timedecay\]), while for the second we have an estimate by $$C_1 t^{-1} (1+|x-a_j^- t|)^{-1} \int_0^{t/2} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} ds \le C (1 + t)^{-1/2} (1 + |x - a_j^- t|)^{-1},$$ which is sufficient for $t \ge |x|/|a_j^-|$. For $s \in [t/2, t]$, we have $$(t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} \le C |x|^{-1/2} e^{-\frac{(x - a_j^- (t-s))^2}{2\bar{M}(t-s)}},$$ from which we obtain an estimate on (\[gf1nl3balance1first\]) by $$C_2 |x|^{-1/2} (1+t)^{-1} \int_{t/2}^t (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s))^2}{2\bar{M}(t-s)}} ds \le C |x|^{-1} (1+t)^{-1},$$ for which we have oberved the bound $$\int_{t/2}^t (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s))^2}{2\bar{M}(t-s)}} ds \le C.$$ The second estimate in (\[gf1nl3balance1\]) can be analyzed similarly. [*(\[Ginteract\](i)), nonlinearity $v(y, s) \varphi (y, s)$.*]{} We next consider integration against the critical nonlinearity $v(y, s) \varphi (y, s)$, which constituted the limiting estimate of the analysis in [@HR]. Here, we refine the analysis of [@HR] both through refined estimates on $v(y, s)$ and through the application of the approach of Liu [@Liu85; @Liu97] described in Section \[liu\], which takes advantage of improved decay for derivatives along the characteristic direction. It is precisely for this analysis that we must keep track of estimates on the characteristic derivatives $(\partial_t + a_j^- \partial_x) v$. We proceed by dividing the integration over $s$ as, $$\label{threeintegrals1} \begin{aligned} \int_0^t &\int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \Big(v(y, s) \varphi (y, s) \Big)_y dy ds \\ &= \int_0^{\sqrt{t}} \int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \Big(v(y, s) \varphi (y, s) \Big)_y dy ds \\ &+ \int_{\sqrt{t}}^{t - \sqrt{t}} \int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \Big(v(y, s) \varphi (y, s) \Big)_y dy ds \\ &+ \int_{t - \sqrt{t}}^t \int_{-\infty}^{+\infty} \tilde{G} (x, t-s; y) \Big(v(y, s) \varphi (y, s) \Big)_y dy ds. 
\end{aligned}$$ For the first integral on the right-hand side of (\[threeintegrals1\]), we integrate by parts in $y$ and use the supremum estimate $\|v (y, s)\|_{L^\infty} \le C (1+s)^{-3/4}$ (valid for either of our estimates on $v(y,s)$) to obtain integrals of the form $$\int_0^{\sqrt{t}} \int_{-\infty}^0 (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + s)^{-5/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds,$$ for which we observe the equality $$\label{completedsquare} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(y - a_k^- s)^2}{Ms}} = e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} e^{-\frac{t}{Ms(t-s)}(y - \frac{xs - (a_j^- - a_k^-)s(t-s)}{t})^2}$$ (see Lemma 6 of [@HZ]). Integrating over $y$, we immediately obtain an estimate by $$C t^{-1/2} \int_0^{\sqrt{t}} (t-s)^{-1/2} (1+s)^{-3/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds.$$ For $|x - a_j^- t| \le K \sqrt{t}$, any fixed $K$, we have kernel decay $\exp(-(x-a_j^- t)^2/(Lt))$ by boundedness, while for $|x - a_j^- t| \ge K \sqrt{t}$, with $K$ sufficiently large, we have $$\label{kerneljuggle1} |x - a_j^- (t-s) - a_k^- s| = |(x - a_j^- t) + (a_j^- - a_k^-) s| \ge (1 - \frac{a_j^- - a_k^-}{K}) |x - a_j^- t|,$$ and we again have kernel decay $\exp(-(x-a_j^- t)^2/(Lt))$. In either case, we obtain a final estimate by $$C (1 + t)^{-3/4} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ which is sufficient by (\[kerneltopsi1\]). Similarly, for the third integral in (\[threeintegrals1\]), we obtain, upon integration in $y$ precisely as above, an estimate by $$C t^{-1/2} \int_{t-\sqrt{t}}^t (t-s)^{-1/2} (1+s)^{-3/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds.$$ Proceeding similarly as above, we obtain an estimate in this case of the form $$(1+t)^{-3/4} e^{-\frac{(x - a_k^- t)^2}{Lt}}.$$ For the second integral in (\[threeintegrals1\]), we first consider the case $j = k$, for which, proceeding similarly as above, we have integrals of the form $$C t^{-1/2} \int_{\sqrt{t}}^{t - \sqrt{t}} (t-s)^{-1/2} (1+s)^{-3/4} e^{-\frac{(x - a_j^- t)^2}{Mt}} ds,$$ which has been shown sufficient above. In the case $j \ne k$, the claimed estimate does not follow from such a direct method, and we employ a Liu-type cancellation estimate, as described in Section \[liu\], based on integration by parts in the characteristic direction. In order to clarify our analysis, we define the non-convecting variables $$\label{nonconvectingvariables} \begin{aligned} g (x, t; y) &= \tilde{G}^j (x, t; y-a_j^- t) \\ \phi (y, s) &= \varphi_k^- (y + a_k^- s, s) \\ V(y, s) &= v(y + a_k^- s, s), \end{aligned}$$ where $$\label{convectionkernel} \tilde{G}^j (x, t; y) = ct^{-1/2} e^{-\frac{(x - y - a_j^- t)^2}{4\beta_j^- t}}, \quad c = r_j^- (l_j^-)^\text{tr} / \sqrt{4\pi\beta_j^-},$$ and $\varphi_k^-$ is as in (\[diffusionwaves\]). (We will consider corrections to $\tilde{G}^j$ at the end of the analysis.) 
In this notation, the second integral in (\[threeintegrals1\]) becomes $$\label{thirdintegral1} \int_{\sqrt{t}}^{t - \sqrt{t}} \int_{-\infty}^{+\infty} g(x, t-s; y + a_j^- (t-s)) \Big(V(y - a_k^- s, s) \phi(y - a_k^- s, s) \Big)_y dy ds.$$ Setting $\xi = y + a_j^- (t-s)$, (\[thirdintegral1\]) becomes $$\label{thirdintegral1shifted} \int_{\sqrt{t}}^{t - \sqrt{t}} \int_{-\infty}^{+\infty} g(x, t-s; \xi) \Big(V(\xi - a_j^- (t-s) - a_k^- s, s) \phi(\xi - a_j^- (t-s) - a_k^- s, s) \Big)_\xi d\xi ds.$$ Denoting by $\partial_\tau$ differentiation with respect to the second dependent variable of each function (effectively, a characteristic derivative $\partial_t + a_k^- \partial_x$ on our original variables), we observe the differential relationship $$\label{partsins} \begin{aligned} &\Big(g(x,t-s;\xi) V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_s \\ &= - g_\tau (x,t-s;\xi) V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) \\ &+ (a_j^- - a_k^-) g(x,t-s;\xi) \Big(V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s)\Big)_\xi \\ &+ g(x,t-s;\xi) \Big(V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_\tau. \end{aligned}$$ Recalling that the case $j = k$ has already been considered, we can rearrange (\[partsins\]) so that the integrand of (\[thirdintegral1shifted\]) can be written as $$\label{partsinsrearranged} \begin{aligned} g(x,t-s;\xi) &\Big(V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s)\Big)_\xi \\ &= (a_j^- - a_k^-)^{-1} \Big(g(x,t-s;\xi) V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_s \\ &+(a_j^- - a_k^-)^{-1} g_\tau (x,t-s;\xi) V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) \\ &-(a_j^- - a_k^-)^{-1} g(x,t-s;\xi) \Big(V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_\tau. \end{aligned}$$ For the integration over the first expression on the right-hand side of (\[partsinsrearranged\]), we change the order of integration, and evaluate integration over $s$ to obtain $$\label{partsinsrearrangedfirst} \begin{aligned} &\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} \Big(g(x,t-s;\xi) V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_s d\xi ds \\ &= \int_{-\infty}^{+\infty} g(x,\sqrt{t};\xi) V (\xi - a_j^- \sqrt{t} - a_k^- (t - \sqrt{t}), t-\sqrt{t}) \phi (\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}), t-\sqrt{t}) d\xi \\ &- \int_{-\infty}^{+\infty} g(x,t-\sqrt{t};\xi) V (\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t}, \sqrt{t}) \phi (\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t}, \sqrt{t}) d\xi. \end{aligned}$$ In each integral on the right hand side of (\[partsinsrearrangedfirst\]), we employ the estimate $$\label{vinfty} \|V(\cdot, t)\|_{L^\infty} \le C (1 + t)^{-3/4},$$ and proceed similarly as in (\[completedsquare\]). For the first, we obtain an estimate by $$\begin{aligned} \int_{-\infty}^{+\infty} &(\sqrt{t})^{-1/2} e^{-\frac{(x - \xi)^2}{M \sqrt{t}}} (1 + (t-\sqrt{t}))^{-5/4} e^{-\frac{(\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t})^2}{M(t-\sqrt{t})}} d\xi \\ &\le C t^{-1/2} (1 + t)^{-3/4} e^{-\frac{(x - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t})^2}{Mt}}, \end{aligned}$$ which gives an estimate by $$C t^{-1/2} (1 + t)^{-3/4} e^{-\frac{(x - a_k^- t)^2}{Mt}}$$ (see (\[kerneljuggle1\])). 
For the second integral on the right hand side of (\[partsinsrearrangedfirst\]), we obtain an estimate by $$\begin{aligned} \int_{-\infty}^{+\infty} &(t-\sqrt{t})^{-1/2} e^{-\frac{(x - \xi)^2}{M (t-\sqrt{t})}} (1 + \sqrt{t})^{-5/4} e^{-\frac{(\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{M\sqrt{t}}} d\xi \\ &\le C t^{-1/2} (1 + \sqrt{t})^{-3/4} e^{-\frac{(x - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{M\sqrt{t}}}, \end{aligned}$$ which gives an estimate by $$C (1 + t)^{-7/8} e^{-\frac{(x - a_j^- t)^2}{Mt}}.$$ For integration over the second integrand on the right hand side of \[partsinsrearranged\], we have integrals $$\label{partsinsrearrangedsecond} \begin{aligned} \Big| \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} g_\tau (x,t-s;\xi) V (\xi - a_j^- (t-s) - a_k^- s, s) \phi (\xi - a_j^- (t-s) - a_k^- s, s) d\xi ds \Big| \\ &\le C_1 \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x-\xi)^2}{M(t-s)}} (1+s)^{-5/4} e^{-\frac{(\xi - a_j^- (t-s) -a_k^- s)^2}{Ms}} d\xi ds \\ &\le C_2 t^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (t-s)^{-1} (1+s)^{-3/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Ms}} ds, \end{aligned}$$ where for the first inequality in (\[partsinsrearrangedsecond\]), we have employed the estimate (\[vinfty\]), while for the second we have used a calculation similar to (\[completedsquare\]). In this last integral, we have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k < a_j^- < 0$, and $a_j^- < a_k^- < 0$, for which we focus on the second. (We recall that the case $a_k^- = a_j^-$ has already been considered above.) For $|x| \ge |a_k^-| t$, there is no cancellation between summands in (\[tminussdecomp\]), and we obtain kernel decay, $$C t^{-1/2} (1+t)^{-3/4} \ln (e+t) e^{-\frac{(x - a_k^- t)^2}{Lt}},$$ while for $|x| \le |a_j^-| t$, there is no cancellation between summands in (\[sdecomp\]), and we obtain kernel decay $$C t^{-1/2} (1+t)^{-3/4} \ln (e+t) e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ In either case, the seeming blow-up as $t \to 0$ can be eliminated by an alternative analysis in the case of $t$ bounded. For $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into cases, $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$. For $s \in [\sqrt{t}, t/2]$, we observe through (\[sdecomp\]) the inequality $$\label{gf1nl4balance1} \begin{aligned} &(1+s)^{-\gamma} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Ms}} \\ &\le C\Big[ (1+s)^{-\gamma} e^{-\frac{(x - a_j^- t)^2}{Lt}} + (1+|x - a_j^- t|)^{-\gamma} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Ms}} \Big], \end{aligned}$$ where $\gamma = 3/4$. For the first estimate in (\[gf1nl4balance1\]), we proceed as in the case $|x| \le |a_j^-| t$, while for the second we obtain an estimate by $$\begin{aligned} C_1 &t^{-3/2} (1+|x - a_j^- t|)^{-3/4} \int_{\sqrt{t}}^{t/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Ms}} ds \\ &\le C t^{-1/2} (1+t)^{-1/2} (1+|x - a_j^- t|)^{-3/4}, \end{aligned}$$ which is sufficient. For $s \in [t/2, t - \sqrt{t}]$, we observe through (\[tminussdecomp\]) the inequality (\[gf1nl1balance3\]) with $\gamma = 1$. For the second estimate in (\[gf1nl1balance3\]), we proceed as in the case $|x| \ge |a_k^-|t$, while for the first we have an estimate by $$\begin{aligned} C_2 &t^{-1/2} (1+t)^{-3/4} |x - a_k^- t|^{-1} \int_{t/2}^{t - \sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Ms}} ds \\ &\le C (1+t)^{-3/4} |x - a_k^- t|^{-1}, \end{aligned}$$ which is sufficient for $|x - a_k^- t|$ bounded away from 0. In the case of $|x - a_k^- t|$ bounded, we proceed alternatively. 
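We also record, for completeness, the elementary completion of the square underlying (\[completedsquare\]) and the Gaussian convolutions used above: writing $P:=x-a_j^-(t-s)$ and $Q:=a_k^- s$, one checks directly (by comparing coefficients of powers of $y$) that $$\frac{(P-y)^2}{t-s}+\frac{(y-Q)^2}{s} =\frac{(P-Q)^2}{t}+\frac{t}{s(t-s)}\Big(y-\frac{Ps+Q(t-s)}{t}\Big)^2,$$ and $Ps+Q(t-s)=xs-(a_j^--a_k^-)s(t-s)$, which is the center appearing in (\[completedsquare\]).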
The third term in (\[partsinsrearranged\]) can be regarded as the crucial piece, since it is here that we must keep track not only of estimates on $v(y,s)$, but also on estimates of characteristic derivatives on $v$. We begin by expanding the characteristic derivative, $$\label{partsinsrearrangedthird} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} g(x, t-s; \xi) \Big(V(\xi - a_j^- (t-s) - a_k^- s, s) \phi(\xi - a_j^- (t-s) - a_k^- s, s)\Big)_\tau d\xi ds \\ &= \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g(x, t-s; \xi) V(\xi - a_j^- (t-s) - a_k^- s, s) \phi_\tau (\xi - a_j^- (t-s) - a_k^- s, s) d\xi ds \\ &+ \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g(x, t-s; \xi) V_\tau (\xi - a_j^- (t-s) - a_k^- s, s) \phi(\xi - a_j^- (t-s) - a_k^- s, s) d\xi ds. \end{aligned}$$ For the first integral on the right-hand side of (\[partsinsrearrangedthird\]), we employ the estimate (\[vinfty\]) to obtain an estimate by $$\begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-1/2} e^{-\frac{(x-\xi)^2}{M(t-s)}} (1+s)^{-3/4} (1+s)^{-3/2} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (1 + s)^{-7/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ which can be analyzed similarly as was (\[partsinsrearrangedsecond\]). For the second integral on the right-hand side of (\[partsinsrearrangedthird\]), we note the following relation, also useful in calculations below, $$\label{charestonv} \begin{aligned} &|V_\tau (\xi - a_j^- (t-s) - a_k^- s, s)| = |(\partial_t + a_k^- \partial_x) v| (\xi - a_j^- (t-s), s) \\ &\le C s^{-1} s^{1/4} (1 + |\xi - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-3/2} \\ &+ C s^{-1/2} \sum_{a_l^\pm \gtrless 0, l\ne k} (1 + |\xi - a_j^- (t-s) - a_l^- s| + s^{1/2})^{-3/2} \\ &+C s^{-1/2} (1 + |\xi - a_j^- (t-s)|)^{-1/2} (1 + |\xi - a_j^- (t-s)| + s)^{-1/2} \\ &\times (1 + |\xi - a_j^- (t-s)| + s^{1/2})^{-1/2} I_{\{a_1^- s \le (\xi - a_j^- (t-s)) \le a_n^+ s \}} \\ &+ (1 + |\xi - a_j^- (t-s)|)^{-1} (1 + s)^{-1} + (1 + |\xi - a_j^- (t-s)| + s)^{-7/4}. \end{aligned}$$ For the first estimate in (\[charestonv\]), we use a supremum norm to obtain integrals $$\begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-1/2} e^{-\frac{(x-\xi)^2}{M(t-s)}} s^{-1} (1+s)^{-1} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} s^{-1/2} (1 + s)^{-1} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Ms}} ds, \end{aligned}$$ which can be analyzed similarly as was (\[partsinsrearrangedsecond\]). For the remaining estimates in (\[charestonv\]), the critical observation is that when integrated against the convecting diffusion kernel, $$e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}},$$ they give increased decay in $s$, from which the claimed estimates can readily be observed. This completes the proof of (\[Ginteract\])(i). [**Proof of (\[Ginteract\](ii)).**]{} Integration against the nonlinearity $\Phi(y,s)$ has been considered in [@HR], and we need only refine one estimate from that calculation in order to conclude our claim. The critical calculation regards integrals $$t^{-1/2} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1} (1+s)^{-1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds,$$ which was estimated in [@HR] by the expression, $$(1 + t)^{-3/4} (1 + |x - a_k^- t|)^{-1/2}.$$ Here, we observe that alternatively, we may proceed by observing through (\[tminussdecomp\]) the inequality (\[gf1nl1balance3\]) with $\gamma = 1$. 
For the first estimate in (\[gf1nl1balance3\]), we have an estimate by $$\begin{aligned} C_2 &t^{-1/2} (1+t)^{-1/2} |x - a_k^- t|^{-1} \int_{t/2}^{t-\sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C (1+t)^{-1/2} |x - a_k^- t|^{-1}, \end{aligned}$$ while for the second we have an estimate by $$\begin{aligned} C_2 &t^{-1/2} (1+t)^{-1} e^{-\frac{(x-a_k^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1} ds \\ &\le C (1 + t)^{-3/2} \ln (e+t) e^{-\frac{(x-a_k^- t)^2}{Lt}}, \end{aligned}$$ either of which is sufficient. This completes the proof of (\[Ginteract\](ii)). [**Proof of (\[Ginteract\](iii)–(iv)), x-derivatives.**]{} As will be clear from our analysis of characteristic derivatives just below, analysis of $x$-derivatives is almost precisely the same as the case of characteristic derivatives for which the direction of differentiation matches neither the convection rate of the kernel or the convection rate of the nonlinearity. Since we consider the case of characteristic derivatives in great detail, we will omit the case of $x$-derivatives. [**Proof of (\[Ginteract\](v)–(vi)), characteristic derivatives.**]{} We next develop estimates on characteristic derivatives $(\partial_t + a_l^\pm \partial_x)$, $a_l^\pm \gtrless 0$, on the nonlinear interaction integrals. These estimates are the primary new contribution of the current analysis. Observing that $\tilde{G} (x,0;y) = \delta_y (x)I$ by construction, we have, for $a_l^\pm \gtrless 0$, $$\label{charderstart} \begin{aligned} (\partial_t + a_l^\pm \partial_x) \int_0^t &\int_{-\infty}^{+\infty} \tilde{G} (x,t-s;y) \Big[\Phi (y,s) + \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta}\delta)_y (y,s) \Big] dy ds \\ &= \Phi (x,t) + \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta}\delta)_x (x,t) \\ &+ \int_0^t \int_{-\infty}^{+\infty} (\partial_t + a_l^\pm \partial_x) \tilde{G} (x,t-s;y) \Big[\Phi (y,s) + \mathcal{F} (\varphi, v, \frac{\partial \bar{u}^\delta}{\partial \delta}\delta)_y (y,s) \Big] dy ds. \end{aligned}$$ For $\zeta(t) > 0$ so that $$\begin{aligned} |v(x,t)| &\le \zeta (t) (\psi_1 (x, t) + \psi_2 (x, t)) \\ |\partial_x v (x,t)| &\le \zeta(t) \Big[ t^{-1/2} (\psi_1 (x, t) + \psi_2 (x, t)) + \psi_3 (x,t) + \psi_4 (x,t) \Big], \end{aligned}$$ the first two terms on the right-hand side of (\[charderstart\]) can be seen to be bounded by $$\begin{aligned} C &E_0 (1+t)^{-3/4} \sum_{a_j^\pm \gtrless 0} (1 + |x - a_j^\pm t| + t^{1/2})^{-3/2} \\ &+\zeta (t) \Big[t^{-1} (1+t)^{1/4} \psi_1^{j, \pm} (x, t) + t^{-1/2} ( \bar{\psi}_1^{j, \pm} + \psi_2 (x,t) ) + \psi_3 (x, t) + \psi_4 (x, t) \Big]. \end{aligned}$$ [**Proof of (\[Ginteract\](vi)), nonlinearity $\Phi (y, s)$.**]{} The basic elements of our proof are more easily seen in the case of estimate (\[Ginteract\](vi))—involving the diffusion wave nonlinearity $\Phi (y, s)$—and so our approach will be to establish (\[Ginteract\](vi)) first and return to (\[Ginteract\](v)). 
Proceeding from the right-hand side of (\[charderstart\]), with nonlinearity $\Phi (y, s)$, we have integrals of the form $$\label{charderdiffusionwaves} \int_0^t \int_{-\infty}^{+\infty} (\partial_t + a_l^\pm \partial_x) \tilde{G} (x, t-s; y) \Phi (y, s) dy ds.$$ In particular, we focus on the nonlinearity $({\varphi_k^-}^2)_y$, with $\varphi_k^- (y, s)$ as defined in (\[diffusionwaves\]), and the leading order Green’s kernel, $$\tilde{G}^j (x, t; y) = ct^{-1/2} e^{-\frac{(x - y - a_j^- t)^2}{4 \beta_j t}},$$ with $c = r_j^- (l_j^-)^\text{tr}/\sqrt{4\pi\beta_j^-}$ (we will discuss corrections to $\tilde{G}^j (x, t; y)$ at the end of the analysis). We divide the analysis into three parts, $$\label{threeparts} \begin{aligned} \int_0^t &\int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) ({\varphi_k^- (y, s)}^2)_y dy ds \\ &= \int_0^{\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) ({\varphi_k^- (y, s)}^2)_y dy ds \\ &+ \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) ({\varphi_k^- (y, s)}^2)_y dy ds \\ &+ \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) ({\varphi_k^- (y, s)}^2)_y dy ds. \end{aligned}$$ According to the estimates of Proposition \[greenbounds\], the case $j=k$ does not occur (the diffusion waves have been chosen to eliminate precisely this case), leaving three cases to consider, $l = j \ne k$, $l = k \ne j$, and $l \ne j, l \ne k, j\ne k$. We remark that it is in this third case that the characteristic derivative acts similarly as a derivative with respect to $x$ only. [*Case 1: $l = j \ne k$.*]{} For the case $l = j \ne k$, the characteristic derivative of $\tilde{G}^j$ behaves like a time derivative of the heat kernel, and we have $$|(\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y)| \le C (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M (t-s)}}.$$ For the first estimate in (\[threeparts\]), upon integration by parts in $y$, we have integrals of the form $$\int_0^{\sqrt{t}} \int_{-\infty}^{+\infty} (t-s)^{-2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M (t-s)}} (1+s)^{-1} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds,$$ for which we observe the equality (\[completedsquare\]). Integration over $y$ leads immediately to an estimate by $$C t^{-1/2} \int_0^{\sqrt{t}} (t-s)^{-3/2} (1+s)^{-1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds.$$ In the event that $|x - a_j^- t| \le K \sqrt{t}$, we immediately have decay with scaling $\exp(-(x-a_j^-t)^2/(Lt))$, while for $|x - a_j^- t| \ge K \sqrt{t}$ we have $$\label{largechar} |x - a_j^- (t-s) - a_k^- s| = |(x - a_j^- t) - (a_k^- - a_j^-) s| \ge (1 - \frac{|a_k^- - a_j^-|}{K}) |x - a_j^- t|,$$ for which, with $K$ taken sufficiently large, we again have decay with scaling $\exp(-(x-a_j^-t)^2/(Lt))$. We have, then, an estimate on this term of the form $$C_1 t^{-2} (1 + t^{1/2})^{-1/2} \int_0^{\sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C t^{-3/2} (1 + t^{1/2})^{-1/2} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ where in obtaining this last inequality we have reserved a small part of the kernel $\exp(-(x - a_j^- (t-s) - a_k^- s)^2/(Mt))$ for integration. Finally, the blow-up as $t \to 0$ can be reduced by putting derivatives on the nonlinearity for small time. For the third integral in (\[threeparts\]), we integrate the characteristic derivative by parts to avoid blow-up near $s = t$. 
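The bound just used for the characteristic derivative of the leading order kernel can be checked directly (a sketch, carried out for the model kernel displayed above, with polynomial factors absorbed into a slightly larger exponential rate; the systematic kernel bounds are those of Proposition \[greenbounds\]). Writing $z = x - y - a_j^- t$, a direct computation gives $$\partial_x \tilde{G}^j (x, t; y) = -\frac{z}{2 \beta_j t}\, \tilde{G}^j (x, t; y), \qquad (\partial_t + a_j^- \partial_x) \tilde{G}^j (x, t; y) = \Big( \frac{z^2}{4 \beta_j t^2} - \frac{1}{2t} \Big)\, \tilde{G}^j (x, t; y),$$ so that for any $M > 4 \beta_j$ there holds $$|(\partial_t + a_j^- \partial_x) \tilde{G}^j (x, t; y)| \le C t^{-3/2} e^{-\frac{(x - y - a_j^- t)^2}{Mt}}, \qquad |(\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t; y)| \le C (t^{-1} + t^{-3/2}) e^{-\frac{(x - y - a_j^- t)^2}{Mt}} \quad (l \ne j),$$ the second estimate following from $(\partial_t + a_l^- \partial_x) \tilde{G}^j = (\partial_t + a_j^- \partial_x) \tilde{G}^j + (a_l^- - a_j^-)\, \partial_x \tilde{G}^j$. This is the sense in which, for $l \ne j$, the characteristic derivative acts on the kernel like a derivative with respect to $x$, of size ${\mathbf O}(t^{-1})$ times a Gaussian on the time scales $t \gtrsim 1$ relevant below; moreover, since $\tilde{G}^j$ depends only on $x - y$ and $t - s$, derivatives in $(x, t)$ may be traded for derivatives in $(y, s)$. 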
Observing the relation $$(\partial_t + a_j^- \partial_x) \tilde{G}^j (x, t-s; y) = - (\partial_s + a_j^- \partial_y) \tilde{G}^j (x, t-s; y),$$ we have $$\label{charparts} \begin{aligned} \int_{t-\sqrt{t}}^t &\int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t- s; y) ({\varphi_k^- (y, s)}^2)_y dy ds \\ &= \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} (- \partial_s - a_l^- \partial_y) \tilde{G}^j (x, t- s; y) ({\varphi_k^- (y, s)}^2)_y dy ds \\ &= \int_{-\infty}^{+\infty} \tilde{G}^j (x, \sqrt{t}; y) ({\varphi_k^- (y, t - \sqrt{t})}^2)_y dy \\ &- \int_{-\infty}^{+\infty} \tilde{G}^j (x, 0; y) ({\varphi_k^- (y, t)}^2)_y dy \\ &+ \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} \tilde{G}^j (x, t- s; y) (\partial_s + a_l^- \partial_y) ({\varphi_k^- (y, s)}^2)_y dy ds. \end{aligned}$$ For the first integral in (\[charparts\]), we have $$\begin{aligned} \int_{-\infty}^{+\infty} &(\sqrt{t})^{-1/2} e^{-\frac{(x - y - a_j^- \sqrt{t})^2}{M \sqrt{t}}} (1+(t-\sqrt{t}))^{-3/2} e^{-\frac{(y - a_k^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}} dy \\ &\le C t^{-1/2} (1+ (t - \sqrt{t}))^{-1} e^{-\frac{(x - a_j^- \sqrt{t} - a_k^- (t - \sqrt{t}))^2}{Mt}}, \end{aligned}$$ which gives an estimate by $$t^{-1/2} (1 + t)^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}},$$ sufficient by an argument similar to (\[largechar\]). For the second integral in (\[charparts\]), $\tilde{G}^j (x, 0; y)$ is a delta function, over which integration yields an estimate by $$C (1+t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Mt}}.$$ For the third integral in (\[charparts\]), we have, upon noting that the characteristic derivative is not along the direction of propagation of $\varphi_k^-$, integrals of the form $$\begin{aligned} \int_{t - \sqrt{t}}^t &\int_{-\infty}^{+\infty} (t - s)^{-1/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M (t-s)}} (1+s)^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_{t - \sqrt{t}}^t (1+s)^{-3/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds. \end{aligned}$$ Proceeding similarly as in (\[largechar\]) and the surrounding estimates, we obtain an estimate by $$C (1+t)^{-3/2} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ which has been shown sufficient in (\[kerneltopsi1\]). For the second integral in (\[threeparts\]), we must proceed by taking advantage of increased decay for derivatives along characteristic directions. 
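To keep the notation self-contained at this distance from its introduction, we recall (schematically, and consistently with the usage in (\[thirdintegral1\]) above; the precise definitions are those of (\[nonconvectingvariables\])) that the non-convecting variables amount to $$g(x, \tau; \eta) := \tilde{G}^j (x, \tau; \eta - a_j^- \tau), \qquad \phi (z, s) := \varphi_k^- (z + a_k^- s, s), \qquad V(z, s) := v(z + a_k^- s, s),$$ so that $g(x, t-s; y + a_j^- (t-s)) = \tilde{G}^j (x, t-s; y)$, $\phi (y - a_k^- s, s) = \varphi_k^- (y, s)$, and $V(y - a_k^- s, s) = v(y, s)$; for the model kernel above, $g(x, \tau; \eta) = c \tau^{-1/2} e^{-\frac{(x - \eta)^2}{4 \beta_j \tau}}$, and $\partial_\tau$ denotes differentiation in the second argument. 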
Recalling our definitions (\[nonconvectingvariables\]), we can write this integral in the form $$\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g_\tau (x, t-s; y+a_j^- (t-s)) (\phi (y - a_k^- s, s)^2)_y dy ds,$$ which upon the substitution $\xi = y + a_j^- (t-s)$ becomes $$\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g_\tau (x, t-s; \xi) (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2)_\xi d\xi ds.$$ Proceeding similarly as in (\[partsins\]), we write $$\label{charderpartsins} \begin{aligned} &\Big(g_\tau (x,t-s;\xi) \phi (\xi - a_j^- (t-s) - a_k^- s, s)^2 \Big)_s \\ &= - g_{\tau \tau} (x,t-s;\xi) \phi (\xi - a_j^- (t-s) - a_k^- s, s)^2 \\ &+ (a_j^- - a_k^-) g_\tau (x,t-s;\xi) (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2)_\xi \\ &+ g_\tau (x,t-s;\xi) \Big(\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2 \Big)_\tau, \end{aligned}$$ which can be rearranged as $$\label{charderpartsinsrearranged} \begin{aligned} &g_\tau (x,t-s;\xi) (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2)_\xi \\ &=(a_j^- - a_k^-)^{-1} \Big(g_\tau (x,t-s;\xi) \phi (\xi - a_j^- (t-s) - a_k^- s, s)^2 \Big)_s \\ &+(a_j^- - a_k^-)^{-1} g_{\tau \tau} (x,t-s;\xi) \phi (\xi - a_j^- (t-s) - a_k^- s, s)^2 \\ &-(a_j^- - a_k^-)^{-1} g_\tau (x,t-s;\xi) \Big(\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2 \Big)_\tau, \end{aligned}$$ where we recall again that the case $j = k$ does not occur here. For integration over the first summand on the right-hand side of (\[charderpartsinsrearranged\]), we exchange order of integration to obtain $$\label{charderpartsinsrearrangedfirst} \begin{aligned} &\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} \Big(g_\tau (x,t-s;\xi) \phi (\xi - a_j^- (t-s) - a_k^- s, s)^2 \Big)_s d\xi ds \\ &= \int_{-\infty}^{+\infty} g_\tau (x,\sqrt{t};\xi) \phi (\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}), t-\sqrt{t})^2 d\xi \\ &- \int_{-\infty}^{+\infty} g_\tau (x,t-\sqrt{t};\xi) \phi (\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t}, \sqrt{t})^2 d\xi. \end{aligned}$$ For the first integral in (\[charderpartsinsrearrangedfirst\]), proceeding similarly as in (\[completedsquare\]), we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(\sqrt{t})^{-3/2} e^{-\frac{(x-\xi)^2}{M\sqrt{t}}} (1 + (t-\sqrt{t}))^{-1} e^{-\frac{(\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}} d\xi \\ &\le C t^{-1/2} (\sqrt{t})^{-1} (1 + (t-\sqrt{t}))^{-1/2} e^{-\frac{(x - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}))^2}{Mt}}, \end{aligned}$$ for which we conclude in a manner similar to (\[kerneljuggle1\]) an estimate by $$C t^{-1} (1+t)^{-1/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}.$$ Proceeding similarly for the second integral in (\[charderpartsinsrearrangedfirst\]), we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(t - \sqrt{t})^{-3/2} e^{-\frac{(x-\xi)^2}{M(t-\sqrt{t})}} (1 + \sqrt{t})^{-1} e^{-\frac{(\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{M\sqrt{t}}} d\xi \\ &\le C t^{-1/2} (t - \sqrt{t})^{-1} (1 + \sqrt{t})^{-1/2} e^{-\frac{(x - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{Mt}}, \end{aligned}$$ for which we conclude an estimate by $$C t^{-3/2} (1+t)^{-1/4} e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ For integration over the second summand on the right-hand side of (\[charderpartsinsrearranged\]), we estimate $$\begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-5/2} e^{-\frac{(x-\xi)^2}{M(t-s)}} (1 + s)^{-1} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le Ct^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (t-s)^{-2} (1+s)^{-1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds. 
\end{aligned}$$ For this last integral we have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. In the case $|x| \ge |a_k^-| t$, there is no cancellation between summands in (\[tminussdecomp\]), and we have an estimate by $$\label{chardergf1nl1nocanc1} \begin{aligned} C_1 &t^{-5/2} (1+\sqrt{t})^{-1/2} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &+ C_2 t^{-1/2} (\sqrt{t})^{-2} (1+t)^{-1/2} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C t^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}, \end{aligned}$$ while similarly in the case $|x| \le |a_j^-| t$, there is no cancellation between summands in (\[sdecomp\]), and we obtain an estimate by $$Ct^{-3/2} e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ For the case $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into cases $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$ (we take $t$ large enough so that $\sqrt{t} < t/2$, proceeding alternatively for $t$ bounded, in which we need not establish $t$ decay). For $s \in [\sqrt{t}, t/2]$, we observe through (\[sdecomp\]) the inequality (\[gf1nl4balance1\]) with $\gamma = 1/2$. For the first estimate in (\[gf1nl4balance1\]), we proceed as in (\[chardergf1nl1nocanc1\]), while for the second we have an estimate by $$C_1 t^{-5/2} (1 + |x-a_j^- t|)^{-1/2} \int_{\sqrt{t}}^{t/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C t^{-3/2} (1+t)^{-1/2} (1 + |x - a_j^- t|)^{-1/2}.$$ For $s \in [t/2, t-\sqrt{t}]$, we observe through (\[tminussdecomp\]) the inequality (\[gf1nl1balance3\]) with $\gamma = 3/2$. For the second estimate in (\[gf1nl1balance3\]), we proceed as in (\[chardergf1nl1nocanc1\]), while for the first we obtain an estimate by $$\begin{aligned} C_2 t^{-1/2} &(1+t)^{-1/2} |x - a_k^- t|^{-3/2} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C t^{-1/4} (1+t)^{-1/2} |x - a_k^- t|^{-3/2}, \end{aligned}$$ which is better than the required estimate along the $a_k^-$ directions (since $j \ne k$). For integration over the third summand on the right-hand side of (\[charderpartsinsrearranged\]), we estimate $$\begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x-\xi)^2}{M(t-s)}} (1 + s)^{-2} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le Ct^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (t-s)^{-1} (1+s)^{-3/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ for which we can proceed almost exactly as in our analysis of the second summand on the right-hand side of (\[charderpartsinsrearranged\]) (the full analysis is omitted). [*Case 2: $l = k \ne j$.*]{} For the case $l = k \ne j$, the characteristic derivative of $\varphi_k^-$ behaves like a time derivative of the heat kernel, and we have $$(\partial_t + a_k^- \partial_x) \varphi_k^- (y,s) = {\mathbf O}((1+s)^{-3/2}) e^{-\frac{(y - a_k^- s)^2}{Ms}}.$$ In general, our strategy for this case will be to integrate by parts, shifting the characteristic derivative onto $\varphi$. 
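The rate just cited can be checked on a model profile (purely for illustration; the actual diffusion waves are those defined in (\[diffusionwaves\])): if $\varphi_k^- (y, s)$ is replaced by the convected heat kernel $(1+s)^{-1/2} e^{-\frac{(y - a_k^- s)^2}{M(1+s)}}$, then with $w = y - a_k^- s$ the same computation as for $\tilde{G}^j$ gives $$(\partial_s + a_k^- \partial_y) \Big[ (1+s)^{-1/2} e^{-\frac{w^2}{M(1+s)}} \Big] = \Big( \frac{w^2}{M(1+s)^2} - \frac{1}{2(1+s)} \Big) (1+s)^{-1/2} e^{-\frac{w^2}{M(1+s)}} = {\mathbf O}\big((1+s)^{-3/2}\big)\, e^{-\frac{w^2}{\bar{M}(1+s)}}$$ for any $\bar{M} > M$, while a derivative transverse to the characteristic ($\partial_y$, or $(\partial_s + a_l^- \partial_y)$ with $l \ne k$) costs only a single additional factor $(1+s)^{-1/2}$. 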
For the first integral in (\[threeparts\]), after integration by parts in $y$, we estimate $$\begin{aligned} \int_0^{\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-1} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_0^{\sqrt{t}} (t-s)^{-1} (1+s)^{-1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ which is bounded by $$C t^{-1/2} (1+t)^{-3/4} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ sufficient for $j \ne l$ by the argument of (\[kerneltopsi1\]). For the third integral in (\[threeparts\]), we have $$\label{threepartsthree} \begin{aligned} \int_{t-\sqrt{t}}^t &\int_{-\infty}^{+\infty} (\partial_t + a_k^- \partial_x) \tilde{G}^j (x,t-s;y) \partial_y (\varphi_k^-(y,s)^2) dy ds \\ &= -\int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} (\partial_s + a_k^- \partial_y) \tilde{G}^j (x,t-s;y) \partial_y (\varphi_k^-(y,s)^2) dy ds \\ &= -\int_{-\infty}^{+\infty} \tilde{G}^j (x,0;y) \partial_y (\varphi_k^- (y,t))^2 dy \\ &+\int_{-\infty}^{+\infty} \tilde{G}^j (x,\sqrt{t};y) \partial_y (\varphi_k^- (y,t-\sqrt{t}))^2 dy \\ &-\int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} \partial_y \tilde{G}^j (x,t-s;y) (\partial_s + a_k^- \partial_y) ((\varphi_k^-)^2) dy ds. \end{aligned}$$ For the first integral in (\[threepartsthree\]), we have an estimate by $$|\varphi_k^- (x ,t) \partial_x \varphi_k^- (x, t)| \le C (1+t)^{-3/2} e^{-\frac{(x-a_k^- t)^2}{Lt}},$$ while for the second we estimate $$\label{refonce1} \begin{aligned} \int_{-\infty}^{+\infty} &(\sqrt{t})^{-1/2} e^{-\frac{(x - y - a_j^- \sqrt{t})^2}{M\sqrt{t}}} (1 + (t-\sqrt{t}))^{-3/2} e^{-\frac{(y - a_k^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}} dy \\ &\le C t^{-1/2} (1 + (t-\sqrt{t}))^{-1} e^{-\frac{(x - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}}, \end{aligned}$$ which gives an estimate by $$t^{-1/2} (1+t)^{-1} e^{-\frac{(x-a_k^- t)^2}{Lt}}.$$ For the third integral on the right hand side of (\[threepartsthree\]), we estimate $$\begin{aligned} \int_{t-\sqrt{t}}^t &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_{t-\sqrt{t}}^t (t-s)^{-1/2} (1+s)^{-3/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ which gives an estimate by $$t^{-1/4} (1+t)^{-3/2} e^{-\frac{(x-a_k^- t)^2}{Lt}}.$$ For the second integral in (\[threeparts\]), we first integrate by parts, moving the characteristic derivative onto $\varphi^k$, and then proceed by a Liu-type characteristic derivative estimate as described in Section \[liu\]. We have $$\label{maincharderest} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (\partial_t + a_k^- \partial_x) \tilde{G}^j (x,t-s;y) \partial_y (\varphi^k(y,s)^2) dy ds \\ &= -\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_s + a_k^- \partial_y) \tilde{G}^j (x,t-s;y) \partial_y ((\varphi^k)^2) dy ds \\ &= -\int_{-\infty}^{+\infty} \tilde{G}^j (x,\sqrt{t};y) \partial_y (\varphi^k(y,t-\sqrt{t}))^2 dy \\ &-\int_{-\infty}^{+\infty} \tilde{G}^j_y (x,t-\sqrt{t};y) \varphi^k(y,\sqrt{t})^2 dy \\ &+\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} \tilde{G}^j (x,t -s ;y) (\partial_s + a_k^- \partial_y) (\varphi^k(y,s)^2)_y dy ds. 
\end{aligned}$$ For the first integral on the right hand side of (\[maincharderest\]), we proceed exactly as in (\[refonce1\]) to obtain an estimate by $$t^{-1/2} (1+t)^{-1} e^{-\frac{(x-a_k^- t)^2}{Lt}},$$ while for the second we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(t-\sqrt{t})^{-1} e^{-\frac{(x - y - a_j^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}} (1 + \sqrt{t})^{-1} e^{-\frac{(y - a_k^- \sqrt{t})^2}{M\sqrt{t}}} dy \\ &\le C t^{-1/2} (t-\sqrt{t})^{-1/2} (1 + \sqrt{t})^{-1} (\sqrt{t})^{1/2} e^{-\frac{(x - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{Mt}}, \end{aligned}$$ which gives an estimate by $$t^{-3/4} (1+t)^{-1/2} e^{-\frac{(x-a_j^- t)^2}{Lt}}.$$ An argument similar to that of (\[kerneltopsi1\]) shows that this last estimate is sufficient for $j \ne l$, which is the current setting. For the third integral on the right hand side of (\[maincharderest\]), we proceed in terms of the non-convecting variables (\[nonconvectingvariables\]), for which the integral can be re-written as $$\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g (x, t-s; y + a_j^- (t-s)) (\partial_\tau (\phi (y-a_k^- s, s)^2))_y dy ds,$$ where we recall that $\partial_\tau$ represents differentiation with respect to the second argument of $\phi$. Setting $\xi = y + a_j^- (t-s)$, this last integral becomes $$\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g (x, t-s; \xi) \Big(\partial_\tau (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2) \Big)_\xi d\xi ds,$$ for which we write $$\label{partstrick2} \begin{aligned} &\Big(g (x, t-s; \xi) \partial_\tau (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2) \Big)_s \\ &= -g_\tau (x, t-s; \xi) \partial_\tau (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2) \\ &+ (a_j^- - a_k^-) g (x, t-s; \xi) (\partial_\tau (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2))_\xi \\ &+ g (x, t-s; \xi) \partial_{\tau \tau} (\phi (\xi - a_j^- (t-s) - a_k^- s, s)^2). \end{aligned}$$ For integration over the left hand side of (\[partstrick2\]), we exchange the order of integration to obtain $$\label{partstrick2left} \begin{aligned} \int_{-\infty}^{+\infty} &g (x, \sqrt{t}; \xi) \partial_\tau (\phi (\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}), t-\sqrt{t})^2) d\xi \\ &- \int_{-\infty}^{+\infty} g (x, t-\sqrt{t}; \xi) \partial_\tau (\phi (\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t}, \sqrt{t})^2) d\xi. \end{aligned}$$ For the first expression in (\[partstrick2left\]), we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(\sqrt{t})^{-1/2} e^{-\frac{(x-\xi)^2}{M\sqrt{t}}} (1 + (t-\sqrt{t}))^{-2} e^{-\frac{(\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}} d \xi \\ &\le C t^{-1/2} (1 + t)^{-3/2} e^{-\frac{(x - a_j^- \sqrt{t} - a_k^-(t- \sqrt{t}))^2}{Mt}}, \end{aligned}$$ which gives an estimate by $$C t^{-1/2} (1 + t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}.$$ For the second integral in (\[partstrick2left\]), we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(t-\sqrt{t})^{-1/2} e^{-\frac{(x-\xi)^2}{M(t-\sqrt{t})}} (1 + \sqrt{t})^{-2} e^{-\frac{(\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{M\sqrt{t}}} d \xi \\ &\le C t^{-1/2} (1 + \sqrt{t})^{-3/2} e^{-\frac{(x - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{Mt}}, \end{aligned}$$ which gives an estimate by $$C t^{-1/2} (1 + t)^{-3/4} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ sufficient for $l \ne j$. 
For integration over the first term on the right-hand side of (\[partstrick2\]), we estimate $$\label{partstrick2right1} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x-\xi)^2}{M(t-s)}} (1 + s)^{-2} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (t-s)^{-1} (1 + s)^{-3/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds. \end{aligned}$$ We have three cases to consider here, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. In the case $|x| \ge |a_k^-| t$, we observe that there is no cancellation between summands in (\[tminussdecomp\]), and consequently we obtain an estimate by $$\label{partstrick2right1nocanc} \begin{aligned} C_1 &t^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} (1+s)^{-3/2} ds \\ &+ C_2 t^{-1/2} (1+t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1} ds \\ &\le Ct^{-1/2} (1+t)^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}}. \end{aligned}$$ Similarly, for $|x| \le |a_j^-| t$, there is no cancellation between summands in (\[sdecomp\]), and the same calculation gives an estimate by $$Ct^{-1/2} (1+t)^{-1} e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ For $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into cases $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$. For $s \in [\sqrt{t}, t/2]$, we observe through (\[sdecomp\]) the inequality (\[gf1nl4balance1\]) with $\gamma = 3/2$. For the first estimate in (\[gf1nl4balance1\]), we proceed as in (\[partstrick2right1nocanc\]), while for the second we obtain an estimate by $$C_1 t^{-3/2} (1+|x - a_j^- t|)^{-3/2} \int_{\sqrt{t}}^{t/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C t^{-1/2} (1+t)^{-1/2} (1+|x - a_j^- t|)^{-3/2},$$ which is sufficient. For $s \in [t/2, t-\sqrt{t}]$, we observe through (\[tminussdecomp\]) the estimate (\[gf1nl1balance3\]) with $\gamma = 1$. For the second estimate in (\[gf1nl1balance3\]), we proceed as in (\[partstrick2right1nocanc\]), while for the first we have an estimate by $$C_2 t^{-1/2} |x - a_k^- t|^{-1} (1+t)^{-3/2} \int_{t/2}^{t-\sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C (1 + t)^{-3/2} |x - a_k^- t|^{-1},$$ which is sufficient since we are in the case $t \ge |x|/|a_k^-|$. For integration over the third term on the right-hand side of (\[partstrick2\]), we estimate $$\label{partstrick2right3} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-1/2} e^{-\frac{(x-\xi)^2}{M(t-s)}} (1 + s)^{-3} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (1 + s)^{-5/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ which can be analyzed similarly as in the immediately preceeding case. [*Case 3: $l \ne j$, $l \ne k$, $k\ne j$.*]{} For the case $l \ne j$, $l \ne k$, $k \ne j$, we proceed directly in the first and third integrals of (\[threeparts\]) and through the method of Liu described in Section \[liu\] and elsewhere for the second integral in (\[threeparts\]). In each case, we obtain precisely the reduced decay estimate for characteristic directions other than $a_l^-$. This concludes the proof of Estimate (\[Ginteract\](vi)). [**Proof of (\[Ginteract\](v)), nonlinearity $\mathcal{F}$.**]{} The analysis of characteristic derivatives in the case of nonlinearity $\mathcal{F}$ constitutes the single most involved section of the paper. [*(\[Ginteract\](v)), Nonlinearity*]{} $\varphi (y, s) v(y, s)$. 
The critical nonlinearity of $\mathcal{F}$ is $\varphi (y, s) v(y, s)$, for which we consider integrals $$\label{charderphiv} \int_0^t \int_{-\infty}^{+\infty} (\partial_t + a_l^\pm \partial_x) \tilde{G}^j (x, t-s; y) (\varphi_k^\pm (y, s) v(y,s))_y dy ds.$$ [*Case 1: $l = j$.*]{} For the case $l=j$, and for $s \in [0, t/2]$, we integrate by parts in $y$ and employ a supremum norm on $|v|$ to arrive at integrals $$\label{charderphiv1} \begin{aligned} \int_0^{t/2} &\int_{-\infty}^{+\infty} (t-s)^{-2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-5/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_0^{t/2} (t-s)^{-3/2} (1+s)^{-3/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds. \end{aligned}$$ We have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k^- \le a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. (We observe that for the nonlinearity $\varphi(y,s) v(y,s)$, the case $a_j^- = a_k^-$ arises.) For the case $a_k^- \le a_j^- < 0$, we first observe that in the case $|x| \ge |a_k^-| t$, there is no cancellation between summands in (\[tminussdecomp\]), and proceeding similarly as in (\[partstrick2right1nocanc\]) we determine an estimate by $$C t^{-2} (1+t)^{1/4} e^{-\frac{(x - a_k^- t)^2}{Lt}}.$$ In the case $|x| \le |a_j^-| t$, there is no cancellation between summands in (\[sdecomp\]) and we obtain an estimate by $$C t^{-2} (1+t)^{1/4} e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ We note in particular that the case $a_j^- = a_k^-$ has been accomodated in this analysis. For the critical case $|a_j^-| t \le |x| \le |a_k^-| t$ (now $j \ne k$), we observe through (\[sdecomp\]) the inequality (\[gf1nl4balance1\]) with $\gamma = 3/4$. For the first estimate in (\[gf1nl4balance1\]), we proceed similarly as in (\[partstrick2right1nocanc\]), while for the second, we have an estimate by $$C_1 t^{-2} (1 + |x - a_j^- t|)^{-3/4} \int_0^{t/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C t^{-1} (1+t)^{-1/2} (1 + |x - a_j^- t|)^{-3/4},$$ which is (precisely) sufficient for $t \ge |x|/|a_k^-|$. For $s \in [t/2, t-\sqrt{t}]$, we do not integrate by parts in $y$, and consequently obtain integrals $$\label{charderphiv2} \begin{aligned} \int_{t/2}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1/2} (1+s)^{-5/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1} (1+s)^{-5/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ the last of which can be analyzed similarly as was (\[charderphiv1\]). We note that the form of the nonlinearity in (\[charderphiv2\]) arises from a supremum norm on $v$ (in the case that $\varphi$ is differentiated) and from the observation that the estimates on $v_y$ that do not decay at rate $s^{-5/4}$ have spatial decay different from that of $\varphi$ so that when multiplied by $\varphi$ the combination decays at rate $s^{-5/4}$ or better. In the final case, $s \in [t-\sqrt{t}, t]$, the expression $(t-s)^{-3/2}$ is not integrable up to $s = t$, and we must proceed by integrating the characteristic derivative by parts. 
We have $$\label{charparts2} \begin{aligned} \int_{t-\sqrt{t}}^t &\int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t- s; y) (\varphi_k^- (y, s) v(y,s))_y dy ds \\ &= \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} (- \partial_s - a_l^- \partial_y) \tilde{G}^j (x, t- s; y) (\varphi_k^- (y, s) v(y,s))_y dy ds \\ &= \int_{-\infty}^{+\infty} \tilde{G}^j (x, 0; y) (\varphi_k^- (y, t) v(y,t))_y dy \\ &- \int_{-\infty}^{+\infty} \tilde{G}^j (x, \sqrt{t}; y) (\varphi_k^- (y, t-\sqrt{t}) v(y,t-\sqrt{t}))_y dy \\ &- \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t- s; y) (\partial_s + a_l^- \partial_y) (\varphi_k^- (y, s) v(y,s)) dy ds. \end{aligned}$$ For the first integral on the right hand side of (\[charparts2\]), we have the immediate estimates $$\begin{aligned} |\varphi^k_x (x, t) v (x, t)| &\le CE_0 \zeta(t) (1 + t)^{-7/4} e^{-\frac{(x - a_k^- t)^2}{Lt}} \\ |\varphi^k (x, t) v_x (x, t)| &\le CE_0 \zeta(t) t^{-1/2} (1 + t)^{-5/4} e^{-\frac{(x - a_k^- t)^2}{Lt}}, \end{aligned}$$ where the second of these follows from the observation above that the combination $\varphi^k (x, t) v_x (x, t)$ decays at a rate faster than a product of the supremum norms. For the second integral on the right hand side of (\[charparts2\]), we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(\sqrt{t})^{-1/2} e^{-\frac{(x - y - a_j^- \sqrt{t})^2}{M\sqrt{t}}} (t - \sqrt{t})^{-1/2} (1+(t-\sqrt{t}))^{-5/4} e^{-\frac{(y - a_k^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}} dy \\ &\le C t^{-1/2} (1+t)^{-5/4} e^{-\frac{(x - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}))^2}{Mt}}, \end{aligned}$$ which gives an estimate by $$C t^{-1/2} (1+t)^{-5/4} e^{-\frac{(x - a_k^- t)^2}{L t}}.$$ For the third integral on the right hand side of (\[charparts2\]), we have two nonlinearities to consider. We begin with integrals $$\label{charparts2three1} \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t-s; y) v(y, s) (\partial_s + a_j^- \partial_y) \varphi^k (y, s) dy ds,$$ for which we estimate $$\begin{aligned} \int_{t-\sqrt{t}}^t &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-7/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_{t-\sqrt{t}}^t (t-s)^{-1/2} (1+s)^{-7/4} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ which gives an estimate by $$C (1+t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}.$$ For the second nonlinearity, we have integrals $$\label{charparts2three2} \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t-s; y) \varphi^k (y, s) (\partial_s + a_j^- \partial_y) v(y, s) dy ds.$$ Observing again that the combination $\varphi^k (y, s) (\partial_s + a_j^- \partial_y) v(y, s)$ decays at rate $s^{-1} (1 + s)^{-3/4}$, we see that we can proceed as in the previous case. [*Case 2: $l = k \ne j$.*]{} For the case $l = k$, we divide the analysis as in (\[threeparts\]) into integrals $$\label{threeparts2} \begin{aligned} \int_0^t &\int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) (\varphi_k^- (y, s) v(y, s))_y dy ds \\ &= \int_0^{\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) (\varphi_k^- (y, s) v(y, s))_y dy ds \\ &+ \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) (\varphi_k^- (y, s) v(y,s))_y dy ds \\ &+ \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) (\varphi_k^- (y, s) v(y, s))_y dy ds. 
\end{aligned}$$ For the first integral in (\[threeparts2\]), upon integration by parts in $y$, we estimate $$\begin{aligned} \int_0^{\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-5/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_0^{\sqrt{t}} (t-s)^{-1} (1+s)^{-5/4} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ which gives an estimate by $$C t^{-1/2} (1+t)^{-3/4} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ sufficient for $l \ne j$. For the third integral in (\[threeparts2\]), we do not integrate by parts in $y$, and consequently we can estimate $$\begin{aligned} \int_{t-\sqrt{t}}^t &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1/2} (1+s)^{-5/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_{t-\sqrt{t}}^t (t-s)^{-1/2} (1+s)^{-5/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds, \end{aligned}$$ which gives an estimate by $$C t^{-1/2} (1+t)^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}},$$ which is sufficient. For the second integral in (\[threeparts2\]), we integrate by parts as in (\[charparts2\]), moving the characteristic derivative onto the nonlinearity, and, when appropriate, moving the $y$ derivative onto $\tilde{G}^j$. We have $$\label{charparts3} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (\partial_t + a_k^- \partial_x) \tilde{G}^j (x, t- s; y) (\varphi_k^- (y, s) v(y,s))_y dy ds \\ &= \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (- \partial_s - a_k^- \partial_y) \tilde{G}^j (x, t- s; y) (\varphi_k^- (y, s) v(y,s))_y dy ds \\ &= - \int_{-\infty}^{+\infty} \tilde{G}^j (x, \sqrt{t}; y) (\varphi_k^- (y, t-\sqrt{t}) v(y,t-\sqrt{t}))_y dy \\ &- \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t-\sqrt{t}; y) \varphi_k^- (y, \sqrt{t}) v(y,\sqrt{t}) dy \\ &- \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t- s; y) (\partial_s + a_k^- \partial_y) (\varphi_k^- (y, s) v(y,s)) dy ds. \end{aligned}$$ For the first integral on the right hand side of (\[charparts3\]), we proceed precisely as with the second term in (\[charparts2\]). For the second integral on the right hand side of (\[charparts3\]), we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(t-\sqrt{t})^{-1} e^{-\frac{(x - y - a_j^- (t-\sqrt{t}))^2}{M(t-\sqrt{t})}} (1 + \sqrt{t})^{-5/4} e^{-\frac{(y - a_k^- \sqrt{t})^2}{M\sqrt{t}}} dy \\ &\le C t^{-1} (1 + \sqrt{t})^{-3/4} e^{-\frac{(x - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{Mt}}, \end{aligned}$$ which gives an estimate by $$C t^{-1} (1 + t)^{-3/8} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ sufficient for $l \ne j$. For the third integral on the right hand side of (\[charparts3\]), we have two integrals to consider, $$\label{charparts3three} \begin{aligned} &\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t- s; y) v(y, s) (\partial_s + a_k^- \partial_y) \varphi_k^- (y, s) dy ds \\ &+ \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t- s; y) \varphi_k^- (y, s) (\partial_s + a_k^- \partial_y) v(y,s) dy ds. \end{aligned}$$ For the first integral in (\[charparts3three\]), we employ the supremum norm on $|v(y,s)|$ and estimate $$\label{charparts3three1} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-9/4} e^{-\frac{(y-a_k^- s)^2}{Ms}} dy ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (t-s)^{-1/2} (1+s)^{-7/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds. 
\end{aligned}$$ We have three cases to consider for this last integral, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second (we have already considered the case $j = k$). For $|x| \ge |a_k^-| t$, there is no cancellation between summands in (\[tminussdecomp\]), and we obtain an estimate by $$C (1 + t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}},$$ while for $|x| \le |a_j^- t|$, there is no cancellation between summands in (\[sdecomp\]) and we similarly obtain an estimate by $$C (1 + t)^{-11/8} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ sufficient for $l \ne j$. For the critical case $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into cases, $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$. For $s \in [\sqrt{t}, t/2]$ we observe through (\[sdecomp\]) the inequality (\[gf1nl4balance1\]) with $\gamma = 3/2$. For the first estimate in (\[gf1nl4balance1\]), we proceed as above to obtain an estimate by $$C (1 + t)^{-11/8} e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ while for the second we estimate $$\begin{aligned} C &t^{-1} (1+|x - a_j^- t|)^{-3/2} \int_{\sqrt{t}}^{t/2} (1+s)^{-1/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C (1 + t)^{-5/8} (1 + |x - a_j^- t|)^{-3/2}, \end{aligned}$$ sufficient for $l \ne j$. For $s \in [t/2, t - \sqrt{t}]$, we observe through (\[tminussdecomp\]) the inequality (\[gf1nl1balance3\]) with $\gamma = 1/2$. For the second estimate in (\[gf1nl1balance3\]), we immediately obtain an estimate by $$C (1 + t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}},$$ precisely as required, while for the first we estimate $$\begin{aligned} C &t^{-1/2} (1 + t)^{-7/4} |x - a_k^- t|^{-1/2} \int_{t/2}^{t-\sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C (1+t)^{-7/4} |x - a_k^- t|^{-1/2}, \end{aligned}$$ which is sufficient since $t \ge |x|/|a_k^-|$. For the second integral in (\[charparts3three\]), we have five estimates on $|(\partial_s + a_k^- \partial_y) v|$ to consider (see (\[charestonv\])), beginning with integrals $$\label{charparts3three2} \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1} (1+s)^{-1/4} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds.$$ Writing $$x - y - a_j^- (t-s) = (x - a_j^- (t-s) - a_k^- s) - (y - a_k^- s),$$ we observe the estimate $$\label{charparts3three2balance1} \begin{aligned} &e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} \\ &\le C \Big[ e^{-\epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} \\ &+ e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |x - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-3/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}s}} \Big]. \end{aligned}$$ For the first estimate in (\[charparts3three2balance1\]), we have integrals $$\label{charparts3three2balance1first} \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1} (1+s)^{-1/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds.$$ We have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. 
For $|x| \ge |a_k^-| t$, we observe that there is no cancellation between summands in (\[tminussdecomp\]), and consequently that we obtain an estimate by $$\begin{aligned} C_1& t^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} s^{-1/2} (1+s)^{-1} e^{-\frac{((a_j^- - a_k^-) (t-s))^2}{\bar{M}(t-s)}} ds \\ &+ C_2 t^{-1} (1+t)^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &\le C (1 + t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}. \end{aligned}$$ For $|x| \le |a_j^-| t$, we observe that there is no cancellation between summands in (\[sdecomp\]), and consequently we obtain an estimate by $$\begin{aligned} C_1& t^{-1} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} s^{-1/2} (1+s)^{-1} ds \\ &+ C_2 t^{-1} (1 + t)^{-1} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &\le C t^{-1/2} (1+t)^{-3/4} e^{-\frac{(x - a_j^- t)^2}{Lt}}, \end{aligned}$$ which is sufficient for $j \ne l$. For $|a_j^-| t \le |x| \le |a_k^-| t$, we observe the inequality $$\label{charparts3three2balance2} \begin{aligned} &e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} s^{-1} (1+s)^{-1/2} \\ &\le C \Big[e^{-\frac{(x - a_j^- t)^2}{Lt}} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} s^{-1} (1 + s)^{-1/2} \\ &+ e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} |x - a_j^- t|^{-1} (1 + |x - a_j^- t|)^{-1/2} \Big]. \end{aligned}$$ For integration over the first estimate in (\[charparts3three2balance2\]), we have an estimate by $$\begin{aligned} C_1 &t^{-1} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} s^{-1/2} (1+s)^{-1} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &+ C_2 t^{-1} (1+t)^{-1} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &\le C (1+t)^{-5/4} e^{-\frac{(x - a_j^- t)^2}{Lt}}, \end{aligned}$$ which is sufficient for $l \ne j$. For integration over the second estimate in (\[charparts3three2balance2\]), we have an estimate by $$\begin{aligned} C_1 &t^{-1} |x - a_j^- t|^{-1} (1 + |x - a_j^- t|)^{-1/2} \int_{\sqrt{t}}^{t/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &+ C_2 (1+t)^{-1/2} |x - a_j^- t|^{-1} (1 + |x - a_j^- t|)^{-1/2} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \\ &\le C t^{-1/2} |x - a_j^- t|^{-1} (1 + |x - a_j^- t|)^{-1/2}, \end{aligned}$$ which, along with an alternative estimate in the case $|x - a_j^- t| \le \sqrt{t}$, is sufficient for $l \ne j$. We remark that the critical observation in this calculation was that since we require less decay along the $j$-characteristic, we use the same estimate (\[charparts3three2balance2\]) for both cases $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$. For the second estimate in (\[charparts3three2balance1\]), we have integrals $$\label{charparts3three2balance1second} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1} (1+s)^{-1/4} (1 + |x - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-3/2} \\ &\times e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}s}} e^{- \epsilon \frac{(y - a_k^- s)^2}{Ms}} dy ds. 
\end{aligned}$$ We have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. For $|x| \ge |a_k^-| t$, we observe that there is no cancellation between summands in (\[tminussdecomp\]), and consequently we obtain an estimate by $$\label{charparts3three2balance1secondnocanc} \begin{aligned} C_1 &t^{-1} (1 + |x - a_k^- t|)^{-3/2} \int_{\sqrt{t}}^{t/2} s^{-1/2} (1 + s)^{-1/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}s}} ds \\ &+ C_2 t^{-1} (1+t)^{-1/4} (1 + |x - a_k^- t|)^{-3/2} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} ds \\ &\le C (1 + t)^{-3/4} (1 + |x - a_k^- t|)^{-3/2}, \end{aligned}$$ which is the required estimate since $l = k$. For the case $|x| \le |a_j^-| t$, we have no cancellation between summands in (\[sdecomp\]) and proceeding as in (\[charparts3three2balance1secondnocanc\]) we immediately obtain an estimate by $$C (1 + t)^{-3/4} (1 + |x - a_j^- t|)^{-3/2}.$$ For $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into cases $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$. For $s \in [\sqrt{t}, t/2]$, we observe the estimate (\[charparts3three2balance2\]). For integration over the first estimate in (\[charparts3three2balance2\]), we have an estimate by $$\begin{aligned} C_1 &t^{-1} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} s^{-1/2} (1+s)^{-1} e^{-\epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}s}} ds \\ &\le C (1+t)^{-5/4} e^{-\frac{(x - a_j^- t)^2}{Lt}}, \end{aligned}$$ while for integration over the second estimate in (\[charparts3three2balance2\]), we have an estimate by $$\begin{aligned} C_1 &t^{-1} |x - a_j^- t|^{-1} (1 + |x - a_j^- t|)^{-1/2} \int_{t/2}^{t-\sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}s}} ds \\ &\le C (1 + t)^{-1/2} |x - a_j^- t|^{-1} (1 + |x - a_j^- t|)^{-1/2}, \end{aligned}$$ which, along with an alternative estimate in the case $|x - a_j^- t| \le \sqrt{t}$, is sufficient for $l \ne j$. For $s \in [t/2, t-\sqrt{t}]$, we compute an estimate directly from (\[charparts3three2balance1second\]) $$\begin{aligned} C_2 &t^{-1} (1 + t)^{-1} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}s}} ds \\ &\le C (1 + t)^{-7/4}, \end{aligned}$$ which for $t \ge |x|/|a_k^-|$ gives an estimate by $$C (1 + |x| + t)^{-7/4}.$$ For the remaining estimates in $|(\partial_s + a_l^- \partial_y) v|$, we have decay with a different scaling than in the diffusion wave. For example, we have terms of the form $$\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1/2} (1+s)^{-1/2} (1 + |y - a_m^- s| + s^{1/2})^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds,$$ where $m \ne k$. In such cases, we observe that for $y$ near $a_m^- s$ we have exponential decay in $s$ (which gives exponential decay in $\sqrt{t}$), while for $y$ away from $a_m^- s$, we have integrals of the form $$\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1/2} (1+s)^{-1/2} (1 + s)^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds,$$ which are better than previous cases (see (\[charparts3three1\])). [*Case 3: $l \ne k$, $l \ne j$.*]{} For the case $l \ne k$, $l \ne j$, we divide the analysis into precisely the same three terms as in (\[threeparts2\]). 
For the first integral in (\[threeparts2\]), upon integration by parts in $y$, we estimate $$\begin{aligned} \int_0^{\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-5/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C_1 t^{-1/2} \int_0^{\sqrt{t}} (t-s)^{-1} (1+s)^{-3/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C t^{-1/2} (1+t)^{-7/8} e^{-\frac{(x - a_j^- t)^2}{Lt}}, \end{aligned}$$ which is sufficient for $l \ne j$. For the third integral in (\[threeparts2\]), we estimate $$\begin{aligned} \int_{t - \sqrt{t}}^t &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1/2} (1+s)^{-5/4} e^{-\frac{(y - a_k^- s)^2}{Ms}} \\ &\le C t^{-1/2} \int_{t-\sqrt{t}}^t (t-s)^{-1/2} (1+s)^{-5/4} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C (1+t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}, \end{aligned}$$ which is sufficient. For the second integral in (\[threeparts2\]), we begin with the case $j = k$, for which we write $$\label{jequalk} \begin{aligned} &\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) \Big(\varphi^k (y, s) v(y, s) \Big)_y dy ds \\ &= - \int_{\sqrt{t}}^{t/2} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j_y (x, t-s; y) \varphi^k (y, s) v(y, s) dy ds \\ &+ \int_{t/2}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y) \Big(\varphi^k (y, s) v(y, s) \Big)_y dy ds. \end{aligned}$$ For the first integral on the right hand side of (\[jequalk\]), we estimate $$\begin{aligned} &\int_{\sqrt{t}}^{t/2} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-5/4} e^{-\frac{(y - a_j^- s)^2}{Ms}} ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t/2} (t-s)^{-1} (1 + s)^{-5/4} s^{1/2} e^{-\frac{(x - a_j^- t)^2}{Mt}} ds \\ &\le C_1 (1 + t)^{-5/4} e^{-\frac{(x - a_j^- t)^2}{Mt}} ds, \end{aligned}$$ which is sufficient for $l \ne j$. Similarly, for the second integral on the right hand side of (\[jequalk\]), we estimate $$\begin{aligned} &\int_{t/2}^{t-\sqrt{t}} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1/2} (1+s)^{-5/4} e^{-\frac{(y - a_j^- s)^2}{Ms}} ds \\ &\le C t^{-1/2} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} (1 + s)^{-5/4} e^{-\frac{(x - a_j^- t)^2}{Mt}} ds \\ &\le C_1 (1 + t)^{-5/4} e^{-\frac{(x - a_j^- t)^2}{Mt}} ds, \end{aligned}$$ which again is sufficient for $l \ne j$. 
In the case $j \ne k$, we employ the non-convecting variables (\[nonconvectingvariables\]), along with $$\label{charnonconvectingvariable} g_l (x, t-s; y + a_j^- (t-s)) := (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t-s; y),$$ for which we can write the second integral in (\[threeparts2\]) as $$\int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g_l (x, t-s; y + a_j^- (t-s)) \Big(\phi (y - a_k^- s, s) V (y - a_k^- s, s) \Big)_y dy ds.$$ Setting $\xi = y + a_j^- (t-s)$, this becomes $$\label{threeparts2redux} \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} g_l (x, t-s; \xi) \Big(\phi (\xi - a_j^- (t-s) - a_k^- s, s) V (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_\xi d\xi ds.$$ Proceeding similarly as in (\[partsins\]), we write $$\label{charderpartsinsredux} \begin{aligned} &\Big(g_l (x,t-s;\xi) \phi (\xi - a_j^- (t-s) - a_k^- s, s) V (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_s \\ &= - g_{l \tau} (x,t-s;\xi) \phi (\xi - a_j^- (t-s) - a_k^- s, s) V (\xi - a_j^- (t-s) - a_k^- s, s) \\ &+ (a_j^- - a_k^-) g_l (x,t-s;\xi) \Big(\phi (\xi - a_j^- (t-s) - a_k^- s, s) V(\xi - a_j^- (t-s) - a_k^- s, s) \Big)_\xi \\ &+ g_l (x,t-s;\xi) \Big(\phi (\xi - a_j^- (t-s) - a_k^- s, s) V (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_\tau. \end{aligned}$$ For integration over the left hand side of (\[charderpartsinsredux\]), we exchange the order of integration to obtain $$\label{charderpartsinsreduxfirst} \begin{aligned} &\int_{-\infty}^{+\infty} g_l (x,\sqrt{t};\xi) \phi (\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}), t-\sqrt{t}) V (\xi - a_j^- \sqrt{t} - a_k^- (t-\sqrt{t}), t-\sqrt{t}) d\xi \\ &- \int_{-\infty}^{+\infty} g_l (x,t-\sqrt{t};\xi) \phi (\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t}, \sqrt{t}) V (\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t}, \sqrt{t}) d\xi. \end{aligned}$$ For the first integral in (\[charderpartsinsreduxfirst\]), using the supremum norm of $V$, we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(\sqrt{t})^{-1} e^{-\frac{(x - \xi)^2}{M\sqrt{t}}} (1 + (t-\sqrt{t}))^{-5/4} e^{-\frac{(\xi - a_j^- \sqrt{t} - a_k^- (t - \sqrt{t}))^2}{M(t-\sqrt{t})}} d\xi \\ &\le C t^{-1/2} (\sqrt{t})^{-1/2} (1 + (t-\sqrt{t}))^{-5/4} (t - \sqrt{t})^{1/2} e^{-\frac{(x - a_j^- \sqrt{t} - a_k^- (t - \sqrt{t}))^2}{Mt}} \\ &\le C (1 + t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}. \end{aligned}$$ For the second integral in (\[charderpartsinsreduxfirst\]), again using the supremum norm of $V$, we estimate $$\begin{aligned} \int_{-\infty}^{+\infty} &(t-\sqrt{t})^{-1} e^{-\frac{(x - \xi)^2}{M(t-\sqrt{t})}} (1 + \sqrt{t})^{-5/4} e^{-\frac{(\xi - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{M\sqrt{t}}} d\xi \\ &\le C t^{-1/2} (t-\sqrt{t})^{-1/2} (1 + \sqrt{t})^{-5/4} (\sqrt{t})^{1/2} e^{-\frac{(x - a_j^- (t-\sqrt{t}) - a_k^- \sqrt{t})^2}{Mt}} \\ &\le C (1 + t)^{-11/8} e^{-\frac{(x - a_j^- t)^2}{Lt}}, \end{aligned}$$ which is slightly better than required for $l \ne j$ (we require $t^{-10/8}$). For integration over the first term on the right hand side of (\[charderpartsinsredux\]), we estimate $$\label{charderpartsinsreduxsecond} \begin{aligned} \int_{\sqrt{t}}^{t-\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-2} e^{-\frac{(x - \xi)^2}{M(t-s)}} (1 + s)^{-5/4} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t-\sqrt{t}} (t-s)^{-3/2} (1 + s)^{-5/4} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds. \end{aligned}$$ We have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. 
For $|x| \ge |a_k^-| t$, there is no cancellation between summands in (\[tminussdecomp\]), and we can estimate $$\label{charderpartsinsreduxsecondnocanc1} \begin{aligned} C_1 &t^{-2} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} (1 + s)^{-5/4} s^{1/2} ds + C_2 (1 + t)^{-5/4} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-3/2} ds \\ &\le C (1 + t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}. \end{aligned}$$ In the case $|x| \le |a_j^-| t$, we have no cancellation between summands in (\[sdecomp\]) and similarly obtain an estimate by $$C (1 + t)^{-3/2} e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ For the critical case $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into subcases $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$. For $s \in [\sqrt{t}, t/2]$, we observe through (\[sdecomp\]) the inequality (\[gf1nl4balance1\]) with $\gamma = 3/4$. For the first estimate in (\[gf1nl4balance1\]), we proceed similarly as in (\[charderpartsinsreduxsecondnocanc1\]), while for the second we estimate $$C_1 t^{-2} (1 + |x - a_j^- t|)^{-3/4} \int_{\sqrt{t}}^{t-\sqrt{t}} (1+s)^{-1/2} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C t^{-3/2} (1 + |x - a_j^- t|)^{-3/4},$$ which is sufficient for $t \ge |x|/|a_k^-|$. For $s \in [t/2, t-\sqrt{t}]$, we observe through (\[tminussdecomp\]) the inequality (\[gf1nl1balance3\]) with $\gamma = 3/2$. For the second estimate in (\[gf1nl1balance3\]), we proceed similarly as in (\[charderpartsinsreduxsecondnocanc1\]), while for the first we estimate $$C_2 (1+t)^{-5/4} |x - a_k^- t|^{-3/2} \int_{t/2}^{t-\sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C (1 + t)^{-3/4} |x - a_k^- t|^{-3/2}.$$ For the third expression on the right hand side of (\[charderpartsinsredux\]), we estimate $$\label{charderpartsinsreduxthird} \begin{aligned} \int_{\sqrt{t}}^{t - \sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - \xi)^2}{M(t-s)}} s^{-1} (1+s)^{-1} e^{-\frac{(\xi - a_j^- (t-s) - a_k^- s)^2}{Ms}} d\xi ds \\ &\le C t^{-1/2} \int_{\sqrt{t}}^{t - \sqrt{t}} (t-s)^{-1/2} s^{-1/2} (1 + s)^{-1} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Ms}} ds, \end{aligned}$$ where we have once again observed the increased rate of time decay for the combination $$\Big(\phi (\xi - a_j^- (t-s) - a_k^- s, s) V (\xi - a_j^- (t-s) - a_k^- s, s) \Big)_\tau.$$ We have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. For $|x| \ge |a_k^-| t$, there is no cancellation between summands in (\[tminussdecomp\]), and we can estimate $$\label{charderpartsinsreduxthirdnocanc1} \begin{aligned} C_1 &t^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} s^{-1/2} (1+s)^{-1} ds + C_2 t^{-1} (1 + t)^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} ds \\ &\le C (1+t)^{-5/4} e^{-\frac{(x - a_k^- t)^2}{Lt}}, \end{aligned}$$ which is sufficient for $l \ne k$. For $|x| \le |a_j^-| t$, there is no cancellation between summands in (\[sdecomp\]) and we obtain an estimate by $$C (1+t)^{-5/4} e^{-\frac{(x - a_j^- t)^2}{Lt}}.$$ For $|a_j^-| t \le |x| \le |a_k^-| t$, we divide the analysis into cases, $s \in [\sqrt{t}, t/2]$ and $s \in [t/2, t-\sqrt{t}]$. For $s \in [\sqrt{t}, t/2]$, we observe the inequality (\[gf1nl4balance1\]) with $\gamma = 3/2$. 
For the first estimate in (\[gf1nl4balance1\]), we proceed as in (\[charderpartsinsreduxthirdnocanc1\]), while for the second we estimate $$\begin{aligned} C_1 t^{-1} (1 + |x - a_j^- t|)^{-3/2} \int_{\sqrt{t}}^{t/2} s^{-1/2} (1 + s)^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C (1 + t)^{-1/2} (1 + |x - a_j^- t|)^{-3/2}, \end{aligned}$$ which is precisely enough for $l \ne j$. For $s \in [t/2, t-\sqrt{t}]$, we observe through (\[tminussdecomp\]) the inequality (\[gf1nl1balance3\]) with $\gamma = 1/2$. For the second estimate in (\[gf1nl1balance3\]), we proceed similarly as in (\[charderpartsinsreduxthirdnocanc1\]), while for the first we estimate $$\begin{aligned} C_2 t^{-1} (1+t)^{-1} |x - a_k^- t|^{-1/2} \int_{t/2}^{t-\sqrt{t}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \le C (1 + t)^{-3/2} |x - a_k^- t|^{-1/2}, \end{aligned}$$ which is sufficient for $l \ne k$. This ends the proof of (\[Ginteract\](vi)) for the leading order convection kernels $\tilde{G}^j (x, t; y)$. [*(\[Ginteract\](v)), Nonlinearity*]{} $(v(y, s)^2)_y$. We next consider integrals $$\label{chardervsquared} \begin{aligned} \int_0^t &\int_{-\infty}^{+\infty} (\partial_t + a_l^\pm \partial_x) \tilde{G}^j (x,t-s;y) (v(y,s)^2)_y dy ds \\ &= \int_0^{t/2} \int_{-\infty}^{+\infty} (\partial_t + a_l^\pm \partial_x) \tilde{G}^j (x,t-s;y) (v(y,s)^2)_y dy ds \\ &+ \int_{t/2}^t \int_{-\infty}^{+\infty} (\partial_t + a_l^\pm \partial_x) \tilde{G}^j (x,t-s;y) (v(y,s)^2)_y dy ds, \end{aligned}$$ for which we have two cases to consider, $l \ne j$ and $l = j$. [*Case 1: $l \ne j$.*]{} For the case $l \ne j$, we integrate the first integral on the right hand side of (\[chardervsquared\]) by parts in $y$ to obtain integrals $$\int_0^{t/2} \int_{-\infty}^{+\infty} \partial_y (\partial_t + a_l^\pm \partial_x) \tilde{G}^j (x,t-s;y) v(y,s)^2 dy ds.$$ Taking supremum norm on one $v(y, s)$, we have two cases to consider, one for each estimate on $v$. For the first, we have $$\label{chardervsquaredfirst} \int_0^{t/2} \int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + s)^{-3/4} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} dy ds,$$ where $a_k^- < 0$. We observe through (\[maindecomp\]) the inequality (\[gf1nl1balance1\]) with $\gamma = 3/2$. For the first estimate in (\[gf1nl1balance1\]) we have integrals $$\int_0^{t/2} \int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} (1 + s)^{-3/4} (1 + |y - a_k^- s| + s^{1/2})^{-3/2} dy ds.$$ We have three cases to consider, $a_k^- < 0 < a_j^-$, $a_k^- < a_j^- < 0$, and $a_j^- < a_k^- < 0$, of which we focus on the second. In the case $s \in [0, \sqrt{t}]$, we have $$e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} \le C e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ from which we obtain an estimate by $$C_1 t^{-3/2} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_0^{\sqrt{t}} (1 + s)^{-1} ds \le C (1 + t)^{-3/2} \ln (1 + t) e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ sufficient for $l \ne j$.
For $s \in [\sqrt{t}, t/2]$, we begin with the case $|x| \ge |a_k^-| t$, for which there is no cancellation between summands in (\[tminussdecomp\]), and we obtain an estimate by $$\label{chardervsquaredfirstnocanc1} C_1 t^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{\sqrt{t}}^{t/2} (1 + s)^{-1} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds.$$ In the event that $j = k$, we obtain an estimate by $$C (1 + t)^{-3/2} \ln (1 + t) e^{-\frac{(x - a_k^- t)^2}{Lt}},$$ which is sufficient for $k = j \ne l$, whereas in the event that $j \ne k$, we obtain an estimate by $$C (1 + t)^{-3/2} e^{-\frac{(x - a_k^- t)^2}{Lt}}.$$ Proceeding similarly in the case $|x| \le |a_j^-| t$, we obtain an estimate by $$C (1 + t)^{-3/2} \ln (1 + t) e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ which is sufficient for $l \ne j$. For $|a_j^-| t \le |x| \le |a_k^-| t$, we observe through (\[sdecomp\]) the inequality (\[gf1nl1balance2\]) with $\gamma = 3/4$. For the first estimate in (\[gf1nl1balance2\]), we proceed as in (\[chardervsquaredfirstnocanc1\]), while for the second we estimate $$C_1 t^{-3/2} (1 + |x - a_j^- t|)^{-1} \int_{\sqrt{t}}^{t/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{\bar{M}(t-s)}} ds \le C t^{-1} (1 + |x - a_j^- t|)^{-1},$$ which is sufficient for $t \ge |x|/|a_k^-|$ and $l \ne j$. For the second estimate in (\[gf1nl1balance1\]) we have integrals $$\int_{\sqrt{t}}^{t/2} \int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + s)^{-3/4} (1 + |y - a_k^- s| + |x - a_j^- (t-s) - a_k^- s| + s^{1/2})^{-3/2} dy ds,$$ for which we observe through (\[sdecomp\]) the inequality (\[gf1nl1balance4\]) with $\gamma = 3/4$. Integrating the exponential kernel, for the first estimate in (\[gf1nl1balance4\]) we obtain an estimate by $$C_1 t^{-1} (1 + |x - a_j^- t|)^{-3/2} \int_{\sqrt{t}}^{t/2} (1 + s)^{-3/4} ds \le C t^{-3/4} (1 + |x - a_j^- t|)^{-3/2},$$ while for the second estimate in (\[gf1nl1balance4\]), we obtain an estimate by $$C_1 t^{-1} (1 + |x - a_j^- t|)^{-3/4} \int_{\sqrt{t}}^{t/2} (1 + |x - a_j^- t| + s^{1/2})^{-3/2} ds \le C t^{-1} (1 + |x - a_j^- t|)^{-1}.$$ For the second estimate on $v(y, s)$, (\[chardervsquaredfirst\]) is replaced by $$\label{chardervsquaredsecond} \int_0^{t/2} \int_{-|a_1^-| s}^{0} (t-s)^{-3/2} e^{-\frac{(x - y - a_k^- (t-s))^2}{M(t-s)}} (1 + |y|)^{-1/2} (1 + |y| + s)^{-5/4} (1 + |y| + s^{1/2})^{-1/2} dy ds,$$ for which we observe through (\[sdecomp\]) the inequality (\[gf1nl2balance1\]). For the first estimate in (\[gf1nl2balance1\]), we have integrals $$\int_0^{t/2} \int_{-|a_1^-| s}^{0} (t-s)^{-3/2} e^{- \epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M} (t-s)}} (1 + |y|)^{-1/2} (1 + |y| + s)^{-5/4} (1 + |y| + s^{1/2})^{-1/2} dy ds,$$ for which we have two cases to consider, $a_j^- < 0$ and $a_j^- > 0$. For the case $a_j^- > 0$, there is no cancellation between $x$ and $a_j^- (t-s)$ and the claimed estimate can be deduced in straightforward fashion. For the case $a_j^- < 0$, we first consider the subcase $|x| \ge |a_j^-| t$, for which there is no cancellation between summands on the right hand side of $$x - a_j^- (t-s) = (x - a_j^- t) + a_j^- s,$$ and we immediately obtain an estimate by $$\label{chardervsquaredsecondnocanc} C_1 t^{-3/2} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_0^{t/2} (1 + s)^{-1} ds \le C (1 + t)^{-3/2} \ln (e+t) e^{-\frac{(x - a_j^- t)^2}{Lt}},$$ which is sufficient for $l \ne j$. For $|x| \le |a_j^-| t$, we observe the inequality (\[gf1nl2balance2\]). 
For the first estimate in (\[gf1nl2balance2\]), we proceed similarly as in (\[chardervsquaredsecondnocanc\]), while for the second we estimate $$C_1 t^{-3/2} (1 + |x - a_j^- t|)^{-3/2} \int_0^{t/2} (1 + s)^{1/2} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M} (t-s)}} ds \le C (1 + t)^{-1/2} (1 + |x - a_j^- t|)^{-3/2},$$ which is sufficient for $l \ne j$. For the second estimate in (\[gf1nl2balance1\]), we have integrals $$\begin{aligned} \int_0^{t/2} &\int_{-|a_1^-| s}^{0} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y| + |x - a_j^- (t-s)|)^{-1/2} (1 + |y| + |x - a_j^- (t-s)| + s)^{-5/4} \\ &\times (1 + |y| + |x - a_j^- (t-s)| + s^{1/2})^{-1/2} dy ds, \end{aligned}$$ for which we have two cases to consider, $a_j^- < 0$ and $a_j^- > 0$, and as before we need to focus only on the former. For $a_j^- < 0$ and $|x| \ge |a_j^-| t$, there is no cancellation between $x - a_j^- t$ and $a_j^- s$, and we immediately obtain an estimate by $$\label{chardervsquaredsecondnocanc2} C_1 t^{-1} (1 + |x - a_j^- t|)^{-3/2} \int_0^{t/2} (1 + |x - a_j^- (t-s)|)^{-3/4} ds \le C (1+t)^{-3/4} (1 + |x - a_j^- t|)^{-3/2}.$$ For $|x| \le |a_j^-| t$, we observe the inequality (\[gf1nl2balance4\]). For the first estimate in (\[gf1nl2balance4\]), we proceed similarly as in (\[chardervsquaredsecondnocanc2\]), while for the second we obtain an estimate by $$C_1 t^{-1} (1 + |x - a_j^- t|)^{-3/2} \int_0^{t/2} (1 + |x - a_j^- (t-s)|)^{-1/2} ds \le C (1+t)^{-1/2} (1 + |x - a_j^- t|)^{-3/2},$$ which is sufficient for $l \ne j$. For the case $s \in [t/2, t]$, we need to shift the characteristic derivative onto the nonlinearity. We accomplish this precisely as in (\[charparts\]), computing $$\label{charpartsvsquared} \begin{aligned} \int_{t/2}^t &\int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) \tilde{G}^j (x, t- s; y) (v (y, s)^2)_y dy ds \\ &= \int_{t/2}^t \int_{-\infty}^{+\infty} (- \partial_s - a_l^- \partial_y) \tilde{G}^j (x, t- s; y) (v (y, s)^2)_y dy ds \\ &= - \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t/2; y) v (y, t/2)^2 dy \\ &- \int_{-\infty}^{+\infty} \tilde{G}^j (x, 0; y) (v (y, t)^2)_y dy \\ &- \int_{t/2}^t \int_{-\infty}^{+\infty} \tilde{G}^j_y (x, t- s; y) (\partial_s + a_l^- \partial_y) v (y, s)^2 dy ds, \end{aligned}$$ where in the first and last integrals on the right hand side we have additionally integrated by parts in $y$. For the first integral on the right hand side of (\[charpartsvsquared\]), and for the first estimate on $v (y, s)$, we have integrals $$\label{charpartsvsquaredfirst} \begin{aligned} \int_{-\infty}^{+\infty} &t^{-1} e^{-\frac{(x - y - a_j^- (t/2))^2}{M(t/2)}} (1 + |y - a_k^- (t/2)| + (t/2)^{1/2})^{-3/2} (1 + (t/2))^{-3/4} dy \\ &\le C t^{-1/2} (1 + t)^{-3/2}, \end{aligned}$$ which is sufficient for $t \ge c|x|$, for any constant $c > 0$. In the case $|x| \ge t/c$, we observe the decomposition $$x - y - \frac{1}{2} a_j^- t = (x - \frac{1}{2} a_j^- t - \frac{1}{2} a_k^- t) - (y - \frac{1}{2} a_k^- t),$$ through which we observe the inequality $$\label{charpartsvsquaredfirstbalance1} \begin{aligned} &e^{-\frac{(x - y - a_j^- (t/2))^2}{M(t/2)}} (1 + |y - \frac{1}{2}a_k^- t| + (t/2)^{1/2})^{-3/2} \\ &\le C\Big[ e^{-\epsilon \frac{(x - y - a_j^- (t/2))^2}{M(t/2)}} e^{-\frac{(x - a_j^- (t/2) - a_k^- (t/2))^2}{Lt}} (1 + |y - \frac{1}{2} a_k^- t| + (t/2)^{1/2})^{-3/2} \\ &+ e^{-\frac{(x - y - a_j^- (t/2))^2}{M(t/2)}} (1 + |y - \frac{1}{2}a_k^- t| + |x - a_j^- (t/2) - a_k^- (t/2)| + (t/2)^{1/2})^{-3/2}\Big].
\end{aligned}$$ In the event that $|x| \ge t/c$, for $c$ sufficiently small, we have exponential decay in both $|x|$ and $t$ for the first estimate in (\[charpartsvsquaredfirstbalance1\]), while for the second we have an estimate by $$C_1 t^{-1/2} (1 + t)^{-3/4} (1 + |x| + t)^{-3/2},$$ which again is sufficient. For the second estimate on $v(y, s)$ we have integrals $$\label{charpartsvsquaredfirstnl2} \begin{aligned} \int_{-|a_1^-| \frac{t}{2}}^{0} &t^{-1} e^{-\frac{(x - y - a_k^- (t/2))^2}{M(t/2)}} (1 + |y|)^{-1/2} (1 + |y| + (t/2))^{-1/2} (1 + |y| + (t/2)^{1/2})^{-1/2} (1 + (t/2))^{-3/4} dy \\ &\le C t^{-1/2} (1 + t)^{-3/2}, \end{aligned}$$ which is sufficient for $t \ge c|x|$, for any constant $c > 0$. The case $|x| \ge t/c$ can be analyzed similarly as in the immediately preceding case. For the second integral on the right hand side of (\[charpartsvsquared\]), we observe that $G^j (x,0;y)$ is a delta function with mass at $x = y$, and consequently, we have an estimate by $$C |v (x, t) v_x (x, t)| \le C (1 + t)^{-3/4} \Big[t^{-1/2} (\psi_1 (x, t) + \psi_2 (x,t)) + \psi_3 (x, t) + \psi_4 (x, t) \Big],$$ which is sufficient. For the third integral on the right hand side of (\[charpartsvsquared\]), and for the first estimate on $(\partial_s + a_l^- \partial_y) v (y, s)$, we have integrals $$\label{charpartsvsquaredthird} \begin{aligned} &\int_{t/2}^t \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1} (1+s)^{-1/2} (1 + |y - a_l^- s)| + s^{1/2})^{-3/2} dy ds \\ &\le C t^{-1/2} (1 + t)^{-1/4} \int_{t/2}^t \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} s^{-1/2} (1+s)^{-1/4} (1 + |y - a_l^- s)| + s^{1/2})^{-3/2} dy ds. \end{aligned}$$ This last integral has already been considered in the analysis of (\[gf1nl1\]), and we obtain an estimate by $$C t^{-1/2} (1 + t)^{-1/4} \psi_1 (x, t).$$ We proceed in precisely the same manner for the second and third estimates on $(\partial_s + a_l^- \partial_y) v (y, s)$, obtaining an estimate by $$C \Big[t^{-1/2} (\bar{\psi}_1^{l,-} (x, t) + \psi_2 (x, t)) + \psi_3 (x, t) + \psi_4 (x, t) \Big].$$ For the fourth estimate on $(\partial_s + a_l^- \partial_y) v (y, s)$, we have integrals $$\label{charpartsvsquaredthird2} \begin{aligned} &\int_{t/2}^t \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-3/4} (1 + |y| + s)^{-1} (1 + |y|)^{-1} dy ds. \end{aligned}$$ In this case, we observe the inequality $$\label{charpartsvsquaredthird2balance1} \begin{aligned} &e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y|)^{-1} \\ &\le C\Big[ e^{- \epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} (1 + |y|)^{-1} + e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y| + |x - a_j^- (t-s)|)^{-1} \Big]. \end{aligned}$$ For the first estimate in (\[charpartsvsquaredthird2balance1\]), we have integrals $$\label{charpartsvsquaredthird2balance1first} \begin{aligned} &\int_{t/2}^t \int_{-|a_1^-| s}^{0} (t-s)^{-1} e^{- \epsilon \frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} (1+s)^{-3/4} (1 + |y| + s)^{-1} (1 + |y|)^{-1} dy ds, \end{aligned}$$ for which we have two cases to consider, $a_j^- < 0$ and $a_j^- > 0$, and we focus on the former. 
For $a_j^- < 0$ and additionally $|x| \ge |a_j^-| t$, we have no cancellation between $x - a_j^- t$ and $a_j^- s$, and we obtain an estimate by $$\label{charpartsvsquaredthird2balance1firstnocanc1} \begin{aligned} C_2 (1 + t)^{-7/4} \ln(e+t) e^{-\frac{(x - a_j^- t)^2}{Lt}} &\int_{t/2}^{t-1} (t-s)^{-1} ds + C_2 (1 + t)^{-7/4} \ln(e+t) e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_{t-1}^t (t-s)^{-1/2} ds \\ &\le C (1+t)^{-7/4} [\ln (e+t)]^2 e^{-\frac{(x - a_j^- t)^2}{Lt}}, \end{aligned}$$ which is sufficient. For $|x| \le |a_j^-| t$, we observe the inequality (\[gf1nl2balance3\]) with $\gamma = 1/2$, for which we estimate $$\begin{aligned} C (1 + t)^{-7/4} \ln(e+t) (1 + |x|)^{-1/2} \int_{t/2}^t (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s))^2}{\bar{M}(t-s)}} ds \le C (1 + t)^{-7/4} \ln(e+t) (1 + |x|)^{-1/2}, \end{aligned}$$ which is sufficient for $t \ge |x|/|a_j^-|$. For the second estimate in (\[charpartsvsquaredthird2balance1\]), we have integrals $$\begin{aligned} &\int_{t/2}^t \int_{-|a_1^-| s}^{0} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y| + |x - a_j^- (t-s)|)^{-1} (1 + s)^{-7/4} dy ds \\ &\le C (1+t)^{-7/4} \int_{t/2}^t (t-s)^{-1/2} (1 + |x - a_j^- (t-s)|)^{-1} ds \le C (1+t)^{-7/4}, \end{aligned}$$ which is sufficient for $|x| \le Kt$, some constant $K$. For $|x| \ge Kt$, we estimate $$\begin{aligned} &\int_{t/2}^t \int_{-|a_1^-| s}^{0} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + |y| + |x - a_j^- (t-s)|)^{-1} (1 + s)^{-7/4} dy ds \\ &\le C (1+t)^{-7/4} (1 + |x| + t)^{-1} \int_{t/2}^t (t-s)^{-1/2} ds \le C (1+t)^{-5/4} (1 + |x| + t)^{-1}. \end{aligned}$$ For the final estimate on $(\partial_s + a_l^- \partial_y) v(y,s)$, we have integrals $$\begin{aligned} &\int_{t/2}^t \int_{-|a_1^-| s}^{0} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-3/4} (1 + |y| + s)^{-7/4} dy ds \\ &\le C (1 + t)^{-5/2} \int_{t/2}^t (t-s)^{-1/2} ds \le C (1 + t)^{-2}, \end{aligned}$$ which is sufficient for $|x| \le Kt$. Since $|y|$ is bounded by $|a_1^-| s$, for $|x| \ge Kt$ and $K$ sufficiently large, we have exponential decay in both $|x|$ and $t$. [*Case 2: $l = j$.*]{} For the case $l = j$, we have additional decay at rate $(t-s)^{-1/2}$, from which we immediately recover the claimed estimates. [*(\[Ginteract\](v)), Nonlinearity*]{} $|\frac{\partial \bar{u}^\delta}{\partial \delta} \delta|^2$. We next consider integrals $$\int_0^t \int_{-\infty}^{+\infty} (\partial_t + a_l^\pm \partial_x) \tilde{G}^j_y (x,t-s;y) (1+s)^{-1} e^{-\eta |y|} dy ds,$$ where in this case additional $y$-derivatives on the nonlinearity give no additional decay. [*Case 1: $l \ne j$.*]{} For the case $l \ne j$, we have integrals $$\label{charderlastnonlin} \begin{aligned} &\int_0^{t-1} \int_{-\infty}^{+\infty} (t-s)^{-3/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + s)^{-1} e^{-\eta |y|} dy ds \\ &+ \int_{t-1}^t \int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1 + s)^{-1} e^{-\eta |y|} dy ds. \end{aligned}$$ In either case, we observe the inequality (\[gf1nl3balance1\]). For the first estimate in (\[gf1nl3balance1\]), we have, upon integration of $e^{-\eta |y|}$, integrals $$\label{charderlastnonlinfirst} \begin{aligned} &\int_0^{t-1} (t-s)^{-3/2} e^{-\frac{(x - a_j^- (t-s))^2}{M(t-s)}} (1 + s)^{-1} ds \\ &+ \int_{t-1}^t (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s))^2}{M(t-s)}} (1 + s)^{-1} ds. 
\end{aligned}$$ Focusing as in previous cases on the subcase $a_j^- < 0$, we first observe that for $|x| \ge |a_j^-| t$, there is no cancellation between $x - a_j^- t$ and $a_j^- s$, and we consequently have an estimate by $$\label{charderlastnonlinfirstnocanc} \begin{aligned} C t^{-3/2} e^{-\frac{(x - a_j^- t)^2}{Lt}} &\int_0^{t/2} (1 + s)^{-1} ds + C_2 (1+t)^{-1} e^{-\frac{(x - a_j^- t)^2}{Lt}} \int_{t/2}^{t} (t-s)^{-1/2} e^{-\frac{(a_j^- s)^2}{M(t-s)}} ds \\ &\le C t^{-3/2} [\ln (e+t)] e^{-\frac{(x - a_j^- t)^2}{Lt}}, \end{aligned}$$ where in this last inequality we have observed that for $s \in [t/2, t]$, we have $$e^{-\frac{(a_j^- s)^2}{M(t-s)}} \le e^{-\eta_1 t},$$ for $\eta_1 > 0$. For $|x| \le |a_j^-| t$, we divide the analysis into cases, $s \in [0, t/2]$, $s \in [t/2, t-1]$, and $s \in [t-1, t]$. For $s \in [0, t/2]$, we observe the inequality (\[gf1nl3balance2\]). For the second estimate in (\[gf1nl3balance2\]), we obtain an estimate by $$C_1 t^{-3/2} (1+|x - a_j^- t|)^{-1} \int_0^{t/2} e^{-\frac{(x - a_j^- (t-s))^2}{M(t-s)}} ds \le C t^{-1} (1+|x - a_j^- t|)^{-1},$$ which is sufficient for $t \ge |x|/|a_j^-|$ and $l \ne j$. For the first estimate in (\[gf1nl3balance2\]), we proceed as in (\[charderlastnonlinfirstnocanc\]). For $s \in [t/2, t]$, we observe the inequality (\[gf1nl2balance3\]) with $\gamma = 1$, for which we have an estimate by $$\begin{aligned} C_2 (1+t)^{-1} |x|^{-1} &\int_{t/2}^{t-1} (t-s)^{-1/2} e^{-\frac{(x - a_j^- (t-s))^2}{2 \bar{M}(t-s)}} ds + C_3 (1+t)^{-1} |x|^{-1} \int_{t-1}^t ds \\ &\le C (1 + t)^{-1} |x|^{-1}. \end{aligned}$$ [*Case 2: $l = j$.*]{} For the case $l = j$, we have additional decay at rate $(t-s)^{-1/2}$, from which we immediately recover the claimed estimates. This ends the proof of estimate (\[Ginteract\](v)) for the leading order convection kernel $\tilde{G}^j (x, t; y)$. [*(\[Ginteract\](v)–(vi)), remainder estimates.*]{} In our proofs of (\[Ginteract\](v)–(vi)), we have considered only the leading order convection kernel $$\tilde{G}^j (x, t; y) = ct^{-1/2} e^{-\frac{(x - y - a_j^- t)^2}{4 \beta_j t}}.$$ We must also consider the remaining three scattering estimates (terms in $S (x, t; y)$) and additionally the remainder estimates $R(x, t; y)$. Beginning with the remainder estimate, we focus our attention on the nonlinearity $(\varphi^k (y, s)^2)_y$ (analysis of the remaining nonlinearities is similar). 
We have the decomposition $$\label{residualthreeparts} \begin{aligned} \int_0^t &\int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) R(x, t-s; y) (\varphi^k (y, s)^2)_y dy ds \\ &= \int_0^{\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) R(x, t-s; y) (\varphi^k (y, s)^2)_y dy ds \\ &+ \int_{\sqrt{t}}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) R(x, t-s; y) (\varphi^k (y, s)^2)_y dy ds \\ &+ \int_{t-\sqrt{t}}^t \int_{-\infty}^{+\infty} (\partial_t + a_l^- \partial_x) R(x, t-s; y) (\varphi^k (y, s)^2)_y dy ds, \end{aligned}$$ where for $y \le 0$ and $a_l^- < 0$, $$\label{charderresidual} \begin{aligned} (\partial_t + a_l^- \partial_x) &R (x,t;y) = \sum_{j=1}^J \mathbf{O}(e^{-\eta t})\delta_{x-\bar a_j^* t}(-y) + \mathbf{O}(e^{-\eta(|x-y|+t)})\\ &+ \mathbf {O} \left( (t+1)^{-3/2} e^{-\eta x^+} +e^{-\eta|x|} \right) t^{-1} (t+1)^{1/2} e^{-(x-y-a_l^{-} t)^2/Mt} \\ &+\sum_{k \ne l} \mathbf {O} \left( (t+1)^{-1} e^{-\eta x^+} +e^{-\eta|x|} \right) t^{-1} (t+1)^{1/2} e^{-(x-y-a_k^{-} t)^2/Mt} \\ &+ \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O}((t+1)^{-1/2} t^{-1}) e^{-(x-a_l^{-}(t-|y/a_k^-|))^2/Mt} e^{-\eta x^+} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{-} < 0, \, j\ne l} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O}((t+1)^{-1/2} t^{-1}) e^{-(x-a_j^{-}(t-|y/a_k^-|))^2/Mt} e^{-\eta x^+} \\ &+ \sum_{a_k^{-} > 0, \, a_j^{+} > 0} \chi_{\{ |a_k^{-} t|\ge |y| \}} \mathbf{O}((t+1)^{-1/2} t^{-1}) e^{-(x-a_j^{+}(t-|y/a_k^{-}|))^2/Mt} e^{-\eta x^-}. \end{aligned}$$ In each estimate with decay $t^{-3/2}$ or $t^{-2}$, we have time decay better than that of characteristic derivatives of $\tilde{G}^j (x, t; y)$, and we can proceed as in the above analyses. For terms $\sum_{j=1}^J \mathbf{O}(e^{-\eta t})\delta_{x-\bar a_j^* t}(-y)$, we proceed as in the proof of (\[Hnonlinear\]), in which the interactions arising from our relaxation of strict parabolicity are considered. The only genuinely new term is the exponentially decaying contribution, which has reduced decay in $t$. Focusing on this term, we integrate by parts in $y$, observing that in the Lax and overcompressive cases differentiation with respect to $y$ improves $t$ decay by a factor $t^{-1/2}$ (this is a fundamental point of difference between the Lax and overcompressive cases considered here, and the undercompressive case). 
We have integrals $$\label{cleanup} \begin{aligned} e^{-\eta |x|} \int_0^{\sqrt{t}} &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-1} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C e^{-\eta |x|} t^{-1} \int_0^{\sqrt{t}} (1+s)^{-1} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C e^{-\eta |x|} t^{-1} e^{-\frac{(x - a_j^- t)^2}{Mt}} \int_0^{\sqrt{t}} (1+s)^{-1} s^{1/2} ds \\ & \le C (1+t)^{-3/4} e^{-\eta |x|} e^{-\frac{(x - a_j^- t)^2}{Mt}}, \end{aligned}$$ for which we observe that $$e^{-\eta |x|} e^{-\frac{(x - a_j^- t)^2}{Mt}} \le C e^{-\eta_1 |x|} e^{-\eta_2 t}.$$ For the third integral in (\[residualthreeparts\]), we estimate $$\begin{aligned} e^{-\eta |x|} \int_{t-\sqrt{t}}^t &\int_{-\infty}^{+\infty} (t-s)^{-1/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C e^{-\eta |x|} t^{-1/2} \int_{t-\sqrt{t}}^t (1+s)^{-3/2} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C e^{-\eta |x|} t^{-1/2} (1+t)^{-1} e^{-\frac{(x - a_k^- t)^2}{Lt}} \int_{t-\sqrt{t}}^t e^{- \epsilon \frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ & \le C (1+t)^{-1} e^{-\eta |x|} e^{-\frac{(x - a_k^- t)^2}{Mt}}, \end{aligned}$$ which is sufficient, precisely as above. For the second integral in (\[residualthreeparts\]), and for $t$ large enough so that $\sqrt{t} < t/2$, we estimate (integrating by parts in $y$ for $s \in [\sqrt{t}, t/2]$) $$\label{residualthreepartssecond} \begin{aligned} e^{-\eta |x|} \int_{\sqrt{t}}^{t/2} &\int_{-\infty}^{+\infty} (t-s)^{-1} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-1} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &+ e^{-\eta |x|} \int_{t/2}^{t-\sqrt{t}} \int_{-\infty}^{+\infty} (t-s)^{-1/2} e^{-\frac{(x - y - a_j^- (t-s))^2}{M(t-s)}} (1+s)^{-3/2} e^{-\frac{(y - a_k^- s)^2}{Ms}} dy ds \\ &\le C_2 e^{-\eta |x|} t^{-1/2} \int_{t/2}^{t-\sqrt{t}} (t-s)^{-1/2} (1+s)^{-1} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &+ C_2 e^{-\eta |x|} t^{-1/2} \int_{t/2}^{t-\sqrt{t}} (1+s)^{-3/2} s^{1/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds. \end{aligned}$$ For $s \in [\sqrt{t}, t/2]$, we first observe that for the case $j = k$, we have an estimate by $$\label{residualthreepartssecondnocanc1} C (1 + t)^{-1/2} e^{-\eta |x|} e^{-\frac{(x - a_j^- t)^2}{Mt}},$$ which is sufficient, as observed above. For the case $j \ne k$, we recall the inequality (\[gf1nl4balance1\]) with $\gamma = 1/2$. For the first estimate in (\[gf1nl4balance1\]), we proceed as in (\[residualthreepartssecondnocanc1\]), while for the second we estimate $$\begin{aligned} C_1 &e^{-\eta |x|} t^{-1} (1 + |x - a_j^- t|)^{-1/2} \int_{\sqrt{t}}^{t/2} e^{-\frac{(x - a_j^- (t-s) - a_k^- s)^2}{Mt}} ds \\ &\le C (1+t)^{-1/2} e^{-\eta |x|} (1 + |x - a_j^- t|)^{-1/2}. \end{aligned}$$ Observing the estimate $$e^{-\eta |x|} (1 + |x - a_j^- t|)^{-1/2} \le C e^{-\eta_1 |x|} (1 + t)^{-1/2},$$ we observe that this estimate is sufficient. For $s \in [t/2, t-\sqrt{t}]$, we first observe that in the event that $j = k$, this last integral in (\[residualthreepartssecond\]) provides an estimate by $$C t^{-1/2} e^{-\eta |x|} e^{-\frac{(x - a_j^- t)^2}{Mt}},$$ while in the event that $j \ne k$, we have an estimate by $$C (1+t)^{-1} e^{-\eta |x|},$$ either of which is sufficient. 
[*(\[Ginteract\](v)–(vi)), full scattering estimates.*]{} In the scattering estimates $S(x, t; y)$ of Proposition \[greenbounds\], we have three corrections to $\tilde{G}^j (x, t; y)$, respectively from convection, reflection, and transmission. For convection, the term is $$\tilde{G}_c^j (x, t; y) = ct^{-1/2} e^{-\frac{(x - y - a_j^- t)^2}{4 \beta_j t}} \Big{(}\frac{e^{-x}}{e^x + e^{-x}} \Big{)},$$ $a_j^- > 0$, which satisfies $$|(\partial_t + a_j^- \partial_x) \tilde{G}_c^j (x, t; y)| \le C \Big{[}t^{-3/2} + t^{-1/2} e^{-\eta |x|} \Big{]} e^{-\frac{(x - y - a_j^- t)^2}{4 \beta_j t}}.$$ The estimate with $t^{-3/2}$ decay is precisely as in the case of $\tilde{G}^j$, and can be analyzed similarly. For the estimate with exponential decay in $|x|$, we can proceed precisely as with the exponentially decaying term arising in the analysis of $R(x, t; y)$. The remaining corrections, from the reflection and transmission terms in $S(x, t; y)$ can be analyzed similarly. In our analysis of $\tilde{G}^j (x, t; y)$, we employed the relation $$\partial_x \tilde{G}^j (x, t; y) = - \partial_y \tilde{G}^j (x, t; y)$$ (see the argument of (\[charparts\]), in which the characteristic derivative $(\partial_t + a_j^- \partial_x) \tilde{G}^j (x, t-s; y)$ is converted into a characteristic derivative in the variables of integration $-(\partial_s + a_j^- \partial_y) \tilde{G}^j (x, t-s; y)$). Designating the scattering term arising from reflection as $$\tilde{G}_R^{j, k} (x, t; y) = c_R t^{-1/2} e^{-\frac{(x - (a_j^-/a_k^-)y - a_j^- t)^2}{4 \bar{\beta}_{jk}^- t}} \Big{(}\frac{e^{-x}}{e^x + e^{-x}}\Big{)},$$ we observe that the analogous estimate is $$\partial_x \tilde{G}_R^{j, k} (x, t; y) = -\frac{a_k^-}{a_j^-} \partial_y \tilde{G}_R^{j, k} (x, t; y) + {\mathbf O} (t^{-1/2} e^{-\eta |x|}) e^{-\frac{(x - (a_j^-/a_k^-)y - a_j^- t)^2}{4 \bar{\beta}_{jk}^- t}},$$ from which we see that the conversion from a characteristic derivative in variables $x$ and $t$ to a characteristic derivative in variables $y$ and $s$ takes the form $$\begin{aligned} (\partial_t + a_j^- \partial_x) \tilde{G}_R^{j, k} (x, t-s; y) & = -(\partial_s + a_k^- \partial_y) \tilde{G}_R^{j, k} (x, t-s; y) \\ & + {\mathbf O} ((t-s)^{-1/2} e^{-\eta |x|}) e^{-\frac{(x - (a_j^-/a_k^-)y - a_j^- (t-s))^2}{4 \bar{\beta}_{jk}^- (t-s)}}. \end{aligned}$$ In this way, we again have precisely the improved decay required by the analysis, namely $$(\partial_s + a_k^- \partial_y) \tilde{G}_R^{j, k} (x, t-s; y) = {\mathbf O} ((t-s)^{-3/2}) e^{-\frac{(x - (a_j^-/a_k^-)y - a_j^- (t-s))^2}{4 \bar{\beta}_{jk}^- (t-s)}}.$$ The argument involving transmission terms is entirely similar. Finally, in each case in which a characteristic derivative in $x$ and $t$ is shifted to one in $y$ and $s$ an exponentially decaying error term arises, such as (from reflection) $${\mathbf O} (t^{-1/2} e^{-\eta |x|}) e^{-\frac{(x - (a_j^-/a_k^-)y - a_j^- t)^2}{4 \bar{\beta}_{jk}^- t}}.$$ In all cases, these terms can be analyzed as were the the exponentially decaying corrections in $(\partial_t + a_j^- \partial_x) R(x, t; y)$ (see (\[cleanup\])). This completes the proof of Lemma \[nonlinearintegralestimates\]. $\square$ Nonlinear integral Estimates II {#integralestimatesNLII} =============================== Finally, we complete the paper by establishing the nonlinear integral estimates of Proposition \[Hnonlinear\]. 
[**Proof of Lemma \[Hnonlinear\].**]{} To show (\[Hinteract\])(i) we need to estimate $$\label{quant} \begin{aligned} \Big| \int_0^t\int_{-\infty}^{+\infty}\mathcal{R}_j^*(x) &\mathcal{O}(e^{-\eta_0 (t-s)}) \delta_{x-\bar a_j^* (t-s)}(-y) \mathcal{L}_j^{*t}(y)\Upsilon(y,s) dyds\Big|\le \\ &\qquad \qquad \qquad \qquad C\int_0^t (e^{-\eta_0 (t-s)})|\Upsilon(-x+\bar a_j^* (t-s),s)|ds \end{aligned}$$ for the various sources $\Upsilon(y,s)$ arising in the bounds for ${\mathcal{F}}$, $\Phi$. For example, the typical term $$|v||v_{yy}|(y,s)\le C(1+s)^{-1/2} (\psi_1+\psi_2)(y,s)$$ arising in the bounds for ${\mathcal{F}}$ leads to sources $$\begin{aligned} \Upsilon_1(y,s)&= (1+s)^{-1/2} \psi_1(y,s)\\ &= (1+s)^{-1/2} (1+|y-a_i^\pm t|)^{-3/2} \end{aligned}$$ and $$\begin{aligned} \Upsilon_2(y,s)&=(1+s)^{-1/2} \psi_2(y,s)\\ &\le (1+s)^{-1/2} (1+|y|)^{-1/2}(1+|y|+t)^{-1/2}(1+|y|+t^{1/2})^{-1/2}. \end{aligned}$$ Substituting $\Upsilon_1$ in (\[quant\]), we obtain $$\begin{aligned} \int_0^t e^{-\eta_0(t-s)} &(1+s)^{-\frac12} (1+|x-\bar a_j^* (t-s)- a_i^\pm s |)^{-\frac32} ds =\\ &\int_0^t e^{-\eta_0(t-s)} (1+s)^{-\frac12} (1+|x- a_i^-t -(\bar a_j^*-a_i^\pm) (t-s)|)^{-\frac32} ds, \end{aligned}$$ which , by (\[abineq\]), is smaller than $$\int_0^t e^{-\eta_0(t-s)} (1+s)^{-\frac12} (1+|x- a_i^-t |)^{-\frac32}(|1+(\bar a_j^*-a_i^-) (t-s)|)^{\frac32} ds,$$ which in turn (absorbing $t-s$ powers in $e^{\frac{-\eta_0 (t-s)}{2}}$) is smaller than $$\begin{aligned} (1+|x- a_i^-t |)^{-\frac32} \int_0^t e^{-\frac{\eta_0}2(t-s)} (1+s)^{-\frac12}ds &\le C(1+|x- a_i^-t |)^{-\frac32}(1+t)^{-1/2}\\ &=C(1+t)^{-1/2}\psi_1(x,t). \end{aligned}$$ Similarly, substituting $\Upsilon_2$ in (\[quant\]), observing that $$(1+|y|+t^{1/2})^{-1/2}\sim \min\{(1+|y|)^{-1/2}, (1+s)^{-1/4}\},$$ and following the same procedure, we obtain an estimate of $$\begin{aligned} C(1+t)^{-1/2}(1+|x|)^{-1/2}(1+|x|+t)^{-1/2} &\min\{ (1+|x|)^{-1/2}, (1+t)^{-1/4} \} \sim \\ &\quad C(1+t)^{-1/2}(1+|x|)^{-1/2}(1+|x|+t)^{-1/2} (1+|x|+t^{1/2})^{-1/2}\\ &\le C(1+t)^{-1/2} (\psi_1+\psi_2)(x,t), \end{aligned}$$ where, in the final step, we have used the fact that, for $\chi(x,t)=0$, $$(1+|x|)^{-1/2}(1+|x|+t)^{-1/2} (1+|x|+t^{1/2})^{-1/2}\sim (1+|x|+t)^{-3/2}\le C\psi_1(x,t).$$ Bounds for other cases follow similarly. $\square$ [*Acknowledgements.*]{} The authors were partially supported by the National Science Foundation, under grants DMS–0500988 (Howard), and grants DMS–0070765 and DMS–0300487 (Raoofi and Zumbrun). [99]{} J. Alexander, R. Gardner, and C. K. R. T. Jones, A topological invariant arising in the analysis of traveling waves, J. Reine Angew Math. [**410**]{} (1990) 167–212. L. Q. Brin, [*Numerical testing of the stability of viscous shock waves*]{}, Ph.D. dissertation, Indiana University, May 1998. L. Q. Brin, [*Numerical testing of the stability of viscous shock waves,*]{} Math. Comp. 70 (2001) 235, 1071–1088. L. Brin and K. Zumbrun, [*Analytically varying eigenvectors and the stability of viscous shock waves*]{}, to appear, Mat. Contemp. (2003). T. Bridges, G. Derks, and G. Gottwald, [*Stability and instability of solitary waves of the fifth-order KdV equation: a numerical framework,*]{} Phys. D 172 (2002), no. 1-4, 190–216. J. W. Evans, Nerve Axon Equations I–IV, Indiana U. Math. J. [**21**]{} (1972) 877–885; [**22**]{} (1972) 75–90; [**22**]{} (1972) 577–594; [**24**]{} (1975) 1169–1190. H. Freistühler, Some results on the stability of non-classical shock waves, J. Partial Diff. Eqs. [**11**]{} (1998) 23–38. H. Freistühler and P. 
Szmolyan, [*Spectral stability of small shock waves,*]{} Arch. Ration. Mech. Anal. 164 (2002) 287–309. H. Freistühler and K. Zumbrun, [*Examples of unstable viscous shock waves,*]{} unpublished note, Institut für Mathematik, RWTH Aachen, February 1998. R. Gardner and K. Zumbrun, The Gap Lemma and geometric criteria for instability of viscous shock profiles, Comm. Pure Appl. Math. [**51**]{} (1998), no. 7, 797–855. J. Goodman, [*Nonlinear asymptotic stability of viscous shock profiles for conservation laws,*]{} Arch. Rational Mech. Anal. 95 (1986), no. 4, 325–344. P. Howard, [*Pointwise methods for stability of a scalar conservation law,*]{} Doctoral thesis (1998). P. Howard, [*Pointwise estimates on the Green’s function for a scalar linear convection-diffusion equation,*]{} J. Differential Equations 155 (1999) 327–36. P. Howard, [*Pointwise Green’s function approach to stability for scalar conservation laws,*]{} Comm. Pure Appl. Math. 52 (1999) 1295–1313. P. Howard and M. Raoofi, Pointwise asymptotic behavior of perturbed viscous shock profiles, Preprint 2005. Available: www.math.tamu.edu/$\sim$phoward/mathpubs.html. P. Howard and K. Zumbrun, Stability of undercompressive shock profiles, to appear, J. Differential Equations. J. Humpherys and K. Zumbrun, [*An efficient shooting algorithm for Evans function calculations in large systems*]{}, Preprint 2005. J. Humpherys and K. Zumbrun, [*Spectral stability of small amplitude shock profiles for dissipative symmetric hyperbolic–parabolic systems,*]{} Z. Angew. Math. Phys. 53 (2002) 20–34. J. Humpherys, B. Sandstede, and K. Zumbrun, [*Efficient computation of analytic bases in Evans function analysis of large systems*]{}, Preprint 2005. C. K. R. T. Jones, Stability of the traveling wave solution of the FitzHugh–Nagumo system, Trans. Amer. Math. Soc. [**286**]{} (1984), no. 2, 431–469. T. Kapitula and B. Sandstede, Stability of bright solitary-wave solutions to perturbed nonlinear Schrodinger equations, Physica D [**124**]{} (1998) 58–103. S. Kawashima, [*Systems of a hyperbolic–parabolic composite type, with applications to the equations of magnetohydrodynamics*]{}, thesis, Kyoto University (1983). S. Kawashima and A. Matsumura, [*Asymptotic stability of traveling wave solutions of systems for one-dimensional gas motion,*]{} Comm. Math. Phys. 101 (1985), no. 1, 97–127. S. Kawashima, A. Matsumura, and K. Nishihara, [*Asymptotic behavior of solutions for the equations of a viscous heat-conductive gas,*]{} Proc. Japan Acad. Ser. A Math. Sci. 62 (1986), no. 7, 249–252. T.–P. Liu, Nonlinear stability of shock waves for viscous conservation laws, Memoirs AMS [**56**]{} (1985), no. 328. T.–P. Liu, Interaction of nonlinear hyperbolic waves, in: Nonlinear Analysis, Eds. F.–C. Liu and T.–P. Liu, World Scientific, 1991 171–184. T.–P. Liu, Pointwise convergence to shock waves for viscous conservation laws, Comm. Pure Appl. Math. [**50**]{} (1997), no. 11, 1113–1182. A. Majda, The stability of multi-dimensional shock fronts: a new problem for linear hyperbolic equations, Mem. AMS [**275**]{} (1983). A. Majda, The existence of multi-dimensional shock fronts, Mem. AMS [**281**]{} (1983). A. Majda, Compressible fluid flow and systems of conservation laws in several space dimensions, Springer–Verlag, New York (1984) viii+ 159 pp. A. Majda and R. Pego, [*Stable viscosity matrices for systems of conservation laws*]{}, J. Diff. Eqs. 56 (1985) 229–262. C. Mascia and K. Zumbrun, Pointwise Green’s function bounds and stability of relaxation shocks, Indiana Univ. Math. 
J. [**51**]{} (2002), no. 4, 773–904. C. Mascia and K. Zumbrun, Pointwise Green function bounds for shock profiles with degenerate viscosity, Arch. Rat. Mech. Anal. [**169**]{} (2003) 177–263. C. Mascia and K. Zumbrun, Stability of large-amplitude A. Matsumura and K. Nishihara, [*On the stability of travelling wave solutions of a one-dimensional model system for compressible viscous gas,*]{} Japan J. Appl. Math. 2 (1985), no. 1, 17–25. R. Plaza and K. Zumbrun, [*An Evans function approach to spectral stability of small-amplitude viscous shock profiles,*]{} preprint (2002). M. Raoofi, $L^p$ asymptotic behavior of perturbed viscous shock profiles, to appear, J. Hyperbolic Differential Equations. M. Slemrod, [*Dynamic phase transitions in a van der Waals fluid,*]{} J. Differential Equations 52 (1984), no. 1, 1–23. K. Zumbrun, Dynamic stability of phase transitions in the $p$-system with viscosity–capillarity, SIAM J. Appl. Math. [**60**]{} (2000) 1913–1924. K. Zumbrun, Stability of large amplitude shock waves of the compressible Navier–Stokes equation, Handbook of Fluid Dynamics, Volume IV (2004). K. Zumbrun, [*Refined Wave–tracking and Nonlinear Stability of Viscous Lax Shocks*]{}, Methods Appl. Anal. 7 (2000) 747–768. K. Zumbrun and P. Howard, Pointwise semigroup methods and stability of viscous shock waves, Indiana Univ. Math. J. [**47**]{} (1998) 741–871. See also Errata, Indiana Univ. Math. J. [**51**]{} (2002) 1017–1021. D. Serre and K. Zumbrun, Viscous and inviscid stability of multidimensional planar shock fronts, Indiana Univ. Math. J. [**48**]{} (1999), no. 3, 937–992. Peter HOWARD\ Department of Mathematics\ Texas A&M University\ College Station, TX 77843\ [email protected]\ and\ Mohammadreza RAOOFI\ Max Planck Institute for Mathematics in the Sciences\ Inselstraße 22-26\ D-04103 Leipzig, Germany\ [email protected]\ and\ Kevin ZUMBRUN\ Department of Mathematics\ Indiana University\ Bloomington, IN 47405-4301\ [email protected]\ [^1]: Texas A& M University, College Station, TX 77843; [email protected] [^2]: Indiana University, Bloomington, IN 47405; [email protected]. [^3]: Indiana University, Bloomington, IN 47405; [email protected]. [^4]: Note also that the pointwise bounds of [@HR] directly imply that $\zeta$ is finite for each given $t$.
--- abstract: 'Classical density-functional theory is the most direct approach to equilibrium structures and free energies of inhomogeneous liquids, but requires the construction of an approximate free-energy functional for each liquid of interest. We present a general recipe for constructing functionals for small-molecular liquids based only on bulk experimental properties and *ab initio* calculations of a single solvent molecule. This recipe combines the exact free energy of the non-interacting system with fundamental measure theory for the repulsive contribution and a weighted density functional for the short-ranged attractive interactions. We add to these ingredients a weighted polarization functional for the long-range correlations in both the rotational and molecular-polarizability contributions to the dielectric response. We also perform molecular dynamics calculations for the free energy of cavity formation and the high-field dielectric response, and show that our free-energy functional adequately describes these properties (which are key for accurate solvation calculations) for all three solvents in our study: water, chloroform and carbon tetrachloride.' author: - 'Ravishankar Sundararaman, Kendra Letchworth-Weaver, and T A Arias' title: 'A recipe for free-energy functionals of polarizable molecular fluids' --- Conclusions =========== This work presents a general recipe for constructing free-energy functionals for small molecular liquids by building upon the success of the scalar-EOS functional for liquid water. The prescribed functional consists of the exact free energy for the non-interacting system of rigid molecules, fundamental measure theory for the short-ranged repulsive intermolecular interactions, a simplified weighted density functional for the short-ranged attractive intermolecular interactions, and mean-field Coulomb interactions along with a weighted polarization-density correlation functional for the long-ranged dielectric response. The resulting functional is completely determined by bulk experimental properties, specifically the equation of state and surface tension, and by microscopic properties of the solvent molecule that can be derived from electronic density-functional calculations as detailed in appendix \[sec:AbinitioParameters\]. We test this prescription for three vastly different solvents that range from highly polar to non-polar: water, chloroform and carbon tetrachloride. We examine the two key properties that contribute to solvation of electronic systems within joint density functional theory: the free energy for forming microscopic cavities, and the nonlinearities in the dielectric response at high electric fields. We present reference molecular dynamics simulations of these properties and demonstrate that our free-energy functionals accurately reproduce them for all three solvents. In particular, the microscopic cavity-formation free energy transitions from the volume regime to the surface area regime at the correct length scale. The rotational dielectric response saturates at the correct electric field scale and the inclusion of molecular polarizability effects reproduces the correct high-field behavior including subtle effects such as electrostriction. 
In conjunction with an approximation for the interactions between a quantum-mechanical system (solute or surface) and the fluid[@DODFT-Coupling] and a suitable self-consistent joint minimization scheme, the current prescription for free-energy functionals will enable joint density-functional theory studies of the solvation of systems described at the electronic-structure level in equilibrium with small-molecule liquids. While this work is applicable to a large class of solvents, it would be desirable to extend such a general prescription to liquids of larger flexible molecules, mixtures of liquids, electrolytes and ionic liquids in future work. This work was supported as a part of the Energy Materials Center at Cornell (EMC$^2$), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001086. Additional computational time at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, was provided via the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575.
--- author: - Atul Singh Arora - Kishor Bharti - Arvind bibliography: - 'references.bib' title: 'Revisiting the admissibility of non-contextual hidden variable models in quantum mechanics' --- Introduction ============ Quantum mechanics (QM) has been one of the most successful theories in physics so far, however, there has not yet been a final word on its completeness and interpretation [@BellSpkblUnspkbl]. Einstein’s [@EinsteinEPR] work on the incompleteness of QM and the subsequent seminal work of Bell [@BellSpkblUnspkbl], assessing the compatibility of a more complete model involving hidden variables (HV) and locality with QM, has provided deep insights into how the quantum world differs from its classical counterpart. In recent times, these insights have been of pragmatic utility in the area of quantum information processing (QIP), where EPR pairs are fundamental motifs of entanglement [@Ekert; @PironioRndmnssCrtfcn; @NielsenChuang]. The work of Kochen Specker (KS) [@KochenSpecker] and Gleason [*et.al.*]{} [@Gleason; @Peres; @Mermin] broadened the schism between HV models and QM. They showed that it was contextuality and not non-locality which was at the heart of this schism and the incompatibility between HV models and QM can arise even for a single indivisible quantum system. Contextuality has thus been identified as a fundamental non-classical feature of the quantum world and experiments have also been proposed and conducted to this effect [@SimonContExpProp; @HuangContExp]. Contextuality, on the one hand has led to investigations on the foundational aspects of QM including attempts to prove the completeness of QM [@PawelCntxClsscl; @CabelloMmryQM], and on the other hand has been harnessed for computation and cryptography [@HowardCntxCmptn; @CabelloCntxScrt]. [ While there have been generalisations, in this letter, we restrict ourselves to the standard notion of non-contextuality as used by KS [@KochenSpecker].]{} Not all HV models (e.g. Bohm’s model based on trajectories [@Bohm1; @Bohm2]), however, are incompatible with QM [@BellOnHiddenVariables] and we, in this letter, present a new non-contextual HV model consistent with QM. ![Non-contextuality is not inconsistent with kinematic predictions of QM[]{data-label="fig:block"}](block2){width="0.98\columnwidth"} [ The KS theorem is applicable to non-contextual HV models which additionally satisfy *functional-consistency — algebraic constraints obeyed by commuting quantum observables must be satisfied by the HV model at the level of individual outcomes*. One of the main justifications for imposing this was the requirement that [ the]{} observable $\hat A^2$ must depend on $\hat A$ even at the HV level. This sounds reasonable because otherwise one can construct HV models where these observables [ can]{} get mapped to random variables with no relation to each other (see [@KochenSpecker]). As we will show, one can weaken this requirement into what we call *weak functional-consistency – functional-consistency is demanded only when the observables in question have sharp values*. This entails that even in the HV model the observables will depend on each other where they should for [ consistency]{} and not otherwise, thereby capturing the essential idea [ without undue constraints]{}. The consequence of demanding only the weaker variant is that the KS theorem no longer applies and non-contextual HV models can be consistently constructed. 
This is in stark contrast with contextual HV models, such as the toy model[^1] by Bell [@BellOnHiddenVariables], because here we consider models where the algebraic constraints of QM are not obeyed in general by the HV model at the level of individual outcomes. We demonstrate this contrast by re-examining a ‘proof of contextuality’ using our model.]{} This provides a new way of looking at the classical-quantum divide and at the foundations of quantum mechanics. Non-Contextuality and Functional Consistency ============================================ We introduce some notation and make the relevant notions precise to facilitate the construction of our model. \(a) $\psi\in\mathcal{H}$ represents a pure quantum mechanical state of the system in the Hilbert space $\mathcal{H}$, (b) $\hat{\mathcal{H}}$ is defined to mean the set of Hermitian observables for the system, (c) $[\mathcal{H}]$ is defined to mean $\{\mathcal{H},\,\mathbb{R}^{\otimes}\}$, which represents the state space of the system including HVs, (d) $[\psi]\in[\mathcal{H}]$ will represent the state of the system including HVs, (e) a prediction map is $M:\{ \hat{\mathcal{H}},[\mathcal{H}] \}\to\mathbb{R}$, (f) a sequence map is $S:\{ \hat{\mathcal{H}},[\mathcal{H}],\mathbb{R} \}\to[\mathcal{H}]$, (g) $f$ is an arbitrary map from $\{ \hat{\mathcal{H}},\hat{\mathcal{H}},\dots\hat{\mathcal{H}} \} \to \hat{\mathcal{H}}$ constructed using multiplication and addition of compatible observables, and multiplication with complex numbers, (h) $\tilde{f}$ is a map constructed by replacing observables in $f$ with real numbers. A theory is non-contextual, if it provides a map $M: \{ \hat{\mathcal{H}},[\mathcal{H}] \} \to\mathbb{R}$ to explain measurement outcomes. A theory which is not non-contextual is contextual [@peresBook]. A prediction map of the form [$M: \{ \hat{\mathcal{H}},[\mathcal{H}] \} \to\mathbb{R}$]{} itself can be called non-contextual. [ Broader definitions in the literature have been suggested [@spekkens_pra] which extend the notion to probabilistic HV models. The idea is that any feature of a HV model that is not determined solely by the operational aspect of QM is defined to be a demonstration of contextuality. If, for instance, this distinction arises in the measurement procedure, then it is termed measurement contextuality. To maintain a distinction between different features of HV models, which is of interest here, we stick to the standard definition.]{} [ In addition to a HV model being non-contextual, KS [@KochenSpecker] demand *functional-consistency* to establish their no-go theorem, which is defined below in our notation. ]{} A prediction map $M$ is *functionally-consistent* iff $$\begin{aligned} & M(f(\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{N}),[\psi]) = \\ & \tilde f(M(\hat{B}_{1},[\psi]),M(\hat{B}_{2},[\psi]),\dots M(\hat{B}_{N},[\psi])), \end{aligned}$$ where $\hat{B}_{i}\in\hat{\mathcal{H}}$ are arbitrary mutually commuting observables and $[\psi]\in[\mathcal{H}]$. A *non functionally-consistent* map is one that is not functionally-consistent. Note that if $M$ is taken to represent the measurement outcome (in QM), then for states of the system which are simultaneous eigenkets of $\hat{B}_{i}$s, $M$ must clearly be *functionally-consistent*. It is, however, not obvious that this property must always hold.
For example, consider two spin-half particles in the state $\left|1\right\rangle \otimes\left|1\right\rangle $ written in the computational basis and operators $\hat{B}_{1}=\hat{\sigma}_{x}\otimes\hat{\sigma}_{x}$, $\hat{B}_{2}=\hat{\sigma}_{y}\otimes\hat{\sigma}_{y}$ and $\hat{C}=\hat{B}_{1}\hat{B}_{2}=-\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}$ written in terms of Pauli operators. We must have $M(\hat{C})=-1$ while $M(\hat{B}_{1})=\pm1$ and $M(\hat{B}_{2})=\pm1$ independently, according to QM, with probability half. Here *functional-consistency* clearly is not required to hold [ and indeed this is why it was demanded in addition to being consistent with QM by KS]{}. Antithetically, it is clear that if one first measures $\hat{B}_{1}$ and subsequently measures $\hat{B}_{2}$, then the product of the results must be $-1$. This is consistent with measuring $\hat{C}$. [ In our treatment, instead of imposing *functional-consistency*, we demand its aforesaid weaker variant. It captures the same idea, however, only when it has a precise meaning according to QM.]{} [ To this end we define]{} *weak functional-consistency* as follows. A prediction map $M$ has *weak functional-consistency* for a given sequence map $S$, iff $$\begin{aligned} &M(f(\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{N}),[\psi_{1}])=\\ &\tilde f(M(\hat{B}_{1},[\psi_{k_{1}}]),M(\hat{B}_{2},[\psi_{k_{2}}]),\dots,M(\hat{B}_{N},[\psi_{k_{N}}])),\end{aligned}$$ where $\{k_{1},k_{2},\dots,k_{N}\} \in\{ N! {\rm ~permutations~of~} k'{\rm s} \}$, $\hat{B}_{i}\in\hat{\mathcal{H}}$ are arbitrary mutually commuting observables, $[\psi_{i}]\in[\mathcal{H}]$ and $[\psi_{k+1}] := S(\hat{B}_{k},[\psi_{k}],M(\hat{B}_{k},[\psi_{k}]))$, $\forall\,[\psi_{i}]$. \[defn:seqnMltpl\] With these definitions we are now ready to discuss the ‘proof of contextuality’. We first state the contextuality theorem in our notation: Let a map $M:\hat{\mathcal{H}}\to\mathbb{R}$ be s.t. (a) $M(\hat{\mathbb{I}})=1$, (b) $M(f(\hat{B}_{1},\hat{B}_{2},\dots))=\tilde f(M(\hat{B}_{1}),M(\hat{B}_{2}),\dots)$, for any arbitrary function $f$, where $\hat{B}_{i}$ are mutually commuting Hermitian operators. If $M$ is assumed to describe the outcomes of measurements, then no $M$ exists which is consistent with all predictions of QM. \[thm:KS\] Peres Mermin \[PM\] ($\left|\mathcal{H}\right|\ge4$) [@Peres; @Mermin]: For a system composed of two spin-half particles consider the following set of operators $$\hat{A}_{ij}\doteq\left[\begin{array}{ccc} \hat{\mathbb{I}}\otimes\hat{\sigma}_{x} & \hat{\sigma}_{x}\otimes\hat{\mathbb{I}} & \hat{\sigma}_{x}\otimes\hat{\sigma}_{x}\\ \hat{\sigma}_{y}\otimes\hat{\mathbb{I}} & \hat{\mathbb{I}}\otimes\hat{\sigma}_{y} & \hat{\sigma}_{y}\otimes\hat{\sigma}_{y}\\ \hat{\sigma}_{y}\otimes\hat{\sigma}_{x} & \hat{\sigma}_{x}\otimes\hat{\sigma}_{y} & \hat{\sigma}_{z}\otimes\hat{\sigma}_{z} \end{array}\right]$$ which have the property that all operators along a row (or column) commute. Further, the product of rows (or columns) yields $\hat{R}_{i}=\hat{\mathbb{I}}$ and $\hat{C}_{j}=\hat{\mathbb{I}}\,(j\neq3)$, $\hat{C}_{3}=-\hat{\mathbb{I}}$, ($\forall\,i,j$) where $\hat{R}_{i}:=\prod_{j}\hat{A}_{ij}$, $\hat{C}_{j}:=\prod_{i}\hat{A}_{ij}$. Let us assume that $M$ exists. From property (b) of the map, to get $M(\hat{C}_{3})=-1$ (as required by property (a)), we must have an odd number of $-1$ assignments in the third column. In the remaining columns, the number of $-1$ assignments must be even (for each column).
Thus, in the entire square, the number of $-1$ assignments must be odd. Let us use the same reasoning, but along the rows. Since each $M(\hat{R}_{i})=1$, we must have an even number of $-1$ assignments along each row. Thus, in the entire square, the number of $-1$ assignments must be even. We have arrived at a contradiction and therefore we conclude that $M$ does not exist. One could in principle assume $M$ to be s.t. (a) $M(\hat{\mathbb{I}})=1$, (b) $M(\alpha\hat{B}_{i})=\alpha M(\hat{B}_{i})$, for $\alpha\in\mathbb{R}$, (c) $M(\hat{B}_{i}^{2})=M(\hat{B}_{i})^{2}$, (d) $M(\hat{B}_{i}+\hat{B}_{j})=M(\hat{B}_{i})+M(\hat{B}_{j})$, to deduce $M(\hat{B}_{i}\hat{B}_{j})=M(\hat{B}_{i})M(\hat{B}_{j})$ and that $M(\hat{B}_{i})\in$ spectrum of $\hat{B}_{i}$. Effectively then, condition (b) listed in the theorem is satisfied as a consequence. Therefore, assuming (a)-(d) as listed above rules out a larger class of $M$ [@KochenSpecker]. Here $M$ may be viewed as a specific class of prediction maps that implicitly depends on the state $[\psi]$. It is clear that according to the theorem, non-contextual maps which are *functionally-consistent* must be incompatible with QM. This leaves open an interesting possibility that non-contextual maps which have *weak functional-consistency* could be consistent with QM. [ Before proceeding to do so explicitly, we observe that *weak functional-consistency* is, in fact, a consequence of QM.]{} Let a quantum mechanical system be in a state, s.t. measurement of $\hat{C}$ yields repeatable results (same result each time). Then according to QM, *weak functional-consistency* holds, where $\hat{C}:= f(\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{n})$, and $\hat{B}_{i}$ are as defined above. Without loss of generality we can take $\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{n}$ to be mutually compatible and a complete set of observables (operators can be added to make the set complete if needed). It follows that $\exists$ $\left|\mathbf{b}=\left(b_{1},b_{2},\dots b_{n}\right)\right\rangle $ s.t. $\hat{B}_{i}\left|\mathbf{b}\right\rangle =b_{i}\left|\mathbf{b}\right\rangle $, and that $\sum_{\mathbf{b}}\left|\mathbf{b}\right\rangle \left\langle \mathbf{b}\right|=\hat{\mathbb{I}}$. Let the state of the system $\left|\psi\right\rangle $ be s.t. $\hat{C}\left|\psi\right\rangle =c\left|\psi\right\rangle $. For the statement to follow, one need only show that $\left|\psi\right\rangle $ must be made of only those $\left|\mathbf{b}\right\rangle $s, which satisfy $c=\tilde f(b_{1},b_{2},\dots b_{n})$. This is the crucial step and proving this is straightforward. We start with $\hat{C}\left|\psi\right\rangle =c\left|\psi\right\rangle $ and take its inner product with $\left\langle \mathbf{b}\right|$ to get $$\begin{aligned} c \langle \mathbf{b}|\psi\rangle & = & \langle \mathbf{b} |\hat{C} |\psi \rangle \\ &=& \langle \mathbf{b}| f(\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{n}) | \psi \rangle \\ &=& \tilde f (b_{1},b_{2},\dots b_{n}) \langle \mathbf{b}|\psi\rangle \end{aligned}$$ Also, we have $\left|\psi\right\rangle =\sum_{\mathbf{b}}\left\langle \mathbf{b}|\psi\right\rangle \left|\mathbf{b}\right\rangle $, from completeness. If we consider $\left|\mathbf{b}\right\rangle $s for which $\left\langle \mathbf{b}|\psi\right\rangle \neq0$, then we find that indeed $c=\tilde f(b_{1},b_{2},\dots b_{n})$. The case when $\left\langle \mathbf{b}|\psi\right\rangle =0$ is anyway irrelevant as the corresponding $\vert b\rangle$ does not contribute to $\vert \psi\rangle$.
We can thus conclude that $\left|\psi\right\rangle $ is made only of those $\left|\mathbf{b}\right\rangle $s that satisfy the required relation. It is worth noting that in the Peres Mermin case, where $\hat{R}_{i}$ and $\hat{C}_{j}$ are $\pm\hat{\mathbb{I}}$, it follows that all states are their eigenstates. Consequently, for these operators *weak functional-consistency* must always hold. Construction ============ We are now ready to describe our model explicitly. Let the state of a finite dimensional quantum system be $\left|\chi\right\rangle $. We wish to assign a value to an arbitrary observable $\hat{A}=\sum_{a}a\left|a\right\rangle \left\langle a\right|$, which has eigenvectors $\{ \vert a_j\rangle \}$. The corresponding ordered eigenvalues are $\{a_j\}$ such that $a_{\rm min}=a_1$ and $a_{\rm max}=a_n$. Our HV model for QM assigns values in the following three steps: 1. [**Initial HV:**]{} Pick a number $c\in[0,1]$, from a uniform random distribution. 2. [**Assignment or Prediction:**]{} The value assigned to $\hat{A}$ is given by finding the smallest $a$ s.t. $c\le\sum_{a'=a_{\text{min}}}^{a}\left|\left\langle a'|\chi\right\rangle \right|^{2},$ viz. we have specified a prediction map, $M(\hat{A},[\chi ])=a$. 3. [**Update:**]{} After measuring an operator, the state is updated (collapsed) in accordance with the rules of QM. This completely specifies the sequence map $S$. The above HV model works for all quantum systems; however, to illustrate its working, consider the example of a spin-half particle in the state $\left|\chi\right\rangle =\cos\theta\left|0\right\rangle +\sin\theta\left|1\right\rangle $ and the observable $\hat{A}=\hat{\sigma}_{z}=\left|0\right\rangle \left\langle 0\right|-\left|1\right\rangle \left\langle 1\right|$. Now, according to the postulates of this theory, $M(\hat{A},[\chi])=-1$ if the randomly generated value $c\le\sin^{2}\theta$, else $\hat{A}$ is assigned $+1$ (so that $+1$ occurs with probability $\cos^{2}\theta$); it then follows, from $c$ being uniformly random in $[0,1]$, that the statistics agree with the predictions of QM [*i.e.*]{} the Born rule. The assignment described by the prediction map $M$ is non-contextual since, given an operator and a state (along with the HV $c$), the value is uniquely assigned. The map $M$ is, however, *non functionally-consistent*. [ To see this *non functional-consistency* explicitly in]{} our model and to see its applicability to composite systems, we apply the model to the Peres Mermin situation of two spin-half particles. Consider the initial state of the system $\left|\psi_{1}\right\rangle =\left|00\right\rangle $. Assume we obtained $c=0.4$ as a random choice. To arrive at the assignments, note that $\left|00\right\rangle $ is an eigenket of only $\hat{R}_{i},\hat{C}_{j}$ and $\hat{A}_{33}=\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}$. Thus, in the first iteration, all these should be assigned their respective eigenvalues. The remaining operators must be assigned $-1$ as one can readily verify by explicitly finding the smallest $a$ as described in postulate 2 of the model (see (\[eq:toyModel\])). For the next iteration, $i=2$, after the first measurement is over, say the random number generator yielded the value $c=0.1$. Since $\left|\psi\right\rangle $ is also unchanged, the assignment remains invariant (in fact any $c<0.5$ would yield the same result as evident from the previous exercise). For the final step we choose to measure $\hat{A}_{23}(=\hat{\sigma}_{y}\otimes\hat{\sigma}_{y})$, to proceed with sequentially measuring $\hat{C}_{3}$.
To simplify calculations, we note $\left|00\right\rangle =\frac{1}{\sqrt 2} \left[ \left(\left|\tilde{+}\tilde{-}\right\rangle +\left|\tilde{-}\tilde{+}\right\rangle \right)/\sqrt{2}+\left(\left|\tilde{+}\tilde{+}\right\rangle +\left|\tilde{-}\tilde{-}\right\rangle \right)/\sqrt{2}\right] , $ with $\left|\tilde{\pm}\right\rangle =(\left|0\right\rangle \pm i\left|1\right\rangle)/\sqrt{2} $ (eigenkets of $\hat{\sigma}_{y}$). Since $\left|00\right\rangle$ is manifestly not an eigenket of $\hat A_{23}$, we must find an appropriate eigenket $\left|a^-_{23}\right\rangle $ s.t. $\hat A_{23}\left|a^{-}_{23}\right\rangle =-\left|a^{-}_{23}\right\rangle $, since $c=0.1$ and $\left|\left\langle a_{23}^{-}|00\right\rangle \right|^{2}$ is already $>0.1$. It is evident that $\left|a^{-}_{23}\right\rangle =\left(\left|\tilde{+}\tilde{-}\right\rangle +\left|\tilde{-}\tilde{+}\right\rangle \right)/\sqrt{2}=\left(\left|00\right\rangle +\left|11\right\rangle \right)/\sqrt{2}$, which becomes the final state. For the final iteration, $i=3$, say we obtain $c=0.7$. So far, we have $M_{1}(\hat{A}_{33})=1$ and $M_{2}(\hat{A}_{23})=-1$. We must obtain $M_{3}(\hat{A}_{13})=1$, independent of the value of $c$, to be consistent. Let us check that. Indeed, according to postulate 2, since $\hat{\sigma}_{x}\otimes\hat{\sigma}_{x}\left(\left|00\right\rangle +\left|11\right\rangle \right)/\sqrt{2}=1\left(\left|00\right\rangle +\left|11\right\rangle \right)/\sqrt{2}$, $M_{3}(\hat{A}_{13})=1$ for all allowed values of $c$. It is to be noted that $M_{2}(\hat{A}_{33})=M_{3}(\hat{A}_{33})$ and $M_{2}(\hat{A}_{23})=M_{3}(\hat{A}_{23})$, which essentially expresses the compatibility of these observables; i.e., once measured, the values of observables compatible with $\hat{A}_{13}$ are not affected by the measurement of $\hat{A}_{13}$. 
$$\renewcommand{\arraystretch}{1.5} \begin{array}{ccc} i=1:\,\, c=0.4,\,\left|\psi_{\text{init}}\right\rangle=\left|00\right\rangle & i=2:\,\, c=0.1,\,\left|\psi_{\text{init}}\right\rangle=\left|00\right\rangle & i=3:\,\, c=0.7,\,\left|\psi_{\text{init}}\right\rangle=\frac{1}{\sqrt{2}} (\left|00\right\rangle +\left|11\right\rangle) \\ \renewcommand{\arraystretch}{1} M_{1}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc} -1 & -1 & -1\\ -1 & -1 & -1\\ -1 & -1 & +1\end{array}\right] & \renewcommand{\arraystretch}{1} M_{2}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc}-1 & -1 & -1\\ -1 & -1 & -1\\ -1 & -1 & +1\end{array}\right] & \renewcommand{\arraystretch}{1} M_{3}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc}+1 & +1 & +1\\ +1 & +1 & -1\\ +1 & +1 & +1\end{array}\right] \\ M_{1}(\hat{R}_{i}), M_{1}(\hat{C}_{j})=+1\,(j\neq3) & M_{2}(\hat{R}_{i}), M_{2}(\hat{C}_{j})=+1\,(j\neq3) & M_{3}(\hat{R}_{i}), M_{3}(\hat{C}_{j})=+1\,(j\neq3)\\ M_{1}(\hat{C}_{3})=-1 & M_{2}(\hat{C}_{3})=-1 & M_{3}(\hat{C}_{3})=-1\\ \hat{A}_{33}=\hat{\sigma}_{z}\otimes\hat{\sigma}_{z};M_{1}(\hat{A}_{33})=+1& \quad\quad\hat{A}_{23}=\hat{\sigma}_{y}\otimes\hat{\sigma}_{y};M_{2}(\hat{A}_{23})=-1\quad\quad& \hat{A}_{13}=\hat{\sigma}_{x}\otimes\hat{\sigma}_{x};M_{3}(\hat{A}_{13})=+1\\ \left|\psi_{\text{final}}\right\rangle = \left|00\right\rangle & \left|\psi_{\text{final}}\right\rangle = \frac{1}{\sqrt{2}}(\left|00\right\rangle +\left|11\right\rangle) & \left|\psi_{\text{final}}\right\rangle = \frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle) \label{eq:toyModel} \end{array} $$ The *non functional-consistency* is manifest, for $M_{1}(\hat{C}_{3})=-1\neq M_{1}(\hat{A}_{13})M_{1}(\hat{A}_{23})M_{1}(\hat{A}_{33})=+1$, where the subscript refers to the iteration number. More precisely, $M_{1}(\hat{O}):=M(\hat{O},\left[\left|\psi_{1}\right\rangle =\left|00\right\rangle \right])$, where the complete state $\left[\left|\psi_{1}\right\rangle \right]$ implicitly refers to both the quantum state $\left|00\right\rangle $ and the HV $c=0.4$. The model, however, obeys the *weak functional-consistency* requirement, namely, $M_{1}(\hat{C}_{3})=M_{1}(\hat{A}_{33})M_{2}(\hat{A}_{23})M_{3}(\hat{A}_{13})$, where $M_{2}:=M(\hat{O},\left[\left|\psi_{2}\right\rangle \right])$, $M_{3}:=M(\hat{O},\left[\left|\psi_{3}\right\rangle \right])$ and $\left|\psi_{2}\right\rangle ,\,\left|\psi_{3}\right\rangle $ are obtained from postulate 3. Note that for each iteration, a new HV is generated. Implications and Remarks ======================== [ The model demonstrates *non functional-consistency* as an alternative signature of quantumness as opposed to contextuality. This view is not just an artifact of the simplicity of the model. It ]{}has implications for Bohm’s HV model (BHVM), where, if the initial conditions are precisely known, the entire trajectory of a particle (guided by the wavefunction) can be predicted, including the individual outcomes of measurements. BHVM, applied to a single spin-half particle in a Stern-Gerlach experiment, can predict opposite results for two measurements of the same operator. This observation, which is often used as a demonstration of “contextuality” in BHVM, does not involve overlapping sets of compatible measurements to provide two different contexts [@HardyCntxBM] [and is therefore not of interest here.]{} It turns out that, if one constructs a one-one map between an experiment and an observable (by following a certain convention), this so-called “contextuality” can be removed from BHVM. 
However, the prediction map so obtained from observables to measurement outcomes turns out to be *non functionally-consistent*, suggesting that *non functional-consistency* is a more suitable explanation of the non-classicality of BHVM. In fact, our model, when appropriately extended to continuous variables, yields Bohmian trajectories in the single-particle case, which suggests that it can be used as a starting point for constructing more interesting families of HV theories that take quantum time dynamics into account as well. Bell himself had constructed a deceptively similar toy model[^2] to demonstrate a HV construction that is not ruled out by his contextuality no-go theorem. However, his model was contextual (and *functionally-consistent*), in contrast with our construction, which is non-contextual (and has *weak functional-consistency*). We end with two short remarks. First, we illustrate how *non functional-consistency* gives rise to situations which could be confused with the presence of contextuality. Consider, for two spin-1/2 particles, $$\begin{aligned} &&\hat{B}_{1}\!=\!\hat{\sigma}_{z}\otimes\hat{\mathbb{I}}\!=\\ &&\!\left|00\right\rangle \left\langle 00\right|+\left|01\right\rangle \left\langle 01\right|-\left[\left|10\right\rangle \left\langle 10\right|+\left|11\right\rangle \left\langle 11\right|\right], \\ &&\hat{B}_{2}\!=\!\hat{\mathbb{I}}\otimes\hat{\sigma}_{z}\!=\\ &&\!\left|00\right\rangle \left\langle 00\right|+\left|10\right\rangle \left\langle 10\right|-\left[\left|01\right\rangle \left\langle 01\right|+\left|11\right\rangle \left\langle 11\right|\right], \\ &&\hat{C}=\!f(\{\hat{B}_{i}\}) \\ &&\quad=1\cdot\left|00\right\rangle \left\langle 00\right|+2\cdot\left|01\right\rangle \left\langle 01\right|+ \\ &&\! \quad 3\cdot\left|10\right\rangle \left\langle 10\right|+4\cdot\left|11\right\rangle \left\langle 11\right|. \end{aligned}$$ $\hat{C}$ may be viewed as a function of $\hat{B}_{1}$, $\hat{B}_{2}$ and other operators $\hat{B}_{i}$ which are constructed to obtain a maximally commuting set. A measurement of $\hat{C}$ will collapse the state into one of the states which are simultaneous eigenkets of $\hat{B}_{1}$ and $\hat{B}_{2}$. Consequently, from the observed value of $\hat{C}$, one can deduce the values of $\hat{B}_{1}$ and $\hat{B}_{2}$. Now consider $\sqrt{2}\left|\chi\right\rangle =\left|10\right\rangle +\left|01\right\rangle $, for which $M_{1}(\hat{B}_{1})=1$, and $M_{1}(\hat{B}_{2})=1$, using our model, with $c<0.5$. However, $M_{1}(\hat{C})=2$, from which one can deduce that $\hat{B}_{1}$ was $+1$, while $\hat{B}_{2}$ was $-1$. This property itself one may be tempted to call contextuality, viz. that the value of $\hat{B}_{1}$ depends on whether it is measured alone or with the remaining $\{\hat{B}_{i}\}$. However, it must be noted that $\hat{B}_{1}$ has a well-defined value, and so does $\hat{C}$. Thus by our accepted definition, there is no contextuality. It is just that $M_{1}(\hat{C})\neq \tilde f(M_{1}(\hat{B}_{1}),M_{1}(\hat{B}_{2}),\dots)$, viz. the theory is *non functionally-consistent*. Note that after measuring $\hat{C}$, however, $M_{2}(\hat{B}_{1})=+1$ and $M_{2}(\hat{B}_{2})=-1$ (for any value of $c$), consistent with those deduced by measuring $\hat{C}$. Evidently, *functional-consistency* must hold for the common eigenkets of the $\hat{B}_{i}$’s. Consequently, any violation of *functional-consistency* must arise from states that are superpositions or linear combinations of these eigenkets. 
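The prediction map of postulate 2 is simple enough to simulate directly. The following is a minimal numerical sketch (ours, not part of the model's formal statement) that implements the assignment rule for an arbitrary Hermitian observable and checks, for the single spin-half example of the Construction section, that averaging over the uniformly distributed HV $c$ reproduces the Born-rule statistics; the function names are ours, and which outcome occupies which sub-interval of $[0,1]$ depends only on the ordering convention for the eigenvalues, not on the resulting statistics.

```python
# A minimal sketch (ours) of the prediction map M of postulates 1-2:
# draw c uniformly from [0,1] and assign to the observable A the smallest
# eigenvalue a such that c <= sum_{a' <= a} |<a'|chi>|^2.
import numpy as np

def prediction_map(A, chi, c):
    """Value assigned to observable A in state chi for hidden variable c."""
    vals, vecs = np.linalg.eigh(A)              # eigenvalues in ascending order
    weights = np.abs(vecs.conj().T @ chi) ** 2  # Born weights |<a_j|chi>|^2
    cumulative = np.cumsum(weights)
    j = np.searchsorted(cumulative, c)          # first j with c <= cumulative[j]
    return vals[min(j, len(vals) - 1)]          # guard against rounding at c ~ 1

# Spin-half example: chi = cos(theta)|0> + sin(theta)|1>, A = sigma_z.
theta = 0.7
chi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
sigma_z = np.diag([1.0, -1.0])

rng = np.random.default_rng(0)
outcomes = [prediction_map(sigma_z, chi, rng.uniform()) for _ in range(200000)]
print("empirical P(+1):", np.mean(np.isclose(outcomes, 1.0)),
      "  Born rule cos^2(theta):", np.cos(theta) ** 2)
```

Since `numpy.linalg.eigh` lists equal eigenvalues consecutively, the value returned when $c$ falls inside a degenerate block is that block's eigenvalue, which is exactly what the cumulative criterion of postulate 2 prescribes for degenerate spectra.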
[ Second, note that]{} entanglement is not necessary to demonstrate *non functional-consistency*; for instance the Peres Mermin test is a state independent test where a separable state can be used to arrive at a contradiction. On the other hand if everything is *functionally-consistent* in a situation, can we have violation of Bell’s inequality or non-locality? The answer is no and, therefore, one can say that Bell’s inequalities bring out non-local consequences of *non functionally-consistent* prediction maps and the notion of *non functional-consistency* is more basic. Conclusion ========== In this letter we have presented *non functional-consistency* as an alternative to contextuality and as an essential signature of quantumness at the kinematic level. Our result points to a (quantum) dynamical exploration of contextuality which so far has effectively been studied kinematically only. We expect our result to provide new insights that will be useful in the areas of foundations of QM and QIP. ASA and Arvind acknowledge funding from KVPY and DST Grant No. EMR/2014/000297 respectively. [^1]: introduced in connection with Gleason’s Theorem [^2]: as referred to earlier
--- author: - | [Maxim R. Burke]{} [^1]\ [Department of Mathematics and Statistics]{}\ [University of Prince Edward Island]{}\ [Charlottetown PE, Canada C1A 4P3]{}\ [[email protected]]{} - | [Arnold W. Miller]{} [^2]\ [University of Wisconsin-Madison]{}\ [Department of Mathematics, Van Vleck Hall]{}\ [480 Lincoln Drive]{}\ [Madison, Wisconsin 53706-1388]{}\ [[email protected]]{}\ [http://www.math.wisc.edu/$\sim$miller]{} title: Models in which every nonmeager set is nonmeager in a nowhere dense Cantor set --- Abstract > We prove that it is relatively consistent with ZFC that in any perfect Polish space, for every nonmeager set $A$ there exists a nowhere dense Cantor set $C$ such that $A\cap C$ is nonmeager in $C$. We also examine variants of this result and establish a measure theoretic analog. Introduction ============ Our starting point is the following question of Laczkovich: > Does there exist (in ZFC) a nonmeager set that is relatively meager in every nowhere dense perfect set? Note that the continuum hypothesis implies the existence of a Luzin set, i.e., an uncountable set of reals which meets every nowhere dense set in a countable set. Hence, we can think of Laczkovich’s question as asking if one can construct a sort of weak version of a Luzin set without any extra set theoretic assumptions. Recall that a perfect set in a Polish space is a closed nonempty set without isolated points, and a Polish space is said to be perfect if it is nonempty and has no isolated points. As we shall see, the underlying space in the question of Laczkovich can be taken to be any perfect Polish space. If we ask, as is quite natural, for the nowhere dense perfect sets in the statement to be Cantor sets (i.e., sets homeomorphic to the Cantor middle third set), then we do not know whether the nature of the Polish space matters. Even for various standard incarnations of the reals (the real line, the Baire space, and so on), we have only partial results on their equivalence in this context. We answered Laczkovich’s question for the Cantor set in 1997 by building a model where the answer is negative. (And of course the perfect nowhere dense sets in this case are necessarily Cantor sets.) Very shortly afterwards, we noticed the more elegant solution suggested by the first sentence of Remark \[(b) is enough for perf(R) + model from CS\] in which the negative answer is deduced from a slight variant on a statement proven consistent by Shelah in [@Sh1980]. We show in Section \[Consistency results\] that the stronger conclusion in which, for any perfect Polish space, the perfect nowhere dense sets can be taken to be Cantor sets follows from yet another variant on the same statement. The proof of the consistency of the variants in question is similar to the proof of Shelah. Unfortunately, the proof is quite technical and the argument in [@Sh1980] is only a brief sketch, so we give the argument in some detail in Section \[Order-isomorphisms of everywhere nonmeager sets\] in order to be clear. An alternative model for the negative answer to Laczkovich’s question for the Cantor set is provided by a paper of Ciesielski and Shelah [@CS]. See Remark \[(b) is enough for perf(R) + model from CS\]. In the final section of the paper, we show how a measure theoretic version of our results can be deduced from results in Roslanowski and Shelah [@RS]. The authors thank Ilijas Farah for helpful discussions concerning the models constructed in [@RS]. 
Write $\perf(X)$ for a Polish space $X$ to mean that for every nonmeager set $A\sq X$ there is a nowhere dense perfect set $P\sq X$ such that $A\cap P$ is nonmeager relative to $P$. Write $\cantor(X)$ if moreover $P$ can be taken to be a Cantor set. Note that $\perf(X)$ and $\cantor(X)$ are trivially equivalent in spaces in which nowhere dense perfect sets are necessarily Cantor sets, e.g., $2^\o$ and $\R$. We recall for emphasis the following well-known elementary fact of which we will make frequent use without mention. \[rel nwd in dense subspace\][*If $X$ is a topological space and $Y$ is a dense subspace of $X$, then for any $A\sq Y$, $A$ is nowhere dense in $Y$ if and only if $A$ is nowhere dense in $X$. Similarly, $A$ is meager in $Y$ if and only if $A$ is meager in $X$.* ]{} Relationships between various Polish spaces {#Relationships between various Polish spaces} =========================================== We begin by showing that for any two perfect Polish spaces $X$ and $Y$, $\perf(X)$ and $\perf(Y)$ are equivalent statements. \[equivalence of perf(irr) and perf(Polish)\] ** 1. Suppose $X$ is a perfect Polish space and $\perf(X)$ holds. Then $\perf(\o^\o)$ holds. 2. $\perf(\o^\o)$ implies $\perf(X)$ for every perfect Polish space $X$. We will use the well-known fact that every perfect Polish space $X$ has a dense $G_\d$ subset $Y$ homeomorphic to $\o^\o$. (To get $Y$, first remove the boundaries of the elements of a countable base for $X$. What remains is a zero-dimensional dense $G_\d$. Remove a countable dense subset of this dense $G_\d$ and call the result $Y$. Then $Y$ is a perfect Polish space which is zero-dimensional and has no compact open sets and hence is homeomorphic to $\o^\o$.) \(a) Let $Y$ be a residual subspace of $X$ homeomorphic to $\o^\o$. Let $A$ be a nonmeager set in $Y$. In $X$, $A$ is nonmeager so there is a nowhere dense perfect set $C$ such that $A\cap C$ is nonmeager in $C$. By replacing $C$ by the closure of one of its nonempty open subsets, we may assume that $A$ is everywhere nonmeager in $C$. In particular, $A\cap C$ is dense in $C$. Note that $F = Y\cap C$ is closed relative to $Y$, is nonempty and has no isolated points (because it contains $A\cap C$ which is dense in $C$). Since $F$ is dense in $C$, $A\cap C=A\cap F$ is not meager in $F$. Also, because $Y$ is dense in $X$ and $F$ is nowhere dense in $X$, $F$ is also nowhere dense in $Y$. \(b) Let $X$ be a perfect Polish space. Let $Y$ be a residual subspace of $X$ homeomorphic to $\o^\o$. Let $A\sq X$ be nonmeager. Then $A\cap Y$ is nonmeager in $X$ and hence in $Y$ as well since $Y$ is dense. By $\perf(\o^\o)$, there is a nowhere dense perfect set $C$ in $Y$ such that $A\cap C$ is nonmeager in $C$. If $P$ denotes the closure of $C$ in $X$, then, since $C$ is dense in $P$, $A\cap C$ is nonmeager in $P$ and hence $A\cap P$ is also nonmeager in $P$. $P$ is perfect since it is the closure of a nonempty set without isolated points. $P$ is nowhere dense since it is the closure of a set which is nowhere dense in $Y$ and hence in $X$ as well. Part (b) holds for $\cantor(\cdot)$ by an easier argument. \[(b) for perf\^\*\][*$\cantor(\o^\o)$ implies $\cantor(X)$ for every perfect Polish space $X$.*]{} Similar to the proof of Proposition \[equivalence of perf(irr) and perf(Polish)\](b), except that this time the proof yields a nowhere dense Cantor set $C\sq Y$ such that $A\cap C$ is nonmeager in $C$ and then we are done. We do not know whether (a) holds for $\cantor(\cdot)$. 
\[Irrationals vs Cantor set\][Does $\cantor(2^\o)$ imply $\cantor(\o^\o)$?]{} \[Unit square vs Cantor set\][Does $\cantor([0,1])$ imply $\cantor([0,1]\times[0,1])$?]{} Of course, $\cantor([0,1])\equiv\perf([0,1]) \equiv\perf(2^\o) \equiv\cantor(2^\o)$, so these two questions have equivalent hypotheses. We introduce one more version of $\perf(X)$ based on the following observation. If $\perf(\o^\o)$ holds, then for any nonmeager set $A$, we have a perfect set $P$ such that $A\cap P$ is nonmeager in $P$. Replacing $P$ by the closure of one of its open sets, we may assume that $A\cap P$ is everywhere nonmeager in $P$. Then if $P$ has a compact open subset $U$, then $U$ is a Cantor set and $A\cap U$ is nonmeager in $U$. Otherwise, $P$ itself is homeomorphic to $\o^\o$. Hence, the perfect set $P$ in the conclusion of $\perf(\o^\o)$ can always be taken to be either a closed nowhere dense copy of $\o^\o$ or a Cantor set. Let $\irr(X)$ be the strengthening of $\perf(X)$ in which we require that the perfect nowhere dense sets in the definition be homeomorphic to the Baire space $\o^\o$. Of course a Polish space need not contain any closed copies of $\o^\o$, so $\irr(X)$ can fail. However, when $X=\o^\o$ it would seem reasonable that $\irr(X)$ might hold, and we will show in the next section that $\irr(\o^\o)$ is indeed consistent. Its relationship to $\cantor(\o^\o)$ is unclear to us. \[question on irr\][(a) Does $\perf(\o^\o)$ imply that one of $\irr(\o^\o)$ or $\cantor(\o^\o)$ must hold? (b) Does either of $\irr(\o^\o)$ or $\cantor(\o^\o)$ imply the other? ]{} Consistency results {#Consistency results} =================== We now turn to the proof of the consistency of $\cantor(\o^\o)$ and $\irr(\o^\o)$. We need a variation on the following result which forms part of the proof of [@Sh1980 Theorem 4.7] which states that if ZFC is consistent, then so is ZFC + $\neg$CH + “There is a universal (linear) order of power $\o_1$.” \[universal order 1\] *If ZFC is consistent, then so is ZFC + both of the following statements.* 1. There is a nonmeager set in $\R$ of cardinality $\o_1$. 2. Let $A$ and $B$ be everywhere nonmeager subsets of $\R$ of cardinality $\o_1$. Then $A$ and $B$ are order-isomorphic. We shall need the following variant of this result. \[universal order 2\] *If ZFC is consistent, then so is ZFC + both of the following statements.* 1. Every nonmeager set in $\R$ has a nonmeager subset of cardinality $\o_1$. 2. Let $A$ and $B$ be everywhere nonmeager subsets of $\R$ of cardinality $\o_1$. Suppose we are given countable dense subsets $A_0\sq A$ and $B_0\sq B$. Then $A$ and $B$ are order-isomorphic by an order isomorphism taking $A_0$ isomorphically to $B_0$. \[Is added part automatic\][In the presence of (a), does (b) imply (b)$'$?]{} We shall in fact verify in Theorem \[universal order 3\] that in (b)$'$ we can even ask that given pairwise disjoint countable dense subsets $A_i$, $i<\o$, of $A$ and pairwise disjoint countable dense subsets $B_i$, $i<\o$, of $B$, the order-isomorphism of $A$ and $B$ takes $A_i$ isomorphically to $B_i$ for each $i<\o$. As explained in the introduction, the proof is similar to the one in [@Sh1980], but as the proof is quite technical and the argument in [@Sh1980] is only a brief sketch, we need to give the argument in some detail in order to be clear. We do that in the next section. Here, we derive the consequences of interest to us for this paper. The definition of $\irr(X)$ is given at the end of Section \[Relationships between various Polish spaces\]. 
\[consequences of interest\][*Assume [(a)$'$]{} and [(b)$'$]{}. Then $\cantor(\o^\o)$ and $\irr(\o^\o)$ both hold.*]{} We will use the following elementary fact. \[extending order isomorphisms\][If $K,L\subseteq\R$ are dense and $h\colon K\to L$ is an order isomorphism, then $h$ extends to an order isomorphism of $\R$. ]{} Suppose that $A\subseteq\R\sm\Q$ is not meager. We wish to find a Cantor set $C\sq\R\sm\Q$ such that $A\cap C$ is nonmeager relative to $C$. By (a)$'$, we may assume that $A$ has cardinality exactly $\o_1$. $A$ is everywhere nonmeager in some open interval $(a,b)$. Let $C\subseteq (a,b)\sm\Q$ be a Cantor set, and, by (a)$'$, let $B\subseteq C$ be a set of cardinality $\o_1$ which is nonmeager relative to $C$. Then $A\cup\Q$ and $A\cup B\cup\Q$ are both everywhere nonmeager in $(a,b)$ and both have cardinality $\o_1$. By (b)$'$, there is an order-isomorphism $h\colon A\cup\Q\to A\cup B\cup\Q$ such that $h[\Q]=\Q$. Extend $h$ to $(a,b)$ and denote the extension also by $h$. Since $h$ is a homeomorphism, $h^{-1}[C]$ is a Cantor set and $h^{-1}[B]$ is nonmeager relative to $h^{-1}[C]$. Since $h^{-1}[C]\sq\R\sm\Q$ and $h^{-1}[B]\subseteq A$, we are done. To get $\irr(\o^\o)$, we make a different choice of $C$ in the preceding argument. This time, choose $C$ to be any Cantor set so that $C\cap\Q$ is dense in $C$. Then $h^{-1}[C]$ will have the same property, so $h^{-1}[C\sm\Q]=h^{-1}[C]\sm\Q$ is closed nowhere dense in $\R\sm\Q$ and homeomorphic to $\o^\o$. \[(b) is enough for perf(R) + model from CS\] The reader can easily verify that a similar but simpler argument yields that (a)$'$ and (b) imply $\perf(\R)$. An alternative proof of the consistency of $\perf(\R)$ can be had by using Theorem 2 of [@CS], which states that the following statement is consistent relative to ZFC: > For every $A\subseteq 2^\omega\times 2^\omega$ for which the sets $A$ and $A^c=(2^\omega\times 2^\omega\setminus A)$ are nowhere meager in $2^\omega\times 2^\omega$ there is a homeomorphism $f:2^\omega\to 2^\omega$ such that the set $\{x\in 2^\omega: (x,f(x))\in A\}$ does not have the Baire property in $2^\omega$. Note that the map $2^\o\to f$ given by $x\mapsto(x,f(x))$ is a homeomorphism. Hence the conclusion could be stated as “$f\cap A$ does not have the Baire property in $f$”. Since $2^\omega\times 2^\omega$ is homeomorphic to $2^\omega$ and the graph of a homeomorphism of $2^\o$ is a perfect nowhere dense set in $2^\omega\times 2^\omega$, the statement above implies the following special case of $\perf(2^\o)$ (which is equivalent to $\perf(\R)$). > For every $A\subseteq 2^\omega$ for which the sets $A$ and $A^c$ are both nowhere meager in $2^\omega$, there is a perfect nowhere dense set $P$ such that the set $A\cap P$ is not meager in $P$. To reduce $\perf(2^\o)$ to this special case, consider a nonmeager set $A\subseteq 2^\omega$. $A$ is everywhere nonmeager in some clopen set $U$. If $A$ is comeager in some clopen set, then it contains a nowhere dense perfect set and we are done. Hence we may assume that, relative to $U$, $A$ and $A^c$ are both everywhere nonmeager. The clopen set $U$ is homeomorphic to $2^\omega$, so we now find ourselves in the special case described above. Order-isomorphisms of everywhere nonmeager sets {#Order-isomorphisms of everywhere nonmeager sets} =============================================== We now turn to the proof of the consistency of (a)$'$ and (b)$'$. We begin by recalling the basic properties of oracle-cc forcing. See [@Sh1998 Chapter IV] for the details. 
A version of this material is also explained in [@Bu Sections 4–6]. \[oracle\][A sequence $$\barM=\la M_\d:\d\ \mbox{is a limit ordinal}\ <\o_1\ra$$ is called an [*oracle*]{} if each $M_\d$ is a countable transitive model of a sufficiently large fragment of ZFC, $\d\in M_\d$ and for each $A\sq\o_1$, $\{\d:A\cap\d\in M_\d\}$ is stationary in $\o_1$.]{} The meaning of “sufficiently large” depends on the context. In a particular proof, some fragment of ZFC for which models can be produced in ZFC must suffice for all the oracles in the proof. The existence of an oracle is equivalent to $\diamondsuit$, (see [@Ku Theorem II 7.14]) and hence implies CH. We limit the definition of the $\barM$-chain condition to partial orders of cardinality $\o_1$. This covers our present needs. Associated with an oracle $\barM$, there is a filter $\Trap$ generated by the sets $$\{\d<\o_1:\d\ \mbox{is a limit ordinal and}\ A\cap\d\in M_\d\},\quad A\sq\o_1.$$ This is a proper normal filter containing all closed unbounded sets. \[completely embedded modulo D\] If $P$ is any partial order, $P'\sq P$, and ${\mathfrak D}$ is any class of sets, then we write $P'<_{\mathfrak D}P$ to mean that every predense subset of $P'$ which belongs to ${\mathfrak D}$ is predense in $P$. \[oracle-cc\][A partial order $P$ [*satisfies the $\barM$-chain condition*]{}, or simply [*is $\barM$-cc*]{}, if there is a one-to-one function $f\colon P\to\o_1$ such that $$\{\d<\o_1:\d\ \mbox{is a limit ordinal and}\ f^{-1}(\d)<_{M_{\d,f}}P\}$$ belongs to $\Trap$, where $M_{\d,f}=\{f^{-1}(A):A\sq\d,\,A\in M_\d\}$.]{} It is not hard to verify that if $P$ is $\barM$-cc, then $P$ is ccc. Also, any one-to-one function $g\colon P\to\o_1$ can replace $f$ in the definition. \[oracle-cc properties\] *The $\barM$-cc satisfies the following properties.* 1. If $\a<\o_2$ is a limit ordinal, $\la\la P_\beta\ra_{\beta\leq\a},\la \dot Q_\beta\ra_{\beta<\a}\ra$ is a finite-support $\a$-stage iteration of partial orders, and for each $\beta<\a$, $P_\beta$ is $\barM$-cc, then $P_\a$ is $\barM$-cc. 2. If $P$ is $\barM$-cc, then there is a $P$-name $\barM^*$ for an oracle such that for each $P$-name $\dot Q$ for a partial order, if $\Vdash_P$ “$\dot Q$ is $\barM^*$-cc” then $P*\dot Q$ is $\barM$-cc. 3. If $\barM_\a$, $\a<\o_1$, are oracles, then there is an oracle $\barM$ such that for any partial order $P$, if $P$ is $\barM$-cc, then $P$ is $\barM_\a$-cc for all $\a<\o_1$. We will need the following lemmas. \[Main Lemma 1\][*Let $\barM=\la M_\d:\d<\o_1\ra$ be an oracle and let $A$ and $B$ be everywhere nonmeager subsets of $\R$. Suppose we are given pairwise disjoint countable dense subsets $A_i$, $i<\o$, of $A$ and pairwise disjoint countable dense subsets $B_i$, $i<\o$, of $B$. Then there is a forcing notion $P$ satisfying the $\barM$-cc such that for every $G\sq P$ generic over $V$, $V[G]\models$ $A$ and $B$ are order-isomorphic by an order isomorphism taking $A_i$ isomorphically to $B_i$ for each $i<\o$.* ]{} Fix well-orderings of $A$ and $B$ in type $\o_1$. (CH holds because there is an oracle.) We will inductively define one-to-one enumerations $\la a_\a:\a<\o_1\ra$ of $A$ and $\la b_\a:\a<\o_1\ra$ of $B$ and functions $f_\d$, $\d<\o_1$. We let $A_\d=\{a_\a:\o\d\leq\a<\o(\d+1)\}$ and $B_\d=\{b_\a:\o\d\leq\a<\o(\d+1)\}$ for $\d<\o_1$. For $A'\sq A$ and $B'\sq B$, let $P(A',B')$ denote the set of finite partial order-preserving maps $p\colon A'\to B'$ such that $p[A_\d]\sq B_\d$ for all $\d<\o_1$. 
We also use the notation $$A\restr\a=\{a_\beta:\beta<\a\},\ B\restr\a=\{b_\beta:\beta<\a\}.$$ We will arrange that the following conditions hold. 1. The sets $A_\d$ and $B_\d$ are dense in $\R$. 2. For $\d<\o$, the sets $A_\d$ and $B_\d$ are as in the hypothesis. 3. For each $\d<\o_1$, $f_\d$ is a bijective map of $P(A\restr\o\d,B\restr\o\d)$ onto $\o\d$. 4. For each $\d<\d'<\o_1$, $f_\d\sq f_{\d'}$. 5. For each infinite $\d<\o_1$, the predense subsets of $P(A\restr\o\d,B\restr\o\d)$ which have the form $f_\d^{-1}[S]$ for some $S\in \bigcup_{\eta\leq\d}M_{\eta}$ remain predense in $P(A\restr\o(\d+1),B\restr\o(\d+1))$. To do this, we proceed as follows. The construction of the functions $f_\d$ is dictated by (4) at limit stages, and $f_{\d+1}$ is an arbitrary extension of $f_\d$ satisfying (3). The elements of $A_\d$ and $B_\d$ for $\d<\o$ are given by (2). For $\d\geq\o$, by induction on $\d$ we choose the elements of $A_\d$ and $B_\d$ by alternately defining $a_{\o\d+n}$ and $b_{\o\d+n}$, beginning with $a_{\o\d}$ when $\d$ is even and with $b_{\o\d}$ when $\d$ is odd. Let us illustrate the construction with the case where $\d$ is even. Fix an enumeration $\la I_m:0<m<\o\ra$ of the nonempty open intervals with rational endpoints. The first element $a_{\o\d}$ is simply the least element, in the well-ordering of $A$ fixed at the beginning of the proof, which is different from any of the elements of $A$ chosen so far. We now choose $b_{\o\d},a_{\o\d+1}, b_{\o\d+1},a_{\o\d+2},b_{\o\d+2},\dots$ in that order. For $n>0$, we pick $a_{\o\d+n}$ and $b_{\o\d+n}$ from $I_n$ to ensure $A_\d$ and $B_\d$ will be dense. To choose one of these elements, say $b_{\o\d+n}$, let $N$ be a countable elementary submodel of $H_\t$, for a suitably large $\t$, such that $A$, $B$, $f_\d$, the sequences $\la a_\a:\a\leq\o\d+n\ra$ and $\la b_\a:\a<\o\d+n\ra$, and $\bigcup_{\eta\leq\d}M_\eta$ are all elements of $N$. Choose $b_{\o\d+n}$ to be a member of $B$ which is a Cohen real over $N$. We must check that the construction gives (5). Let $D$ be a predense subset of $P(A\restr\o\d,B\restr\o\d)$ of the appropriate form. In particular, we have $D\in N$. We will show that $D$ remains predense in $P(A\restr\o\d+n+1,B\restr\o\d+n+1)$. \[why it works\][We are showing by induction on $n$ that $D$ remains predense in $P(A\restr\o\d+n+1,B\restr\o\d+n)$ and then in $P(A\restr\o\d+n+1,B\restr\o\d+n+1)$. (This establishes (5) since each member of $P(A\restr\o(\d+1),B\restr\o(\d+1))$ belongs to $P(A\restr\o\d+n,B\restr\o\d+n)$ for some $n<\o$.) Our current stage has the second form. Note that at the stage $n=0$, we first consider the passage from $P(A\restr\o\d,B\restr\o\d)$ to $P(A\restr\o\d+1,B\restr\o\d)$. But these two partial orders are equal because there is no legal target value for $a_{\o\d}$ until $b_{\o\d}$ is chosen. So the preservation of the predense sets trivially holds at that stage. In particular, it does not matter that $a_{\d\o}$ is not Cohen generic over the previous construction.]{} Let $$p\in P(A\restr\o\d+n+1,B\restr\o\d+n+1)\sm P(A\restr\o\d+n+1,B\restr\o\d+n).$$ Then $p$ has the form $q\cup\{(a,b_{\o\d+n})\}$ for some $q\in P(A\restr\o\d+n+1,B\restr\o\d+n)$ and $a\in\{a_{\o\d+m}:m\leq n\}$. Fix $r\in D$. The set $$\{b\in\R: q\cup\{(a,b)\}\ \mbox{is compatible with}\ r\}\in N$$ is open and hence its complement $C_r$ is closed, as is the set $C_D=\bigcap_{r\in D} C_r$ of $b$ for which $q\cup\{(a,b)\}$ is incompatible with every member of $D$. 
Since $p$ is an partial order isomorphism, there are open rational intervals $J_1$ and $J_2$ such that $J_1\cap\dom p=\{a\}$, $J_2\cap \range p=\{b_{\o\d+n}\}$. Note that whenever $x\in J_1$ and $b\in J_2$, $q\cup\{(x,b)\}$ is a partial isomorphism. \[C\_D is nowhere dense in J\_2\][$C_D$ is nowhere dense in $J_2$.]{} Fix a nonempty open subinterval $J$ of $J_2$. There is an extension of $q$ by members of $A_0\times B_0$—the point of using $A_0$ and $B_0$ being simply that they are dense and contained in $A(\o\d+n+1)$ and $B(\o\d+n)$, respectively—which adds two points in $J_1\times J$ straddling the line $x=a$. So this extension has the form $$q'=q\cup\{(x_1,y_1),(x_2,y_2)\},\ x_1<a<x_2,\,y_1<y_2$$ where $(x_1,x_2)\sq J_1$ and $(y_1,y_2)\sq J$. Since $q'\in P(A\restr\o\d+n+1,B\restr\o\d+n)$, by the induction hypothesis $D$ must have an element $r$ compatible with this extension. Since $a\not\in A(\o\d)$, we have $a\not\in\dom r$. Let $x'_1$, $x'_2$ be the closest members of $\dom(q'\cup r)$ to the left and right of $a$, respectively. Write $y'_1=r(x'_1)$, $y'_2=r(x'_2)$. Then $(y'_1,y'_2)\sq (y_1,y_2)\sq J$ and for any choice of $b\in (y'_1,y'_2)$, $q\cup\{(a,b)\}$ is compatible with $r$. Hence $(y'_1,y'_2)$ is disjoint from $C_r$ and hence from $C_D$. This proves the claim. Thus, $b_{\o\d+n}\not\in C_D$ and hence $p$ is compatible with some member of $D$. This establishes (5). Now take $P=P(A,B)$. The fact that $P$ forces the desired order-isomorphism of $A$ and $B$ is clear from (1) and (2). To see that $P$ is $\barM$-cc, let $f=\bigcup_{\d<\o_1}f_\d\colon P\to\o_1$. For any $\d<\o_1$ we have $f^{-1}[\o\d]=P(A(\o\d),B(\o\d))$ and for each $S\sq\o\d$ whenever a set $D$ of the form $f^{-1}[S]=f_\d^{-1}[S]$ belongs to $M_\d$ and is predense in $P(A(\o\d),B(\o\d))$, a simple induction on $\d'$ using (5) shows that if $\d$ is infinite and $\d<\d'\leq\o_1$, then $D$ is predense in $P(A(\o\d'),B(\o\d'))$. In particular, $D$ is predense in $P=P(A(\o_1),B(\o_1))$. For a club of $\d<\o_1$ we have $\o\d=\d$, so this shows that $P$ satisfies the $\barM$-cc. \[Main Lemma 2\][*Assume $\diamondsuit$. Let $A$ be a nonmeager subset of $\R$. Then there is an oracle $\barM=\la M_\d:\d<\o_1\ra$ such that if $P$ is any partial order satisfying the $\barM$-cc, then $\Vdash_P$ “$A$ is nonmeager”.* ]{} This is [@Sh1998 Example IV 2.2]. \[universal order 3\] *If ZFC is consistent, then so is ZFC + both of the following statements.* 1. Every nonmeager set in $\R$ has a nonmeager subset of cardinality $\o_1$. 2. Let $A$ and $B$ be everywhere nonmeager subsets of $\R$ of cardinality $\o_1$. Suppose we are given pairwise disjoint countable dense subsets $A_i$, $i<\o$, of $A$ and pairwise disjoint countable dense subsets $B_i$, $i<\o$, of $B$. Then $A$ and $B$ are order-isomorphic by an order isomorphism taking $A_i$ isomorphically to $B_i$ for each $i<\o$. Start with a ground model of $V=L$. Fix a diamond sequence $$\la (f_\a,g_\a,h_\a):\a<\o_2,\,\cof(\a)=\o_1\ra$$ for trapping triples $(f,g,h)$ consisting of: 1. A function $f\colon\o_2\to([\o_2]^{\leq\o})^\o$. The idea of $f$ is that, with $\o_2$ identified with the ccc partial order we are about to build, $[\o_2]^{\leq\o}$ contains the maximal antichains. Thus, $([\o_2]^{\leq\o})^\o$ contains a name for each real number (construed as a subset of $\o$). Then for any nonmeager set $X$ in the extension, we can find a ground model function $f\colon\o_2\to([\o_2]^{\leq\o})^\o$ enumerating the names of the elements of $X$. 2. 
Functions $g,h\colon\o_1\to([\o_2]^{\leq\o})^\o$ intended to represent (enumerations of the names for the elements of) everywhere nonmeager sets of cardinality $\o_1$ with each of the sets $\{g(\o i+n):n<\o\}$ and $\{h(\o i+n):n<\o\}$, for $i<\o$, dense in $\R$. So for each $\a<\o_2$ of cofinality $\o_1$, $f_\a\colon\a\to([\a]^{\leq\o})^\o$, and $g_\a,h_\a\colon\o_1\to([\a]^{\leq\o})^\o$. Also, for each $(f,g,h)$ as in (1)–(2), $\{\a<\o_2:\cof(\a)=\o_1$, $f\restr\a=f_\a$, $g\restr\a=g_\a$ and $h\restr\a=h_\a\}$ is stationary in $\o_2$. We will inductively define an $\o_2$-stage finite support iteration $$\la\la P_\a\ra_{\a\leq\o_2},\la\dot Q_\a\ra_{\a<\o_1}\ra$$ as well as a $P_\a$-names $\barM_\a$ for oracles and one-to-one functions $F_\a\colon P_\a\to\o_2$ for $\a<\o_2$ such that the range of each $F_\a$ is an initial segment of $\o_2$ which includes $\a$ and for $\beta<\a<\o_2$, we have $F_\beta\sq F_\a$. (At each stage, $F_\a$ is any function satisfying these conditions.) For $\a<\o_2$, we will let $\dot X_\a$ denote the $P_\a$-name for the set of real numbers whose elements have the names $${\textstyle\bigcup_{n<\o}}\{n\}\times F^{-1}_\a(f_\a(\xi)(n)),\quad \xi<\a.$$ Similarly, we will let $\dot A_\a$ and $\dot B_\a$ denote the $\o_1$-sequences of $P_\a$-names for real numbers $$\left\langle{\textstyle\bigcup_{n<\o}}\{n\}\times F^{-1}_\a(g_\a(\xi)(n)):\xi<\o_1\right\rangle$$ and $$\left\langle{\textstyle\bigcup_{n<\o}}\{n\}\times F^{-1}_\a(h_\a(\xi)(n)):\xi<\o_1\right\rangle$$ respectively. At stage $\a<\o_2$ of the construction, if $\cof(\a)=\o_1$ and if $$\Vdash_{P_\a}\dot X_\a\ \mbox{is not meager},$$ then we use Lemma \[Main Lemma 2\] to get a $P_\a$-name $\barM'_\a$ for an oracle so that if $P$ is any forcing notion which satisfies the $\barM'_\a$-cc, then $X_\a$ remains nonmeager after forcing with $P$. Otherwise, in particular if $\cof(\a)\not=\o_1$, we let $\barM'_\a$ be any $P_\a$-name for an oracle. For $\beta<\a$, let $P_{\beta\a}$ be the usual $P_\beta$-name for a partial order such that $P_\a$ is isomorphic to a dense subset of $P_\beta*P_{\beta\a}$ (see \[Ba\]). Let $\barM_{\beta\a}$ be a $P_\a$-name for an oracle such that $$\mbox{If}\ \Vdash_{P_{\beta}}\ \mbox{``}P_{\beta,\a}\ \mbox{is}\ \barM_\beta\mbox{-cc and}\ \Vdash_{P_{\beta,\a}}\dot Q_\a\ \mbox{is}\ \barM_{\beta\a}\mbox{-cc''},$$ (1) $$\mbox{then}\ \Vdash_{P_{\beta}}\ \mbox{``}P_{\beta,\a+1}=P_{\beta,\a}*\dot Q_\a\ \mbox{is}\ \barM_\beta\mbox{-cc}\mbox{''}.$$ (There is such an $\barM_{\beta\a}$ by Proposition \[oracle-cc properties\](2). In (1), $\barM_{\beta\a}$ is actually a $P_{\beta}$-name for a $P_{\beta,\a}$-name for an oracle. We denote the corresponding $P_\a$-name also by $\barM_{\beta\a}$.) Let $\barM_\a$ be a $P_\a$-name for an oracle such that $$\Vdash_{P_\a}\ \mbox{``If}\ \dot Q_\a\ \mbox{is}\ \barM_\a\mbox{-cc, then} \ \dot Q_\a\ \mbox{is}\ \barM'_\a\mbox{-cc and}\ \barM_{\beta\a}\mbox{-cc for all}\ \beta<\a\mbox{''.}\leqno{(2)}$$ (Use Proposition \[oracle-cc properties\](3).) Now, if $\cof(\a)=\o_1$ and if $$\Vdash_{P_\a}\ \mbox{The ranges of}\ \dot A_\a,\dot B_\a\ \mbox{are everywhere nonmeager and each of the sets}$$ $$\hspace{45pt}\{\dot A_\a(\o i+n):n<\o\},\,\{\dot B_\a(\o i+n):n<\o\},\ \mbox{for}\ i<\o,\ \mbox{is dense in}\ \R.$$ then use Lemma \[Main Lemma 1\] to get a $P_\a$-name $\dot Q_\a$ for a partial order satisfying the $\barM_\a$-cc and forcing an isomorphism between $A_\a$ and $B_\a$ as described in the statement of the lemma. 
In all other cases, take $\dot Q_\a$ to name the partial order $Q$ for adding one Cohen real. We have thus $$\Vdash_{P_\a}\ \mbox{``}\dot Q_\a\ \mbox{satisfies the $\barM_\a$-cc''}.\leqno{(3)}$$ Now suppose that for some $P_{\o_2}$-name $\dot X$ we have $$\Vdash_{P_{\o_2}}\dot X\ \mbox{is not meager}.$$ (Every nonmeager set in any extension has a name forced by the weakest condition to be nonmeager since there is always a nonmeager set.) Fix a name $\dot f$ such that $$\Vdash_{P_{\o_2}}\dot f\colon\o_2\to\dot X\ \mbox{is onto}.$$ Then define $f\colon\o_2\to([\o_2]^{\leq\o})^\o$ so that if $$\tau_\xi={\textstyle\bigcup_{n<\o}}\{n\}\times F^{-1}(f(\xi)(n)),\ \xi<\o_2,$$ then for each $\xi<\o_2$, $$\Vdash_{P_{\o_2}}\dot f(\xi)=\tau_\xi.$$ There is a closed unbounded set $C\sq\o_2$ such that for each $\a\in C$ of cofinality $\o_1$ we have: 1. $f\restr\a\colon\a\to([\a]^{\leq\o})^\o$. 2. $\forall\xi<\a$, $\tau_\xi$ is a $P_\a$-name. 3. $\Vdash_{P_\a}\{\tau_\xi:\xi<\a\}$ is not meager. (For (iii), note that when $\a$ has cofinality $\o_1$, each $P_\a$-name for a meager set is a $P_\beta$-name for some $\beta<\a$. Thus, if $M$ is an elementary submodel of $H_\t$ for a suitably large $\t$ such that $|M|=\o_1$, $M^\o\sq M$, $\la\tau_\xi:\xi<\o_2\ra\in M$ and $\a=M\cap\o_2\in\o_2$ has cofinality $\o_1$, then for each (nice) $P_\a$-name $\s$ for a meager Borel set set, we have $\s\in M$ and hence $M$ knows about a maximal antichain of conditions each deciding a $\xi$ for which $\tau_\xi$ is forced not to be in $\s$. The antichain is countable and hence contained in $M$. For each condition in the antichain, the least $\xi$ which it decides is in $M$ and hence below $\a$. Hence $\Vdash_{P_{\a}}$ “$\{\tau_\xi:\xi<\a\}$ is not contained in $\s$”.) Choose such an $\a$ of cofinality $\o_1$ for which $f\restr\a=f_\a$. By (i) and (ii), the definition of $\tau_\xi$ would not change if we used $f_\a$ instead of $f$ and $F_\a$ instead of $F$. Then from the definition of $\dot X_\a$ we get $$\Vdash_{P_{\a}}\dot X_\a=\{\tau_\xi:\xi<\a\}.$$ So at stage $\a$ we chose a $P_\a$-name $\barM_\a$ and we arranged that $$\Vdash_{P_\a}\ \mbox{``}P_{\a,\g}\ \mbox{is}\ \barM_\a\mbox{-cc''.}$$ (This follows easily by induction on $\g\geq\a$ and Propositions \[oracle-cc properties\](1,2). (Recall that $P_{\a,\g}$ can be viewed in canonical way as an iteration: see [@Ba]. At limits $\g$ use Propositions \[oracle-cc properties\](1). At stages $\g+1$, use (3) to get $\Vdash_{P_\g}$ “$\dot Q_\g$ satisfies the $\barM_\g$-cc” and then use (2) and (1) with $(\beta,\a)$ replaced by $(\a,\g)$.) Hence, by the choice of $\barM_\a$, $$\Vdash_{P_\a}\Vdash_{P_{\a,\g}}\dot X_\a\ \mbox{is not meager}\leqno{(4)}$$ from which it follows that $$\Vdash_{P_\a}\Vdash_{P_{\a,\o_2}}\dot X_\a\ \mbox{is not meager}$$ since if this failed then we would have $$p\Vdash_{P_\a} q\Vdash_{P_{\a,\o_2}}\dot X_\a\sq \dot B$$ for some conditions $p\in P_\a$, $q\in P_{\a,\o_2}$ and some name $\dot B$ for a meager Borel set. But then for some $\g$, we have $\a<\g<\o_2$, $q\in P_{\a,\g}$ and $\dot B$ is a $P_\g$-name and this contradicts (4). By what we have established, there are guaranteed to be sets of cardinality $\o_1$ which are not meager in any extension by $P_{\o_2}$. Hence there are guaranteed to be everywhere nonmeager sets of cardinality $\o_1$. 
Suppose that for some $P_{\o_2}$-names $\dot A$ and $\dot B$ for $\o_1$-sequences we have $$\Vdash_{P_{\o_2}}\ \mbox{The ranges of}\ \dot A,\dot B\ \mbox{are everywhere nonmeager and each of the sets}$$ $$\hspace{45pt}\{\dot A(\o i+n):n<\o\},\,\{\dot B(\o i+n):n<\o\},\ \mbox{for}\ i<\o,\ \mbox{is dense in}\ \R.$$ (By what we just said, every pair of everywhere nonmeager sets $A$ and $B$ of cardinality $\o_1$, together with choices of countably many disjoint countable dense subsets of each one, has a name such that the weakest condition forces the desired properties.) Define $g,h\colon\o_1\to([\o_2]^{\leq\o})^\o$ so that if $$\s_\xi={\textstyle\bigcup_{n<\o}}\{n\}\times F^{-1}(g(\xi)(n)),\quad \xi<\o_1$$ and $$\tau_\xi={\textstyle\bigcup_{n<\o}}\{n\}\times F^{-1}(h(\xi)(n)),\quad \xi<\o_1$$ then for each $\xi<\o_1$, $$\Vdash_{P_{\o_2}}\dot A(\xi)=\s_\xi$$ and $$\Vdash_{P_{\o_2}}\dot B(\xi)=\tau_\xi.$$ For all large enough $\a<\o_2$, we have: 1. $g,h\colon\o_1\to([\a]^{\leq\o})^\o$. 2. $\forall\xi<\a$, $\s_\xi$ and $\tau_\xi$ are $P_\a$-names. Choose any such $\a$ of cofinality $\o_1$. By (i) and (ii), the definitions of $\s_\xi$ and $\tau_\xi$ would not change if we used $g_\a$ instead of $g$, $h_\a$ instead of $h$, and $F_\a$ instead of $F$. Then from the definitions of $\dot A_\a$ and $\dot B_\a$ we get $$\Vdash_{P_\a}\ \mbox{The ranges of}\ \dot A_\a,\dot B_\a\ \mbox{are everywhere nonmeager and each of the sets}$$ $$\hspace{45pt}\{\dot A_\a(\o i+n):n<\o\},\,\{\dot B_\a(\o i+n):n<\o\},\ \mbox{for}\ i<\o,\ \mbox{is dense in}\ \R.$$ (Being everywhere nonmeager is trivially downward absolute.) Then $\dot Q_\a$ was chosen to add an order isomorphism between $A_\a$ and $B_\a$ of the desired type. This completes the proof of the theorem. A measure-theoretic analog of $\perf(2^\o)$ {#measure-theoretic analog} =========================================== A measure theoretic version of Laczkovich’s question is not completely obvious because perfect sets carry many measures. We consider the following measures on $2^\o$ which we will call canonical. Given $P\subseteq 2^\omega$ a perfect set, define $$T_P=\{s\in 2^{<\omega}: P\cap [s]\not=\emptyset\}$$ We say that $s\in T_P$ splits iff both $s0$ and $s1$ are in $T_P$. The canonical measure $\mu_P$ is the one supported by $P$ and determined by declaring $\mu_P([s])=1/{2^n}$ iff $s\in T_P$ and $|\{i<|s|:s\res i$ splits$\}|=n$. An equivalent view is to take the natural map from $2^{<\omega}$ to the splitting nodes of $T_P$ and the homeomorphism $h:2^\omega\to P\sq 2^\o$ induced by it and then $\mu_P$ is the measure corresponding to the product measure $\mu$ on $2^\omega$, i.e., $\mu_P(A)=\mu(h^{-1}(A))$. \[measure analog\][*It is relatively consistent with ZFC that for any set $B\subseteq 2^\omega$ which is not of measure zero, there exists a perfect set $P$ of measure zero such that $B\cap P$ does not have measure zero in the canonical measure $\mu_P$ on $P$.*]{} The model is the one used by Rosłanowski and Shelah in the proof of [@RS Theorem 3.2]. It is obtained by forcing over a model of CH with an $\o_2$-stage countable support iteration $\la\la\poset_\a\ra_{\a\leq\o_2},\la\dot{\mathbb Q}_\a\ra_{\a<\o_2}\ra$ of the measured creature forcing ${\mathbb Q}={{\mathbb Q}}^{\rm mt}_4(K^*, \Sigma^*,{\bf F}^*)$ defined in [@RS Section 2]. We use the notation of [@RS] concerning this partial order. The definition involves in particular a rapidly growing sequence of powers of $2$, $\la N_i=2^{M_i}:i<\o\ra$. 
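Before following the forcing argument further, the combinatorics of the canonical measure may be worth spelling out on a toy example. The following is a small illustrative sketch (ours, not taken from [@RS] or from the proof below); the helper names are ours, and trees are represented simply as finite sets of binary strings closed under initial segments.

```python
# A small sketch (ours) of the canonical measure mu_P defined above:
# mu_P([s]) = 2^(-n), where n is the number of proper initial segments of s
# in T_P that split.
from itertools import product

def splits(T, s):
    return (s + "0") in T and (s + "1") in T

def mu_P(T, s):
    """Canonical measure of the basic clopen set [s], for s in T_P."""
    n = sum(1 for i in range(len(s)) if splits(T, s[:i]))
    return 2.0 ** (-n)

# Full binary tree up to depth 3: every node splits, so mu_P is just the usual
# product measure, e.g. mu_P([110]) = 1/8.
full = {"".join(t) for d in range(4) for t in product("01", repeat=d)}
print(mu_P(full, "110"))                   # 0.125

# A tree splitting only at even levels (odd levels extend by "0" alone):
# passing a nonsplitting level leaves mu_P unchanged, although it halves the
# ordinary product measure of the corresponding clopen piece of P.
def even_split_tree(depth):
    T, level = {""}, [""]
    for d in range(depth):
        level = ([s + b for s in level for b in "01"] if d % 2 == 0
                 else [s + "0" for s in level])
        T.update(level)
    return T

print(mu_P(even_split_tree(4), "0010"))    # two splitting levels below: 0.25
```

In the perfect set $P$ constructed below, the nonsplitting levels are exactly the $l_i$; each such level halves the ordinary product measure of the corresponding piece of $P$ while leaving $\mu_P$ unchanged, which is why $P$ is null for the usual measure even though $\mu_P$ is a probability measure concentrated on $P$.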
Forcing with ${\mathbb Q}$ gives rise to a continuous function $h\colon \prod_{i<\o}N_i\to 2^\o$. We will make use of the following result concerning this function. The measure on $\prod_{i<\omega}N_i$ in this proposition is the product of the uniform probability measures on the factors and the measure on $2^\o$ is the usual product measure. In the remainder of this proof, we denote both of these measures, as well as their product, by $\mu$, letting the context distinguish them. > [@RS Proposition 2.6] Suppose that $A\subseteq \prod_{i<\omega}N_i \times 2^\omega$ is a set of outer measure one. Then, in $V^{\mathbb Q}$, the set $$\{x\in\prod_{i<\omega}N_i : (x,h(x))\in A\}$$ has outer measure one. We shall also need to know that ${\mathbb Q}$ is proper and that countable support iterations of ${\mathbb Q}$ preserve Lebesgue outer measure. The former is [@RS Corollary 1.14]. The latter is explained in the proof of [@RS Theorem 3.2]. (The explanation refers the reader to some very general preservation theorems for iterated forcing. For the reader who wants to verify this without learning these general theorems, we indicate that it also follows from the special case of these theorems, preservation of $\sqsubset^{\rm random}$, given in [@Go; @1993] by imitating the proof in [@Pa; @1996] that Laver forcing satisfies what is called there $\bigstar$ and by noting that $\bigstar$ implies preservation of $\sqsubset^{\rm random}$.) Recall that $N_i=2^{M_i}$. We identify $N_i$ with the set of binary sequences of length $M_i$. The map $h:\prod_{i<\omega}N_i\to 2^\omega$ is determined from a generic sequence of finite maps $(W(i):N_i\to 2: i<\omega)$ added by ${\mathbb Q}$. $h$ is defined by $h(x)(i)=W(i)(x(i))$ for each $i$. We use the $W(i)$’s to define a perfect set $P\subseteq 2^\omega$ by the condition that $x\in P$ if and only if there exists $y \in 2^\omega$ such that $x$ is the concatenation of the sequence $s_0,i_0,s_1,i_1,\ldots$ where $y$ is the concatenation of $s_0,s_1,s_2,\ldots$ and where each $s_k$ has length $M_k$ and $i_k=W(k)(s_k)\in \{0,1\}$. $P$ is essentially the same as the graph of $h$ but we spell out the details to be sure the canonical measure is the one we want. Another way to define $P$ is as follows: 1. Let $l_i=M_i+\sum_{j<i} (M_j+1)$. Let $l_{-1}=-1$. The $l_i$, $i<\o$, are the nonsplitting levels of the tree $T_P$ which can be determined by the next two conditions. 2. If $s\in T_P$ and $l_{i-1}<|s|<l_i$ then both $s0$ and $s1$ are in $T_P$. 3. If $s\in T_P$ and $|s|=l_i$, then only $sj$ in $T_P$ where $W(i)(t)=j$ and $s=rt$ is the concatenation of $r$ and $t$ where $|t|=M_i$ and $r$ has the appropriate length. 4. Define $$P=[T_P]\ \stackrel{\mbox{\scriptsize def}}{=}\ \{x\in 2^\omega: \forall n\;\; x\res n\in T_P\}$$ Every time we pass a nonsplitting level $l_i$ we lose half the measure and so $P$ is a perfect set of measure zero for the usual measure on $2^\o$. Let $\rho:\prod_{i<\omega}N_i \times 2^\omega\to 2^\omega$ be the natural homeomorphism given by $\rho(x,z)$ is the concatenation of the sequence $x_0,z_0,x_1,z_1,\ldots$ where we are identifying $N_i$ with the set of binary sequences of length ${M_i}$. \[rho is measure preserving\][$\rho$ is measure-preserving.]{} By a standard uniqueness theorem for the extension of a measure from an algebra to the $\s$-algebra it generates, it suffices to verify that $\rho^{-1}[C]$ has the same measure as $C$ for every clopen set $C\sq 2^\o$. 
Every clopen set $C\sq 2^\o$ can be partitioned into clopen sets of the form $[r]$, where for some $k<\o$, $s^r\in\prod_{i<k}N_i$, $t^r\in 2^k$ and $r$ is the concatenation of $s^r_0,t^r_0,\dots,s^r_{k-1}t^r_{k-1}$. (These are simply the basic open sets $[r]$ for which $r$ has length $\sum_{i<k}(M_i+1)$ for some $k<\o$.) Hence it suffices to verify $\mu(\rho^{-1}[[r]])=\mu([r])$ for $r$ of this form. We have $$\textstyle{\mu(\rho^{-1}[[r]])=\mu([s^r]\times[t^r])=\left( \prod_{i<k}2^{-M_i}\right)2^{-k}= 2^{-\sum_{i<k}(M_i+1)}=\mu([r]).}$$ This proves the claim. Let $g\colon \prod_{i<\omega}N_i\to \prod_{i<\omega}N_i\times 2^\o$ be the homeomorphism of $\prod_{i<\omega}N_i$ onto the graph of $h$ given by $g(x)=(x,h(x))$. We have $\rho[h]=P$ (i.e., the graph of $h$ corresponds to $P$ under $\rho$). \[identification of measures\][For any Borel set $B\subseteq 2^\omega$ $$\mu_P(B)=\mu \left(g^{-1}[\rho^{-1}[B]]\right)$$ and similarly for outer measure.]{} Since the range of $g$ is the graph of $h$, we have $$\mu\left( g^{-1}[\rho^{-1}[B]]\right)=\mu\left( g^{-1}[\rho^{-1}[B]\cap h]\right)=\mu\left( g^{-1}[\rho^{-1}[B\cap P]]\right).$$ Similarly, since $\mu_P$ concentrates on $P$, $\mu_P(B)=\mu_P(B\cap P)$. Hence, it suffices to prove the claim for Borel subsets of $P$. Given $s\in\prod_{i<k}N_i$, define $t^s\in 2^k$ by $t^s_i=W(i)(s(i))$ for all $i<k$, and write $r^s=(s_0,t^s_0,\dots,s_{k-1}t^s_{k-1})$. We have $\mu_P([r^s])=\prod_{i<k}2^{-M_i}$. Also, $\rho^{-1}[[r^s]]=[s]\times[t^s]$ and $g^{-1}[\rho^{-1}[[r^s]]]=g^{-1}[[s]\times[t^s]]=[s]$, so $\mu\left(g^{-1}[\rho^{-1}[[r^s]]]\right)=\mu([s])=\prod_{i<k}2^{-M_i}$. Thus, the claim holds for basic open sets of the form $[r^s]$. Every clopen subset of $P$ is partitioned by such sets, so the claim holds for all clopen sets, and hence, as in the proof of Claim \[rho is measure preserving\], for all Borel sets. This proves the claim. Now we prove Theorem \[measure analog\] in the case that $B\subseteq 2^\omega$ has outer measure one. It follows from the usual Lowenheim-Skolem arguments that if we let $B_\alpha=V^{\poset_\alpha}\cap B$ then there will exist $\alpha<\omega_2$ such that $B_\alpha\in V^{\poset_\alpha}$ and $$V^{\poset_\alpha}\models B_\alpha \mbox{ has outer measure one}.$$ Letting $A=\rho^{-1}(B_\alpha)$ (which has outer measure one by Claim \[rho is measure preserving\]) in [@RS Proposition 2.6] cited above and using Claim \[identification of measures\], we have that $$V^{\poset_{\alpha+1}}\models B_\alpha \mbox{ has $\mu_P$ outer measure one.}$$ Because ${\mathbb Q}$ is proper, the remainder $\poset_{\o_2}/\poset_{\a+1}$ of the forcing is isomorphic in $V^{\poset_{\a+1}}$ to a countable support iteration of ${\mathbb Q}$ and hence preserves outer measure. It follows that in the final model $V^{\poset_{\omega_2}}$, $B$ has $\mu_P$ outer measure one. Now in the case that $B$ has outer measure less than one, replace it by $B'=Q+B$ where $Q$ is a countable dense subset of $2^\omega$. Then $B'$ has outer measure one, and so we know there exists a measure zero perfect $P$ such that $B'$ has positive $\mu_P$ outer measure. Hence for some $q\in Q$ we have that $q+B$ has positive $\mu_P$ outer measure. But then, $B$ has positive $\mu_{q+P}$ outer measure. This completes the proof of the theorem. [99]{} J.E. Baumgartner, [*Iterated forcing*]{}, pp. 1–59 in ‘Surveys in Set Theory’, edited by A.R.D. Mathias, LMS Lecture notes 87, Cambridge, Cambridge Univ. Press, 1983. K. Ciesielski, S. 
Shelah, [*Category analogue of sup-measurability problem*]{}, J. Appl. Anal., 6 (2000) 159–172. M.R. Burke, [*Liftings for Lebesgue measure*]{}, in Set theory of the reals (Ramat Gan, 1991), Israel Math. Conf.Proc., 6 (1993) 119–150. M. Goldstern, [*Tools for your forcing construction*]{}, Set theory of the reals (Ramat Gan, 1991), 305–360, Israel Math.Conf. Proc., 6, Bar-Ilan Univ., Ramat Gan, 1993. K. Kunen, [*Set Theory*]{}, North-Holland, 1983. J. Pawlikowski, [*Laver’s forcing and outer measure*]{}, Set theory (Boise, ID, 1992–1994), 71–76, Contemp. Math., 192, Amer. Math. Soc., Providence, RI, 1996. A. Roslanowski, S. Shelah, [*Measured creatures*]{}, http://front.math.ucdavis.edu/math.LO/0010070 S. Shelah, [*Independence results*]{}, J. Symbolic Logic, 45 (1980) 563–573. S. Shelah, [*Proper and improper forcing*]{}, 2nd ed., Springer-Verlag, Berlin, 1998. [^1]: Research supported by NSERC. The author thanks the Department of Mathematics at the University of Wisconsin for its hospitality during the academic year 1996/1997 when the earlier result mentioned in the introduction was produced and the Department of Mathematics at the University of Toronto for its hospitality during the academic year 2003/2004 when the present paper was completed. [^2]: Thanks to the Fields Institute for Research in Mathematical Sciences for their support during part of the time this paper was written and to Juris Steprans who directed the special program in set theory and analysis. AMS Subject Classification: Primary 03E35; Secondary 03E17 03E50. Key words and phrases: Property of Baire, Lebesgue measure, Cantor set, oracle forcing
--- author: - 'Fusayoshi J. Ohkawa[^1]' title: 'Itinerant-Electron Magnetism in the Heisenberg Limit' --- \[SecItinerantM\] Because of critical fluctuations, no order appears at $T>0$ K in one and two dimensions.[@mermin] Then, $T_{\rm N}$ can be low in actual quasi-one- and quasi-two-dimensional magnets; $T_{\rm N}$ can also be low in actual highly frustrated magnets. Here, we assume that $T_{\rm N} \ll T_{\rm K}$. When $T_{\rm N}<T\ll T_{\rm K}$, Eq. (\[EqInverseSus\]) is reduced to $$\begin{aligned} \label{EqChiHighTK} \frac1{\chi_s(0,{\bm q})} &= k_{\rm B}T_{\rm K} - \frac1{4}J_s({\bm q}) -\frac1{4}J_Q(0,{\bm q}) +\Lambda(0,{\bm q}) . \end{aligned}$$ This can be approximately used at $T\lesssim T_{\rm K}$. The $T$ dependence of $1/\chi_s(0,{\bm q})$ arises from $J_Q(0,{\bm q})$ and $\Lambda(0,{\bm q})$. When $\delta\Sigma_\sigma({\rm i}\varepsilon_n, {\bm q})$ is ignored, the density of states for quasi-particles is defined by $$\begin{aligned} \rho^*(\varepsilon) &= \frac1{L}\sum_{\bm k}\delta\left[\varepsilon-\xi({\bm k})\right] = \tilde{\phi}_\gamma \rho^{\rm (c)}(\varepsilon) ,\end{aligned}$$ which is well defined even in the Heisenberg limit. Since $4\tilde{W}_s^2/\tilde{\chi}_s(0)^2 = 4^2(k_{\rm B}T_{\rm K})^2$, it follows that $$\begin{aligned} \frac1{4}J_Q(0,{\bm q}) \simeq 4(k_{\rm B}T_{\rm K})^2 \left[P(0,{\bm q})-P_0(0)\right]. \end{aligned}$$ The subtraction term is given by $$\begin{aligned} P_0(0) &= 2 \int \hskip-3pt d\varepsilon_1 \hskip-2pt \int \hskip-3pt d\varepsilon_2 \hskip2pt \rho^*(\varepsilon_1)\rho^*(\varepsilon_2) \frac{f(\varepsilon_1)-f(\varepsilon_2)}{\varepsilon_2-\varepsilon_1}.\end{aligned}$$ The $T$ dependence of $P_0(0)$ is small. The homogeneous $({\bm q}=0)$ and staggered $({\bm q}={\bm Q}_{L})$ components of $P(0,{\bm q})$ are given by $$\begin{aligned} \label{EqJQ0} P(0,0) &= 2\int \hskip-3pt d\varepsilon\rho^*(\varepsilon) \left[- \frac{d f(\varepsilon)}{d\varepsilon} \right] ,\end{aligned}$$ and $$\begin{aligned} \label{EqJQQ} P(0,{\bm Q}_{L}) &= \int \hskip-3pt d\varepsilon \rho^*(\varepsilon) \frac1{\varepsilon}\tanh\left(\beta\varepsilon\right).\end{aligned}$$ When $D\ne 2$, since $\rho^*(\varepsilon)$ is almost constant around $\varepsilon\simeq 0$, as shown in Eq. (\[EqFLR1\]), it follows that $$\begin{aligned} \label{EqJQ0T1} P(0,0) &\simeq 2\rho^*(0)\left[1 +O(T^2) \right],\end{aligned}$$ and $$\begin{aligned} \label{EqJQQT1} P(0,{\bm Q}_{L}) &\simeq 2\rho^*(0)\left[ \ln \bigl(T_{\rm K}/T\bigr) + O(T^0)\right].\end{aligned}$$ When $D=2$, since $\rho^*(\varepsilon)$ is logarithmically diverging as $\varepsilon\rightarrow 0$, as shown in Eq. (\[EqFLR1\]), it follows that $$\begin{aligned} \label{EqJQ0T2} P(0,0) &\simeq \frac{2\alpha_2}{|t_d^*|}\left[\ln |\tilde{\phi}_\gamma t_d^*/(k_{\rm B}T)| +O(T^0)\right],\end{aligned}$$ and $$\begin{aligned} \label{EqJQQT2} P(0,{\bm Q}_{L}) &\simeq \frac{2\alpha_2}{|t_d^*|}\left\{ \bigl[\ln |\tilde{\phi}_\gamma t_d^*/(k_{\rm B}T)|\bigr]^2 + O(T^0)\right\}.\end{aligned}$$ The $T$ dependence of $P(0,{\bm Q}_{L})$ or $J_Q(0,{\bm Q}_{L})$ is larger than that of $P(0,0)$ or $J_Q(0,0)$ because of the perfect nesting of the Fermi surface. The strength of $J_Q(0,{\bm q})$ is proportional to the bandwidth of $\xi({\bm k})$ or $k_{\rm B}T_{\rm K}$. 
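To make the temperature dependences quoted in Eqs. (\[EqJQ0T1\]) and (\[EqJQQT1\]) explicit, the following rough estimate may help (a sketch, assuming that $\rho^*(\varepsilon)$ can be replaced by $\rho^*(0)$ over a window of width $\sim k_{\rm B}T_{\rm K}$; the choice of cutoff only affects the $O(T^0)$ terms):
$$\begin{aligned}
P(0,{\bm Q}_{L}) \simeq \rho^*(0)\int_{-k_{\rm B}T_{\rm K}}^{+k_{\rm B}T_{\rm K}} \hskip-3pt d\varepsilon\, \frac{\tanh\left(\beta\varepsilon\right)}{\varepsilon}
= 2\rho^*(0)\int_{0}^{T_{\rm K}/T} \hskip-3pt dx\, \frac{\tanh x}{x}
\simeq 2\rho^*(0)\left[\ln\bigl(T_{\rm K}/T\bigr)+O(T^0)\right],
\end{aligned}$$
since $\tanh x/x\simeq 1$ for $x\lesssim 1$ and $\simeq 1/x$ for $x\gtrsim 1$. In Eq. (\[EqJQ0\]), by contrast, the factor $-df(\varepsilon)/d\varepsilon$ is normalized to unity and sharply peaked at $\varepsilon=0$, which gives the $T$-independent leading term of Eq. (\[EqJQ0T1\]).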
[@comItinerantMag] If either of $\delta\Sigma_{\sigma}({\rm i}\varepsilon_n,{\bm k})$ or $\Lambda(0,{\bm q})$ can be ignored, the staggered susceptibility shows a logarithmic $T$ dependence due to the perfect nesting of the Fermi surface, i.e., it approximately obeys the CW law; this is a mechanism of the CW law in antiferromagnetic metals.[@cwAF; @FJO-Landau-AFCW] Since the density of states $\rho^*(\varepsilon)$ has a sharp peak at $\varepsilon=0$ in two dimensions, the homogeneous susceptibility approximately obeys the CW law; this is a mechanism of the CW law in ferromagnetic metals.[@miyai] In general, the ${\bm q}$ dependence of $\Lambda(0,{\bm q})$ is small. Even if $\delta\Sigma_\sigma(\varepsilon+{\rm i}0,{\bm k})$ is considered, the Fermi surface shows a perfect nesting for ${\bm Q}_{L}$, so that the temperature dependence of $J_Q(0,{\bm Q}_{L})$ must be much stronger than that of $J_Q(0,0)$. Thus, the qualitative features of $\chi_s(0,{\bm Q}_{L})$ and $\chi_s(0,0)$ when $\delta\Sigma_\sigma(\varepsilon+{\rm i}0,{\bm k})$ is considered must be the same as those when $\delta\Sigma_\sigma(\varepsilon+{\rm i}0,{\bm k})$ is ignored; the $T$ dependence of $\chi_s(0,{\bm Q}_{L})$ is stronger than that of $\chi_s(0,0)$. Such ${\bm q}$ dependence is characteristic of itinerant-electron magnetism, and it can be distinguished from the ${\bm q}$ dependence of local-moment magnetism, where only the Weiss constant depends on ${\bm q}$. ### Magnetization in the Néel state When we follow previous papers,[@FJO-Landau-AFCW; @FJO-Landau-MultiQ] it is straightforward to derive Landau’s free energy below $T_{\rm N}$. In two dimensions and higher, provided that critical fluctuations are ignored, staggered magnetization at $T\le T_{\rm N}$ and $T\simeq T_{\rm N}$ is given by $$\begin{aligned} |{\bm m}_{{\bm Q}_L}| \propto \sqrt{1-(T/T_{\rm N})^2},\end{aligned}$$ either when $T_{\rm N} \ll T_{\rm K}$ or when $T_{\rm N} \gg T_{\rm K}$. When $T_{\rm N} \gg T_{\rm K}$, an antiferromagnetic (AF) gap much larger than $k_{\rm B}T_{\rm K}$ opens at $T\ll T_{\rm N}$. The spectrum of single-particle excitations is not well defined at any $T$. If an AF gap is smaller than $k_{\rm B}T_{\rm K}$, which is only possible when $T_{\rm N} \ll T_{\rm K}$, the spectrum of single-particle excitations is still well defined at $T\ll T_{\rm K}$, even at $T\le T_{\rm N}$, in the Heisenberg limit. Discussion {#SecDiscussion} ========== The Kondo temperature $T_{\rm K}$ or $k_{\rm B}T_{\rm K}$ is the energy scale of local quantum spin fluctuations. When $T\gg T_{\rm K}$, local thermal spin fluctuations overcome the quantum ones so that electrons behave as localized spins. When $T_{\rm N}\gg T_{\rm K}$, therefore, magnetism is local-moment magnetism. When $T\ll T_{\rm K}$, on the other hand, electrons are itinerant even in the Heisenberg limit, in general. An antiferromagnetic gap opens at $T\le T_{\rm N}$. Since the gap is a complete gap in the model of this paper, the conductivity vanishes at $T=0$ K, i.e., electrons are localized at $T=0$ K. If the gap is much smaller than $k_{\rm B}T_{\rm K}$, however, electrons can still behave as itinerant ones so that magnetism is itinerant-electron magnetism. If the gap is much larger than $k_{\rm B}T_{\rm K}$, which is presumably only possible when $T_{\rm N} \gg T_{\rm K}$, electrons cannot behave as itinerant ones so that magnetism is local-moment magnetism. 
Whether magnetism is itinerant-electron or local-moment magnetism is determined by whether $T_{\rm N}\ll T_{\rm K}$ or $T_{\rm N} \gg T_{\rm K}$. It is an interesting issue whether or not the above discussion on the Heisenberg limit can be extended to the Heisenberg model. The relationship between the Heisenberg and Hubbard models is similar to that between the $s$-$d$ and Anderson models. According to Yosida’s theory,[@yosida] the ground state of the $s$-$d$ model is a singlet. Anderson’s scaling theory supports the singlet ground state.[@poorman] Wilson proved that the ground state is a singlet; a crossover occurs between a localized spin at $T\gg T_{\rm K}$ and a local spin liquid at $T \ll T_{\rm K}$.[@wilson] Nozières proposed a Fermi-liquid description of the local spin liquid based on the phase shift of conduction electrons due to scattering by the local spin liquid.[@nozieres] In either the $s$-$d$ model or the Anderson model, the ground state is a singlet or a normal Fermi liquid. Then, essentially the same Fermi-liquid analysis as Nozières’s was made based on the Anderson model by Yamada and Yosida.[@yamada1; @yamada2] In the Heisenberg limit of the Hubbard model, the ground state within the constrained Hilbert subspace is also a singlet, a normal Fermi liquid or a Tomonaga-Luttinger (TL) liquid. The Green function for the reservoir is given by $$\begin{aligned} G_{c\sigma}(\varepsilon+{\rm i}0,{\bm k}) &= 1/\left[\varepsilon + 2t_c\varphi({\bm k}) - \Gamma_c(\varepsilon + {\rm i}0)\right].\end{aligned}$$ When $|\varepsilon|\ll k_{\rm B}T_{\rm K}$, it follows that $$\begin{aligned} \frac{\Gamma_c(\varepsilon \hskip-1pt + \hskip-1pt{\rm i}0)}{\lambda^2} &= \frac{\overline{|v|^2}}{\tilde{\phi}_\gamma} \frac1{L}\! \sum_{\bm k} \frac1{\varepsilon \hskip-1pt - \hskip-1pt\xi({\bm k}) \hskip-1pt - \hskip-1pt \delta\Sigma_\sigma(\varepsilon \hskip-1pt + \hskip-1pt {\rm i}0, {\bm k})/ \tilde{\phi}_\gamma}.\end{aligned}$$ Since $\overline{|v|^2}/\tilde{\phi}_\gamma =O(U^0)$, $\Gamma_c(\varepsilon + {\rm i}0)/\lambda^2$ is well defined even in the Heisenberg limit. The quantity $\Gamma_c(\varepsilon + {\rm i}0)$ depends on vanishing fermionic excitations in the Hubbard model, and it corresponds to the phase shift discussed by Nozières.[@nozieres] A vanishing fermionic spectrum exists in the $s$-$d$ limit of the Anderson model or the Heisenberg limit of the Hubbard model. It is reasonable that there is [*a trace or vestige*]{} of the vanishing fermionic spectrum even in the conduction band of the $s$-$d$ model or the reservoir for the Heisenberg model. The charge susceptibility is exactly zero in the $s$-$d$ model but is nonzero in the Anderson model. The conductivity is exactly zero in the Heisenberg model but is nonzero, finite or infinite, in the Hubbard model. These differences are obvious because the local electron number at each unit cell is a conserved quantity in the $s$-$d$ and Heisenberg models but is not in the Anderson and Hubbard models, i.e., local gauge symmetry exists in the $s$-$d$ and Heisenberg models but it does not in the Anderson and Hubbard models. On the other hand, local gauge symmetry can never be spontaneously broken [*or recovered*]{}.[@elitzur] The [*recovery*]{} of local gauge symmetry in the $s$-$d$ or Heisenberg model is caused by [*a non-spontaneous process*]{}, i.e., by constraining the Hilbert space within the subspace where no empty or double occupancy is allowed at a localized spin site or each unit cell. 
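The hybridization function of the reservoir can be evaluated concretely once $\delta\Sigma_\sigma$ is dropped. The sketch below does this for a one-dimensional tight-binding quasi-particle band; the values of $\overline{|v|^2}/\tilde{\phi}_\gamma$ and of the effective hopping are illustrative placeholders, not numbers taken from the text.

```python
import numpy as np

# Gamma_c(eps + i0)/lambda^2 = (|v|^2/phi) * (1/L) * sum_k 1/(eps + i*eta - xi(k)),
# with delta-Sigma ignored and xi(k) = -2*t_star*cos(k*a) in one dimension.
L, a = 4000, 1.0
t_star = 1.0                   # effective hopping t_d*/phi_gamma (placeholder)
vbar2_over_phi = 1.0           # |v|^2 / phi_gamma (placeholder)
eta = 1e-2                     # small +i0 broadening

k = 2.0 * np.pi * np.arange(L) / (L * a)
xi = -2.0 * t_star * np.cos(k * a)

def gamma_over_lambda2(eps):
    return vbar2_over_phi * np.mean(1.0 / (eps + 1j * eta - xi))

for eps in (-1.0, 0.0, 1.0):
    g = gamma_over_lambda2(eps)
    # -Im(Gamma)/pi (per unit |v|^2/phi) reproduces the quasi-particle DOS rho*(eps)
    print(f"eps = {eps:+.1f}   Re = {g.real:+.3f}   -Im/pi = {-g.imag / np.pi:.3f}")
```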
Because of this peculiar nature of local gauge symmetry, it is never a relevant symmetry for classifying phases or for distinguishing one phase from another. Thus, the adiabatic continuation[@adiabaticCont] holds between a spin liquid in the $s$-$d$ model and an electron liquid in the Anderson model, i.e., low-energy physical properties of a spin liquid in the $s$-$d$ model can be described as those of a normal Fermi liquid in the Anderson model. According to the scaling theory by Abrahams et al.,[@abrahams] there is no mobility edge between itinerant and localized states in a disordered system or there is no minimum metallic conductivity, which means that there is no qualitative difference between a metal and an insulator. Thus, the adiabatic continuation can hold between a metal and an insulator, in general; e.g., it holds between Wilson’s insulator and a doped metal. The adiabatic continuation must also hold between an electron liquid in the Hubbard model and a spin liquid in the Heisenberg model such that low-energy physical properties of a spin liquid in the Heisenberg model can be described as those of an electron liquid in the Hubbard model. Thus, it is reasonable to speculate that [*itinerant-electron type of magnetism*]{} is also possible in the Heisenberg model. Since $T_{\rm K} \simeq T_{\rm MF}/(2D)$, where $T_{\rm MF}$ is the Néel temperature in the mean-field approximation for the corresponding Heisenberg model and $D$ is the spatial dimensionality, whether magnetism is [*itinerant-electron type of magnetism*]{} or local-moment magnetism must be determined by whether $T_{\rm N}\ll T_{\rm MF}/(2D)$ or $T_{\rm N} \gg T_{\rm MF}/(2D)$. It has been proposed that, in one dimension, low-energy spin excitations in the Heisenberg and XXY models can be mapped to those of a TL liquid.[@TL1; @TL2; @TL3; @TL4] This is, in essence, a proposal that magnetism in the Heisenberg and XXY models must be itinerant-electron type of magnetism. According to the Kondo-lattice theory, the spectrum of low-energy pair excitations in the Heisenberg limit is determined by the imaginary part of the exchange interaction $J_Q(\omega+{\rm i}0, {\bm q})$ or $P(\omega+{\rm i}0, {\bm q})$. When the RVB term is considered but $\delta\Sigma_\sigma(\varepsilon + {\rm i}0, {\bm q})$ is ignored, the energy $\omega_{\rm pair}(q)$ of a pair excitation lies in $$\begin{aligned} \label{EqDeCP0} 6\hskip1pt\Xi_1 |J| \sin(qa) \le \omega_{\rm pair}(q) \le 12\hskip1pt\Xi_1|J|\sin(qa/2),\end{aligned}$$ as a function of wave number $q$. The lower limit corresponds to des Cloizeaux-Pearson’s mode in the Heisenberg model,[@cloiseaux] whose dispersion relation is given by $$\begin{aligned} \omega_{\rm dCP}(q) = \bigl(2|J|/\pi\bigr)\sin(qa).\end{aligned}$$ When $\delta\Sigma_\sigma(\varepsilon + {\rm i}0, {\bm q})$ is ignored, $\Xi_{1}=1/\pi$, as shown in §\[SecFS\]. If $\Xi_{1}=1/\pi$, the lower limit is three times as large as $\omega_{\rm dCP}(q)$. The imaginary part of $\delta\Sigma_\sigma(\varepsilon + {\rm i}0, {\bm q})$ reduces $\Xi_D$. The energy dependence of $\delta\Sigma_\sigma(\varepsilon+ {\rm i}0, {\bm q})$ also reduces $\Xi_D$. It is interesting to study how $\omega_{\rm pair}(q)$ is corrected or modified when $\delta\Sigma_\sigma(\varepsilon\pm {\rm i}0, {\bm q})$ is considered. In the Heisenberg model on the square lattice, the Néel state is only stabilized at $T=0$ K: $T_{\rm N}=+0$ K. The staggered susceptibility $\chi_s(0,{\bm Q}_L)$ diverges as $T\rightarrow 0$ K. 
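The statement that the lower bound in Eq. (\[EqDeCP0\]) with $\Xi_1=1/\pi$ is three times $\omega_{\rm dCP}(q)$, and that it never exceeds the upper bound, can be verified directly; $|J|$ and $a$ are set to 1 below purely for illustration.

```python
import numpy as np

# Lower bound 6*Xi_1*|J|*sin(q*a) vs. upper bound 12*Xi_1*|J|*sin(q*a/2)
# and the des Cloizeaux-Pearson dispersion (2|J|/pi)*sin(q*a).
J, a, Xi1 = 1.0, 1.0, 1.0 / np.pi
q = np.linspace(1e-6, np.pi / a, 2001)

lower = 6.0 * Xi1 * J * np.sin(q * a)
upper = 12.0 * Xi1 * J * np.sin(q * a / 2.0)
w_dcp = (2.0 * J / np.pi) * np.sin(q * a)

print("max |lower/w_dCP - 3| =", np.max(np.abs(lower / w_dcp - 3.0)))
print("lower <= upper everywhere:", bool(np.all(lower <= upper + 1e-12)))
```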
On the other hand, the homogeneous $\chi_s(0,0)$ shows a peak around $T_{\rm MF}\simeq |J|/k_{\rm B}$.[@Hz2D1; @Hz2D2; @Hz2D3] This feature, $\chi_s(0,{\bm q})$ having qualitatively different $T$ dependences for different ${\bm q}$’s, is characteristic of itinerant-electron magnetism. For example, a similar result was also obtained by a conventional perturbative theory in terms of $U$ based on an itinerant electron model.[@miyake] When we follow previous papers,[@FJO-Landau-AFCW; @miyai; @miyake] it is easy to show that, since the Fermi surface shows a perfect nesting and the density of states $\rho^*(\varepsilon)$ has a sharp peak at the chemical potential, $\Lambda(0,{\bm q})$ has a large $T$ dependence such that it suppresses the Curie-Weiss $T$ dependence for any ${\bm q}$; the ${\bm q}$ dependence of $\Lambda(0,{\bm q})$ is small. The reduction of $T_{\rm N}$ and the peak around $T_{\rm MF}\simeq |J|/k_{\rm B}$ in $\chi_s(0,0)$ can also be explained in terms of the mode-mode coupling $\Lambda(0,{\bm q})$ by the Kondo-lattice theory based on the Hubbard model. Furthermore, the magnetization at $T=0$ K is as small as $0.3~\mu_{\rm B}$ per unit cell.[@reger] This small number implies that, in the corresponding Heisenberg limit of the Hubbard model, an antiferromagnetic gap is smaller than $k_{\rm B}T_{\rm K}$ so that magnetism is itinerant-electron magnetism. Thus, we propose that, even in the Heisenberg model in two dimensions, magnetism at $T \ll T_{\rm MF}/4$ should be characterized as itinerant-electron type of magnetism. Conclusion {#SecConclusion} ========== The Heisenberg limit of the Hubbard model is studied by the Kondo-lattice theory. The Kondo temperature $T_{\rm K}$ or $k_{\rm B}T_{\rm K}$ can be still defined as an energy scale of low-energy local quantum spin fluctuations even in the Heisenberg limit. It is enhanced by the resonating valence bond (RVB) mechanism, so that $T_{\rm K} \simeq T_{\rm MF}/(2D)$, where $T_{\rm MF}$ is the Néel temperature in the mean-field approximation of the corresponding Heisenberg model and $D$ is the spatial dimensionality. When $T \gg T_{\rm K}$, electrons behave as localized spins and the entropy is about $k_{\rm B}\ln 2$ per unit cell. Thus, the high-temperature phase at $T\gg T_{\rm K}$ is a Mott insulator. Within the constrained Hilbert subspace where no order parameter exists, however, the ground state is neither a Mott insulator nor a Lieb-Wu insulator but an almost spin liquid, i.e., a normal Fermi liquid or a Tomonaga-Luttinger liquid in which the band of low-energy single-particle excitations is almost vanishing; however, the width of the vanishing band is $O(k_{\rm B}T_{\rm K})$, the Fermi surface is well defined in the vanishing band, and the conductivity at $T=0$ K is diverging in the vanishing limit of disorder. When $D\gg1$, the Néel temperature $T_{\rm N}$ is so high that $T_{\rm N}\simeq T_{\rm MF}$ but the Kondo temperature is so low that $T_{\rm K}\ll T_{\rm MF}$. Thus, $T_{\rm N}\gg T_{\rm K}$, so that magnetism for $D\gg1$ is local-moment magnetism. In particular, magnetism in infinite dimensions, where $T_{\rm K}=+0$K and $T_{\rm N}=T_{\rm MF}$, is prototypic local-moment magnetism. When $D$ is small enough, $T_{\rm K}$ is so high or $T_{\rm N}$ is so low that $T_{\rm N}\ll T_{\rm K}$. Electrons are itinerant at $T\ll T_{\rm K}$, unless an antiferromagnetic complete gap opens in the vanishing band. 
Magnetic properties at $T_{\rm N}<T\ll T_{\rm K}$ and $|\omega|\ll k_{\rm B}T_{\rm K}$ are those of a normal Fermi liquid or a Tomonaga-Luttinger liquid, i.e., an itinerant electron liquid; e.g., the spin susceptibility has a temperature and wave-number dependence characteristic of itinerant-electron magnetism. When $T_{\rm N}\ll T_{\rm K}$, therefore, magnetism is itinerant-electron magnetism. Since local gauge symmetry is never relevant for distinguishing one phase from another, there is no essential difference between an electron liquid in the Hubbard model, where local gauge symmetry does not exist, and a spin liquid in the Heisenberg model, where local gauge symmetry exists, except for charge-channel properties such as the conductivity. Thus, [*itinerant-electron type of magnetism*]{} must also be possible even in the Heisenberg model. Whether magnetism is itinerant-electron type of magnetism or local-moment magnetism must be determined by whether $T_{\rm N}\ll T_{\rm MF}/(2D)$ or $T_{\rm N} \gg T_{\rm MF}/(2D)$. Since no $T_{\rm N}$ exists in one dimension and $T_{\rm N}=+0$ K in two dimensions, in particular, magnetic properties of the Heisenberg model at $T \ll T_{\rm MF}$ and $|\omega|\ll k_{\rm B}T_{\rm MF}$ in one dimension and two dimensions must be describable as those of an itinerant-electron liquid, i.e., a Tomonaga-Luttinger liquid or a normal Fermi liquid. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks T. Hikihara for useful discussions. Proof of Eq. (\[EqWardtdtd\*\]) {#SecEquality} =============================== Consider a function defined by $$\begin{aligned} \label{EqIntC1} X(\mu^*) &= \frac1{L}\sum_{\bm k} \varphi({\bm k}) f\bigl[\xi({\bm k})-\mu^*\bigr] \nonumber \\ &= \frac{\sqrt{D}}{L}\sum_{\bm k} \cos(k_1a) f\bigl[\xi({\bm k})-\mu^*\bigr] ,\end{aligned}$$ where $\xi({\bm k})=-2\bigl(t_d^*/\tilde{\phi}_\gamma\bigr) \varphi({\bm k})$. Equation (\[EqIntC1\]) is also given, in integral form, by $$\begin{aligned} \label{EqIntC2} X(\mu^*) &= \frac{\sqrt{D}a^D}{(2\pi)^D} \hskip-4pt \int_{-\pi/a}^{+\pi/a} \hskip-18pt dk_1 \cdots \hskip-3pt \int_{-\pi/a}^{+\pi/a} \hskip-18pt dk_D \hskip0pt \cos(k_1a) f\bigl[\xi({\bm k})\hskip-1pt - \hskip-1pt\mu^*\bigr].\end{aligned}$$ By partial integration of Eq. (\[EqIntC2\]) with respect to $k_1$, it follows that $$\begin{aligned} \label{EqC=S} X(\mu^*)= 2\left(t_d^*/\tilde{\phi}_\gamma\right) Y(\mu^*), \end{aligned}$$ where $$\begin{aligned} \label{EqIntS1} Y(\mu^*) &= \frac{a^D}{(2\pi)^D} \hskip-2pt\int_{-\pi/a}^{+\pi/a} \hskip-10pt dk_1 \cdots \hskip-2pt\int_{-\pi/a}^{+\pi/a} \hskip-10pt dk_D\hskip2pt \hskip-2pt\sin^2(k_1a)\hskip-2pt \nonumber \\ & \qquad \times \left[- \frac{df(\varepsilon)}{d\varepsilon}\right]_{\varepsilon=\xi({\bm k})-\mu^*}.\end{aligned}$$ This is also given, in summation form, by $$\begin{aligned} \label{EqIntS2} Y(\mu^*) &= \frac{1}{L}\sum_{\bm k} \sin^2(k_1a) \left[- \frac{df(\varepsilon)}{d\varepsilon}\right]_{\varepsilon=\xi({\bm k})-\mu^*} \nonumber \\ &= \frac{1}{L}\sum_{\bm k} \frac1{D}\sum_{\nu=1}^{D}\sin^2(k_\nu a) \left[- \frac{df(\varepsilon)}{d\varepsilon}\right]_{\varepsilon=\xi({\bm k})-\mu^*}.\end{aligned}$$ Since $\Xi_{D}=X(0)$ and $\pi_{xx}(0) = Y(0)$, it follows from Eq. 
(\[EqC=S\]) that $\Xi_{D}=2(t_d^*/\tilde{\phi}_\gamma)\pi_{xx}(0)$ or $$\begin{aligned} \label{EqXiPi} \Xi_{D} &= \frac{2}{\tilde{\phi}_\gamma}\left(t_d - \frac{3}{4}\tilde{\phi}_\gamma \tilde{W}_s^2 \Xi_{D}\frac{J}{D}\right) \pi_{xx}(0).\end{aligned}$$ Then, it is straightforward to prove Eq. (\[EqWardtdtd\*\]). [99]{} See, for example, C. Herring: [*Magnetism IV, Exchange Interaction among Itinerant Electrons*]{}, ed. by G. T. Rado and H. Suhl, (Academic Press, New York and London, 1966), p. 118. ed. by W. J. L. Buyers, (Plenum Press, New York and London, 1984), articles therein. N. F. Mott: [*Metal-Insulator Transition*]{} (London, New York, Philadelphia, 1990). P. Fazekas and P. W. Anderson: Philos. Mag. [**30**]{} (1974) 423. J. Hubbard: Proc. R. Soc. London Ser. A [**276**]{} (1963) 238. J. Hubbard: Proc. R. Soc. London Ser. A [**281**]{} (1964) 401. P. W. Anderson: [*Magnetism I*]{}, ed. by G. T. Rado and H. Suhl, (Academic Press, New York and London, 1963). F. J. Ohkawa: J. Phys. Soc. Jpn. [**74**]{} (2005) 3340. F. J. Ohkawa and T. Toyama: J. Phys. Soc. Jpn. [**78**]{} (2009) 124707. F. J. Ohkawa: Phys. Rev. B [**44**]{} (1991) 6812. F. J. Ohkawa: J. Phys. Soc. Jpn. [**60**]{} (1991) 3218. F. J. Ohkawa: J. Phys. Soc. Jpn. [**61**]{} (1992) 1615. W. Metzner and D. Vollhardt: Phys. Rev. Lett. [**62**]{} (1989) 324. E. Müller-Hartmann: Z. Phys. B [**74**]{} (1989) 507. E. Müller-Hartmann: Z. Phys. B [**76**]{} (1989) 211. V. Janis: Z. Phys. B [**83**]{} (1991) 227. A. Georges and G. Kotliar: Phys. Rev. B [**45**]{} (1992) 6479. G. Kotliar and D. Vollhardt: Phys. Today [**57**]{} (2004) 53. A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg: Rev. Mod. Phys. [**68**]{} (1996) 13. G. Kotliar, S. Murthy, and M. J. Rozenberg: Phys. Rev. Lett. [**89**]{} (2002) 046401. Y. Kakehashi and P. Fulde: Phys. Rev. B [**69**]{} (2004) 045101. M. C. Gutzwiller: Phys. Rev. Lett. [**10**]{} (1963) 159. M. C. Gutzwiller: Phys. Rev. [**134**]{} (1964) A923. M. C. Gutzwiller: Phys. Rev. [**137**]{} (1965) A1726. J. M. Luttinger and J. C. Ward: Phys. Rev. [**118**]{} (1960) 1417. J. M. Luttinger: Phys. Rev. [**119**]{} (1960) 1153. F. J. Ohkawa and N. Matsumoto: J. Phys. Soc. Jpn. [**63**]{} (1994) 602. F. J. Ohkawa: Phys. Rev. B [**59**]{} (1999) 8930. F. J. Ohkawa: Phys. Rev. B [**66**]{} (2002) 0144408. According to this paper, the superexchange interaction can be ferromagnetic in a multi-band model provided that the Hund coupling is strong enough. This ferromagnetic exchange interaction can play a crucial role in not only local-moment ferromagnetism but also itinerant-electron ferromagnetism. The specific heat linear in $T$ at low temperatures is due to the accumulation of low-energy excitations. Such accumulation in a system is only possible if the system is so frustrated that no order parameter exists. In this sense, a normal Fermi liquid is frustrated, even if it is a free electron liquid. If an order parameter exists or a system is not frustrated, both the specific heat and the entropy approach zero faster than linearly in $T$ as $T\rightarrow0$ K, in general. K. Yamada: Prog. Theor. Phys. [**53**]{} (1975) 970. K. Yamada and K. Yosida: Prog. Theor. Phys. [**53**]{} (1975) 1286. K. G. Wilson: Rev. Mod. Phys. [**47**]{} (1975) 773. It is easy to prove that, when $T=0$ K, $\rho(0)=1/[\pi\Delta(0)]$ does not depend on $U/|t_d|$ in the S$^3$A. If the RVB mechanism is considered, $\rho(0)|t_d|\rightarrow +0$ as $D\rightarrow +\infty$ and $U/|t_d|\rightarrow +\infty$. 
Thus, the ground state in the S$^3$A is slightly different from that in the vanishing limit of the RVB mechanism or in the limit of large $D$, although both ground states are normal Fermi liquids characterized by $T_{\rm K}=+0$ K. E. H. Lieb and F. Y. Wu: Phys. Rev. Lett. [**20**]{} (1968) 1445. According to this paper, when $U$ is nonzero, a complete gap opens in the energy spectrum of adding or removing a single electron provided that the number of electrons is constrained to integers, i.e., [*in the canonical ensemble*]{}. However, the spectrum is not necessarily equal to the spectrum of single-particle excitations in the grand canonical ensemble, where the average number of electrons is irrational, in general. J. C. Ward: Phys. Rev. [**78**]{} (1950) 182. T. Moriya: [*Spin Fluctuations in Itinerant Electron Magnetism*]{}, Springer Series in Solid-State Sciences (Springer-Verlag, Heidelberg, 1985) Vol. 56. Since the RVB mechanism is not considered in the S$^3$A, the probability $p$ of empty or double occupancy in the S$^3$A must be much smaller than $t_d^2/(DU^2)$. When $U/|t_d|\gg 1$, $p$ may depend on $\lambda^2$, i.e., $p=O(\lambda^2)$. When $U/|t_d|\gg 1$, a solution of a Mott insulator was obtained in numerical theories based on DMFT,[@PhyToday; @RevMod; @kotliar] in which $p$ is presumably vanishingly small or exactly zero. The solution implies that, in the S$^3$A or DMFT, $p=O(\lambda^2)$ if the electron reservoir is explicitly considered. P. W. Anderson: Phys. Rev. Lett. [**64**]{} (1990) 1839. R. Kubo: J. Phys. Soc. Jpn. [**12**]{} (1957) 570. E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan: Phys. Rev. Lett. [**42**]{} (1979) 673. According to the scaling theory of this paper, provided that $\lambda^2$ is nonzero, the static conductivity at $T=0$ K in one and two dimensions should be vanishing in the thermodynamic limit. If two limiting procedures of $T\rightarrow 0$K and $D\rightarrow+\infty$ are taken so as to keep $T/T_{\rm K}\ll 1$, $J_Q(0,{\bm q})$ is $O(k_{\rm B}T_{\rm K}/D^0)$ for particular ${\bm q}$’s although it is $O\bigl(k_{\rm B}T_{\rm K}/\sqrt{D}\bigr)$ for almost all ${\bm q}$’s. When $t_d$ is finite, the electron filling is away from half filling, the nesting of the Fermi surface is not sharp, and the superexchange interaction is weak enough, i.e., in a non-Heisenberg-limit case, it is possible that $T_{\rm N}\ll T_{\rm K}$ even in infinite dimensions. When $T_{\rm N}\ll T_{\rm K}$ in infinite dimensions, magnetism is prototypic itinerant-electron magnetism. N. D. Mermin and H. Wagner: Phys. Rev. Lett. [**17**]{} (1966) 1133. F. J. Ohkawa, K. Onoue, and H. Satoh: J. Phys. Soc. Jpn. [**67**]{} (1998) 535. F. J. Ohkawa: Phys. Rev. B [**57**]{} (1998) 412. E. Miyai and F. J. Ohkawa: Phys. Rev. B [**61**]{} (2000) 1357. F. J. Ohkawa: Phys. Rev. B [**66**]{} (2002) 014408. K. Yosida: Phys. Rev. [**147**]{} (1966) 223. P. W. Anderson: J. Phys. C [**3**]{} (1970) 2436. P. Nozières: J. Low Temp. Phys. [**17**]{} (1974) 31. S. Elitzur: Phys. Rev. D [**12**]{} (1975) 3978. See, for example, P. W. Anderson, [*Basic Notions of Condensed Matter Physics*]{}, Frontiers in Physics (Benjamin/Cummings, New York, 1984). R. Chitra and T. Giamarchi: Phys. Rev. B [**55**]{} (1997) 5816. T. Giamarchi and A. M. Tsvelik: Phys. Rev. B [**59**]{} (1999) 11398. A. Furusaki and F. C. Zhang: Phys. Rev. B [**60**]{} (1999) 1175. T. Hikihara and A. Furusaki: Phys. Rev. B [**63**]{} (2001) 134438. J. des Cloizeaux and J. J. Pearson: Phys. Rev. [**128**]{} (1962) 2131. A. 
Auerbach and D. P. Arovas: Phys. Rev. Lett. [**61**]{} (1988) 617. S. Miyashita: J. Phys. Soc. Jpn. [**57**]{} (1988) 1934. Y. Okabe and M. Kikuchi: J. Phys. Soc. Jpn. [**57**]{} (1988) 4351. K. Miyake and Narikiyo: J. Phys. Soc. Jpn. [**63**]{} (1994) 3821. J. D. Reger and A. P. Young: Phys. Rev. B [**37**]{} (1988) 5987. [^1]: E-mail address: [email protected]
--- abstract: | Single-star stellar population (ssSSP) models are usually used for spectral stellar population studies. However, more than 50% of stars are in binaries and evolve differently from single stars. This suggests that the effects of binary interactions should be considered when modeling the stellar populations of galaxies and star clusters. Via a rapid spectral stellar population synthesis ($RPS$) model, we present a detailed study of the effects of binary interactions on the Lick indices and colours of stellar populations, and on the determination of the stellar ages and metallicities of populations. Our results show that binary interactions make stellar populations less luminous and bluer, with a larger age-sensitive Lick index (H$\beta$) and smaller metallicity-sensitive indices (e.g., Mgb, Fe5270 and Fe5335), compared to ssSSPs. We also show that when ssSSP models are used to determine the ages and metallicities of stellar populations, smaller ages are obtained when using two line indices (H$\beta$ and \[MgFe\]), and smaller metallicities when using two colours (e.g., $u-R$ and $R-K$). Some relations for linking the stellar-population parameters obtained by ssSSPs to those obtained by binary-star stellar populations (bsSSPs) are presented in this work. This can help us to obtain absolute values for stellar-population parameters and is useful for absolute studies. However, it is found that the relative luminosity-weighted stellar ages and metallicities obtained via ssSSPs and bsSSPs are similar. This suggests that ssSSPs can be used for most spectral stellar population studies, except in some special cases. author: - 'Zhongmu Li[^1]  and Zhanwen Han' title: How binary interactions affect spectral stellar population synthesis --- Introduction ============ Stellar population synthesis is a powerful technique to study the stellar contents of galaxies and star clusters (see, e.g., @Yungelson:1997, @Tout:1997, @Pols:1998, @Hurley:2007). It is also an important method to study the formation and evolution of galaxies. Simple stellar population (SSP) models that do not take binary interactions into account are usually used for spectral stellar population studies, as most models are ssSSP models (e.g., @Bruzual:2003, @Vazdekis:1999, @Fioc:1997, @Worthey:1994). However, as pointed out by, e.g., [@Duquennoy:1991], [@Pinfield:2003], and [@Lodieu:2007], about 50% of stars are in binaries and they evolve differently from single stars. We can see this when comparing the isochrone of an ssSSP to that of a bsSSP (Fig. 1). In fact, bsSSPs fit the colour-magnitude diagrams (CMDs) of star clusters better than ssSSPs [@Li:2007database]. This suggests that binary interactions can affect stellar population synthesis studies significantly and it is, therefore, necessary to consider binary interactions. This is also supported by some observational results, e.g., the Far-UV excess of elliptical galaxies [@Han:2007] and blue stragglers in star clusters (e.g., @Davies:2004; @Tian:2006; @Xin:2007). These phenomena can be naturally explained via stellar populations with binary interactions, without any special assumptions. A few works have tried to model populations via binary stars and have presented some results on the effects of binary interactions on spectral stellar population synthesis. For example, @Zhang:2004 [@Zhang:2005] showed that binary interactions can make bsSSPs bluer than ssSSPs. 
However, there has been no more detailed investigation of how binary interactions affect the Lick indices and colours of stellar populations, or of their effects on the determination of stellar ages and metallicities. One of our previous works, @Li:2007database, compared bsSSPs to ssSSPs, but various stellar population models were used. This makes it difficult to isolate the changes in the Lick indices and colours of populations that result only from binary interactions. Furthermore, it did not show how binary interactions affect the determinations of stellar ages and metallicities. In this case, we have no clear picture for the differences between the predictions of bsSSPs and ssSSPs, and the differences between luminosity-weighted stellar-population parameters (age and metallicity) determined by bsSSPs and ssSSPs are not well known. Because all galaxies contain some binaries, detailed studies of the effects of binary interactions on the Lick indices and colours of populations are important, as are the determinations of stellar-population parameters. In this work, we perform a detailed study of the effects of binary interactions on spectral stellar population synthesis studies, via the rapid spectral population synthesis ($RPS$) model of @Li:2007database. The paper is organized as follows. In Sect. 2, we briefly introduce the $RPS$ model. In Sect. 3, we study the effects of binary interactions on the isochrones, spectral energy distributions (SEDs), Lick Observatory Image Dissector Scanner absorption line indices (Lick indices), and colours of stellar populations. In Sect. 4, we investigate the differences between stellar ages and metallicities fitted by ssSSPs and bsSSPs. Finally, in Sect. 5, we give our discussions and conclusions. The rapid spectral population synthesis model ============================================= We take the results of the $RPS$ model of @Li:2007database for this work, as there is no other available model. The $RPS$ model calculated the evolution of binaries and single stars via the rapid stellar evolution code of @Hurley:2002 (hereafter Hurley code) and took the spectral libraries of @Martins:2005 and @Westera:2002 (BaSeL 3.1) for spectral synthesis. The model calculated the high-resolution (0.3 $\rm \AA$) SEDs, Lick indices and colours for both bsSSPs and ssSSPs with the initial mass functions (IMFs) of @Salpeter:1955 and @Chabrier:2003. Note that the $RPS$ model used a statistical isochrone database for modeling stellar populations [@Li:2007database]. Each bsSSP contains about 50% of stars in binaries with orbital periods less than 100 yr (the typical value of the Galaxy), and binary interactions such as mass transfer, mass accretion, common-envelope evolution, collisions, supernova kicks, angular momentum loss mechanisms, and tidal interactions are considered when evolving binaries via Hurley code. Thus the $RPS$ model is suitable for studying the effects of binary interactions on stellar population synthesis studies. 
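As a concrete illustration of how a Monte Carlo star sample of this kind can be built, the sketch below draws primary masses from a Salpeter (1955) IMF, $dN/dm \propto m^{-2.35}$, by inverse-CDF sampling and flags roughly half of the systems as binaries. The mass range (0.1–100 $M_\odot$) and the sample size are illustrative assumptions, not the exact settings of the $RPS$ model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_salpeter(n, m_lo=0.1, m_hi=100.0, alpha=2.35):
    # Inverse-CDF sampling of a power-law IMF dN/dm ~ m**(-alpha) on [m_lo, m_hi].
    u = rng.random(n)
    g = 1.0 - alpha
    return (m_lo**g + u * (m_hi**g - m_lo**g)) ** (1.0 / g)

n_systems = 100_000
m_primary = sample_salpeter(n_systems)
is_binary = rng.random(n_systems) < 0.5        # ~50% of systems flagged as binaries

print(f"mean primary mass = {m_primary.mean():.2f} M_sun, "
      f"binary fraction = {is_binary.mean():.2%}")
```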
However, some parameters such as the ones used for describing the common envelope prescription, mass-loss rates, and supernova kicks are free parameters and the default values in Hurley code, i.e., 0.5, 1.5, 1.0, 0.0, 0.001, 3.0, 190.0, 0.5, and 0.5, are taken in this work for wind velocity factor ($\beta_{\rm w}$), Bondi-Hoyle wind accretion fraction ($\alpha_{\rm w}$), wind accretion efficiency factor ($\mu_{\rm w}$), binary enhanced mass loss parameter ($B_{\rm w}$), fraction of accreted material retained in supernova eruption ($\epsilon$), common-envelope efficiency ($\alpha_{\rm CE}$), dispersion in the Maxwellian distribution for the supernova kick speed ($\sigma_{\rm k}$), Reimers coefficient for mass loss ($\eta$), and binding energy factor ($\lambda$), respectively. These default values are taken because they have been tested by the developer of Hurley code and seem more reliable. One can refer to the paper of @Hurley:2002 for more details. In fact, many of these free parameters remain uncertain, and their uncertainties can possibly have a great effect on our results. When we test the uncertainties caused by various $\alpha_{\rm CE}$ and $\lambda$, we find that the number of blue stragglers can be changed by as much as 40% compared to the default case. However, it is extremely difficult to give detailed uncertainties in spectral stellar population synthesis due to the free parameters, as we lack constraints on these free parameters (see @Hurley:2002). Therefore, when we estimate the synthetic uncertainties in our $RPS$ model, the uncertainties due to variation of free parameters will not be taken into account. Because the fitted formulae used by Hurley code to evolve stars lead to uncertainties of less than about 5% [@Hurley:2002], we take 5% as the uncertainties in the evolution of stars in the whole paper. The correctness of the results of the $RPS$ model depends on how correct the default parameters of Hurley code are. In addition, the uncertainties in the final generated spectrum caused by the spectral library and the method used for spectral synthesis are about 3% and 0.81%, respectively. Because a Monte Carlo technique is used by the $RPS$ model to generate the star sample (2000000 binaries or 4000000 single stars in our work, which is twice that of the model of @Zhang:2004) of stellar populations, the number of stars can result in statistical errors in the Lick indices and colours of populations. According to our test, 1000000 binaries is enough to get reliable Lick indices (see also @Zhang:2005), but the near-infrared colours such as $(I-K)$, $(R-K)$, and $(r-K)$ of old populations are affected by the Monte Carlo method. However, the errors caused by the Monte Carlo method are small, about 2% for a sample of 4000000 stars. Note that in this work, a uniform distribution is used to generate the ratio ($q$, 0–1) of the mass of the secondary to that of the primary (@Mazeh:1992; @Goldberg:1994), and then the mass of the secondary is calculated from that of the primary and $q$. The separation ($a$) of two components of a binary is generated following the assumption that the fraction of binaries in an interval of log($a$) is constant when $a$ is large (10$R_\odot$ $< a <$ 5.75 $\times$ 10$^{\rm 6}$$R_\odot$) and it falls off smoothly when $a$ is small ($\leq$ 10$R_\odot$) [@Han:1995]. 
The distribution of $a$ is written as $$a \cdot p(a) = \left\{ \begin{array}{ll} a_{\rm sep}(a/a_{\rm 0})^{\psi}, &~a \leq a_{\rm 0}\\ a_{\rm sep}, &~a_{\rm 0} < a < a_{\rm 1}\\ \end{array} \right.$$ where $a_{\rm sep} \approx 0.070, a_{\rm 0} = 10R_{\odot}, a_{\rm 1} = 5.75 \times 10^{\rm 6}R_\odot$ and $\psi \approx 1.2$. The eccentricity ($e$) of each binary system is generated using a uniform distribution in the range 0–1; $e$ affects the results only slightly [@Hurley:2002]. In addition, the $RPS$ model uses some methods different from those used in the work of @Zhang:2004 to calculate the SEDs, Lick indices, and colours of populations. Besides using a statistical isochrone database, the $RPS$ model calculated the Lick indices directly from SEDs, while the work of @Zhang:2004 used fitting formulae to compute the same indices. The $RPS$ model calculated the colours of populations from SEDs, but the work of @Zhang:2004 computed colours by interpolating the photometry library of BaSeL 2.2 [@Lejeune:1998]. Furthermore, the $RPS$ model used the more advanced version 3.1 [@Westera:2002] of the BaSeL library rather than version 2.2 to give the colours of the populations. The BaSeL 3.1 library overcomes the weakness of the BaSeL 2.2 library at low metallicity, because it has been colour-calibrated independently at all levels of metallicity. This makes the predictions of our model more reliable. Another important point is that the model of Zhang et al. did not present the near-infrared colours of stellar populations, but such colours are very important for disentangling the well-known age–metallicity degeneracy. Effects of binary interactions on stellar population synthesis ============================================================== The effects on isochrones of stellar populations ------------------------------------------------ The direct effect of binary interactions on stellar population synthesis is to change the isochrones of stellar populations, e.g., the distribution of stars in the surface gravity \[log($g$)\] versus effective temperature ($T_{\rm eff}$) grid, hereafter $gT$-grid. We investigate the differences between the isochrones of bsSSPs and ssSSPs. Stellar populations with the IMF of @Salpeter:1955 are taken as our standard models for this work. The Salpeter IMF is actually not the best one for stellar population studies, although it is widely used. The reason is that the IMF is not valid for low masses. However, this IMF is reliable for stellar population synthesis, because low-mass stars contribute much less to the light of populations compared to high-mass stars. Further studies will be carried out using more realistic IMFs, e.g., that of @Kroupa:1993. Because the isochrone database used by this work divides the $gT$-grid into 1089701 sub-grids, with intervals of 0.01 in log($g$) and 40 K in $T_{\rm eff}$, it is possible to compare the isochrones of bsSSPs and ssSSPs. The differences between the isochrones of two kinds of populations are calculated by subtracting the fraction of stars of the bsSSP from that of its corresponding ssSSP, sub-grid by sub-grid. The ssSSP and its corresponding bsSSP have the same star sample, metallicity and age, and all their integrated features (SEDs, colours and Lick indices) are calculated via the same method. Therefore, the differences between the isochrones of a bsSSP and its corresponding ssSSP only result from binary interactions. For convenience, we call the difference a “discrepancy isochrone”. 
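Returning to the orbital-parameter distributions given earlier in this section, the following minimal Python sketch draws $q$ and $e$ from uniform distributions and the separation $a$ from the piecewise distribution above (flat in $\log a$ above $a_0$, falling off as $(a/a_0)^{\psi}$ below). The lower cut-off $a_{\min}$ is an illustrative assumption; it is not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, psi = 10.0, 5.75e6, 1.2            # in units of R_sun, from the equation above
a_min = 3.0                                 # assumed lower cut-off (R_sun)

def sample_separation(n):
    r = (a_min / a0) ** psi
    w_low = (1.0 - r) / psi                 # relative weight of the a <= a0 branch
    w_flat = np.log(a1 / a0)                # relative weight of the log-flat branch
    low = rng.random(n) < w_low / (w_low + w_flat)
    u = rng.random(n)
    return np.where(low,
                    a0 * (r + u * (1.0 - r)) ** (1.0 / psi),   # rising branch
                    a0 * np.exp(u * np.log(a1 / a0)))          # flat-in-log branch

n = 100_000
q = rng.random(n)                           # mass ratio, uniform in (0, 1)
e = rng.random(n)                           # eccentricity, uniform in (0, 1)
a = sample_separation(n)
print(f"median a = {np.median(a):.1f} R_sun, fraction with a <= a0: {np.mean(a <= a0):.2%}")
```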
Here we show the discrepancy isochrones for a few stellar populations in Figs. 2 and 3, for metal-poor ($Z$ = 0.004) and solar-metallicity ($Z$ = 0.02) populations, respectively. Because it is found that the discrepancy isochrones of metal-rich ($Z$ = 0.03) populations are similar to those of solar-metallicity populations, we do not show the results for metal-rich populations. Note that results for populations with metallicities poorer than 0.004 are also given by our work, but we do not take them as the example for metal-poor populations, as the $RPS$ model did not give the SEDs and Lick indices for populations with metallicities poorer than 0.004. This is actually limited by the spectral library used by the $RPS$ model, which only supplies spectra for stars more metal rich than 0.002. As we see, some special stars, e.g., blue stragglers, are generated by binary interactions (see also Fig. 1). We also show that the differences between isochrones of old bsSSPs and ssSSPs are smaller than those of young populations, because the isochrones of old populations are dominated by low-mass stars, in which binary interactions are much weaker. The effects on integrated features of populations ------------------------------------------------- The widely used integrated features of stellar populations are SEDs, Lick indices and colours. They are usually used for stellar population studies and are important. We investigate the effects of binary interactions on them in this section. ### Spectral energy distributions To investigate how binary interactions affect the SEDs of stellar populations, we compare the SEDs of a bsSSP and an ssSSP that have the same age and metallicity. The differences between SEDs are simply called discrepancy SEDs. The absolute discrepancy SED for a pair of bsSSP and ssSSP is derived by subtracting the flux of the ssSSP from that of the bsSSP, as a function of wavelength. The discrepancy SEDs are mainly caused by blue stragglers and hot sub-dwarfs, as such stars are very hot and luminous. The changes of surface abundances of stars caused by binary interactions can also contribute to discrepancy SEDs. The absolute discrepancy SEDs for metal-poor ($Z$ = 0.004) and solar-metallicity ($Z$ = 0.02) stellar populations are shown in Figs. 4 and 5, respectively. The absolute discrepancy SEDs such as those shown in the two figures can be easily used to add binary interactions into ssSSP models, but fractional discrepancy SEDs are more useful for understanding the effects of binary interactions. We show the fractional discrepancy SEDs of a few solar-metallicity populations in Fig. 6. As we see, binary interactions make stellar populations less luminous, but the flux in short-wavelength bands is changed by binary interactions only weakly compared to that in long-wavelength bands. This mainly results from special stars generated by binary interactions, which contribute differently to flux in different bands. The differences between the SEDs of a bsSSP and its corresponding ssSSP decrease with increasing age or decreasing metallicity. In addition, it suggests that binary interactions can affect most Lick indices and colours of populations, because the flux changes caused by binary interactions are non-zero in the bands where widely used Lick indices and magnitudes are defined. In this case, bsSSP and ssSSP models usually give different results for stellar population studies. 
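The discrepancy-SED bookkeeping just described is simple to express in code. In the sketch below the two input spectra are placeholder arrays; in practice they would be the synthesized SEDs of a bsSSP and its corresponding ssSSP on a common wavelength grid.

```python
import numpy as np

wavelength = np.linspace(3000.0, 10000.0, 701)                      # Angstrom, illustrative grid
flux_bs = np.exp(-((wavelength - 5500.0) / 2500.0) ** 2)            # placeholder bsSSP SED
flux_ss = 1.1 * np.exp(-((wavelength - 5300.0) / 2500.0) ** 2)      # placeholder ssSSP SED

abs_discrepancy = flux_bs - flux_ss          # absolute discrepancy SED
frac_discrepancy = abs_discrepancy / flux_ss  # fractional discrepancy SED

print(f"mean fractional discrepancy = {np.mean(frac_discrepancy):+.3f}")
```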
Furthermore, because the effects of binary interactions on the SED flux of populations are about 11% on average, they are detectable for observations with spectral signal-to-noise ratio (SNR) greater than 10. In other words, the effects can be detected by most observations, as most reliable observations have SNRs greater than 10. ### Lick indices Lick indices are the most widely used indices in stellar population studies, because they can disentangle the well-known stellar age–metallicity degeneracy (see, e.g., @Worthey:1994). If binary interactions are taken into account, some results different from those determined via ssSSP models will be obtained, as was suggested by the study of the differences between the SEDs of bsSSPs and ssSSPs. Here we test how binary interactions change the Lick indices of stellar populations compared to those of ssSSPs. In Fig. 7, we show the differences between four widely used indices of bsSSPs and those of ssSSPs. The indices are calculated from SEDs on the Lick system (@Worthey:1994lickdefinition) directly. As we see, Fig. 7 shows that binary interactions make the H$\beta$ index of a population larger by about 0.15 ${\rm \AA}$, while making the Mgb index smaller by about 0.06 ${\rm \AA}$ and the Fe indices smaller by more than about 0.1 ${\rm \AA}$, compared to ssSSPs. Therefore, the changes in Lick indices are usually larger than typical observational uncertainties (about 0.07 ${\rm \AA}$ for the H$\beta$ index and 0.04 ${\rm \AA}$ for metal-line indices according to the data of @Thomas:2005). For fixed metallicity, the effects of binary interactions on both age- and metallicity-sensitive indices become stronger with increasing age when stellar age is less than about 1.5 $\sim$ 2Gyr, and then the effects become weaker with increasing age. The reason is that binary interactions change the isochrones of populations most strongly near 1.5 $\sim$ 2Gyr, as the first mass transfer between the two components of binaries peaks and a lot of blue stragglers are generated near 1.5 $\sim$ 2Gyr according to the star sample of bsSSPs, while the light of old populations is dominated by low-mass binaries. The interactions between two components of low-mass binaries are usually weaker than for high-mass binaries. The effects of binary interactions on the isochrones are tested quantitatively using the numbers of stars with log($g$) $<$ 4.0 and log($T_{\rm eff}$) $>$ 3.75, because these stars are very luminous and contribute a lot to the light of their populations. Our result shows that binary interactions change the number distribution of stars in the above log($g$) and log($T_{\rm eff}$) ranges most significantly when stellar age is from 1.5 to 2Gyr. In addition, from Fig. 7, we find that for a fixed age, binary interactions affect the H$\beta$ and Fe5270 indices of metal-poor populations more strongly than those of metal-rich populations, while they affect the Mgb and Fe5335 indices of metal-poor populations more weakly. As a whole, using ssSSPs and bsSSPs, different ages and metallicities will be measured for the same stellar population, via popular Lick-index methods such as the H$\beta$ & \[MgFe\] method, which determines the ages and metallicities of populations by comparing the observational and theoretical H$\beta$ and \[MgFe\] indices [@Thomas:2003]. Note that the evolution of the differences between the Lick indices of ssSSPs and bsSSPs was not shown before. 
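For reference, measuring an atomic Lick-type index directly from an SED amounts to an equivalent-width integral against a linear pseudo-continuum drawn between a blue and a red side bandpass. The sketch below illustrates this; the bandpass wavelengths are rough placeholders rather than the official Lick definitions, and the input spectrum is synthetic.

```python
import numpy as np

def band_mean(wl, flux, lo, hi):
    m = (wl >= lo) & (wl <= hi)
    return flux[m].mean(), 0.5 * (lo + hi)

def lick_ew(wl, flux, feature, blue, red):
    fb, wb = band_mean(wl, flux, *blue)
    fr, wr = band_mean(wl, flux, *red)
    m = (wl >= feature[0]) & (wl <= feature[1])
    fc = fb + (fr - fb) * (wl[m] - wb) / (wr - wb)   # linear pseudo-continuum
    dwl = wl[1] - wl[0]
    return np.sum(1.0 - flux[m] / fc) * dwl          # equivalent width in Angstrom

# Synthetic spectrum with a Gaussian absorption line near 4861 A ("H beta"-like).
wl = np.linspace(4800.0, 4940.0, 1401)
flux = 1.0 - 0.3 * np.exp(-0.5 * ((wl - 4861.0) / 3.0) ** 2)
print(f"index = {lick_ew(wl, flux, (4848, 4877), (4828, 4848), (4877, 4892)):.2f} A")
```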
### Colour indices Because colours are useful for estimating the ages and metallicities of distant galaxies (see, e.g., @Li:2008colourpairs), we investigate the effects of binary interactions on them. We use a method similar to that used for studying the Lick indices of stellar populations. Our detailed results are shown in Fig. 8. In the figure, the differences between two $UBVRIJHK$ colours, $(B-V)$ and $(B-K)$, a $ugriz$ colour on the photometric system used by the Sloan Digital Sky Survey (hereafter SDSS system), $(u-r)$, and a composite colour, $(r-K)$, of bsSSPs and those of ssSSPs are shown, respectively. Note that the $(r-K)$ colour consists of a Johnson system magnitude, $K$, and an SDSS system magnitude, $r$. As we see, for a fixed age and metallicity, binary interactions make the colours of most stellar populations bluer than those predicted by ssSSPs. This mainly results from blue stragglers generated by binary interactions, because such stars are very luminous and blue. It suggests that we will get different stellar metallicities and ages for galaxies via bsSSP and ssSSP models, using a photometric method. When compared to the typical colour uncertainties (0.12, 0.06, 0.13, 0.10, and 0.01 mag for $B$, $V$, $K$, $u$, and $r$ magnitudes, respectively), the changes \[e.g., about -0.04, -0.15, and -0.08 mag for $(B-V)$, $(B-K)$, and $(u-r)$, respectively\] of colours caused by binary interactions are similar to, but somewhat less than, typical observational errors. Note that the photometric uncertainties are estimated using the data of some publications, SDSS, and the Two Micron All Sky Survey (2MASS). The uncertainties actually depend on the survey. The observational uncertainties of $u$ and $K$ magnitudes may be smaller when taking the data of other surveys instead of those of SDSS and 2MASS. In addition, similar to Lick indices, the differences between the colours of two kinds of stellar populations peak near 2Gyr. The effects of binary interactions on the determination of stellar-population parameters ======================================================================================== Two stellar-population parameters, i.e., stellar age and metallicity, are crucial in the investigations of the formation and evolution of galaxies. We investigate the effects of binary interactions on the estimates of the two parameters. We try to fit the stellar-population parameters of bsSSPs with various ages and metallicities using ssSSPs, via both Lick-index and photometric methods. Because observations show that about 50% of stars in the Galaxy are in binaries, bsSSPs should be more similar to the real stellar populations of galaxies and star clusters. Therefore, the stellar-population parameters fitted via ssSSPs (hereafter ss-fitted results, represented by $t_{\rm s}$ and $Z_{\rm s}$) should be different from the results obtained via bsSSPs (bs-fitted results, $t_{\rm b}$ and $Z_{\rm b}$). The detailed differences are shown in this section. Lick-index method ----------------- In a widely used method, i.e., the Lick-index method, we fit the stellar ages and metallicities of populations by two indices, i.e., H$\beta$ and \[MgFe\] = $\sqrt{{\rm Mgb} \cdot (0.72\,{\rm Fe5270} + 0.28\,{\rm Fe5335})}$, after @Thomas:2003. Thus the results are only slightly affected by $\alpha$-enhancement, and stellar population models (e.g., our $RPS$ model) without any $\alpha$-enhancement relative to the Sun can be used to measure stellar-population parameters. 
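A minimal sketch of the H$\beta$–\[MgFe\] grid fit used in the following subsection is given below: \[MgFe\] is formed from Mgb, Fe5270 and Fe5335 as defined above, and the age and metallicity are taken from the grid point whose (H$\beta$, \[MgFe\]) pair is closest, in a least-squares sense, to the measured pair. The model grid values here are made-up placeholders; a real fit would use the indices tabulated by the $RPS$ (or any other) model.

```python
import numpy as np

def mgfe(mgb, fe5270, fe5335):
    return np.sqrt(mgb * (0.72 * fe5270 + 0.28 * fe5335))

# grid columns: age [Gyr], Z, H_beta [A], [MgFe] [A]  (placeholder numbers)
grid = np.array([
    [ 2.0, 0.004, 3.10, 2.20],
    [ 2.0, 0.020, 2.80, 3.00],
    [ 8.0, 0.004, 2.20, 2.60],
    [ 8.0, 0.020, 1.80, 3.40],
    [12.0, 0.020, 1.60, 3.60],
])

def fit(hb_obs, mgfe_obs):
    chi2 = (grid[:, 2] - hb_obs) ** 2 + (grid[:, 3] - mgfe_obs) ** 2
    age, z = grid[np.argmin(chi2), :2]
    return age, z

print("best-fit age, Z:", fit(1.9, mgfe(3.5, 2.8, 2.6)))
```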
The differences between bs- and ss-fitted stellar-population parameters of populations with four metallicities (0.004, 0.01, 0.02, and 0.03) and 150 ages (from 0.1 to 15Gyr) are tested. In the test, we try to fit the stellar ages and metallicities of testing bsSSPs via an H$\beta$ versus \[MgFe\] grid of ssSSPs. Because ssSSPs predict different Lick indices for populations compared to bsSSPs, when we use ssSSPs to fit the ages and metallicities of our testing bsSSPs, the results obtained are different from the real parameters of bsSSPs, i.e., bs-fitted parameters. From an H$\beta$ versus \[MgFe\] grid of ssSSPs (Fig. 9), we can see this clearly. In detail, the ss-fitted stellar ages are less than the bs-fitted ones, by 0 $\sim$ 5Gyr. The maximal difference is larger than the typical uncertainty ($<$ 2Gyr) in stellar population studies (see Fig. 9). The older the populations, the bigger the difference between ages fitted via bsSSPs and ssSSPs, although the differences between the Lick indices of old bsSSPs and ssSSPs are smaller (see Section 3). The reason is that the differences among the Lick indices of populations with different ages are much less for old populations than for young populations (see Fig. 9 for comparison). Therefore, the ss-fitted ages of galaxies can be much less than bs-fitted ages, because most galaxies, especially early-type ones, have old (7$\sim$8Gyr) populations and their metallicities are not high (peaking near 0.002) [@Gallazzi:2005]. However, ss-fitted metallicities of populations are similar to bs-fitted values, compared to the typical uncertainties ($\sim$ 0.002). Therefore, if some stars are binaries, smaller ages will be measured by comparing the observational H$\beta$ and \[MgFe\] indices of galaxies with those of theoretical ssSSPs. This is more significant for metal-poor stellar populations. In our testing populations, on average, the ss-fitted metallicities are 0.0010 poorer than bs-fitted values, while ss-fitted ages are younger than bs-fitted values, by 0.3Gyr for all and 1.8Gyr for old ($\geq$ 7Gyr) testing populations. In this work, the ss-fitted stellar ages and metallicities of testing bsSSPs are obtained by finding the best-fit populations in a grid of theoretical populations with intervals of stellar age and metallicity of 0.1Gyr and 0.0001, respectively. A least-squares method is used in the fit. In addition, it is found that the bs-fitted ages of populations can be calculated from the ss-fitted ages and metallicities (with an RMS of 1.45Gyr) via the equation $$t_{\rm b} = (0.17 + 8.27Z_{\rm s}) + (1.38 - 14.45Z_{\rm s})t_{\rm s},$$ where $t_{\rm b}$, $Z_{\rm s}$ and $t_{\rm s}$ are the bs-fitted age, ss-fitted metallicity and age, respectively. The relation between bs-fitted ages and ss-fitted stellar-population parameters can be seen in Fig. 10. We find that ss-fitted ages are usually less than the bs-fitted ages of populations, and the poorer the metallicity, the larger the differences between the bs- and ss-fitted ages. Note that Eq. (2) is not very accurate for metal-poor ($Z$ = 0.004) and old (age $>$ 11Gyr) populations. The reason is that the H$\beta$ index increases with age for metal-poor and old populations, while it decreases with age for other populations. Photometric method ------------------ In the photometric method, we fit stellar-population parameters respectively via two pairs of colours, i.e., \[$(u-R)$, $(R-K)$\] and \[$(u-r)$, $(r-K)$\]. 
The two pairs are shown to have the ability to constrain the ages and metallicities of populations and can be used to study the stellar populations of some distant galaxies (see @Li:2008colourpairs). The test shows that ss-fitted metallicities are poorer than the bs-fitted metallicities of populations. When taking \[$(u-R)$, $(R-K)$\] for this work, on average, ss-fitted metallicities are 0.003 smaller than bs-fitted values. It is 0.0035 when taking the pair \[$(u-r)$, $(r-K)$\]. The distribution of a few testing bsSSPs in the $(u-R)$ versus $(R-K)$ grid of ssSSPs is shown in Fig. 11. In particular, it is found that the ss-fitted ages are correlated with the bs-fitted ages of populations, which is independent of metallicity. The relation (with an RMS of 0.72Gyr) between ss- and bs-fitted ages of populations can be written as $$t_{\rm b} = 0.24 + 0.93t_{\rm s},$$ where $t_{\rm b}$ and $t_{\rm s}$ are bs- and ss-fitted ages, respectively. It shows that the bs- and ss-fitted ages are similar. The equation is clearly different from Eq. (2), because colours are usually less sensitive to metallicity compared with metal-line Lick indices. The relation between bs- and ss-fitted ages of populations is shown in Fig. 12. The figure shows the approximate relation between the bs- and ss-fitted ages of populations, which is nearly independent of metallicity. The equation is possibly useful for estimating the absolute ages of distant galaxies and star clusters. Note that the relation is presented for populations younger than 14Gyr, because the age of the universe is shown to be smaller than about 14Gyr [@WMAP:2003]. When we use $(u-r)$ and $(r-K)$ colours to estimate the stellar-population parameters of populations, we find that ss-fitted metallicities are about 0.0035 smaller than the bs-fitted values. The bs- and ss-fitted ages of populations can be approximately transformed by $$t_{\rm b} = 0.28 + 0.91t_{\rm s},$$ where $t_{\rm b}$ and $t_{\rm s}$ are the bs- and ss-fitted ages, respectively. The RMS of the fitted relation is 1.00Gyr. The equation is similar to Eq. (3) but with larger scatter. This results from the different metallicity and age sensitivities of the colours used. One can see Fig. 13 for more details about the relation. As a whole, from both the results obtained by the Lick-index and photometric methods, we see that bs-fitted stellar-population parameters increase with ss-fitted ones. Therefore, using bsSSP models instead of ssSSP models, similar results for relative studies of stellar-population parameters of galaxies will be obtained. However, if one wants to get the absolute stellar-population parameters of galaxies and star clusters, the effects of binary interactions should be taken into account, especially for metal-poor populations. This can be done conveniently by taking the average metallicity deviations and the relations between the bs-fitted ages and ss-fitted results of populations, which were shown above. Results for populations with Chabrier initial mass function ----------------------------------------------------------- Some stellar populations with the Salpeter IMF [@Salpeter:1955] were taken as standard models for this work, but even if populations with other IMFs were taken instead, similar results would be obtained. We performed a test using populations with the Chabrier IMF [@Chabrier:2003]. The result shows that ss-fitted metallicities are 0.0011 less on average than bs-fitted results, when taking H$\beta$ and \[MgFe\] for measuring stellar-population parameters. 
The bs-fitted ages and ss-fitted stellar-population parameters have a relation of $t_{\rm b} = (-0.06 + 20.63Z_{\rm s}) + (1.46 - 18.76Z_{\rm s})t_{\rm s}$, where $Z_{\rm s}$ is the ss-fitted metallicity, while $t_{\rm b}$ and $t_{\rm s}$ are the bs- and ss-fitted ages, respectively. When we used $(u-R)$ and $(R-K)$ to estimate the stellar-population parameters of populations, ss-fitted metallicities were shown to be 0.0031 smaller than bs-fitted values, and bs-fitted ages can be calculated from ss-fitted results via $t_{\rm b} = 0.46 + 0.89t_{\rm s}$. A similar relation for the results fitted by $(u-r)$ and $(r-K)$ is $t_{\rm b} = 0.41 + 0.88t_{\rm s}$, with a deviation of 0.0039 in metallicity. As a whole, the relations between bs- and ss-fitted results obtained via populations with Salpeter and Chabrier IMFs are similar, compared to the typical uncertainties of stellar-population parameter studies. The comparisons of the results obtained via the two IMFs can be seen in Figs. 11 and 12. Discussions and Conclusions =========================== We investigated the effects of binary interactions on the isochrones, SEDs, Lick indices and colours of simple stellar populations, and then on the determination of light-weighted stellar ages and metallicities. The results showed that binary interactions can affect stellar population synthesis studies significantly. In detail, binary interactions make stellar populations less luminous and bluer, while making the H$\beta$ index larger and metal line indices smaller compared to ssSSPs. The colour changes (2$\sim$5%) caused by binary interactions are smaller than the systematic errors (about 6%) of the $RPS$ model while similar to observational errors (4$\sim$7%). Note that the systematic error of 6% did not take the uncertainties due to the free parameters of the star model into account (see Sect. 2). The changes (3$\sim$6%) of Lick indices caused by binary interactions are somewhat smaller than the systematic errors (about 6%) of the stellar population synthesis model but larger than observational errors (1$\sim$4%). Therefore, if we measure luminosity-weighted stellar-population parameters (metallicity and age) via bsSSPs instead of ssSSPs, higher (0.0010 on average) metallicities and significantly larger ages will be obtained via a Lick-index method, and significantly higher (about 0.0030) metallicities and similar ages will be obtained via a photometric method. Because simple stellar population models are usually used for studying the populations of early-type galaxies or globular clusters, which possibly have old ($>$ 7Gyr) and relatively metal-poor populations, the changes ($\sim$ 1.8Gyr in age and 0.0030 in metallicity) caused by binary interactions in stellar ages and metallicities are larger than the typical uncertainties. In particular, we found that the relative results of stellar population studies obtained by ssSSPs and bsSSPs are similar. The bs-fitted stellar-population parameters can be calculated from the ss-fitted ones, via the equations presented in this paper. The relations between bs-fitted ages and ss-fitted stellar-population parameters are useful for some special investigations. For example, when studying the age of the universe via the stellar ages of some distant globular clusters, we can estimate the absolute age of star clusters using ss-fitted results. 
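For convenience, the ss-fitted-to-bs-fitted conversion relations quoted in this section can be collected into a small helper. The coefficients below are exactly those given in the text (Salpeter IMF: Eqs. 2–4; Chabrier IMF: the relations just above); the example call at the end is illustrative.

```python
# t_s is the ss-fitted age in Gyr and Z_s the ss-fitted metallicity;
# each function returns the corresponding bs-fitted age in Gyr.
def t_b_lick(t_s, Z_s, imf="salpeter"):
    if imf == "salpeter":                       # Eq. (2)
        return (0.17 + 8.27 * Z_s) + (1.38 - 14.45 * Z_s) * t_s
    return (-0.06 + 20.63 * Z_s) + (1.46 - 18.76 * Z_s) * t_s   # Chabrier IMF

def t_b_uR_RK(t_s, imf="salpeter"):             # from (u-R), (R-K)
    return 0.24 + 0.93 * t_s if imf == "salpeter" else 0.46 + 0.89 * t_s

def t_b_ur_rK(t_s, imf="salpeter"):             # from (u-r), (r-K)
    return 0.28 + 0.91 * t_s if imf == "salpeter" else 0.41 + 0.88 * t_s

# Example: an old, moderately metal-poor ss-fitted solution.
print(t_b_lick(8.0, 0.004), t_b_uR_RK(8.0), t_b_ur_rK(8.0))
```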
Although the results shown in this paper can help us to give some estimates for the absolute stellar ages and metallicities of galaxies, we are still far from obtaining accurate values because of the large uncertainties in stellar population models (see, e.g., @Yi:2003). In addition, different stellar population models usually give different absolute results for stellar population studies. Note that the results obtained by the Lick-index method are affected slightly by $\alpha$-enhancement, according to the work of @Thomas:2003, but this is not the case for the results obtained by photometric methods. In this work, all bsSSPs contain about 50% binaries with orbital periods less than 100 yr (the typical value of the Galaxy). If the binary (with orbital periods less than 100 yr) fraction of galaxies is different from 50%, the results shown in the paper will change. The higher the fraction of binaries, the larger the difference between ss- and bs-fitted stellar-population parameters. Thus the results obtained in this paper may not be appropriate for investigating galaxies or star clusters with binary fractions obviously different from 50%. Furthermore, when building bsSSPs, we assumed that the masses of the two components of a binary are correlated [@Li:2007database], according to previous works. We did not try to take other distributions for secondary mass and binary period in this work, because we are limited by our present computing ability. We will carry out further studies in the future. The differences between the Lick indices and colours of bsSSPs and ssSSPs do not evolve smoothly with age. This possibly relates to the method used to calculate the integrated features of stellar populations. In fact, the Monte Carlo method usually leads to some scatter (about 2%) in the integrated features of populations. The analytic fits and the binary algorithm used by Hurley code can also lead to some scatter (about 5%). We investigated the effects of binary interactions via only some simple stellar populations, but the real populations of galaxies and star clusters are usually not so simple. In other words, the populations of galaxies and star clusters seem to be composite stellar populations including populations with various ages and metallicities (e.g., @Yi:2005). It seems that the effects of binary interactions and population-mixing are degenerate. This is a complicated subject, which requires further study. We greatly acknowledge the anonymous referee for two constructive referee reports, Profs. Licai Deng, Tinggui Wan, Xu Kong, and Xuefei Chen for useful discussions, and Dr. Richard Simon Pokorny for greatly improving the English. This work is supported by the Chinese National Science Foundation (Grant Nos 10433030, 10521001, 2007CB815406) and the Youth Foundation of Knowledge Innovation Project of The Chinese Academy of Sciences (07ACX51001). [31]{} , G., & [Charlot]{}, S. 2003, [MNRAS]{}, 344, 1000 , G. 2003, [ApJ]{}, 586, L133 , M., [Piotto]{}, G., & [De Angeli]{}, F. 2004, [MNRAS]{}, 349, 129 , A., & [Mayor]{}, M. 1991, [A&A]{}, 248, 485 , M., & [Rocca-Volmerange]{}, B. 1997, [A&A]{}, 326, 950 , A., [Charlot]{}, S., [Brinchmann]{}, J., [White]{}, S., 2005, [MNRAS]{}, 362, 41 , D., [Mazeh]{}, T., 1994, [A&A]{}, 282, 801 , Z., [Podsiadlowski]{}, P., [Eggleton]{}, P. P., 1995, [MNRAS]{}, 272, 800 , Z., [Podsiadlowski]{}, P., & [Lynas-Gray]{}, A. E. 2007, [MNRAS]{}, 380, 1098 , J. R., [Aarseth]{}, S. J., & [Shara]{}, M. M. 2007, [ApJ]{}, 665, 707 , J. R., [Tout]{}, C. A., & [Pols]{}, O. 
R. 2002, [MNRAS]{}, 329, 897 , P., [Tout]{}, C. A., & [Gilmore]{}, G. 1993, [MNRAS]{}, 262, 545 , T., [Cuisinier]{}, F., & [Buser]{}, R. 1998, [A&AS]{}, 130, 65 , Z., & [Han]{}, Z. 2007, MNRAS, in press, ArXiv:astro-ph/0708.1204 , Z., & [Han]{}, Z. 2008, MNRAS, 385, 1270 , N., [Dobbie]{}, P. D., [Deacon]{}, N. R., [Hodgkin]{}, S. T., [Hambly]{}, N. C., & [Jameson]{}, R. F. 2007, [MNRAS]{}, 380, 712 , L. P., [Delgado]{}, R. M. G., [Leitherer]{}, C., [Cervi[ñ]{}o]{}, M., & [Hauschildt]{}, P. 2005, [MNRAS]{}, 358, 49 , T., [Goldberg]{}, D., [Duquennoy]{}, A., [Mayor]{}, M., 1992, [ApJ]{}, 401, 265 , D. J., [Dobbie]{}, P. D., [Jameson]{}, R. F., [Steele]{}, I. A., [Jones]{}, H. R. A., & [Katsiyannis]{}, A. C. 2003, [MNRAS]{}, 342, 1241 , O., [Hurley]{}, J., & [Tout]{}, C. 1998, in IAU Symposium, Vol. 191, IAU Symposium, 607 , E. E. 1955, [ApJ]{}, 121, 161 , D., [Maraston]{}, C., & [Bender]{}, R. 2003, [MNRAS]{}, 343, 279 , D., [Maraston]{}, C., [Bender]{}, R., & [Mendes de Oliveira]{}, C. 2005, [ApJ]{}, 621, 673 , B., [Deng]{}, L., [Han]{}, Z., & [Zhang]{}, X. B. 2006, [A&A]{}, 455, 247 , C. A., [Aarseth]{}, S. J., [Pols]{}, O. R., & [Eggleton]{}, P. P. 1997, [MNRAS]{}, 291, 732 , A. 1999, [ApJ]{}, 513, 224 , P., [Lejeune]{}, T., [Buser]{}, R., [Cuisinier]{}, F., & [Bruzual]{}, G. 2002, [A&A]{}, 381, 524 , 2006, [ApJS]{}, 148, 1 , G. 1994, [ApJS]{}, 95, 107 , G., [Faber]{}, S. M., [Gonzalez]{}, J. J., & [Burstein]{}, D. 1994, [ApJS]{}, 94, 687 , Y., [Deng]{}, L., & [Han]{}, Z. W. 2007, [ApJ]{}, 660, 319 , S. K. 2003, [ApJ]{}, 582, 202 , S. K., [Yoon]{}, S.-J., [Kaviraj]{}, S., [Deharveng]{}, J.-M., [Rich]{}, R. M., [Salim]{}, S., [Boselli]{}, A., [Lee]{}, Y.-W., [Ree]{}, C. H., [Sohn]{}, Y.-J., [Rey]{}, S.-C., [Lee]{}, J.-W., [Rhee]{}, J., [Bianchi]{}, L., [Byun]{}, Y.-I., [Donas]{}, J., [Friedman]{}, P. G., [Heckman]{}, T. M., [Jelinsky]{}, P., [Madore]{}, B. F., [Malina]{}, R., [Martin]{}, D. C., [Milliard]{}, B., [Morrissey]{}, P., [Neff]{}, S., [Schiminovich]{}, D., [Siegmund]{}, O., [Small]{}, T., [Szalay]{}, A. S., [Jee]{}, M. J., [Kim]{}, S.-W., [Barlow]{}, T., [Forster]{}, K., [Welsh]{}, B., & [Wyder]{}, T. K. 2005, [ApJ]{}, 619, L111 , L., & [Tutukov]{}, A. 1997, in Advances in Stellar Evolution, ed. R. T. [Rood]{} & A. [Renzini]{}, 237 , F., [Han]{}, Z., [Li]{}, L., & [Hurley]{}, J. R. 2004, [A&A]{}, 415, 117 , F., [Li]{}, L., & [Han]{}, Z. 2005, [MNRAS]{}, 364, 503 [^1]: Graduate University of the Chinese Academy of Sciences
--- abstract: 'Let ${{\bf {M}}^{3} }$ be a closed hyperbolic three manifold. We construct closed surfaces which map by immersions into ${{\bf {M}}^{3} }$ so that for each one the corresponding mapping on the universal covering spaces is an embedding, or, in other words, the corresponding induced mapping on fundamental groups is an injection.' address: 'Mathematics Department Stony Brook University Stony Brook, 11794 NY, USA and University of Warwick Institute of Mathematics Coventry, CV4 7AL, UK' author: - Jeremy Kahn and Vladimir Markovic title: Immersing almost geodesic surfaces in a closed hyperbolic three manifold --- Introduction ============ The purpose of this paper is to prove the following theorem. \[main\] Let ${{\bf {M}}^{3} }={ {\mathbb {H}}^{3}}/ {{\mathcal {G}}}$ denote a closed hyperbolic three manifold where ${{\mathcal {G}}}$ is a Kleinian group and let $\epsilon>0$. Then there exists a Riemann surface $S_{\epsilon}={{\mathbb {H}}^{2}}/F_{\epsilon}$ where $F_{\epsilon}$ is a Fuchsian group and a $(1+\epsilon)$-quasiconformal map $g:\partial{{ {\mathbb {H}}^{3}}} \to \partial{{ {\mathbb {H}}^{3}}}$, such that the quasifuchsian group $g\circ F_{\epsilon} \circ g^{-1}$ is a subgroup of ${{\mathcal {G}}}$ (here we identify the hyperbolic plane ${{\mathbb {H}}^{2}}$ with an oriented geodesic plane in ${ {\mathbb {H}}^{3}}$ and the circle $\partial{{{\mathbb {H}}^{2}}}$ with the corresponding circle on the sphere $\partial{{ {\mathbb {H}}^{3}}}$). In the above theorem the Riemann surface $S_{\epsilon}$ has a pants decomposition where all the cuffs have a fixed large length and they are glued by twisting for $+1$. One can extend the map $g$ to an equivariant diffeomorphism of the hyperbolic space. This extension defines the map $f:S_{\epsilon} \to {{\bf {M}}^{3} }$ and the surface $f(S_{\epsilon}) \subset {{\bf {M}}^{3} }$ is an immersed $(1+\epsilon)$-quasigeodesic surface. In particular, the surface $f(S_{\epsilon})$ is essential which means that the induced map $f_{*}:\pi_1(S_{\epsilon}) \to \pi_1({{\bf {M}}^{3} })$ is an injection. We summarise this in the following theorem. \[thm-intro-1\] Let ${{\bf {M}}^{3} }$ be a closed hyperbolic 3-manifold. Then we can find a closed hyperbolic surface $S$ and a continuous map $f:S \to {{\bf {M}}^{3} }$ such that the induced map between fundamental groups is injective. .1cm Let $S$ be an oriented closed topological surface with a given pants decomposition ${{\mathcal {C}}}$, where ${{\mathcal {C}}}$ is a maximal collection of disjoint (unoriented) simple closed curves that cut $S$ into the corresponding pairs of pants. Let $f:S \to {{\bf {M}}^{3} }$ be a continuous map and let $\rho_f: \pi_1(S) \to \pi_1({{\bf {M}}^{3} })$ be the induced map between the fundamental groups. Assume that $\rho_f$ is injective on $\pi_1(\Pi)$, for every pair of pants $\Pi$ from the pants decomposition of $S$. Then to each curve $C \in {{\mathcal {C}}}$ we can assign a complex half-length ${{\bf {hl}}}(C) \in ({\mathbb {C}}/2\pi i {\mathbb {Z}})$ and a complex twist-bend $s(C) \in {\mathbb {C}}/ ({{\bf {hl}}}(C) {\mathbb {Z}}+2\pi i {\mathbb {Z}}) $. We prove the following in Section 2. \[thm-intro-2\] There are universal constants ${\widehat}{\epsilon}, K_0>0$ such that the following holds. Let $\epsilon$ be such that ${\widehat}{\epsilon}>\epsilon>0$. 
Suppose $(S,{{\mathcal {C}}})$ and $f:S \to {{\bf {M}}^{3} }$ are as above, and for every $C \in {{\mathcal {C}}}$ we have $$|{{\bf {hl}}}(C)-\frac{R}{2}|<\epsilon, \, \, \text{and} \,\, |s(C)-1|<{{\epsilon}\over{R}},$$ for some $R>R(\epsilon)>0$. Then $\rho_f$ is injective and the map $\partial{{\widetilde}{f}}:\partial{{\widetilde}{S}} \to \partial{{\widetilde}{{{\bf {M}}^{3} }}}$ extends to a $(1+K_0\epsilon)$-quasiconformal map from $\partial{{ {\mathbb {H}}^{3}}}$ to itself (here ${\widetilde}{S}$ and ${\widetilde}{{{\bf {M}}^{3} }}$ denote the corresponding universal covers). .1cm It then remains to construct such a pair $(f,(S,{{\mathcal {C}}}))$. If $\Pi$ is a (flat) pair of pants, we say $f:\Pi \to {{\bf {M}}^{3} }$ is a skew pair of pants if $\rho_f$ is injective, and $f(\partial{\Pi})$ is the union of three closed geodesics. Suppose we are given a collection $\{f_{\alpha}: \Pi_\alpha \to {{\bf {M}}^{3} }\}_{\alpha \in A}$ of skew pants, and suppose for the sake of simplicity that no $f_\alpha$ maps two components of $\partial{\Pi}_{\alpha}$ to the same geodesic. .1cm For each closed geodesic $\gamma$ in ${{\bf {M}}^{3} }$ we let $A_{\gamma}=\{\alpha \in A: \gamma \in f_{\alpha} (\partial{\Pi_{\alpha} } ) \}$. Given permutations $\sigma_{\gamma}:A_{\gamma} \to A_{\gamma}$ for all such $\gamma$, we can build a closed surface in ${{\bf {M}}^{3} }$ as follows. For each $(f_{\alpha},\Pi_{\alpha})$ we make two skew pairs of pants in ${{\bf {M}}^{3} }$, identical except for their orientations. For each $\gamma$ we connect via the permutation $\sigma_{\gamma}$ the pants that induce one orientation on $\gamma$ to the pants that induce the opposite orientation on $\gamma$. We show in Section 3 that if the pants are “evenly distributed” around each geodesic $\gamma$ then we can build a surface this way that satisfies the hypotheses of Theorem \[thm-intro-2\]. .1cm We can make this statement more precise as follows: for each $\gamma \in f_{\alpha}(\partial{\Pi}_{\alpha})$ we define an unordered pair $\{n_1,n_2\} \in {N^{1}}(\gamma)$, the unit normal bundle to $\gamma$. The two vectors satisfy $2(n_1-n_2)=0$ in the torus ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\gamma){\mathbb {Z}})$, where $l(\gamma)$ is the complex length of $\gamma$ (that is, $n_1-n_2$ is a half-period of the lattice $2\pi i {\mathbb {Z}}+l(\gamma){\mathbb {Z}}$). So we write $${\operatorname{foot}}_{\gamma}(\Pi_{\alpha})\equiv {\operatorname{foot}}_{\gamma}(f_{\alpha},\Pi_{\alpha})=\{n_1,n_2\} \in {N^{1}}(\sqrt{\gamma})={\mathbb {C}}/(2\pi i {\mathbb {Z}}+{{\bf {hl}}}(\gamma){\mathbb {Z}}).$$ We let ${\operatorname{foot}}(A)=\{{\operatorname{foot}}_\gamma(\Pi_{\alpha} ): \alpha \in A, \gamma \in \partial \Pi_\alpha\}$ (properly speaking ${\operatorname{foot}}(A)$ is a labelled set (or a multiset) rather than a set; see Section 3 for details). We then define ${\operatorname{foot}}_{\gamma}(A)= {\operatorname{foot}}(A)|_{{N^{1}}(\sqrt{\gamma})}$. We let $\tau:{N^{1}}(\sqrt{\gamma}) \to {N^{1}}(\sqrt{\gamma})$ be defined by $\tau(n)=n+1+i\pi$. If for each $\gamma$ we can define a permutation $\sigma_{\gamma}{\colon}A_\gamma \to A_\gamma$ such that $$|{\operatorname{foot}}_{\gamma}(\Pi_{\sigma_{\gamma}(\alpha)})-\tau({\operatorname{foot}}_{\gamma}(\Pi_{\alpha}))| <{{\epsilon}\over{R}},$$ and $|{{\bf {hl}}}(\gamma)- \frac{R}{2}|<\epsilon$ for all $\gamma \in \partial{\Pi_{\alpha}}$, then the resulting surface will satisfy the assumptions of Theorem \[thm-intro-2\]. The details of the above discussion are carried out in Section 3.
.1cm In Section 4 we construct the measure on skew pants that, after rationalisation, will give us the collection $\Pi_\alpha$ we mentioned above. This is the heart of the paper. Showing that there exists a single skew pair of pants that satisfies the first inequality in Theorem \[thm-intro-2\] is a non-trivial theorem and the only known proofs use the ergodicity of either the horocyclic or the frame flow. This result was first formulated and proved by L. Bowen [@bowen], where he used the horocyclic flow to construct such skew pants. Our construction is different. We use the frame flow to construct a measure on skew pants whose equidistribution properties follow from the exponential mixing of the frame flow. This exponential mixing is a result of Moore [@moore] (see also [@pollicott]); it has been shown by Brin and Gromov [@brin-gromov] that the frame flow is strongly mixing for a much larger class of negatively curved manifolds. The detailed outline of this construction is given at the beginning of Section 4. .1cm We point out that Cooper-Long-Reid [@c-l-r] proved the existence of essential surfaces in cusped finite volume hyperbolic 3-manifolds. Lackenby [@lackenby] proved the existence of such surfaces in all closed hyperbolic 3-manifolds that are arithmetic. Acknowledgement --------------- We would like to thank the following people for their interest in our work and suggestions for writing the paper: Ian Agol, Nicolas Bergeron, Martin Bridgeman, Ken Bromberg, Danny Calegari, Dave Gabai, Bruce Kleiner, Francois Labourie, Curt McMullen, Yair Minsky, Jean Pierre Otal, Peter Ozsvath, Dennis Sullivan, Juan Suoto, Dylan Thurston, and Dani Wise. In particular, we are grateful to the referee for numerous comments and suggestions that have improved the paper. Quasifuchsian representation of a surface group =============================================== The Complex Fenchel-Nielsen coordinates --------------------------------------- Below we define the complex Fenchel-Nielsen coordinates. For a very detailed account we refer to [@series] and [@kour]. Originally the coordinates were defined in [@ser-tan] and [@kour]. A word on notation. By $d(X,Y)$ we denote the hyperbolic distance between sets $X,Y \subset { {\mathbb {H}}^{3}}$. If $\gamma^{*} \subset { {\mathbb {H}}^{3}}$ is an oriented geodesic and $p,q \in \gamma^{*}$ then ${{\bf d}}_{\gamma^{*}}(p,q)$ denotes the signed real distance between $p$ and $q$. Let $\alpha^{*},\beta^{*}$ be two oriented geodesics in ${ {\mathbb {H}}^{3}}$, and let $\gamma^{*}$ be the geodesic that is orthogonal to both $\alpha^{*}$ and $\beta^{*}$, with an orientation. Let $p= \alpha^{*} \cap \gamma^{*}$ and $q=\beta^{*} \cap \gamma^{*}$. Let $u$ be the tangent vector to $\alpha^{*}$ at $p$, and $v$ be the tangent vector to $\beta^{*}$ at $q$. We let $u'$ be the parallel transport of $u$ to $q$. By ${{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*})$ we denote the complex distance between $\alpha^{*}$ and $\beta^{*}$ measured along $\gamma^{*}$. The real part is given by ${\operatorname{Re}}({{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*} ) )={{\bf d}}_{\gamma^{*}}(p,q)$. The imaginary part ${\operatorname{Im}}({{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*}))$ is the oriented angle from $u'$ to $v$, where the angle is oriented by $\gamma^{*}$ which is orthogonal to both $u'$ and $v$. The complex distance is well defined $\pmod {2k\pi i}$, $k \in {\mathbb {Z}}$. In fact, every identity we write in terms of complex distances is therefore assumed to be true $\pmod {2k\pi i}$.
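As a simple illustration of these definitions, consider the following computation in the upper half-space model (this example is not used elsewhere). Let $\alpha^{*}$ be the geodesic joining $0$ to $\infty$ and let $\beta^{*}$ be the geodesic joining $-1$ to $1$. The two geodesics meet orthogonally at the point $(0,0,1)$, and we may take for $\gamma^{*}$ the geodesic joining $-i$ to $i$, which passes through $(0,0,1)$ orthogonally to both. Since $p=q=(0,0,1)$, we get $${\operatorname{Re}}({{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*}))=0 \quad \text{and} \quad {\operatorname{Im}}({{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*}))=\pm\frac{\pi}{2},$$ so that ${{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*})=\pm {{i\pi}\over{2}}$, the sign depending on the chosen orientations.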
We have the following identities ${{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*})=-{{\bf d}}_{\gamma^{*}}(\beta^{*},\alpha^{*})$; ${{\bf d}}_{-\gamma^{*}}(\alpha^{*},\beta^{*})=-{{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*})$, and ${{\bf d}}_{\gamma^{*}}(-\alpha^{*},\beta^{*})={{\bf d}}_{\gamma^{*}}(\alpha^{*},\beta^{*})+i \pi$. We let ${{\bf d}}(\alpha^*,\beta^*)$ (without a subscript for ${{\bf d}}$) denote the unsigned complex distance equal to ${{\bf d}}_{\gamma^{*}}(\alpha^*,\beta^*)$ modulo $\langle z \to -z\rangle$. We will write ${{\bf d}}(\alpha^*,\beta^*) \in ({\mathbb {C}}/ 2 \pi i {\mathbb {Z}})/ {\mathbb {Z}}_2$, where ${\mathbb {Z}}_2$ of course stands for $\langle z \to -z\rangle$. We observe that ${{\bf d}}(\alpha^*,\beta^*)= {{\bf d}}(\beta^*,\alpha^*)={{\bf d}}(-\alpha^*,-\beta^*)= {{\bf d}}(-\beta^*,-\alpha^*)$. For a loxodromic element $A \in {{\bf {PSL}}(2,{\mathbb {C}})}$, by ${{\bf l}}(A)$ we denote its complex translation length. The number ${{\bf l}}(A)$ has a positive real part and it is defined $\pmod {2k\pi i}$, $k \in {\mathbb {Z}}$. By $\gamma^{*}$ we denote the oriented axis of $A$, where $\gamma^{*}$ is oriented so that the attracting fixed point of $A$ follows the repelling fixed point. Let $\Pi^{0}$ be a topological pair of pants (a three holed sphere). We consider $\Pi^0$ as a manifold with boundary, that is we assume that $\Pi^0$ contains its cuffs. We say that a pair of pants in a closed hyperbolic 3-manifold ${{\bf {M}}^{3} }$ is an injective homomorphism $\rho:\pi_1(\Pi^{0}) \to \pi_1({{\bf {M}}^{3} })$, up to conjugacy. This induces a representation $$\rho:\pi_1(\Pi^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})},$$ up to conjugacy, which in general we also call a free-floating pair of pants. A pair of pants in ${{\bf {M}}^{3} }$ is determined by (and determines) a continuous map $f:\Pi^{0} \to {{\bf {M}}^{3} }$, up to homotopy, and a free-floating pair of pants likewise determines a map $$f:\Pi^{0} \to { {\mathbb {H}}^{3}}/ \rho(\pi_1(\Pi^0))=M_{\rho},$$ up to homotopy. Suppose $\rho:\pi_1(\Pi^0) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ is a free-floating pair of pants, and $\rho=f_{*}$, where $f:\Pi^0 \to M_{\rho}$. We orient the components $C_i$ of $\partial{\Pi}^0$ so that $\Pi^0$ is on the left of each $C_i$. For each $i$, there is a unique oriented closed geodesic $\gamma_i$ in $M_{\rho}$ freely homotopic to $f(C_i)$. Now let $a_i$ be the simple non-separating arc on $\Pi^0$ connecting $C_{i-1}$ and $C_{i+1}$ (we take the subscript $\pmod{3}$). We can homotop $f$ so that $f$ maps each $C_i$ to $\gamma_i$, and maps $a_i$ to an arc $\eta_i$ from $\gamma_{i-1}$ to $\gamma_{i+1}$ that is orthogonal at its endpoints to $\gamma_{i-1}$ and $\gamma_{i+1}$. While such an $f$ is not unique, the 1-complex made of the $\gamma_i$ and the $\eta_i$ together divides $f(\Pi^0)$ into two singular regions whose boundaries are geodesic right-angled hexagons. Because the geometry of each of these two hexagons is determined by these unsigned complex distances ${{\bf d}}_{\eta_{i}}(\gamma_{i-1},\gamma_{i+1})$, the two right-angled hexagons are isometric. Let us fix for the moment $i \in \{0,1,2\}$. We then orient $\eta_{i-1}$ and $\eta_{i+1}$ to point away from $\gamma_i$ (so the signed complex distance ${{\bf d}}_{\eta_{i \pm 1}}(\gamma_i,\gamma_{i \mp 1})$ has positive real part). Recall that ${{\bf d}}_{\gamma_{i}}(\eta_{i-1},\eta_{i+1})$ denotes the signed complex distance from $\eta_{i-1}$ to $\eta_{i+1}$, along $\gamma_i$.
Because the two hexagons are isometric, $${{\bf d}}_{\gamma_{i}}(\eta_{i-1},\eta_{i+1})={{\bf d}}_{\gamma_{i}}(\eta_{i+1},\eta_{i-1}).$$ We let $${{\bf {hl}}}(\gamma_i)={{\bf d}}_{\gamma_{i}}(\eta_{i-1},\eta_{i+1}).$$ We can also think of this definition on the universal cover ${ {\mathbb {H}}^{3}}$ as follows: We conjugate $\rho$ so that there is a lift ${\widetilde}{\gamma}_i$ of $\gamma_i$ to ${ {\mathbb {H}}^{3}}=\{(x,y,z):z>0\}$ that connects $0$ and $\infty$. We let $A_{\gamma_{i}} \in {{\bf {PSL}}(2,{\mathbb {C}})}$ be such that $\gamma_{i}={\widetilde}{\gamma}_i /\langle A_{\gamma_{i}} \rangle $. Then $A_{\gamma_{i}}:{ {\mathbb {H}}^{3}}\to { {\mathbb {H}}^{3}}$ extends to map ${\widehat}{{\mathbb {C}}}=\partial{{ {\mathbb {H}}^{3}}}$ to itself by $z \mapsto e^{{{\bf l}}(\gamma_{i})} \cdot z$. Moreover, the lifts of $\eta_{i-1}$ and $\eta_{i+1}$ that intersect ${\widetilde}{\gamma}_{i}$ will alternate along ${\widetilde}{\gamma}_{i}$ (so we can define ${{\bf d}}_{\gamma_{i}}(\eta_{i-1},\eta_{i+1})$ as ${{\bf d}}_{{\widetilde}{\gamma}_{i}}({\widetilde}{\eta}_{i-1},{\widetilde}{\eta}_{i+1})$, where ${\widetilde}{\eta}_{i-1}$ is a lift of $\eta_{i-1}$ that intersects ${\widetilde}{\gamma}_{i}$ and ${\widetilde}{\eta}_{i+1}$ is the next lift of $\eta_{i+1}$ along ${\widetilde}{\gamma}_{i}$). If we define $\sqrt{A_{\gamma_{i}}} \in {{\bf {PSL}}(2,{\mathbb {C}})}$ so that it maps $z \mapsto e^{{{\bf {hl}}}(\gamma_{i})} \cdot z$, then it will map the lifts of $\eta_{i-1}$ to the lifts of $\eta_{i+1}$, and vice versa. Moreover, the unit normal bundle ${N^{1}}({\widetilde}{\gamma}_i)$ is a torsor for ${\mathbb {C}}^{*} \equiv {\mathbb {C}}/ 2\pi i {\mathbb {Z}}$, and the unit normal bundle ${N^{1}}(\gamma_i)$ is a torsor for $${\mathbb {C}}^{*}/\langle A_{\gamma_{i}} \rangle={\mathbb {C}}/ 2\pi i {\mathbb {Z}}+{{\bf l}}(\gamma_{i}) \cdot {\mathbb {Z}}.$$ Let $G$ be a group and let $X$ be a space on which $G$ acts. We say that $X$ is a torsor for $G$ (or that $X$ is a $G$-torsor) if for any two elements $x_1$ and $x_2$ of $X$ there exists a unique group element $g \in G$ with $g(x_1) = x_2$. By a mild abuse of notation, we let $${N^{1}}(\sqrt{\gamma_{i}})={N^{1}}({\widetilde}{\gamma}_{i})/\langle \sqrt{A_{\gamma_{i}}} \rangle.$$ This is a torsor for $${\mathbb {C}}^{*}/\langle \sqrt{A_{\gamma_{i}}} \rangle={\mathbb {C}}/ 2\pi i {\mathbb {Z}}+{{\bf {hl}}}(\gamma_{i}) \cdot {\mathbb {Z}}.$$ For $i \ne j$, $i,j=0,1,2$, we let $n(i,j) \in {N^{1}}(\gamma_{i})$ be the unit vector at $\gamma_i \cap \eta_j$ pointing along $\eta_j$. Then $\sqrt{A_{\gamma_{i}}}$ interchanges $n(i,i-1)$ and $n(i,i+1)$, so we can think of the unordered pair $\{n(i,i-1),n(i,i+1)\}$ as an element of ${N^{1}}(\sqrt{\gamma_{i}})$. We call this element ${\operatorname{foot}}_{\gamma_{i}}(\rho)$ or ${\operatorname{foot}}_{\gamma_{i}}(f)$ where $f:\Pi^0 \to M_{\rho}$ is a map whose homotopy class is determined by $\rho$. If $\rho:\pi_1(\Pi^0) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ is a representation for which ${{\bf {hl}}}(C) \in {\mathbb {R} }^{+}$ for each $C \in \partial{\Pi}^0$, then, after conjugation, $\rho(\pi_1(\Pi^0)) < {{\bf {PSL}}(2,{\mathbb {R} })}< {{\bf {PSL}}(2,{\mathbb {C}})}$, and ${{\mathbb {H}}^{2}}/\rho(\pi_1(\Pi^0))$ is a topological pair of pants (homeomorphic to the interior of $\Pi^0$).
Also the converse is true: if we are given $\rho:\pi_1(\Pi^0) \to {{\bf {PSL}}(2,{\mathbb {R} })}$ and ${{\mathbb {H}}^{2}}/ \rho(\pi_1(\Pi^0))$ is homeomorphic to the interior of $\Pi^0$, then ${{\bf {hl}}}(C) \in {\mathbb {R} }^+$ for each cuff $C \in \partial{\Pi}^0$. Now suppose that $S^0$ is a closed surface (of genus at least 2), and ${{\mathcal {C}}}^0$ a maximal set of simple closed curves on $S^0$ (the curves in ${{\mathcal {C}}}^0$ are disjoint, non-isotopic and nontrivial). By ${{\mathcal {C}}}^*$ we denote the set of oriented curves from ${{\mathcal {C}}}^0$ (each curve is taken with both orientations). A pair of pants $\Pi$ for $(S^0,{{\mathcal {C}}}^0)$ is the closure of a component of $S^0 \setminus \bigcup {{\mathcal {C}}}^0 $, and a marked pair of pants is a pair $(\Pi,C)$, where $C \in {{\mathcal {C}}}^*$ is an oriented closed curve such that $C \in \partial{\Pi}$, and $C$ lies to the left of $\Pi$. For any marked pair of pants $(\Pi,C)$, there is a unique marked pair of pants $(\Pi',C')$ such that $C'=-C$ (where $-C$ denotes the curve $C$ but with the opposite orientation). We observe in passing that $\Pi$ can be equal to $\Pi'$. Now suppose that $$\rho:\pi_1(S^0) \to {{\bf {PSL}}(2,{\mathbb {C}})}$$ is a representation that is discrete and faithful when restricted to $\pi_1(\Pi)$, for each pair of pants $\Pi$ in $S^0 \setminus \bigcup {{\mathcal {C}}}^0 $. By $M_{\rho}$ we again denote the quotient ${ {\mathbb {H}}^{3}}/\rho(\pi_1(S^0))$. Suppose that $\rho=f_*$ for some continuous map $f:S^0 \to M_{\rho}$. Then for each marked pair of pants $(\Pi,C)$ we let $\gamma$ be the oriented geodesic freely homotopic to $f(C)$. As before, we define ${{\bf {hl}}}_{\Pi}(\gamma)$ using $f|_{\Pi}$. Let $(\Pi',C')$ be the marked pair of pants such that $C'=-C$. Then ${{\bf {hl}}}_{\Pi}(C)={{\bf {hl}}}_{\Pi'}(C)$, or ${{\bf {hl}}}_{\Pi}(C)={{\bf {hl}}}_{\Pi'}(C)+i\pi$. In the former case, $\langle \sqrt{A_{\gamma}} \rangle=\langle \sqrt{A_{\gamma'}} \rangle$, so ${N^{1}}(\sqrt{\gamma})={N^{1}}(\sqrt{\gamma'})$ literally. In this case we write ${{\bf {hl}}}(C)={{\bf {hl}}}_{\Pi}(C)={{\bf {hl}}}_{\Pi'}(C)$. Let $S^0$ and ${{\mathcal {C}}}^0$ be as above. We say that a representation $$\rho:\pi_1(S^0) \to {{\bf {PSL}}(2,{\mathbb {C}})}$$ is viable if - $\rho$ is discrete and faithful when restricted to $\pi_1(\Pi)$, for each pair of pants $\Pi$ in $S^0 \setminus \bigcup {{\mathcal {C}}}^0 $, - ${{\bf {hl}}}(C)={{\bf {hl}}}_{\Pi}(C)={{\bf {hl}}}_{\Pi'}(C)$, for each $C \in {{\mathcal {C}}}^0$, where $\Pi$ and $\Pi'$ are two pairs of pants that contain $C$. Given a viable representation $\rho:\pi_1(S^0) \to {{\bf {PSL}}(2,{\mathbb {C}})}$, we let $$s(C)={\operatorname{foot}}_{\gamma}(\rho|_{\Pi})-{\operatorname{foot}}_{\gamma'}(\rho|_{\Pi'})-i\pi.$$ Then $s(C) \in {\mathbb {C}}/ 2\pi i {\mathbb {Z}}+{{\bf {hl}}}(C) \cdot {\mathbb {Z}}$. If we reverse the roles of $(\Pi,C)$ and $(\Pi',C')$, we negate the difference of the two feet, but we also reverse the orientation of $\gamma$, so we get the same element $s(C) \in {\mathbb {C}}/ 2\pi i {\mathbb {Z}}+{{\bf {hl}}}(C) \cdot {\mathbb {Z}}$. The coordinates $({{\bf {hl}}}(C),s(C))$ are called the reduced complex Fenchel-Nielsen coordinates for $\rho$. The following is the main result of this section and it will be used later in the paper. \[geometry\] Let $0<\epsilon< {\widehat}{\epsilon}$ where ${\widehat}{\epsilon}>0$ is a universal constant. Then there exists $R_0=R_0(\epsilon)>0$ such that the following holds. 
Let $S^{0}$ be a closed topological surface with a pants decomposition ${{\mathcal {C}}}^0$. Suppose that $\rho:\pi_1(S^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ is a viable representation such that $$|{{\bf {hl}}}(C)-\frac{R}{2}|<\epsilon, \, \, \text{and} \,\, |s(C)-1|<{{\epsilon}\over{R}},$$ for some $R>R_0>0$. Then there exists a viable representation $\rho_0:\pi_1(S^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ such that ${{\bf {hl}}}(C)=\frac{R}{2}$ and $s(C)=1$ for all $C \in {{\mathcal {C}}}^0$, and a $K$-quasisymmetric map $h:\partial{{ {\mathbb {H}}^{3}}} \to \partial{{ {\mathbb {H}}^{3}}}$ so that $h^{-1}\rho_0(\pi_1(S^0)) h=\rho(\pi_1(S^0))$, where $K=K(\epsilon)$ and $K(\epsilon) \to 1$ uniformly when $\epsilon \to 0$. In particular, the representation $\rho$ is injective and the group $\rho(\pi_1(S^{0}) )$ is quasifuchsian. Holomorphic families of representations --------------------------------------- In this subsection we state Theorem \[geometry-1\] that will imply Theorem \[geometry\]. The rest of Section 2 is devoted to proving Theorem \[geometry-1\]. Fix a closed surface $S^0$ with a pants decomposition ${{\mathcal {C}}}^0$. Fix a pair of pants $\Pi$ from $S^0 \setminus {{\mathcal {C}}}^0$, and let $C_0,C_1,C_2 \in {{\mathcal {C}}}^0$ denote the cuffs of $\Pi$. The inclusion $\Pi \to S^0$ induces an embedding $\pi_1(\Pi) \to \pi_1(S^0)$ (such an embedding is well defined up to conjugation). Let $c_0,c_1 \in \pi_1(\Pi) \subset \pi_1(S^0)$ be elements in the conjugacy classes corresponding to $C_0$ and $C_1$ respectively. Let $\rho:\pi_1(S^0) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ be a viable representation. After conjugating $\rho$ by an element of ${{\bf {PSL}}(2,{\mathbb {C}})}$, we may assume that the axis of $\rho(c_0)$ is the geodesic in ${ {\mathbb {H}}^{3}}$ that connects $0$ and $\infty$ (such that $0$ is the repelling point) and that the point $1 \in \partial{{ {\mathbb {H}}^{3}}}$ is the repelling point of $\rho(c_1)$ (such a conjugation exists since $\rho$ is viable and the restriction of $\rho$ to $\pi_1(\Pi)$ is injective). Such $\rho$ is said to be normalized (the normalization depends on the choice of $c_0$ and $c_1$ but we suppress this). Let $R>0$, and let $\Omega$ denote the set of all pairs $(z_C,w_C)$, $C \in {{\mathcal {C}}}^0$, where for each $C$ we have 1. $z_C \in {\mathbb {C}}/ 2\pi i {\mathbb {Z}}$ and $|z_C-\frac{R}{2}|<1$, 2. $w_C \in {\mathbb {C}}/ 2\pi i {\mathbb {Z}}+z_C \cdot {\mathbb {Z}}$ and $|w_C-1|<{{1}\over{R}}$. For simplicity we let $z=(z_C)_{C \in {{\mathcal {C}}}^{0}}$ and $w=(w_C)_{C \in {{\mathcal {C}}}^{0}}$. It follows from [@kour] and [@series] that when $R$ is large enough (say $R>2$), for each $(z,w) \in \Omega$ there exists a normalized viable representation $\rho:\pi_1(S^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ such that ${{\bf {hl}}}(C)=z_C$ and $s(C)=w_C$. Such a normalized representation $\rho$ is not unique since $({{\bf {hl}}}(C),s(C))$ are the reduced complex Fenchel-Nielsen coordinates and they determine the normalized representation only if we specify the marking of the cuffs (that is, a normalized viable representation is uniquely determined by the choice of the (non-reduced) Fenchel-Nielsen coordinates). Suppose that we are given a normalized viable representation $\rho':\pi_1(S^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ such that $|{{\bf {hl}}}(C)-\frac{R}{2}|<1$ and $|s(C)-1|<{{1}\over{R}}$, where $({{\bf {hl}}}(C),s(C))$ are the reduced complex Fenchel-Nielsen coordinates for $\rho'$. Let $z'_C={{\bf {hl}}}(C)$ and $w'_C=s(C)$.
Then $(z',w') \in \Omega$. It then follows from [@kour] and [@series] that for each $(z,w) \in \Omega$, there exists a unique normalized viable representation $\rho_{z,w}:\pi_1(S^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ such that - $z_C={{\bf {hl}}}(C)$ and $w_C=s(C)$, where $({{\bf {hl}}}(C),s(C))$ are the reduced complex Fenchel-Nielsen coordinates for $\rho_{z,w}$, - The family of representations $\rho_{z,w}$ varies holomorphically in $(z,w)$, - $\rho'=\rho_{z',w'}$. \[def-C\] For $C \in {{\mathcal {C}}}^0$ let $\zeta_C, \eta_C \in {\mathbb {D}}$, where ${\mathbb {D}}$ denotes the unit disc in the complex plane. Let $\tau \in {\mathbb {D}}$ be a complex parameter. Fix $R>1$ and let ${{\bf {hl}}}(C)(\tau)=\frac{1}{2}(R+ \tau \zeta_C)$ and $s(C)(\tau)=1+{{\tau \eta_C } \over{R}}$. By $\rho_{\tau}$ we denote the corresponding normalized viable representation with the reduced Fenchel-Nielsen coordinates $({{\bf {hl}}}(C)(\tau),s(C)(\tau))$. Note that $\rho_{\tau}$ depends on $\zeta_C,\eta_C$ but we suppress this. It follows that $\rho_{\tau}$ depends holomorphically on $\tau$. The remainder of this section is devoted to proving the following theorem. \[geometry-1\] There exist constants ${\widehat}{R},{\widehat}{\epsilon}>0$, such that the following holds. Let $S^{0}$ be any closed topological surface with a pants decomposition ${{\mathcal {C}}}^0$ and fix $\zeta_C, \eta_C \in {\mathbb {D}}$ for $C \in {{\mathcal {C}}}^{0}$. Then for every $R \ge {\widehat}{R}$ and $|\tau| <{\widehat}{\epsilon}$, the group $\rho_{\tau} (\pi_1(S^{0}))$ is quasifuchsian and the induced quasisymmetric map $f_{\tau}:\partial {{\mathbb {H}}^{2}}\to \partial { {\mathbb {H}}^{3}}$ (that conjugates $\rho_{0}(\pi_1(S^{0}))$ to $\rho_{\tau}(\pi_1(S^{0}))$) is $K(\tau)$-quasisymmetric, where $$K(\tau)={{{\widehat}{\epsilon} +|\tau|}\over{{\widehat}{\epsilon}-|\tau|}}.$$ \[strans\] Notation and the brief outline of the proof of Theorem \[geometry-1\] --------------------------------------------------------------------- The following notation remains valid through the section. Fix $S^{0}$, ${{\mathcal {C}}}^0$ and $\zeta_C,\eta_C \in {\mathbb {D}}$ as above. Denote by ${{\mathcal {C}}}_{\tau}(R)$ the collection of translation axes in ${ {\mathbb {H}}^{3}}$ of all the elements $\rho_{\tau}(c)$, where $c \in \pi_1(S^{0})$ is in the conjugacy class of some cuff $C \in {{\mathcal {C}}}^{0}$. Fix two such axes $C(\tau)$ and ${\widehat}{C}(\tau)$ and let $O(\tau)$ be their common orthogonal in ${ {\mathbb {H}}^{3}}$. Since $C(\tau)$ and ${\widehat}{C}(\tau)$ vary holomorphically in $\tau$ so does $O(\tau)$ (this means that the endpoints of $O(\tau)$ vary holomorphically on $\partial { {\mathbb {H}}^{3}}$). Note that the endpoints of $O(\tau)$ might not belong to the limit set of the group $\rho_{\tau}(\pi_1(S^{0}))$. .1cm Let $C_0(0),C_1(0),...,C_{n+1}(0)$ be the ordered collection of geodesics from ${{\mathcal {C}}}_0(R)$ that $O(0)$ intersects (and in this order) and so that $C_0(0)=C(0)$ and $C_{n+1}(0)={\widehat}{C}(0)$. The geodesic segment on $O(0)$ between $C_0(0)$ and $C_{n+1}(0)$ intersects $n \ge 0$ other geodesics from ${{\mathcal {C}}}_0(R)$ (until the end of this section $n$ will have the same meaning). We orient $O(0)$ so that it goes from $C_0(0)$ to $C_{n+1}(0)$. 
We orient each $C_i(0)$ so that the angle from $O(0)$ to $C_i(0)$ is positive (recall that we fix in advance an orientation on the initial plane ${{\mathbb {H}}^{2}}\subset { {\mathbb {H}}^{3}}$ so this angle is positive with respect to this orientation of the plane ${{\mathbb {H}}^{2}}$). Then the oriented geodesics $C_i(\tau)$ vary holomorphically in $\tau$. .1cm Let $N_i(\tau)$ be the common orthogonal between $O(\tau)$ and $C_i(\tau)$ that is oriented so that the imaginary part of the complex distance ${{\bf d}}_{N_{i}(\tau)}(O(\tau),C_i(\tau))$ is positive. Let $D_i(\tau)$, $i=0,...,n$ be the common orthogonal between $C_i(\tau)$ and $C_{i+1}(\tau)$, that is oriented so that the angle from $D_i(0)$ to $C_i(0)$ is positive. Also, let $F_i(\tau)$ be the common orthogonal between $O(\tau)$ and $D_i(\tau)$, for $i=0,...,n$. We orient $F_i(\tau)$ so that the angle from $O(0)$ to $F_i(0)$ is positive. Observe that $F_0(\tau)=C_0(\tau)$ and $F_{n}(\tau)=C_{n+1}(\tau)$. .1cm For simplicity, in the rest of this section we suppress the dependence on $\tau$, that is we write $C_i(\tau)=C_i$, $O(\tau)=O$ and so on. However, we still write $C_i(0)$, $O(0)$, to distinguish the case $\tau=0$. .1cm For Theorem \[geometry-1\] we need to estimate the quasisymmetric constant of the map $f_{\tau}$, when $\tau$ belongs to some small, but definite neighbourhood of the origin in ${\mathbb {D}}$. In order to do that we want to estimate the derivative (with respect to $\tau$) ${{\bf d}}'_{O} (C_0,C_{n+1})$ of the complex distance ${{\bf d}}_{O} (C_0,C_{n+1})$ between any two geodesics $C_0,C_{n+1} \in {{\mathcal {C}}}_{\tau}(R)$. We will compute an upper bound of $|{{\bf d}}'_{O} (C_0,C_{n+1})|$ in terms of ${{\bf d}}_{O} (C_0,C_{n+1})$. This will lead to an inductive type of argument that will finish the proof. We will offer more explanations as we go along. The Kerckhoff-Series-Wolpert type formula ----------------------------------------- In [@series] C. Series has derived the formula for the derivative of the complex translation length of a (not necessarily simple) closed curve on $S^{0}$ under the representation $\rho_{\tau}$. Using the same method (word by word) one can obtain the appropriate formula for the derivative of the complex distance ${{\bf d}}_{O(\tau)}(C_0(\tau),C_{n+1}(\tau))$. \[thm-ksw\] Letting $'$ denote the derivative with respect to $\tau$ we have $$\begin{aligned} {{\bf d}}'_{O} (C_0,C_{n+1}) &= \sum_{i=0}^{n} \cosh( {{\bf d}}_{F_{i}}(O,D_i)) {{\bf d}}'_{D_i}(C_i,C_{i+1})+ \label{S-formula} \\ &+ \sum_{i=1}^{n} \cosh( {{\bf d}}_{N_{i}}(O,C_i)) {{\bf d}}'_{C_i}(D_{i-1},D_i) \notag.\end{aligned}$$ For each $i=1,...,n$ consider the skew right-angled hexagon with sides $O,F_{i},D_i,C_i,D_{i-1},F_{i-1}$. Since each hexagon varies holomorphically in $\tau$ we have the following derivative formula in each hexagon (this is the formula $(7)$ in [@series]) $$\begin{aligned} {{\bf d}}'_{O}(F_{i-1},F_i) &= \cosh({{\bf d}}_{N_{i}}(O,C_i)) {{\bf d}}'_{C_i}(D_{i-1},D_i)+ \notag \\ &+ \cosh({{\bf d}}_{F_{i-1}}(O,D_{i-1}) ) {{\bf d}}'_{D_{i-1}}(F_{i-1},C_{i}) \label{S-formula-1} \\ & + \cosh({{\bf d}}_{F_{i}}(O,D_{i})) {{\bf d}}'_{D_{i}}(C_{i}, F_i) \notag .\end{aligned}$$ The following relations (\[S-formula-2\]), (\[S-formula-3\]) and (\[S-formula-4\]) are direct corollaries of the identities $F_0=C_0$ and $F_{n}=C_{n+1}$. 
We have $$\label{S-formula-2} \sum_{i=1}^{n} {{\bf d}}_{O}(F_{i-1},F_i)={{\bf d}}_{O}(C_0,C_{n+1}).$$ Also $$\label{S-formula-3} {{\bf d}}'_{D_{0}}(F_{0},C_{1})= {{\bf d}}'_{D_{0}}(C_{0},C_{1}),$$ and $$\label{S-formula-4} {{\bf d}}'_{D_{n}}(C_{n}, F_n)={{\bf d}}'_{D_{n}}(C_{n}, C_{n+1}).$$ Also for $i=1,...,n$ we observe the identity $${{\bf d}}_{D_{i}}(C_{i}, F_{i})+{{\bf d}}_{D_{i}}(F_{i},C_{i+1})= {{\bf d}}_{D_{i}}(C_{i},C_{i+1}).$$ Putting all this together and summing up the formulae (\[S-formula-1\]), for $i=1,...,n$ we obtain (\[S-formula\]). Let $H$ be a consistently oriented skew right-angled hexagon with sides $L_k$, $k \in {\mathbb {Z}}$, and $L_k=L_{k+6}$. Set $\sigma(k)={{\bf d}}_{L_{k} } (L_{k-1},L_{k+1})$. Recall the cosine formula $$\cosh(\sigma(k))={{ \cosh(\sigma(k+3)) -\cosh(\sigma(k+1)) \cosh(\sigma(k-1)) } \over{\sinh(\sigma(k+1)) \sinh(\sigma(k-1)) }}.$$ Assume that $\sigma(2j+1)={{1}\over{2}}(R+a_{2j+1})+i\pi$, $j=0,1,2$, and $a_{2j+1} \in {\mathbb {D}}$. A hexagon with this property is called a *thin hexagon*. From the cosine formula for a skew right-angled hexagon we have (see also Lemma 5.1 in [@bowen]) $$\label{hex-1} \sigma(2j)=2e^{ {{1}\over{4}}[-R+a_{2j+3}-a_{2j+1}-a_{2j-1}] }+i\pi+O(e^{ -{{3R}\over{4}} }).$$ From the pentagon formula the hyperbolic distance between opposite sides in the hexagon can be estimated as (see Lemma 5.4 in [@bowen] and Lemma 2.1 in [@series]) $$\label{hex-2} {{R}\over{4}} -10 <d(L_k,L_{k+3})<{{R}\over{4}}+10,$$ for $R$ large enough. \[derivative-est-0\] Suppose that $|{{\bf d}}_{O} (C_0,C_{n+1})|<{{R}\over{5}}$. Then for $R$ large enough the following estimate holds $$|{{\bf d}}'_{O} (C_0,C_{n+1})| \le 20e^{ -{{R}\over{4}} } \sum_{i=0}^{n} e^{d(O,D_i) }+ {{n}\over{R}} \left( \max_{1 \le i \le n} e^{d(O,C_i)} \right).$$ Let $\gamma$ be the geodesic segment on $O(0)$ that runs between $C_j(0)$ and $C_{j+1}(0)$. Then $\gamma$ is a lift of a geodesic arc connecting two cuffs in the pair of pants all three of whose cuffs have length $R$. Since the length of $\gamma$ is at most ${{R}\over{5}}$ we have from (\[hex-2\]) that $\gamma$ connects two different cuffs in this pair of pants and is freely homotopic to the shortest orthogonal arc between these two cuffs in this pair of pants. This implies that there exists ${\widetilde}{C} \in {{\mathcal {C}}}_{\tau}(R)$ such that the hexagon determined by $C_j,C_{j+1}$ and ${\widetilde}{C}$ is a thin hexagon. Then $D_j$ is a side of this hexagon since it is the common orthogonal for $C_j$ and $C_{j+1}$. Taking into account that the orientation of $D_j$ that comes from this hexagon is the opposite to the one we defined above in terms of $O$ and applying (\[hex-1\]) we obtain $$\label{cuff-distance} {{\bf d}}_{D_j}(C_j,C_{j+1})=2e^{{{1}\over{4}}[-R+ {\widetilde}{\zeta} \tau - \zeta_j \tau - \zeta_{j+1} \tau ] }+O(e^{ -{{3R}\over{4}} }).$$ where $\zeta_j,\zeta_{j+1},{\widetilde}{\zeta} \in {\mathbb {D}}$ are the complex numbers associated to the corresponding $C \in {{\mathcal {C}}}^{0}$ in Definition \[def-C\]. Differentiating the cosine formula for the skew right-angled hexagon we get $$|{{\bf d}}'_{D_j}(C_j,C_{j+1})|<20 e^{-{{R}\over{4} } }.$$ for $R$ large enough (here we use $|\zeta_C|, |\tau|<1$). .1cm On the other hand ${{\bf d}}_{C_j}(D_{j-1},D_j)=1+{{\tau \eta_{j-1}}\over{R}}$, where $\eta_{j-1} \in {\mathbb {D}}$ is the corresponding number. Differentiating this identity gives $|{{\bf d}}'_{C_j}(D_{j-1},D_j ) | \le {{1}\over{R}}$ (we use $|\eta_{j-1}|<1$).
Combining these estimates with the equality of Theorem \[thm-ksw\] proves the lemma. Preliminary estimates --------------------- The purpose of the next two subsections is to estimate the two terms on the right-hand side of the inequality of Lemma \[derivative-est-0\] in terms of the complex distance ${{\bf d}}_{O} (C_0,C_{n+1})$. We will show that $$|{{\bf d}}'_{O} (C_0,C_{n+1})| \le C F({{\bf d}}_{O} (C_0,C_{n+1})),$$ where $C$ is a constant and $F$ is the function $F(x)=xe^{x}$. We will obtain this estimate under some natural assumptions (see Assumption \[ass\] below). .1cm Let $\alpha, \beta$ be two oriented geodesics in ${ {\mathbb {H}}^{3}}$ such that $d(\alpha,\beta)>0$ and let $O$ be their common orthogonal (with either orientation). Let $q_0=\beta \cap O$. Let $t \in {\mathbb {R} }$ and let $q:{\mathbb {R} }\to \beta$ be the parametrisation by arc length such that $q(0)=q_0$. The following trigonometric formula follows directly from the $\cosh$ and $\sinh$ rules for right-angled triangles in the hyperbolic plane (the planar case of this formula was stated in Lemma 2.4.7 in [@epstein]) $$\label{epstein} \sinh^{2}(d(q(t),\alpha))=\sinh^{2}(d(\alpha,\beta) ) \cosh^{2}(t)+\sinh^{2}(t)\sin^{2}({\operatorname{Im}}[{{\bf d}}_{O}(\alpha,\beta)]).$$ This yields the following inequality that will suffice for us $$\label{epstein-0} \sinh(d(\alpha,\beta) ) \cosh(t) \le \sinh(d(q(t),\alpha)).$$ From this we derive (a short verification is given at the end of this subsection): $$\label{epstein-1} |t| \le d(q(t), \alpha) - \log d(\alpha, \beta),\,\, \text{for every}\,\, t \in {\mathbb {R} },$$ and $$\label{epstein-2} |t| \le \log d(q(t), \alpha) + 1 - \log d(\alpha, \beta), \,\, \text{when} \,\, d(q(t),\alpha) \le 1 .$$ .1cm Let $\gamma=\gamma(\tau)$, $\tau \in {\mathbb {D}}$, be an oriented geodesic in ${ {\mathbb {H}}^{3}}$ that varies continuously in $\tau$, and such that $\gamma(0)$ belongs to the plane ${{\mathbb {H}}^{2}}\subset { {\mathbb {H}}^{3}}$ that contains the lamination ${{\mathcal {C}}}_{0}(R)$ (the common orthogonal $O$ from the previous subsection is an example of $\gamma$ but there is no need to restrict ourselves to $O$ in order to prove the estimates below). Let $C_1(0),...,C_k(0)$ be an ordered subset of geodesics from ${{\mathcal {C}}}_0(R)$ that $\gamma(0)$ consecutively intersects (this means that the segment of $\gamma(0)$ between $C_i(0)$ and $C_{i+1}(0)$ does not intersect any other geodesic from ${{\mathcal {C}}}_0(R)$). Orient each $C_i$ so that the angle from $\gamma(0)$ to $C_i(0)$ is positive. Let $N_i$ be the common orthogonal between $\gamma$ and $C_i$, and let $z_i=N_i \cap C_i$ and $z'_i=N_i \cap \gamma$. (See Figure \[fig:transversal\]). Let $D_i$, $i=1,...,k-1$, be the common orthogonal between $C_i$ and $C_{i+1}$ and let $w^{-}_i=D_i \cap C_i$ and $w^{+}_i=D_i \cap C_{i+1}$. If the distance between $z_i$ and $z_{i+1}$ is at most ${{R}\over{5}}$, then (as seen in the previous subsection) for $R$ large enough we have $$\label{cuff-distance-1} {{\bf d}}_{D_{i} }(C_i,C_{i+1})=(2+o(1))e^{-{{R}\over{4}}+ \tau \mu} \le e^{-{{R}\over{4}}+2},$$ where $\mu \in {\mathbb {C}}$ and $|\mu|\le {{3}\over{4}}$ (see (\[cuff-distance\]) ).
Then it follows from the definition of ${{\mathcal {C}}}_{\tau}(R)$ that $$\label{twist} {{\bf d}}_{C_{i}}(w^{+}_{i-1},w^{-}_i)=1+ {\operatorname{Re}}[ {{\tau \eta}\over{R}} +j{{ (R+ \tau \zeta)}\over{2}} ],$$ for some $j \in {\mathbb {Z}}$, where $\eta=\eta_C$ and $\zeta=\zeta_C$ are the complex numbers from the unit disc that correspond to the cuff $C \in {{\mathcal {C}}}^{0}$ whose lift is $C_i(0)$. Here ${{\bf d}}_{C_{i}}(w^{+}_{i-1},w^{-}_i)$ denotes the signed hyperbolic distance. \[lemma-est-1\] Assume that $d(z_i,z_{i+1})< e^{-5}$, for $i=1,...,k-1$. Set $a_i={{\bf d}}_{C_{i}}(z_i,w^{-}_i)$. Then for $R$ large enough the following inequalities hold 1. $|a_{i+1}-a_i-1|<e^{-1}$, $i=1,...,k-2$ 2. $k < R$. Since the distance between each pair $z_i$ and $z_{i+1}$ is less than $e^{-5}$, applying (\[epstein-2\]) and (\[cuff-distance-1\]) to all pairs $\alpha=C_i$ and $\beta=C_{i+1}$ yields the inequality $$\label{est-1} d(z_i,w^{+}_{i-1}), d(z_i,w^{-}_{i}) \le {{R}\over{4}}-2,$$ for each $i=1,...,k-1$. By the triangle inequality we have $$\label{pm} {{\bf d}}_{C_{i}}(w^{+}_{i-1},w^{-}_i) \le {{R}\over{2}}-4.$$ On the other hand, from (\[twist\]) we obtain $$|j|(1-{{|\tau \zeta|}\over{R}} ) \le {{2}\over{R}} ( {{\bf d}}_{C_{i}}(w^{+}_{i-1},w^{-}_i) +1+{{|\tau \eta|}\over{R}} ) \le {{2}\over{R}} ({{R}\over{2}}-4+2).$$ Since $|\tau|,|\zeta|,|\eta|<1$ and from (\[pm\]) we get $$|j|\le {{1-{{4}\over{R}} }\over{1-{{1}\over{R}} }},$$ which shows that $j=0$ in (\[twist\]). .1cm From (\[est-1\]) we have $$\label{est-2} |a_i| < {{R}\over{4}}.$$ We write (using the triangle inequality) $$|a_{i+1} - a_i - 1| \le d(w_i^-, w_i^+) + d(z_i, z_{i+1} ) + |d(w_i^+, w_{i+1}^-) - 1|.$$ By (\[cuff-distance-1\]) we have $$d(w_i^{-} , w_i^+) \le e^{-{{R}\over{4}} + 2}.$$ The assumption of the lemma is $d(z_i, z_{i+1}) < e^{-5}$. It follows from (\[twist\]) (and the established fact that in this case $j=0$) that $$| d(w_i^+, w_{i+1}^-) -1 | \le | {\operatorname{Re}}({{\tau \eta}\over{R}} ) | \le {{1}\over{R}}.$$ Therefore $$|a_{i+1}-a_i -1| < e^{-1},$$ which proves the first part of the lemma. .1cm From (\[est-2\]) we have $-{{R}\over{4}}< a_1$, which implies that $a_{k-1}>(k-2)(1-e^{-1})-{{R}\over{4}}$. Again from (\[est-2\]) we have $a_{k-1}<{{R}\over{4}}$, which proves $$k<{{R}\over{2(1-e^{-1}) } }+2<R.$$ The following lemma is a corollary of the previous one. \[number\] Let $\gamma$ be a geodesic segment in ${{\mathbb {H}}^{2}}$ that is transverse to the lamination ${{\mathcal {C}}}_0(R)$. For $R$ large enough, the number of geodesics from ${{\mathcal {C}}}_0(R)$ that $\gamma$ intersects is at most $(2+R)e^{5}|\gamma|$. As above denote by $C_i(0)$, $i=1,...,k$, the geodesics from ${{\mathcal {C}}}_0(R)$ that $\gamma$ intersects. Using the above notation, let $j_1,...,j_l \in \{0,...,k\}$, be such that $d(z_{j_{i}}(0),z_{j_{i}+1}(0))> e^{-5}$. Then $$l<{{|\gamma|}\over{e^{-5}} }=e^{5}|\gamma|.$$ By definition, the open segment between $z_{j_{i}}(0)$ and $z_{j_{i}+1} (0)$ does not intersect any geodesics from ${{\mathcal {C}}}_0(R)$. .1cm On the other hand, by the previous lemma the number of geodesics from ${{\mathcal {C}}}_0(R)$ that the subsegment of $\gamma$ between $z_{j_{i}+1 } (0)$ and $z_{j_{i+1} } (0)$ intersects is at most $R$ (because the distance between any $z_i(0)$ and $z_{i+1}(0)$ in this range is at most $e^{-5}$). Since there are at most $l$ such segments we have that the total number of geodesics from ${{\mathcal {C}}}_0(R)$ that $\gamma$ intersects is at most $2l+lR<(2+R) e^{5}|\gamma|$.
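.1cm We record, for the reader’s convenience, one elementary way to obtain (\[epstein-1\]) and (\[epstein-2\]) from (\[epstein-0\]); only the two stated inequalities are used in what follows. Using $\cosh(t) \ge {{1}\over{2}}e^{|t|}$ and $x \le \sinh(x) \le {{1}\over{2}}e^{x}$ for $x \ge 0$, the inequality (\[epstein-0\]) gives $${{1}\over{2}}e^{|t|} d(\alpha,\beta) \le \cosh(t)\sinh(d(\alpha,\beta)) \le \sinh(d(q(t),\alpha)) \le {{1}\over{2}}e^{d(q(t),\alpha)},$$ and taking logarithms yields (\[epstein-1\]). If moreover $d(q(t),\alpha) \le 1$, then by the convexity of $\sinh$ we have $\sinh(d(q(t),\alpha)) \le \sinh(1) d(q(t),\alpha)$, so $${{1}\over{2}}e^{|t|} d(\alpha,\beta) \le \sinh(1) d(q(t),\alpha),$$ and taking logarithms, together with $\log(2\sinh 1)<1$, yields (\[epstein-2\]).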
Estimating the derivative $|{{\bf d}}'_{O} (C_0,C_{n+1})|$ ---------------------------------------------------------- We now combine the notation of the previous two subsections (and set $\gamma=O$). In the following lemmas we prove estimates for the two terms on the right-hand side in the inequality of Lemma \[derivative-est-0\], that are independent of $R$. .1cm We first estimate the second term in the inequality of Lemma \[derivative-est-0\]. \[derivative-est-1\] We have $${{n}\over{R}} \left( \max_{1 \le i \le n} e^{d(O,C_i)} \right) \le 1000 d(C_0(0),C_{n+1}(0)) \left( \max_{1 \le i \le n} e^{d(O,C_i)} \right),$$ where $n$ is the number of geodesics that $O(0)$ intersects between $C_0(0)$ and $C_{n+1}(0)$. From Lemma \[number\] we have $$n \le (2+R)e^{5}d(C_0(0),C_{n+1}(0))<1000 R d(C_0(0),C_{n+1}(0)),$$ which proves the lemma. We now bound the first term in the inequality of Lemma \[derivative-est-0\] under the following assumption. \[ass\] Assume that for some $\tau \in {\mathbb {D}}$ the following estimates hold for $i=0,...,n+1$, $$d(z_i,z_{i}(0)),\, d(O,C_i)< {{1}\over{4}} e^{-5}.$$ We have \[derivative-est-2\] Under Assumption \[ass\] and for $R$ large enough we have $$20e^{ -{{R}\over{4}} }\sum_{i=0}^{n+1} e^{d(O,D_i) } \le 10^{8}d(C_0(0),C_{n+1}(0) ) e^{d(C_0(0),C_{n+1}(0)) }.$$ Recall $z'_i=N_{i} \cap O$ (note $z_0=z'_0$ and $z_{n+1}=z'_{n+1}$ since $O$ is the common orthogonal between $C_0$ and $C_{n+1}$). Observe $$\label{distance-1} d(O,D_i) \le d(z'_i,z_i)+ d(z_i,w^{-}_i) =d(O,C_i)+ |a_i|<1+|a_i|.$$ It follows from (\[epstein-1\]) that $$|a_i|=d(z_i,w^{-}_i) \le d(z_i,C_{i+1}) - \log d(C_{i},C_{i+1})$$ We observe the estimate $d(z_i,C_{i+1}) \le d(z_i,z_{i+1})$. On the other hand, by (\[cuff-distance-1\]) we have $${{\bf d}}_{D_{i} }(C_i,C_{i+1})=(2+o(1))e^{-{{R}\over{4}}+ \tau \mu},$$ so for $R$ large enough (such that $|o(1)|<1$) we find that (using the estimate $|\tau \mu|<1$) $$d(C_i,C_{i+1}) \ge e^{-\frac{R}{4}-1},$$ that is $-\log d(C_{i},C_{i+1}) \le \frac{R}{4}+1$. It follows that $$|a_i| \le d(z_i,z_{i+1}) + {{R}\over{4}}+1.$$ From $$\label{est-z} |d(z_i,z_{i+1})-d(z_i(0),z_{i+1}(0))| \le d(z_i,z_{i}(0))+d(z_{i+1},z_{i+1}(0)) \le {{e^{-5}}\over{2}}$$ and $d(z_i(0),z_{i+1}(0)) \le d(C_{0}(0),C_{n+1}(0))$, we obtain $$\label{a-est} |a_i|<{{R}\over{4}}+d(C_0(0),C_{n+1}(0) ) +2.$$ .1cm Let $j_1,...,j_{l} \in \{1,..,n-1 \}$, be such that $d(z_{j_{i}},z_{j_{i}+1})> e^{-5}$ (note that $l=l(\tau)$ depends on $\tau$). Set $j_0=0$ and $j_{l+1}=n$. From (\[est-z\]) we have $d(z_{j_{i}}(0),z_{j_{i}+1}(0))> {{e^{-5}}\over{2}}$ for each $1 \le i \le l$. The intervals $(z_i(0),z_{i+1}(0))$ partition the arc between $z_0(0)$ and $z_{n+1}(0)$ so we get $$\label{distance-2} l<{{ d(C_0(0),C_{n+1}(0)) }\over{{{e^{-5}}\over{2}} }}=2 e^{5} d(C_0(0),C_{n+1}(0)).$$ .1cm Let $0 \le i \le l+1$. For $j_{i}+1 \le t < j_{i+1}$ we have $d(z_{t},z_{t+1}) \le e^{-5}$. It follows from Lemma \[lemma-est-1\] that $${{1}\over{2}}<a_{t+1}-a_t.$$ We see that in this interval the sequence $a_t$ is an increasing sequence. Combining this with (\[a-est\]) and (\[distance-1\]) we obtain $$\begin{aligned} \sum_{t= j_{i}+1}^{j_{i+1}} e^{d(O,D_t) } &\le 2e^{ {{R}\over{4}} + d(C_0(0),C_{n+1}(0)) +3 } \sum_{t=0}^{\infty} e^{-{{t}\over{2}} } \label{est-mo} \\ &<200e^{ {{R}\over{4}} + d( C_0(0),C_{n+1}(0) ) } \notag. 
\end{aligned}$$ We have $$\sum_{i=0}^{n+1} e^{d(O,D_i) } \le (l+1) \max_{i=0,...,n+1} e^{d(O,D_i) }+ \sum_{i=0}^{l+1} \sum_{t= j_{i}+1}^{j_{i+1}} e^{d(O,D_t) }$$ By (\[distance-1\]), (\[a-est\]) we have $$e^{d(O,D_i)} \le e^{ {{R}\over{4}}+d(C_0(0),C_{n+1}(0) ) +2 }.$$ Also, by (\[distance-2\]) and (\[est-mo\]) we have $$\begin{aligned} \sum_{i=0}^{l+1} \sum_{t= j_{i}+1}^{j_{i+1}} e^{d(O,D_t) } &\le \big( 2 e^{5} d(C_0(0),C_{n+1}(0))+1 \big) \times 200e^{ {{R}\over{4}} + d( C_0(0),C_{n+1}(0) ) } \\ &< 10^{6} d(C_0(0),C_{n+1}(0))e^{ {{R}\over{4}} + d( C_0(0),C_{n+1}(0) ) }\end{aligned}$$ Combining all this gives $$20e^{ -{{R}\over{4}} }\sum_{i=0}^{n+1} e^{d(O,D_i) } \le 10^{8} d(C_0(0),C_{n+1}(0)) e^{d(C_0(0),C_{n+1}(0)) }.$$ The previous two lemmas together with Lemma \[derivative-est-0\] imply \[derivative-3\] Under Assumption \[ass\] and assuming that $d(C_0,C_{n+1})<\frac{R}{5}$, for $R$ large enough we have $$|{{\bf d}}'_{O} (C_0,C_{n+1})| < 10^{9} d(C_0(0),C_{n+1}(0)) e^{d(C_0(0),C_{n+1}(0)) }.$$ By Lemma \[derivative-est-0\] the estimate $$|{{\bf d}}'_{O} (C_0,C_{n+1})| \le 20e^{ -{{R}\over{4}} } \sum_{i=0}^{n} e^{d(O,D_i) }+ {{n}\over{R}} \left( \max_{1 \le i \le n} e^{d(O,C_i)} \right)$$ holds for $R$ large enough (recall that $n$ is the number of geodesics that $O(0)$ intersects between $C_0(0)$ and $C_{n+1}(0)$). By Lemma \[derivative-est-1\] we have $${{n}\over{R}} \left( \max_{1 \le i \le n} e^{d(O,C_i)} \right) \le 1000 d(C_0(0),C_{n+1}(0)) \left( \max_{1 \le i \le n} e^{d(O,C_i)} \right).$$ By Assumption \[ass\] we have that $$d(O,C_i) \le {{1}\over{4}} e^{-5},$$ for every $0\le i \le n+1$, so we obtain $${{n}\over{R}} \left( \max_{1 \le i \le n} e^{d(O,C_i)} \right) \le 3000 d(C_0(0),C_{n+1}(0)).$$ On the other hand, by Lemma \[derivative-est-2\] we have $$20e^{ -{{R}\over{4}} }\sum_{i=0}^{n+1} e^{d(O,D_i) } \le 10^{8}d(C_0(0),C_{n+1}(0) ) e^{d(C_0(0),C_{n+1}(0)) }.$$ Putting the above estimates together proves the lemma. The family of surfaces ${{\bf S}}(R)$ ------------------------------------- We will consider geodesic laminations on a closed hyperbolic surface, and on its universal cover, the hyperbolic plane, which we will identify with the unit disk. By recording the endpoints of the leaves of a lamination of the unit disk, we can think of the lamination as a symmetric subset of $\partial{\mathbb {D}}\times \partial{\mathbb {D}}$, and by adding the diagonal, we obtain a closed subset of $\partial {\mathbb {D}}\times \partial {\mathbb {D}}$. The Hausdorff topology on such closed subsets will give us what we will call the Hausdorff topology on geodesic laminations of the unit disk. Let $R>1$ and let $P(R)$ be the pair of pants all three of whose cuffs have length $R$. We define the surface ${{\bf S}}(R)$ to be the genus two surface that is obtained by gluing two copies of $P(R)$ along the cuffs with the twist parameter equal to $+1$ (these are the Fenchel-Nielsen coordinates for ${{\bf S}}(R)$). The surface ${{\bf S}}(R)$ can also be obtained by first doubling $P(R)$ and then applying the right earthquake of length $1$, where the lamination that supports the earthquake is the union of the three cuffs of $P(R)$. By ${{\operatorname {Orb}}}(R)$ we denote the quotient orbifold of the surface ${{\bf S}}(R)$ (the quotient of ${{\bf S}}(R)$ by the group of automorphisms of ${{\bf S}}(R)$). Observe that the Riemann surface ${{\mathbb {H}}^{2}}/ \rho_{0}(\pi_1(S^{0}))$ is a regular finite degree cover of the orbifold ${{\operatorname {Orb}}}(R)$.
In particular there exists a Fuchsian group $G(R)$ such that ${{\operatorname {Orb}}}(R)={{\mathbb {H}}^{2}}/ G(R)$ and that $\rho_{0}(\pi_1(S^{0}))<G(R)$ is a finite index subgroup. It is important to point out that the lamination ${{\mathcal {C}}}_0(R)$ is invariant under the group $G(R)$. In fact, one can define the group $G(R)$ as the group of all elements of ${{\bf {PSL}}(2,{\mathbb {R} })}$ that leave invariant the lamination ${{\mathcal {C}}}_0(R) \subset {{\mathbb {H}}^{2}}$. Observe that the group $G(R)$ acts transitively on the geodesics from ${{\mathcal {C}}}_0(R)$, that is the $G(R)$-orbit of a geodesic from ${{\mathcal {C}}}_0(R)$ is equal to ${{\mathcal {C}}}_0(R)$. .1cm Although the marked family of surfaces $S(R)$ (marked by its Fenchel-Nielsen coordinates defined above) tends to $\infty$ in the Teichmüller space of genus two surfaces, the unmarked family $S(R)$ stays in some compact set in the moduli space of genus two surfaces. We prove this fact below. \[short\] For $R$ large enough, the length of the shortest closed geodesic on the surface ${{\bf S}}(R)$ is at least $e^{-5}$. Suppose that the length of the shortest closed geodesic on ${{\bf S}}(R)$ is less than $e^{-5}$ and let $\gamma$ be a lift of this geodesic to ${{\mathbb {H}}^{2}}$ (this geodesic is transverse to the lamination ${{\mathcal {C}}}_0(R)$ because otherwise $\gamma \in {{\mathcal {C}}}_0(R)$ which implies that the length of the shortest closed geodesic on ${{\bf S}}(R)$ is equal to $R$). Then by Lemma \[lemma-est-1\] every subsegment of $\gamma$ can intersect at most $R$ geodesics from ${{\mathcal {C}}}_0(R)$, which means that $\gamma$ intersects at most $R$ geodesics from ${{\mathcal {C}}}_0(R)$. This is impossible since $\gamma$ is a lift of a closed geodesic that is transverse to ${{\mathcal {C}}}_0(R)$ so it has to intersect infinitely many geodesics from ${{\mathcal {C}}}_0(R)$. This proves the lemma. The conclusion is that the family of (unmarked) Riemann surfaces ${{\bf S}}(R)$ stays in some compact set in the moduli space of genus two surfaces. One can describe the accumulation set of the family ${{\bf S}}(R)$ in the moduli space as follows. Let $P$ be a pair of pants that is decomposed into two ideal triangles so that all three shears between these two ideal triangles are equal to $1$. Then all three cuffs have the length equal to $2$. Let ${{\bf S}}_t$, $t \in [0,1]$ be the genus two Riemann surface that is obtained by gluing one copy of $P$ onto another copy of $P$ (along the three cuffs) and twisting by $+2t$ along each cuff. The “circle” of surfaces ${{\bf S}}_t$ is the accumulation set of ${{\bf S}}(R)$, when $R \to \infty$. Note that the edges of the ideal triangles that appear in the pants $P$ are the limits of the ($R$ long) cuffs from the pairs of pants $P(R)$. .1cm Then we have the induced circle of orbifolds ${{\operatorname {Orb}}}_t$. Let $G_t$ be a circle of Fuchsian groups such that ${{\operatorname {Orb}}}_t={{\mathbb {H}}^{2}}/ G_t$. By ${{\mathcal {C}}}_{0,t}$ we denote the lamination in ${{\mathbb {H}}^{2}}$ that is the lift of the corresponding ideal triangulation on ${{\bf S}}_t$. Then up to a conjugation by elements of ${{\bf {PSL}}(2,{\mathbb {R} })}$, the circle of groups $G_t$ is the accumulation set of the groups $G(R)$, when $R \to \infty$, and the circle of laminations ${{\mathcal {C}}}_{0,t}$ is the accumulation set of the laminations ${{\mathcal {C}}}_0(R)$. We observe that the group $G_t$ acts transitively on ${{\mathcal {C}}}_{0,t}$. 
Quasisymmetric maps and hyperbolic geometry ------------------------------------------- In this subsection we state and prove a few preparatory statements about quasisymmetric maps and the complex distances between geodesics in ${ {\mathbb {H}}^{3}}$, culminating in Theorem \[theorem-new-1\]. We say that a geodesic lamination $\lambda$ on ${{\mathbb {H}}^{2}}$ is nonelementary if neither of the following holds: 1. There exists $z \in \partial{{{\mathbb {H}}^{2}}}$ that is an endpoint of every leaf of $\lambda$. 2. There exists a geodesic $O \subset {{\mathbb {H}}^{2}}$ that is orthogonal to every leaf of $\lambda$. Of course, $\lambda$ has at least three elements if $\lambda$ is nonelementary. Moreover if $\lambda$ is nonelementary then there is a sublamination $\lambda' \subset \lambda$ such that $\lambda'$ contains exactly three geodesics and such that $\lambda'$ is nonelementary. .1cm Let $\lambda$ be a geodesic lamination, all of whose leaves have disjoint closures. By $\partial{\lambda}$ we denote the union of the endpoints of leaves from $\lambda$. We let $\iota_{\lambda}:\partial{\lambda} \to \partial{\lambda}$ be the involution such that $\iota_{\lambda}$ exchanges the two points of $\partial{\alpha}$, for every leaf $\alpha \in \lambda$. .1cm We say that a quasisymmetric map $g:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ is $K$-quasisymmetric if for every four points on $\partial{{{\mathbb {H}}^{2}}}$ with cross ratio equal to $-1$, the cross ratio of the four image points is within $\log K$ hyperbolic distance of $-1$ for the hyperbolic metric on ${\mathbb {C}}\setminus \{0,1, \infty\}$ (observe that a map is $K$-quasisymmetric if and only if it has a $K'$-quasiconformal extension to $\partial{{ {\mathbb {H}}^{3}}}$ for some $K'>1$). .1cm If $\alpha$ and $\beta$ are oriented geodesics in ${ {\mathbb {H}}^{3}}$, by ${{\bf d}}(\alpha,\beta)$ we denote their unsigned complex distance. \[lemma-G\] Suppose that $\lambda$ is nonelementary, and $f:\partial{\lambda} \to \partial{{ {\mathbb {H}}^{3}}}$ is such that $${{\bf d}}(f(\alpha),f(\beta))={{\bf d}}(\alpha,\beta),$$ for all $\alpha, \beta \in \lambda$. Then there is a unique Möbius transformation $T$ such that either 1. $T=f$ on $\partial{\lambda}$, or 2. $T=f \circ \iota_{\lambda}$ on $\partial{\lambda}$. The second case can only occur when all the leaves of $\lambda$ have disjoint closures. We will prove two special cases of Lemma \[lemma-G\] before we prove the lemma. .1cm If the endpoints of $\alpha$ are $x$ and $y$ and $\alpha$ is oriented from $x$ to $y$ then we write $\partial{\alpha}=(x,y)$. The following lemma is elementary. \[lemma-K\] For every $d \in {\mathbb {C}}/ 2 \pi i {\mathbb {Z}}/ {\mathbb {Z}}_2$, with $d \ne 0$, there exists a unique $s \in {\mathbb {C}}/ 2\pi i {\mathbb {Z}}$ such that for two oriented geodesics $\alpha$ and $\beta$ we have ${{\bf d}}(\alpha,\beta)=d$ if and only if $\partial{\beta}=(x,y)$ and $y=T_{s,\alpha}(x)$, where $T_{s,\alpha}$ is the translation by $s$ along $\alpha$. \[proposition-A\] Suppose that $\alpha_0,\alpha_1,\alpha_2$ are oriented geodesics in ${ {\mathbb {H}}^{3}}$ for which ${{\bf d}}(\alpha_i,\alpha_j)\ne 0$ for $i\ne j$, and $\alpha_0,\alpha_1,\alpha_2$ do not have a common orthogonal. Suppose $\alpha'_0,\alpha'_1,\alpha'_2$ are such that ${{\bf d}}(\alpha_i,\alpha_j)={{\bf d}}(\alpha'_i,\alpha'_j)$. Then we can find a unique $T \in {{\bf {PSL}}(2,{\mathbb {C}})}$ that satisfies one of the two conditions 1. $T(\alpha_i)=\alpha'_i$, $i=0,1,2$, 2.
$T(\alpha_i)=-\alpha'_i$, $i=0,1,2$, where $-\alpha'_i$ is $\alpha'_i$ with the orientation reversed, Given $\alpha_i$ and $\alpha'_i$ satisfying the hypotheses of the proposition we can assume that $\alpha_i=\alpha'_i$ for $i=0,1$. Let $d_i={{\bf d}}(\alpha_i,\alpha_2)$, and let $T_i=T_{d_{i},\alpha_{i}}$ as in Lemma \[lemma-K\]. Then by Lemma \[lemma-K\] for any $\beta$ for which ${{\bf d}}(\alpha_i,\beta)=d_i$ we have $T_i(x)=y$ where $\partial{\beta}=(x,y)$. Thus $(T_1^{-1} \circ T_{0})(x)=x$. Since $T_1 \ne T_0$ (because $\alpha_0 \ne \alpha_1$), we see that the equation ${{\bf d}}(\alpha_i,\beta)=d_i$ (in $\beta$) has at most as many solutions as the equation $(T_1^{-1} \circ T_{0})(x)=x$, $x \in \partial{{{\mathbb {H}}^{2}}}$. Therefore ${{\bf d}}(\alpha_i,\beta)=d_i$ has at most two solutions and it has at most one solution if $T_1^{-1} \circ T_{0}$ has a unique fixed point on $\partial{{{\mathbb {H}}^{2}}}$. .1cm On the other hand if we let $Q$ be the Möbius transformation such that $Q(\alpha_i)=-\alpha_i$, for $i=0,1$ (such $Q$ exists since ${{\bf d}}(\alpha_i,\alpha_j)\ne 0$ for $i\ne j$). Let ${\widehat}{\alpha}_2=-Q(\alpha_2)$. Then ${{\bf d}}(\alpha_i,{\widehat}{\alpha}_2)={{\bf d}}(\alpha_i,\alpha_2)$ for $i=0,1$. Therefore ${\widehat}{\alpha}_2 \ne \alpha_2$ since $\alpha_0,\alpha_1$ and $\alpha_2$ do not have a common orthogonal. We conclude that $\alpha'_2=\alpha_2$ or $\alpha'_2={\widehat}{\alpha}_2$. \[proposition-P\] Suppose that distinct geodesics $\alpha_0$ and $\alpha_1$ in ${{\mathbb {H}}^{2}}$ have a common endpoint $x \in \partial{{{\mathbb {H}}^{2}}}$, and let $\beta$ be another geodesic in ${{\mathbb {H}}^{2}}$ such that $x$ is not an endpoint of $\beta$. Set $E=\partial{\alpha_0} \cup \partial{\alpha_1} \cup \partial{\beta}$. Let $f:E \to \partial{{ {\mathbb {H}}^{3}}}$ be such that ${{\bf d}}(f(\alpha_i),f(\beta))={{\bf d}}(\alpha_i,\beta)$, $i=0,1$. Then there exists a unique Möbius transformation $T$ such that $f=T$ on $E$. We can assume that the restriction of $f$ to $\partial{\alpha_0} \cup \partial{\alpha_1}$ is the identity. If $\partial{\beta} \subset \partial{\alpha_0} \cup \partial{\alpha_1}$, then $|E|=3$ and we are finished. If $\partial{\beta} \cap \partial{\alpha_0}=\{y\}$ for some $y$, then we can write $\partial{\beta}=(y,z)$ (or $(z,y)$), and then $\partial{f(\beta)}=(y,z')$ (or respectively $(z',y)$). But then $z=z'$ because ${{\bf d}}(f(\alpha_1),f(\beta))={{\bf d}}(\alpha_1,\beta)$ (here we use Lemma \[lemma-K\]). Likewise if $\partial{\beta} \cap \partial{\alpha_1} \ne \emptyset$. .1cm If $\partial{\beta} \cap (\partial{\alpha_0} \cup \partial{\alpha_1}) =\emptyset$, then by Lemma \[lemma-K\] $\partial{\beta}=(y,z)$ and $\partial{f(\beta)}=(y',z')$ and $z=T_0(y)=T_1(y)$, $z'=T_0(y')=T_1(y')$, where $T_i$ translates along $\alpha_i$, and then $T^{-1}_0 \circ T_1$ has $x$ as one of its fixed points, so the other must be $y$, so $y'=y$, so $f(\beta)=\beta$. Now we are ready to prove Lemma \[lemma-G\]. First suppose that $\lambda$ has two distinct leaves $\alpha,\beta$ with a common endpoint $x$. Then there is a unique $T \in {{\bf {PSL}}(2,{\mathbb {C}})}$ for which $T=f$ on $\partial{\alpha}\cup \partial{\beta}$. By Proposition \[proposition-P\] we have $T(\gamma)=f(\gamma)$, whenever $\gamma \in \lambda$ and $x$ does not belong to $\partial{\gamma}$. Because $\lambda$ is nonelementary we can find at least one such $\gamma$. .1cm Now suppose $\delta \in \lambda$ and $x \in \partial{\delta}$. We want to show $T(\delta)=f(\delta)$. 
We can find $T' \in {{\bf {PSL}}(2,{\mathbb {C}})}$ such that $T'=f$ on $\partial{\alpha} \cup \partial{\delta}$. By Proposition \[proposition-P\], $T'(\gamma)=f(\gamma)$, so $T$ and $T'$ agree on $\partial{\alpha} \cup \partial{\delta}$, so $T=T'$, so $f(\delta)=T(\delta)$, and we are done. .1cm Now suppose that any two distinct leaves of $\lambda$ have disjoint closures. Then we can find three leaves $\alpha_i$, $i=0,1,2$, with no common orthogonal (because $\lambda$ is nonelementary). By Proposition \[proposition-A\] we can find a unique $T \in {{\bf {PSL}}(2,{\mathbb {C}})}$ such that $T=f$ on $E=\bigcup_{i=0}^{2} \partial{\alpha_i}$, or $T=f \circ \iota_{\lambda}$ on $E$. In the latter case we can replace $f$ with $f \circ \iota_{\lambda}$. In either case we can assume that $T$ is not the identity. .1cm Now given any $\beta \in \lambda$, we want to show that $f(\beta)=\beta$. For $i=1,2$ let $Q_i$ be the 180 degree rotation around $O_i$, the common orthogonal to $\alpha_0$ and $\alpha_i$. If $f(\beta)\ne \beta$, then $f(\beta)=-Q_i(\beta)$ for $i=1,2$, so $Q^{-1}_0 \circ Q_1$ fixes the endpoints of $\beta$. But $Q^{-1}_0 \circ Q_1$ fixes the endpoints of $\alpha_0$, and $\beta \ne \alpha_0$, so this is impossible. So $f(\beta)=\beta$ for every $\beta$, and we are finished. We observe that Lemma \[lemma-G\] holds even if we do not require the lamination to be closed. Let $\lambda$ be a geodesic lamination on ${{\mathbb {H}}^{2}}$. An effective radius for $\lambda$ is a number $M>0$ such that every open hyperbolic disc of radius $M$ in ${{\mathbb {H}}^{2}}$ intersects $\lambda$ in a (not necessarily closed) nonelementary sublamination. We observe that the condition that the intersection of $\lambda$ and the open disc centred at $z$ of radius $M$ is nonelementary is open in both $z$ and $\lambda$. The following proposition follows easily from this observation. \[prop-new-1\] Let $\Lambda$ be a family of geodesic laminations on ${{\mathbb {H}}^{2}}$ such that 1. \[group invariant\] if $\lambda \in \Lambda$, and $g \in {{\bf {PSL}}(2,{\mathbb {R} })}$, then $g(\lambda) \in \Lambda$, 2. \[Hausdorff closed\] $\Lambda$ is closed (and hence compact) in the Hausdorff topology on the space of geodesic laminations modulo ${{\bf {PSL}}(2,{\mathbb {R} })}$, 3. if $\lambda \in \Lambda$ then $\lambda$ is nonelementary. Then we can find $M>0$ such that $M$ is an effective radius for every $\lambda \in \Lambda$. We call such a family a closed invariant family of non-elementary laminations. For any $R_1>0$ we let $\Lambda(R_1)$ be the closure of $\bigcup_{R \ge R_1} {{\mathcal {C}}}_0(R)$ under properties \[group invariant\] and \[Hausdorff closed\] in Proposition \[prop-new-1\]. We observe that taking the Hausdorff closure just adds the translates of all the ${{\mathcal {C}}}_{0,t}$ under ${{\bf {PSL}}(2,{\mathbb {R} })}$, where ${{\mathcal {C}}}_{0,t}$ was defined in the previous subsection. Hence $\Lambda(R_1)$ is a closed invariant family of nonelementary laminations. We say that a lamination $\lambda$ is *unflippable* if it has two distinct leaves with a common endpoint, or if the involution $\iota_\lambda$ is not continuous. The latter occurs if and only if there is a point of $\partial \lambda$ that is the limit of a sequence leaves of $\lambda$ whose diameter go to zero (or $\lambda$ has two distinct leaves with a common endpoint). This will always occur when $\lambda$ is invariant by a nonelementary Fuchsian group $G$, and $\lambda$ has a recurrent (or closed) leaf in ${{\mathbb {H}}^{2}}/G$. 
In particular, a nonempty lamination $\lambda$ that is invariant under a cocompact group is unflippable (and nonelementary). We conclude that all of the laminations in $\Lambda(R_1)$ are unflippable. We can now prove that a quasisymmetric map that locally preserves complex distances on an unflippable lamination is Möbius. \[proposition-C\] Suppose that $\lambda$ is an unflippable nonelementary lamination. Suppose that $M$ is an effective radius for $\lambda$, and $f:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ is a continuous embedding such that ${{\bf d}}(f(\alpha),f(\beta)) ={{\bf d}}(\alpha,\beta)$, for all $\alpha,\beta \in \lambda$ such that $d(\alpha,\beta) \le 3M$. Then $f$ is the restriction of a Möbius transformation. For $z \in {{\mathbb {H}}^{2}}$ let $D_z$ be the open disc of radius $M$ centred at $z$, and $\lambda_z$ be the leaves of $\lambda$ that meet $D_z$. Because $M$ is an effective radius, $\lambda_z$ is nonelementary. Therefore there is a unique $T_z \in {{\bf {PSL}}(2,{\mathbb {C}})}$ such that either $T_z=f$ on $\partial{\lambda_z}$ or $T_z=f \circ \iota_{\lambda}$ on $\partial{\lambda_z}$. Now if $d(z,z') \le M$, then ${{\bf d}}(f(\alpha),f(\beta)) ={{\bf d}}(\alpha,\beta)$ for all $\alpha,\beta \in \lambda_z \cup \lambda_{z'}$, and $\lambda_z \cup \lambda_{z'}$ is nonelementary, so $T_z=T_{z'}$. We conclude that there is one $T \in {{\bf {PSL}}(2,{\mathbb {C}})}$ such that $T=f$ or $T=f \circ \iota_{\lambda}$ on all of $\partial{\lambda}$. But in the latter case, $\iota_\lambda$ would be continuous, which is impossible since $\lambda$ is unflippable. We now characterize the sequences of $K$-quasiconformal maps whose dilatations do not go to 1. \[lemma-new-1\] Let $K_1>K>1$. Suppose that for $m \in {\mathbb {N}}$, $f_m:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ is $K_1$-quasisymmetric but not $K$-quasisymmetric. Then, after passing to a subsequence if necessary, we have that there exist $h_m, q_m \in {{\bf {PSL}}(2,{\mathbb {C}})}$ such that $q_m \circ f_m \circ h_m \to f_{\infty} : \partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ is a $K_1$-quasisymmetric map and $f_{\infty}$ is not a restriction of a Möbius transformation on $\partial{{{\mathbb {H}}^{2}}}$. Fix $a,b,c,d \in \partial{{{\mathbb {H}}^{2}}}$ such that the cross ratio of these four points is equal to 1. Since $f_m$ is not K-quasisymmetric there exist points $a_m,b_m,c_m,d_m \in \partial{{{\mathbb {H}}^{2}}}$ whose cross ratio is equal to one and such that the cross ratio of the points $f_m(a_m),f_m(b_m),f_m(c_m),f_m(d_m) \in \partial{{ {\mathbb {H}}^{3}}}$ stays outside some closed disc $U$ centred at the point $1 \in {\mathbb {C}}$ for every $m$. We let $h_m$ be the Möbius transformation that maps $a,b,c,d$ to $a_m,b_m,c_m,d_m$. We then choose $q_m \in {{\bf {PSL}}(2,{\mathbb {C}})}$ such that $q_m \circ f_m \circ h_m$ fixes the points $a,b,c$. Then for each $m$ the map $q_m \circ f_m \circ h_m$ is $K_1$-quasisymmetric and it fixes the points $a,b,c$. .1cm The standard normal family argument states that given $L>1$, a sequence of $L$-quasisymmetric maps that all fix the same three distinct points, converges uniformly to a $L$-quasisymmetric map (after passing onto a subsequence if necessary). Therefore, we have $q_m \circ f_m \circ h_m \to f_{\infty}$. 
Moreover the cross ratio of the points $f_{\infty}(a),f_{\infty}(b), f_{\infty}(c), f_{\infty}(d)$ lies outside the disc $U$ so we conclude that $f_{\infty}$ is not a Möbius transformation on $\partial{{{\mathbb {H}}^{2}}}$. We can now conclude the constant of quasisymmetry for $f$ is close to 1 when $f$ changes the complex distance of neighbouring geodesics a sufficiently small amount. \[theorem-distance-qc\] Let $\Lambda$ be a closed invariant family of unflippable nonelementary laminations, and let $K_1 \ge K > 1$. Then there exists $\delta=\delta(K_1,K,\Lambda) > 0$ and $T=T(\Lambda)$ such that the following holds. If $\lambda \in \Lambda$ and $f:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ is a $K_1$-quasisymmetric map, and $$|{{\bf d}}(f(\alpha),f(\beta))-{{\bf d}}(\alpha,\beta)| \le \delta,$$ for all $\alpha,\beta \in \lambda$ such that $d(\alpha,\beta) \le T$. Then $f$ is $K$-quasisymmetric. By Proposition \[prop-new-1\], we can find $M=M(\Lambda)>0$ such that $M$ is an effective radius for every $\lambda \in \Lambda$. We let $T = 3M$. Suppose that there is no good $\delta$. Then we can find $\lambda_m \in \Lambda$, $f_m$ (for $m \in {\mathbb {N}}$) such that $$|{{\bf d}}(f(\alpha),f(\beta))-{{\bf d}}(\alpha,\beta)| \to 0, m \to \infty,$$ uniformly for all $\alpha,\beta \in \lambda_m$ for which $d(\alpha,\beta) \le T$, but for which $f$ is not $K$-quasisymmetric. Passing to a subsequence and applying Lemma \[lemma-new-1\], we obtain $\lambda_m \to \lambda_{\infty} \in \Lambda$, and $f_m \to f_{\infty}:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ such that $f_{\infty}$ is a $K_1$-quasisymmetric map that is not a Möbius transformation on $\partial{{{\mathbb {H}}^{2}}}$. Moreover ${{\bf d}}(f_{\infty}(\alpha),f_{\infty}(\beta))={{\bf d}}(\alpha,\beta)$ for all $\alpha,\beta \in \lambda_{\infty}$ with $d(\alpha,\beta) \le T=3M$. Then by Proposition \[proposition-C\], $f_{\infty}$ is a Möbius transformation, a contradiction. We can now derive a corollary, which is our object for this section: \[theorem-new-1\] Let $K_1 \ge K>1$, and let $R_1 = 10$. There exists $\delta_1=\delta_1(K,K_1)>0$ and a universal constant $T_1$ such that the following holds. Suppose that $R \ge R_1$, and $f:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ is a $K_1$-quasisymmetric map, and $$|{{\bf d}}(f(\alpha),f(\beta))-{{\bf d}}(\alpha,\beta)| \le \delta_1,$$ for all $\alpha,\beta \in {{\mathcal {C}}}_0(R)$ such that $d(\alpha,\beta) \le T_1$. Then $f$ is $K$-quasisymmetric. This follows immediately from Theorem \[theorem-distance-qc\], because $\Lambda(R_1)$ is a closed invariant family of unflippable noninvariant laminations. Observe that $T_1=3M_1$ is a universal constant, where $M_1$ is the effective radius of every lamination in $\Lambda(R_1)$. Proof of Theorem \[geometry-1\] -------------------------------- In this section we will verify Assumption \[ass\] holds when the quasisymmetry constant for $f_{\tau}$ is close to 1. This will permit us, thanks to Lemma \[derivative-3\], to verify the hypotheses of Theorem \[theorem-new-1\] and thereby improve the quasisymmetry constant for $f_{\tau}$. We thus obtain an inductive argument for Theorem \[geometry-1\]. .1cm This lemma is an abstraction of its corollary, Corollary \[cor-1\] where $A,B,C$ will be $C_0(0),C_i(0),C_{n+1}(0)$. \[ll-1\] For all $\delta_2,T_1>0$ we can find $K>1$ such that if 1. A,B,C are oriented geodesics in ${{\mathbb {H}}^{2}}$, $d(A,C)>0$, and $B$ separates $A$ and $C$, 2. 
$d(A,C) \le T_1$, 3. O is the common orthogonal for $A$ and $C$, 4. $x=A \cap O$, $y=B \cap O$, 5. $f:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ is $K$-quasisymmetric, 6. $\partial{A'}=f(\partial{A})$, $\partial{B'}=f(\partial{B})$, and $\partial{C'}=f(\partial{C})$ (taking into account the order of the endpoints), 7. $O'$ is the common orthogonal to $A'$ and $C'$, and $x'=A' \cap O'$, 8. $N$ is the common orthogonal to $O'$ and $B'$, and $y'=N \cap O'$, then $d(O',B') \le \delta_2$, and $|{{\bf d}}_{O'}(x',y')-{{\bf d}}(x,y)| \le \delta_2$. First suppose that $d(A,C)$ is small, say $d(A,C) \le T_2$ for some $T_2>0$, and $f$ is $2$-quasisymmetric. Then by applying a Möbius transformation to the range and domain of $f$ we can assume that $\partial{A}=\partial{A'}=\{0,\infty\}$, and $1 \in \partial{O}$, $1 \in \partial{O'}$ (and hence $\partial{O}=\partial{O'}=\{-1,1\}$). Note that while $f(0)=0$ and $f(\infty)=\infty$, $f(1)$ is not necessarily equal to 1. It follows that $\partial{C}=\{c,{{1}\over{c}} \}$, for $c$ real and small (we can assume $c>0$), and $\partial{C'}=\{c',{{1}\over{c'}} \}$ where $c'$ is small and $c'=f(c)$, ${{1}\over{c'}}=f( {{1}\over{c}})$. .1cm We let $\partial{B}=\{b_0,b_1\}$, where $b_0,{{1}\over{b_{1}}} \in (0,c)$. Then $|f(b_0)|<10|c'|$, and $|f(b_1)|>{{1}\over{10}}|{{1}\over{c'}}|$ because $f$ is $2$-quasisymmetric and $f$ fixes $0,\infty$. Therefore, by choosing $T_2$ to be small enough we can arrange that $d(O',B')$, $d(x,y)$ and $d(x',y')$ are as small as we want, so we conclude that for every $\delta_2>0$ there exists $T_2>0$ such that if $d(A,C) \le T_2$, and $f$ is $2$-quasisymmetric then $$d(O',B'), |d(x,y)-d(x',y')| <\delta_2.$$ So we need only show that for every $\delta_2$ and $T_1$ there exists $K>1$ such that if $d(A,C) \in [T_2,T_1]$, where $T_2=T_2(\delta_2)$, and all other hypotheses hold, then $$\label{eka-1} d(O',B'), |d(x,y)-d(x',y')| <\delta_2.$$ .1cm Suppose that this statement is false. Then we can find a sequence of $A_n,B_n,C_n$ and $f_n$ for which $f_n$ is $K_n$-quasisymmetric, $K_n \to 1$, but for which (\[eka-1\]) does not hold. Then normalizing and passing to a subsequence we obtain $A,B,C$ in the limit, and $f_n \to {{\operatorname {id}}}$. So $A'_n \to A'=A$, $B'_n \to B'=B$, and $C'_n \to C'=C$. Moreover, because the common orthogonal to two geodesics varies continuously when the complex distance is non-zero, $O_n \to O$ and $O'_n \to O'$, so $d(O',B'_n) \to 0$, and ${{\bf d}}(B'_n,O) \ne 0$. Also $N'_n \to N$, and $(x_n,y_n,x'_n,y'_n) \to (x,y,x',y')$, and $x'=x$, $y'=y$ so $$|d(x'_n,y'_n)-d(x_n,y_n)| \to 0.$$ We conclude that (\[eka-1\]) holds for large enough $n$, a contradiction. Assume that for some $\tau \in {\mathbb {D}}$ the representation $\rho_{\tau}:\pi_1(S^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ is quasifuchsian and let $f_{\tau}:\partial{{{\mathbb {H}}^{2}}} \to \partial{{ {\mathbb {H}}^{3}}}$ be the normalised equivariant quasisymmetric map (that conjugates $\rho_{0}(\pi_1(S^{0}))$ to $\rho_{\tau}(\pi_1(S^{0}))$). .1cm Here we show that Assumption \[ass\] holds if $f_{\tau}$ is sufficiently close to being conformal. \[cor-1\] Given $T_1$ we can find $K_1>1$ such that if $f_{\tau}$ is $K_1$-quasisymmetric then the following holds. Let $C_0(0),C_{n+1}(0)$ be geodesics in ${{\mathcal {C}}}_0(R)$ such that $d(C_0(0),C_{n+1}(0)) \le T_1$, and let $C_i(0) \in {{\mathcal {C}}}_0(R)$, $i=1,...,n$, denote the intermediate geodesics. 
Also, $O(0), O(\tau)$, $z_i(0),z_i$ and $C_i(\tau)$ are defined as usual. Then $$|d(z_i,z_{i+1})-d(z_i(0),z_{i+1}(0))| <{{e^{-5} }\over{4}},$$ and $d(O,C_i) \le {{e^{-5} }\over{4}}$. We apply the previous lemma with $\delta_2={{ e^{-5} }\over{16}}$. Then $d(O,C_i)<{{e^{-5}}\over{16}}$. Furthermore $$|d(z'_0,z'_i)-d(z_0(0),z_i(0))| <{{e^{-5} }\over{16}},$$ and $$|d(z'_0,z'_{i+1})-d(z_0(0),z_{i+1}(0))| <{{e^{-5} }\over{16}},$$ so $$|d(z'_i,z'_{i+1})-d(z_i(0),z_{i+1}(0))| <{{e^{-5} }\over{8}}.$$ Moreover $$d(z_i, z_{i+1}) \le d(z'_i, z'_{i+1}) + d(O, C_i) + d(O, C_{i+1})$$ and therefore $$|d(z_i,z_{i+1})-d(z_i(0),z_{i+1}(0))| <{{e^{-5} }\over{4}}.$$ We are now ready to complete the proof of Theorem \[geometry-1\]. Let $R>R_1=10$. Since the space of quasifuchsian representations of the group $\pi_1(S^{0})$ is open (in the space of all representations), there exists $0<\epsilon_1<1$ so that the disc ${\mathbb {D}}(0,\epsilon_1)$ (of radius $\epsilon_1$ and centred at $0$) is the maximal disc such that $f_{\tau}$ is $K_1$-quasisymmetric on all of ${\mathbb {D}}(0,\epsilon_1)$, where $K_1$ is the constant from Corollary \[cor-1\]. We can choose such $\epsilon_1$ to be positive because the map $f_{0}$ is $1$-quasisymmetric and given any $K>1$ we can find an open neighbourhood of $0$ in the $\tau$ plane such that in that neighbourhood we have that every $f_{\tau}$ is $K$-quasisymmetric. By that corollary, Assumption \[ass\] holds for $f_{\tau}$, for all $\tau \in {\mathbb {D}}(0,\epsilon_1)$. Let $C_0(0), C_{n+1}(0) \in {{\mathcal {C}}}_0(R)$ be such that $d(C_0(0),C_0(n+1)) \le T_1$, where $T_1$ is the constant from Theorem \[theorem-new-1\]. From Lemma \[derivative-3\], for $R$ large enough and for every $\tau \in {\mathbb {D}}(0,\epsilon_1)$, we have $$|{{\bf d}}'_{O} (C_0,C_{n+1})| \le 10^{9} T_1e^{T_{1}}.$$ This yields $$\label{ajde} |{{\bf d}}_{O} (C_0,C_{n+1})-{{\bf d}}_{O(0)} (C_0(0),C_{n+1}(0))| \le 10^{9} \epsilon_1 T_1 e^{T_{1}},$$ for every $\tau \in {\mathbb {D}}(0,\epsilon_1)$. .1cm Let $0<\delta_1=\delta_1(\sqrt{K_1},K_1)$, be the corresponding constant from Theorem \[theorem-new-1\]. We show $$\epsilon_1 \ge {{\delta_1}\over{10^{9} T_1 e^{T_{1}}}}.$$ Assume that this is not the case. Then from (\[ajde\]) we have that for every $\tau \in {\mathbb {D}}(0,\epsilon_1)$ the map $f_{\tau}$ is $\sqrt{K_1}$-quasisymmetric (and hence for $\tau \in \overline{ {\mathbb {D}}(0,\epsilon_1)}$). This implies that $f_{\tau}$ is $K_1$-quasisymmetric for every $\tau \in {\mathbb {D}}(0,\epsilon)$ for some $\epsilon>\epsilon_1$. But this contradicts the assumption that ${\mathbb {D}}(0,\epsilon_1) $ is the maximal disc so that every $f_{\tau}$ is $K_1$-quasisymmetric. .1cm Set $${\widehat}{\epsilon}= {{\delta_1}\over{10^{9} T_1 e^{T_{1}}}}.$$ Then for every $\tau \in {\mathbb {D}}(0,{\widehat}{\epsilon})$, and for $R$ large enough the map $f_{\tau}$ is $K_1$-quasisymmetric. We prove the other estimate in Theorem \[geometry-1\] as follows. First of all, by the Slodkowski extension theorem (for the statement and proof of this theorem see [@fletcher-markovic]), we can extend the maps $f_{\tau}$ to quasiconformal maps of the sphere $\partial{{ {\mathbb {H}}^{3}}}$ such that the Beltrami dilatation $$\mu_{\tau}(z)=\frac{\overline{\partial} f_{\tau}}{\partial f_{\tau} }(z)$$ varies holomorphically in $\tau$ for every fixed $z \in \partial{{ {\mathbb {H}}^{3}}}$. 
Observe that $\mu_{0}(z)=0$, and $|\mu_{\tau}(z)|<1$ for every $\tau$ and $z$ (recall that the absolute value of the Beltrami dilatation of any quasiconformal map is less than 1). For a fixed $z$ we then apply the Schwartz lemma to the function $\mu_{\tau}(z)$, and this yields the desired estimate from Theorem \[geometry-1\]. Surface group representations in $\pi_1({{\bf {M}}^{3} })$ ========================================================== Labelled collection of oriented skew pants ------------------------------------------ From now on ${{\bf {M}}^{3} }={ {\mathbb {H}}^{3}}/ {{\mathcal {G}}}$ is a fixed closed hyperbolic three manifold and ${{\mathcal {G}}}$ a suitable Kleinian group. By $\Gamma^{*}$ and $\Gamma$ we denote respectively the collection of oriented and unoriented closed geodesics in ${{\bf {M}}^{3} }$. By $-\gamma^{*}$ we denote the opposite orientation of an oriented geodesic $\gamma^{*} \in \Gamma^{*}$. .1cm Let $\Pi^{0}$ be a topological pair of pants. Recall (from the beginning of Section 2) that a pair of pants in a closed hyperbolic 3-manifold ${{\bf {M}}^{3} }$ is an injective homomorphism $\rho:\pi_1(\Pi^{0}) \to \pi_1({{\bf {M}}^{3} })$, up to conjugacy. A pair of pants in ${{\bf {M}}^{3} }$ is determined by (and determines) a continuous map $f:\Pi^{0} \to {{\bf {M}}^{3} }$, up to homotopy. Moreover, the representation $\rho$ induces a representation $$\rho:\pi_1(\Pi^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})},$$ up to conjugacy. Fix an orientation and a base point on $\Pi^{0}$. We equip $\Pi^0$ with an orientation preserving homeomorphism $\omega:\Pi^{0} \to \Pi^{0}$, of order three that permutes the cuffs and let $\omega^{i}(C)$, $i=0,1,2$, denote the oriented cuffs of $\Pi^0$. We may assume that the base point of $\Pi^{0}$ is fixed under $\omega$. By $\omega:\pi_1(\Pi^0) \to \pi_1(\Pi^0)$ we also denote the induced isomorphism of the fundamental group (observe that the homeomorphism $\omega:\Pi^{0} \to \Pi^{0}$ has a fixed point that is the base point for $\Pi^0$ so the isomorphism of the fundamental group is well defined). Choose $c \in \pi_1(\Pi^0)$ to be an element in the conjugacy class that corresponds to the cuff $C$, such that $\omega^{-1}(c) c \omega(c)={{\operatorname {id}}}$. \[def-adm\] Let $\rho:\pi_1(\Pi^{0}) \to {{\bf {PSL}}(2,{\mathbb {C}})}$ be a faithful representation. We say that $\rho$ is an admissible representation if $\rho(\omega^i(c))$ is a loxodromic Möbius transformation, and $${{\bf {hl}}}(\omega^{i}(C) )={{{{\bf l}}(\omega^{i}(C) )}\over{2}},$$ where ${{\bf l}}(\omega^{i}(C))$ is chosen so that $ -\pi <{\operatorname{Im}}({{\bf l}}(\omega^{i}(C))) \le \pi$. \[def-skew\] Let $\rho:\pi_1(\Pi^{0}) \to {{\mathcal {G}}}$, be an admissible representation. A skew pants $\Pi$ is the conjugacy class $\Pi=[\rho]$. The set of all skew pants is denoted by ${{\bf {\Pi}}}$. For $\Pi \in {{\bf {\Pi}}}$ we define ${{\mathcal {R}}}(\Pi) \in {{\bf {\Pi}}}$ as follows. Let $\rho:\pi_1(\Pi^{0}) \to {{\mathcal {G}}}$ be a representation such that $[\rho]=\Pi$, and set $\rho(\omega^i(c))=A_i \in {{\mathcal {G}}}$. Define the representation $\rho_1:\pi_1(\Pi^{0}) \to {{\mathcal {G}}}$ by $\rho_1(\omega^{-i}(c) )=A^{-1}_i$. One verifies that $\rho_1$ is well defined and we let ${{\mathcal {R}}}(\Pi)=[\rho_1]$. The mapping ${{\mathcal {R}}}:{{\bf {\Pi}}}\to {{\bf {\Pi}}}$ is a fixed point free involution. 
.1cm For $\Pi \in {{\bf {\Pi}}}$ such that $\Pi=[\rho]$ we let $ \gamma^{*}(\Pi,\omega^{i}(c)) \in \Gamma^{*}$ denote the oriented geodesic that represents the conjugacy class of $\rho(\omega^i(c))$. Observe the identity $ \gamma^{*}({{\mathcal {R}}}(\Pi),\omega^{i}(c))= -\gamma^{*}(\Pi,\omega^{-i}(c))$. The set of pairs $(\Pi,\gamma^{*})$, where $\gamma^{*}=\gamma^{*}(\Pi,\omega^{i}(c))$, for some $i=0,1,2$, is called the set of marked skew pants and denoted by ${{\bf {\Pi}}}^{*}$. .1cm There is the induced (fixed point free) involution ${{\mathcal {R}}}:{{\bf {\Pi}}}^{*} \to {{\bf {\Pi}}}^{*}$, given by ${{\mathcal {R}}}(\Pi,\gamma^{*}(\Pi,\omega^{i}(c)))=({{\mathcal {R}}}(\Pi),\gamma^{*}({{\mathcal {R}}}(\Pi),\omega^{-i}(c) ))$. Another obvious mapping ${\operatorname{rot}}:{{\bf {\Pi}}}^{*} \to {{\bf {\Pi}}}^{*}$ is given by ${\operatorname{rot}}(\Pi,\gamma^{*}(\Pi,\omega^{i}(c)))=(\Pi,\gamma^{*}(\Pi,\omega^{i+1}(c) ) )$. Let ${{\mathcal {L}}}$ be a finite set of labels. We say that a map ${\operatorname{lab}}:{{\mathcal {L}}}\to {{\bf {\Pi}}}^{*}$ is a legal labeling map if the following holds 1. there exists an involution ${{\mathcal {R}}}_{{{\mathcal {L}}}}:{{\mathcal {L}}}\to {{\mathcal {L}}}$, such that ${{\mathcal {R}}}({\operatorname{lab}}(a))={\operatorname{lab}}({{\mathcal {R}}}_{{{\mathcal {L}}}}(a))$, 2. there is a bijection ${\operatorname{rot}}_{{{\mathcal {L}}}}:{{\mathcal {L}}}\to {{\mathcal {L}}}$ such that ${\operatorname{rot}}({\operatorname{lab}}(a))={\operatorname{lab}}({\operatorname{rot}}_{{{\mathcal {L}}}}(a))$. Let ${\mathbb {N}}{{\bf {\Pi}}}$ denote the collection of all formal sums of oriented skew pants from ${{\bf {\Pi}}}$ over non-negative integers. We say that $W \in {\mathbb {N}}{{\bf {\Pi}}}$ is symmetric if $W=n_1(\Pi_1+{{\mathcal {R}}}(\Pi_1))+n_2(\Pi_2+{{\mathcal {R}}}(\Pi_2))+...+n_m(\Pi_m+{{\mathcal {R}}}(\Pi_m))$, where $n_i$ are positive integers, and $\Pi_i \in {{\bf {\Pi}}}$. Every symmetric $W$ induces a canonical legal labeling defined as follows. The corresponding set of labels is ${{\mathcal {L}}}=\{(j,k): j=1,2,...,2(n_1+n_2+...+n_m); k=0,1,2 \}$ (observe that the set ${{\mathcal {L}}}$ has $6(n_1+...+n_m)$ elements). Set ${\operatorname{lab}}(j,k)=(\Pi_s, \gamma^{*}(\Pi_s, \omega^{k}(c) ) )$, if $j$ is odd and if $2(n_1+\cdot \cdot \cdot +n_{s-1}) < j \le 2(n_1+ \cdot \cdot \cdot+n_{s})$. Set ${\operatorname{lab}}(j,k)=({{\mathcal {R}}}(\Pi_s), \gamma^{*}({{\mathcal {R}}}(\Pi_{s}),\omega^{-k}(c) ) )$, if $j$ is even and $2(n_1+ \cdot \cdot \cdot +n_{s-1}) < j \le 2(n_1 + \cdot \cdot \cdot +n_{s})$. The bijection ${{\mathcal {R}}}_{{{\mathcal {L}}}}$ is given by ${{\mathcal {R}}}_{{{\mathcal {L}}}}(j,k)=(j+\delta(j),k)$, where $\delta(j)=+1$ if $j$ is even and $\delta(j)=-1$ if $j$ is odd. The bijection ${\operatorname{rot}}_{{{\mathcal {L}}}}$ is defined accordingly. Let $\sigma:{{\mathcal {L}}}\to {{\mathcal {L}}}$ be an involution. We say that $\sigma$ is admissible with respect to a legal labeling ${\operatorname{lab}}$ if the following holds. Let $a \in {{\mathcal {L}}}$ and let ${\operatorname{lab}}(a)=(\Pi_1,\gamma^{*})$, for some $\Pi_1 \in {{\bf {\Pi}}}$ where $\gamma^{*}=\gamma(\Pi_{1},\omega^{i}(c) )$, for some $i \in \{0,1,2\}$. Then ${\operatorname{lab}}(\sigma(a))=(\Pi_2,-\gamma^{*})$, where $\Pi_2 \in {{\bf {\Pi}}}$ is some other skew pants. Observe that every legal labeling has an admissible involution $\sigma:{{\mathcal {L}}}\to {{\mathcal {L}}}$, given by $\sigma(a)={{\mathcal {R}}}_{{{\mathcal {L}}}}(a)$. 
.1cm Suppose that we are given a legal labeling ${\operatorname{lab}}:{{\mathcal {L}}}\to {{\bf {\Pi}}}^{*}$ and an admissible involution $\sigma:{{\mathcal {L}}}\to {{\mathcal {L}}}$. We construct a closed topological surface $S^{0}$ (not necessarily connected) with a pants decomposition ${{\mathcal {C}}}^{0}$, and a representation $\rho_{{\operatorname{lab}},\sigma}:\pi(S^{0}) \to {{\mathcal {G}}}$ as follows. Each element of ${{\mathcal {L}}}$ determines an oriented cuff in ${{\mathcal {C}}}^{0}$. Each element in the orbit space ${{\mathcal {L}}}/ {\operatorname{rot}}_{{{\mathcal {L}}}}$ gives a copy of the oriented topological pair of pants $\Pi^{0}$. The pairs of pants are glued according to the instructions given by $\sigma$, and this defines the representation $\rho_{{\operatorname{lab}},\sigma}$. One can check that after we glue the corresponding pairs of pants we construct a closed surface $S^{0}$. Moreover $S^{0}$ is connected if and only if the action of the group of bijections $\left< {{\mathcal {R}}}_{{{\mathcal {L}}}}, {\operatorname{rot}}_{{{\mathcal {L}}}},\sigma \right>$ is minimal on ${{\mathcal {L}}}$ (that is ${{\mathcal {L}}}$ is the smallest invariant set under the action of this group). .1cm Let $a \in {{\mathcal {L}}}$. Then $(\Pi,\gamma^{*})={\operatorname{lab}}(a)$ and $(\Pi_1,-\gamma^{*})={\operatorname{lab}}(\sigma(a))$ for some skew pants $\Pi,\Pi_1 \in {{\bf {\Pi}}}$. Also $\gamma^{*}=\gamma^{*}(\Pi,\omega^i(c))$ and $-\gamma^{*}=\gamma^{*}(\Pi_1,\omega^j(c))$. Set $${{\bf {hl}}}(a)={{\bf {hl}}}(\omega^i(C)),$$ where the half length ${{\bf {hl}}}(\omega^i(C))$ is computed for the representation that corresponds to the skew pants $\Pi$. It follows from our definition of admissible representations that ${{\bf {hl}}}(a)={{\bf {hl}}}(\sigma(a))$. Set ${{\bf l}}(a)= {{\bf l}}(\omega^i(C))$. Then ${{\bf l}}(a)={{\bf l}}(\sigma(a))$ and $${{\bf {hl}}}(a)={{{{\bf l}}(a)}\over{2}}.$$ The unit normal bundle of a closed geodesic ------------------------------------------- Next, we discuss in more details the structure of the unit normal bundle ${N^{1}}(\gamma)$ of a closed geodesic $\gamma \subset {{\bf {M}}^{3} }$ (for the readers convenience we will repeat several definitions given at the beginning of Section 2). The bundle ${N^{1}}(\gamma)$ has an induced differentiable structure and it is diffeomorphic to a torus. Elements of ${N^{1}}(\gamma)$ are pairs $(p,v)$, where $p \in \gamma$ and $v$ is a unit vector at $p$ that is orthogonal to $\gamma$. The disjoint union of all the bundles is denoted by ${N^{1}}(\Gamma)$. .1cm Fix an orientation $\gamma^{*}$ on $\gamma$. Consider ${\mathbb {C}}$ as an additive group and for $\zeta \in {\mathbb {C}}$, let ${{\mathcal {A}}}_{\zeta}:{N^{1}}(\gamma) \to {N^{1}}(\gamma)$ be the mapping given by ${{\mathcal {A}}}_{\zeta}(p,v)=(p_1,v_1)$ where $p_1$ and $v_1$ are defined as follows. Let ${\widetilde}{\gamma^{*}}$ be a lift of $\gamma^{*}$ to ${ {\mathbb {H}}^{3}}$ and let $({\widetilde}{p},{\widetilde}{v}) \in {N^{1}}({\widetilde}{\gamma})$ be a lift of $(p,v)$. We may assume that ${\widetilde}{\gamma^{*}}$ is the geodesic between $0,\infty \in \partial{{ {\mathbb {H}}^{3}}}$. Let $A_{\zeta} \in {{\bf {PSL}}(2,{\mathbb {C}})}$ be given by $A_{\zeta}(w)=e^{\zeta} w$, for $w \in \partial{{ {\mathbb {H}}^{3}}}$. Set $({\widetilde}{p}_1,{\widetilde}{v}_1)=A_{\zeta}({\widetilde}{p},{\widetilde}{v})$. Then $({\widetilde}{p}_1,{\widetilde}{v}_1)$ is a lift of $(p_1,v_1)$. 
.1cm If ${{\mathcal {A}}}^{1}_{\zeta}:{N^{1}}(\gamma)\to {N^{1}}(\gamma) $ and ${{\mathcal {A}}}^{2}_{\zeta}:{N^{1}}(\gamma)\to {N^{1}}(\gamma)$ are the actions that correspond to different orientations on $\gamma$ then on ${N^{1}}(\gamma)$ we have ${{\mathcal {A}}}^{1}_{\zeta}={{\mathcal {A}}}^{2}_{-\zeta }=({{\mathcal {A}}}^{2}_{\zeta})^{-1}$, $\zeta \in {\mathbb {C}}$. Unless we specify otherwise, by ${{\mathcal {A}}}_{\zeta}$ we denote either of the two actions. .1cm The group ${\mathbb {C}}$ acts transitively on ${N^{1}}(\gamma)$. Let ${{\bf l}}(\gamma)$ be the complex translation length of $\gamma$, such that $-\pi<{\operatorname{Im}}({{\bf l}}(\gamma)) \le \pi$ (by definition ${\operatorname{Re}}({{\bf l}}(\gamma)) >0$). Then $A_{{{\bf l}}(\gamma)}={{\operatorname {id}}}$ on ${N^{1}}(\gamma)$. This implies that the map $A_{ {{{{\bf l}}(\gamma)}\over{ 2}} }$ is an involution which enables us to define the bundle ${N^{1}}(\sqrt{\gamma})= {N^{1}}(\gamma)/ A_{ {{{{\bf l}}(\gamma)}\over{2}} }$. The disjoint union of all the bundles is denoted by ${N^{1}}(\sqrt{\Gamma})$. .1cm The additive group ${\mathbb {C}}$ acts on ${N^{1}}(\sqrt{\gamma})$ as well. There is a unique complex structure on ${N^{1}}(\sqrt{\gamma})$ so that the action ${{\mathcal {A}}}_{\zeta}$ is by biholomorphic maps. With this complex structure we have $${N^{1}}(\sqrt{\gamma}) \equiv {\mathbb {C}}/ \big( {{{{\bf l}}(\gamma)}\over{2}} {\mathbb {Z}}+ 2 \pi i {\mathbb {Z}}\big ).$$ The corresponding Euclidean distance on ${N^{1}}(\sqrt{\gamma})$ is denoted by ${{\operatorname {dis}}}$. Then for $|\zeta|$ small we have ${{\operatorname {dis}}}((p,v),({{\mathcal {A}}}_{\zeta}(p,v)))=|\zeta|$. There is also the induced map ${{\mathcal {A}}}_{\zeta}: {N^{1}}(\sqrt{\Gamma}) \to {N^{1}}(\sqrt{\Gamma})$, $\zeta \in {\mathbb {C}}$, where the restriction of ${{\mathcal {A}}}_{\zeta}$ on each torus ${N^{1}}(\sqrt{\gamma})$ is defined above. .1cm Let $(\Pi,\gamma^{*}) \in {{\bf {\Pi}}}^{*}$ and let $\gamma^{*}_k$ be such that $(\Pi,\gamma^{*}_k)={\operatorname{rot}}^{k}(\Pi,\gamma^{*} )$, $k=1,2$. Let $\delta^{*}_k$ be an oriented geodesic (not necessarily closed) in ${{\bf {M}}^{3} }$ such that $\delta^{*}_k$ is the common orthogonal of $\gamma^{*}$ and $\gamma^{*}_k$, and so that a lift of $\delta^{*}_k$ is a side in the corresponding skew right-angled hexagon that determines $\Pi$ (see Section 2). The orientation on $\delta^{*}_k$ is determined so that the point $\delta^{*}_k \cap \gamma^{*}_k$ comes after the point $\delta^{*}_k \cap \gamma^{*}$. Let $p_k=\delta^{*}_k \cap \gamma^{*}$, and let $v_k$ be the unit vector $v_k$ at $p_k$ that has the same direction as $\delta^{*}_k$. Since the pants $\Pi$ is the conjugacy class of an admissible representation in sense of Definition \[def-adm\], we observe that ${{\mathcal {A}}}_{ {{{{\bf l}}(\gamma)}\over{2}} }$ exchanges $(p_1,v_1)$ and $(p_2,v_2)$, so the class $[(p_k,v_k)] \in {N^{1}}(\sqrt{\gamma})$ does not depend on $k \in \{1,2\}$. Define the map $${\operatorname{foot}}:{{\bf {\Pi}}}^{*} \to {N^{1}}(\sqrt{\Gamma}),$$ by $${\operatorname{foot}}_{\gamma}(\Pi)={\operatorname{foot}}(\Pi,\gamma^{*})=[(p_k,v_k)] \in {N^{1}}(\gamma),$$ Observe that ${\operatorname{foot}}(\Pi,\gamma^{*})={\operatorname{foot}}({{\mathcal {R}}}(\Pi,\gamma^{*}))$. 
.1cm Let $S^{0}$ be a topological surface with a pants decomposition ${{\mathcal {C}}}^{0}$, and let $\rho:\pi_1(S^{0}) \to {{\mathcal {G}}}$ be a representation, such that the restriction of $\rho$ on the fundamental group of each pair of pants satisfies the assumptions of Definition \[def-adm\] (recall that ${{\mathcal {G}}}$ is the Kleinian group such that ${{\bf {M}}^{3} }\equiv { {\mathbb {H}}^{3}}/ {{\mathcal {G}}}$). Let $\Pi^{0}_i$, $i=1,2$ be two pairs of pants from the pants decomposition of $S^{0}$ that both have a given cuff $C \in {{\mathcal {C}}}^{0}$ in its boundary. By $(\Pi_1,\gamma^{*})$ and $(\Pi_2,-\gamma^{*})$ we denote the corresponding marked pants in ${{\bf {\Pi}}}^{*}$. Let $s(C) $ denote the corresponding reduced complex Fenchel-Nielsen coordinate for $\rho$. Let ${{\mathcal {A}}}^{1}_{\zeta}$ be the action on ${N^{1}}(\sqrt{\gamma} )$ that corresponds to the orientation $\gamma^{*}$. Fix $\zeta_0 \in {\mathbb {C}}$ to be such that $${{\mathcal {A}}}^{1}_{\zeta_{0}}( {\operatorname{foot}}(\Pi_1,\gamma^{*} ))= {\operatorname{foot}}(\Pi_2,-\gamma^{*}).$$ Such $\zeta_0$ is uniquely determined up to a translation from the lattice ${{{{\bf l}}(\gamma)}\over{2}} {\mathbb {Z}}+ 2i\pi {\mathbb {Z}}$. If ${{\mathcal {A}}}^{2}_{\zeta}$ is the other action then we have $${{\mathcal {A}}}^{2}_{\zeta_{0}}( {\operatorname{foot}}(\Pi_2,-\gamma^{*} ))= (\Pi_1,\gamma^{*}),$$ since ${{\mathcal {A}}}^{1}_{\zeta} \circ {{\mathcal {A}}}^{2}_{\zeta}={{\operatorname {id}}}$. That is, the choice of $\zeta_0$ does not depend on the choice of the action ${{\mathcal {A}}}_{\zeta}$. Then $s(C) \in {\mathbb {C}}/ ({{{{\bf l}}(\gamma)}\over{2}} {\mathbb {Z}}+ 2\pi i {\mathbb {Z}})$ and $$\label{FN-0} s(C)=(\zeta_0-i\pi), \,\, \pmod { {{{{\bf l}}(\gamma)}\over{2}} {\mathbb {Z}}+ 2\pi i {\mathbb {Z}}}.$$ .1cm The rest of the paper is devoted to proving the following theorem. \[thm-lab\] There exist constants ${\bf q} >0$ and $K>0$ such that for every $\epsilon>0$ and for every $R>0$ large enough the following holds. There exist a finite set of labels ${{\mathcal {L}}}$, a legal labeling ${\operatorname{lab}}:{{\mathcal {L}}}\to {{\bf {\Pi}}}$ and an admissible involution $\sigma:{{\mathcal {L}}}\to {{\mathcal {L}}}$, such that for every $a \in {{\mathcal {L}}}$ we have $$|{{\bf {hl}}}(a)-{{R}\over{2}}| < \epsilon,$$ and $${{\operatorname {dis}}}( {{\mathcal {A}}}_{1+i\pi}( {\operatorname{foot}}({\operatorname{lab}}(a))),{\operatorname{foot}}({\operatorname{lab}}(\sigma(a)))) \le K R e^{- {\bf q} R},$$ where ${{\operatorname {dis}}}$ is the Euclidean distance on ${N^{1}}(\sqrt{\gamma})$. The constant $\bf {q}$ depends on the manifold ${{\bf {M}}^{3} }$. In fact it only depends on the first eigenvalue for the Laplacian on ${{\bf {M}}^{3} }$. Given this theorem we can prove Theorem \[main\] as follows. We saw that every legal labeling together with an admissible involution yields a representation $\rho({\operatorname{lab}},\sigma):\pi_1(S^{0}) \to {{\mathcal {G}}}$, where ${{\mathcal {G}}}$ is the corresponding Kleinian group and $S^{0}$ is a closed topological surface (if $S^{0}$ is not connected we pass onto a connected component). By the above discussion the reduced complex Fenchel-Nielsen coordinates $({{\bf {hl}}}(C),s(C))$ satisfy the assumptions of Theorem \[geometry\] (observe that $K R e^{-{\bf q} R}=o({{1}\over{R}})$, when $R \to \infty$). Then Theorem \[main\] follows from Theorem \[geometry\] . Transport of measure -------------------- Let $(X,d)$ be a metric space. 
By ${{\mathcal {M}}}(X)$ we denote the space of positive, finite Borel measures on $X$ with compact support. For $A \subset X$ and $\delta>0$ let $${{\mathcal {N}}}_{\delta}(A)=\{x \in X: \quad \text{there exists} \quad a \in A \quad \text{such that} \quad d(x,a) \le \delta \},$$ be the $\delta$-neighbourhood of $A$. Let $\mu, \nu \in {{\mathcal {M}}}(X)$ be two measures such that $\mu(X)=\nu(X)$, and let $\delta>0$. Suppose that for every Borel set $A \subset X$ we have $\mu(A) \le \nu({{\mathcal {N}}}_{\delta}(A))$. Then we say that $\mu$ and $\nu$ are $\delta$-equivalent measures. It appears that the definition is asymmetric in $\mu$ and $\nu$. But this is not the case. For any Borel set $A \subset X$ the above definition yields $\nu(A) \le \nu(X)-\nu \big({{\mathcal {N}}}_{\delta}(X \setminus {{\mathcal {N}}}_{\delta}(A))\big) \le \mu(X)-\mu(X \setminus {{\mathcal {N}}}_{\delta}(A))=\mu({{\mathcal {N}}}_{\delta}(A)$. This shows that the definition is in fact symmetric in $\mu$ and $\nu$. .1cm The following propositions follow from the definition of equivalent measures. \[prop-basic-1\] Suppose that $\mu$ and $\nu$ are $\delta$-equivalent. Then for any $K>0$ the measures $K\mu$ and $K\nu$ are $\delta$-equivalent. If in addition we assume that measures $\nu$ and $\eta$ are $\delta_1$-equivalent, then $\mu$ and $\eta$ are $(\delta+\delta_1)$-equivalent. \[prop-basic-2\] Let $(T,\Lambda)$ be a measure space and let $f_i:T \to X$, $i=1,2$, be two maps such that $d(f_1(t),f_2(t)) \le \delta$, for almost every $t \in T$. Then the measures $(f_1)_{*}\Lambda$ and $(f_2)_{*} \Lambda$ are $\delta$-equivalent. In the remainder of this subsection we prove two theorems, each representing a converse of the previous proposition in a special case. The following theorem is a converse of Proposition \[prop-basic-2\] in the special case of discrete measures. \[thm-Hall\] Suppose that $A$ and $B$ are finite sets with the same number of elements, and equipped with the standard counting measures $\Lambda_A$ and $\Lambda_B$ respectively. Suppose that there are maps $f:A \to X$ and $g:B \to X$ such that the measures $f_{*} \Lambda_A$ and $g_{*} \Lambda_B$ are $\delta$-equivalent for some $\delta>0$. Then one can find a bijection $h:A \to B$ such that $d(g(h(a)),f(a)) \le \delta$, for every $a \in A$. We use the Hall’s marriage theorem which states the following. Suppose that ${{\operatorname {Rel}}}\subset A \times B$ is a relation. For every $Q \subset A$ we let $${{\operatorname {Rel}}}(Q)=\{b \in B: \, \text{there exists} \, a \in Q, \, \text{such that} \, (a,b) \in {{\operatorname {Rel}}}\}.$$ If $|{{\operatorname {Rel}}}(Q)| \ge |Q|$ for every $Q \subset A$ then there exists an injection $h:A \to B$ such that $(a,h(a)) \in {{\operatorname {Rel}}}$ for every $a \in A$. This is Hall’s marriage theorem. In the general case of this theorem the sets $A$ and $B$ need not have the same number of elements. However, in our case they do so the map $h$ is a bijection. .1cm Define ${{\operatorname {Rel}}}\subset A \times B$ by saying that $(a,b) \in {{\operatorname {Rel}}}$ if $d(f(a),g(b)) \le \delta$. Then $${{\operatorname {Rel}}}(Q)=\{b \in B: \, \text{there exists} \, a \in Q, \, \text{such that} \, d(f(a),g(b))\le \delta \},$$ for every $Q \subset A$. Therefore $|{{\operatorname {Rel}}}(Q)|=(g_{*}\Lambda_B)({{\mathcal {N}}}_{\delta}(f(Q))) \ge (f_{*}\Lambda_A)(f(Q)) = |Q|$, since $f_{*} \Lambda_A$ and $g_{*} \Lambda_B$ are $\delta$-equivalent. 
This means that the hypothesis of the Hall’s marriage theorem is satisfied, and one can find the bijection $h:A \to B$ such that $d(g(h(a)),f(a)) \le \delta$. Let $a,b \in {\mathbb {C}}$ be two complex numbers such that $T(a,b)={\mathbb {C}}/(a{\mathbb {Z}}+ ib{\mathbb {Z}})$ is a torus. We let $z=x+iy$ denote a point in ${\mathbb {C}}$ (sometimes we use $(x,y)$ to denote a point in ${\mathbb {R} }^2 \equiv {\mathbb {C}}$). Let $\phi$ be a positive $C^0$ function on $T(a,b)$. As usual, by $\phi(x,y) \, dx \, dy $ we denote the corresponding two form on the torus $T(a,b)$. By $\lambda_{\phi}$ we denote the measure on $T(a,b)$ given by $$\lambda_{\phi}(A)=\int_{A} \phi(x,y) \, dx \, dy,$$ for any measurable set $A$. We abbreviate this equation $d\lambda_{\phi}=\phi\, dx \, dy$. By $\lambda$ we denote the standard Lebesgue measure on $T(a,b)$, that is $\lambda=\lambda_{\phi}$ for $\phi \equiv 1$. In the following lemma we show that any $C^0$ measure that is close to the Lebesgue measure is obtained by transporting the Lebesgue measure by a diffeomorphism that is $C^0$ close to the identity. \[lemma-Radon\] Let $g:{\mathbb {R} }^2 \to {\mathbb {R} }$ be a $C^{0}$ function on ${\mathbb {C}}$ that is well defined on the quotient $T(1,1)={\mathbb {C}}/({\mathbb {Z}}+ i{\mathbb {Z}})$, and such that 1. For some $0< \delta<\frac{1}{3}$ we have $$1-\delta \le g(x,y) \le 1+\delta$$ for all $(x,y) \in {\mathbb {R} }^2$, 2. The following equality holds $$\int_{0}^{1} \int_{0}^{1} g(x,y) \,dx \, dy=1.$$ Then we can find a $C^1$ diffeomorphism $h:T(1,1) \to T(1,1)$ such that 1. $g(x,y)={{\operatorname {Jac}}}(h)(x,y)$, that is $g(x,y) \, dx \, dy= h^{*}(dx \, dy)$, where $h^{*}(dx \, dy)$ is the pull-back of the two form $dx \, dy$ by the diffeomorphism $h$ and ${{\operatorname {Jac}}}(h)$ is the Jacobian of $h$, 2. The inequality $$|h(z)-z| \le 4 \delta,$$ holds for every $z=x+iy \in {\mathbb {C}}$. We define the map $h:{\mathbb {R} }^2 \to {\mathbb {R} }^2$ by $h(x,y)=(h_1(x,y),h_2(x,y))$, where $$h_1(x,y)=\int_{0}^{x} \left( \int_{0}^{1}g(s,t)\, dt \right)\, ds,$$ and $$h_2(x,y)=\frac{ \int_{0}^{y} g(x,t)\, dt } { \int_{0}^{1} g(x,t) \, dt }.$$ Since $g(x+1,y)=g(x,y+1)=g(x,y)$, we find that $h(x+1,y)-h(x,y)=(1,0)$ and $h(x,y+1)-h(x,y)=(0,1)$, so $h$ descends to a map from $T(1,1)$ to itself. Furthermore, we find that $$\frac{\partial{h_1} } {\partial{x} }=\int_{0}^{1}g(x,t) \, dt ; \,\, \, \frac{\partial{h_1} } {\partial{y} }=0,$$ and $$\frac{\partial{ h_2} }{\partial{y}}=\frac{g(x,y)}{\int_{0}^{1}g(x,t) \, dt },$$ which is sufficient to conclude that $${{\operatorname {Jac}}}(h)(x,y)=g(x,y).$$ Therefore, the map $h:T(1,1) \to T(1,1)$ is a local diffeomorphism, and thus a covering map of degree $n$ where $$n=\int_{T(1,1)} {{\operatorname {Jac}}}(x,y) \, dx \, dy.$$ Since ${{\operatorname {Jac}}}(h)(x,y)=g(x,y)$, and $$\int_{T(1,1)} g(x,y) \, dx \, dy =1,$$ it follows that $n=1$, that is $h$ is a diffeomorphism. On the other hand, for $x,y \in [0,1]$, $$|h_1(x,y)-x| \le \delta x \le \delta,$$ and $$h_2(x,y)-y \le \frac{y(1+\delta)}{1-\delta} -y \le 3\delta y \le 3 \delta,$$ since $\delta<\frac{1}{3}$, and $$y-h_2(x,y) \le y-\frac{y(1-\delta)}{1+\delta} \le 2\delta y \le 2 \delta.$$ Therefore, $|h_2(x,y)-y| \le 3 \delta$. Combining the estimates for $|h_1(x,y)-x|$ and $|h_2(x,y)-y|$ we find that $$|h(z)-z| \le |h_1(x,y)-x|+|h_2(x,y)-y| \le 4\delta.$$ The completes the proof. The following theorem is a corollary of the of the previous lemma. 
\[thm-Radon\] Let $\mu \in {{\mathcal {M}}}(T(a,b))$ be a measure whose Radon-Nikodym derivative ${{d\mu}\over{d \lambda}}(z)$ is a $C^{0}$ function on the torus $T(a,b)$, such that for some $K>0$ and $\frac{1}{3}>\delta>0$ we have $\mu(T(a,b))=K\lambda(T(a,b))$ and $$K \le \left| {{d\mu}\over{d \lambda}} \right| \le K(1+\delta), \, \, \, \text{everywhere on} \,\, T(a,b),$$ Then $\mu$ is $4\delta (|a|+|b|)$-equivalent to the measure $K\lambda$. By $\mu$ we also denote the lift of the corresponding measure to the universal cover ${\mathbb {C}}$. Then $d\mu=g_1(x,y)dx \, dy$, where $g_1(x,y)={{d\mu}\over{d \lambda}}(x,y)$ is the Radon-Nikodym derivative. The function $g_1$ is $C^0$ on ${\mathbb {C}}$, and $g_1$ is well defined on the quotient ${\mathbb {C}}/(a{\mathbb {Z}}+b i{\mathbb {Z}})=T(a,b)$. Let $L:T(1,1) \to T(a,b)$ be the standard affine map. Let $$g(x,y)=\frac{1}{K} (g_1 \circ L)(x,y).$$ Then $g(x,y)$ satisfies the assumptions of the previous lemma. Let $h$ be the corresponding diffeomorphism from Lemma \[lemma-Radon\], and let $h_1=L \circ h \circ L^{-1}$. Then $(h_1)^{*} \mu=K\lambda$ on $T(a,b)$. Since the affine map $L$ is $(|a|+|b|)$ bi-Lipschitz we conclude that $$|h_1(z)-z| \le 4\delta(|a|+|b|)$$ for every $z \in {\mathbb {C}}$, so $\mu$ is $4\delta (|a|+|b|)$-equivalent to $K\lambda$. Measures on skew pants and the ${\widehat{\partial}}$ operator -------------------------------------------------------------- We have By ${{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}})$ we denote the space of positive Borel measures with finite support on the set of oriented skew pants ${{\bf {\Pi}}}$ such that the involution ${{\mathcal {R}}}:{{\bf {\Pi}}}\to {{\bf {\Pi}}}$ preserves each measure in ${{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}})$. By ${{\mathcal {M}}}_0( {N^{1}}(\sqrt{\Gamma}) ) $ we denote the space of positive Borel measures with compact support on the manifold ${N^{1}}(\sqrt{\Gamma})$ (a measure from ${{\mathcal {M}}}_0({N^{1}}(\sqrt{\Gamma}))$ has a compact support if and only if its support is contained in at most finitely many tori ${N^{1}}(\sqrt{\gamma}) \subset {N^{1}}(\sqrt{\Gamma})$). We define the operator $${\widehat{\partial}}:{{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}}) \to {{\mathcal {M}}}_0( {N^{1}}(\sqrt{\Gamma}) ),$$ as follows. The set ${{\bf {\Pi}}}$ is a countable set, so every measure from $\mu \in {{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}})$ is determined by its value $\mu(\Pi)$ on every $ \Pi \in {{\bf {\Pi}}}$. Let $ \Pi \in {{\bf {\Pi}}}$ and let $\gamma^{*}_i \in \Gamma^{*}$, $i=0,1,2$, denote the corresponding oriented geodesics so that $(\Pi,\gamma^{*}_i) \in {{\bf {\Pi}}}^{*}$. Let $\alpha^{\Pi}_i \in {{\mathcal {M}}}_0( {N^{1}}(\sqrt{\Gamma}) ) $ be the atomic measure supported at the point ${\operatorname{foot}}(\Pi,\gamma^{*}_i) \in {N^{1}}(\sqrt{\gamma_i})$, where the mass of the atom is equal to $1$. Let $$\alpha^{\Pi}=\sum_{i=0}^{2} \alpha^{\Pi}_i,$$ and define $${\widehat{\partial}}\mu= \sum_{\Pi \in {{\bf {\Pi}}}} \mu(\Pi) \alpha^{\Pi}.$$ We call this the ${\widehat{\partial}}$ operator on measures. The total measure of ${\widehat{\partial}}\mu$ is three times the total measure of $\mu$. .1cm Let $\alpha \in {{\mathcal {M}}}_0({N^{1}}(\sqrt{\Gamma})) $. Choose $\gamma^{*} \in \Gamma^{*}$, and recall the action ${{\mathcal {A}}}_{\zeta}: {N^{1}}(\sqrt{\Gamma}) \to {N^{1}}(\sqrt{\Gamma})$, $\zeta \in {\mathbb {C}}$. Let $({{\mathcal {A}}}_{\zeta})_{*} \alpha$ denote the push-forward of the measure $\alpha$. 
We say that $\alpha$ is $\delta$-symmetric if the measures $\alpha$ and $({{\mathcal {A}}}_{\zeta})_{*} \alpha$ are $\delta$-equivalent for every $\zeta \in {\mathbb {C}}$. \[theorem-mes\] There exists ${\bf q} >0$ and $D_1,D_2>0$, so that for every $1 \ge \epsilon>0$ and every $R>0$ large enough, there exists a measure $\mu \in {{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}})$ with the following properties. If $\mu(\Pi)>0$ for some $\Pi \in {{\bf {\Pi}}}$, then the half lengths ${{\bf {hl}}}(\omega^{i}(C))$ that correspond to the skew pants $\Pi$ satisfy the inequality $$\left| {{\bf {hl}}}(\omega^i(C))-{{R}\over{2}} \right| \le \epsilon .$$ There exists a measure $\beta \in {{\mathcal {M}}}_0({N^{1}}(\sqrt{\Gamma}))$, such that the measure ${\widehat{\partial}}\mu$ and $\beta$ are $D_1e^{-{{R}\over{8}} }$-equivalent, and such that for each torus ${N^{1}}(\sqrt{\gamma})$, there exists a constant $K_{\gamma} \ge 0$ so that $$K_{\gamma} \le \left| {{d\beta}\over{d \lambda}} \right| \le K_{\gamma}(1+D_2e^{- {\bf q} R} ), \text{almost everywhere on}\, {N^{1}}(\sqrt{\gamma}) ,$$ where $\lambda$ is the standard Lebesgue measure on the torus ${N^{1}}(\sqrt{\gamma})={\mathbb {C}}/ \big( {{{{\bf l}}(\gamma)}\over{2}} {\mathbb {Z}}+ 2i\pi {\mathbb {Z}}\big)$. This theorem holds in two dimensions as well. That is, in the statement of the above theorem we can replace a closed hyperbolic three manifold ${{\bf {M}}^{3} }$ with any hyperbolic closed surface. We prove this theorem in the next section. But first we prove Theorem \[thm-lab\] assuming Theorem \[theorem-mes\]. We have \[proposition-mes\] There exist ${\bf q} >0$, $D>0$ so that for every $1 \ge \epsilon>0$ and every $R>0$ large enough, there exists a measure $\mu \in {{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}})$ with the following properties 1. $\mu(\Pi)$ is a rational number for every $\Pi \in {{\bf {\Pi}}}$, 2. if $\mu(\Pi)>0$ for some $\Pi \in {{\bf {\Pi}}}$, then the half lengths ${{\bf {hl}}}(\omega^{i}(C))$ that correspond to the skew pants $\Pi$ satisfy the inequality $ |{{\bf {hl}}}(\omega^i(C))-{{R}\over{2}}| \le \epsilon $, 3. the measures ${\widehat{\partial}}\mu$ and $({{\mathcal {A}}}_{1+i\pi})_{*} {\widehat{\partial}}\mu$ are $D R e^{ - {\bf q} R}$-equivalent. Assume the notation and the conclusions of Theorem \[theorem-mes\]. First we show that the measures ${\widehat{\partial}}\mu$ and $({{\mathcal {A}}}_{1+i\pi})_{*} {\widehat{\partial}}\mu$ are $D R e^{-{\bf q} R}$-equivalent. Let $\gamma \in \Gamma$ be a closed geodesic such that $\beta( {N^{1}}(\sqrt{\gamma})) >0$, that is the support of $\beta$ has a non-empty intersection with the torus $${N^{1}}(\sqrt{\gamma}) \equiv {\mathbb {C}}/ \big( {{{{\bf l}}(\gamma)}\over{2}} {\mathbb {Z}}+2\pi i {\mathbb {Z}}\big).$$ The Lebesgue measure $\lambda$ on ${N^{1}}(\sqrt{\gamma})$ is invariant under the action ${{\mathcal {A}}}_{\zeta}$. This, together with Theorem \[thm-Radon\], implies that for any $\zeta \in {\mathbb {C}}$ the measure $({{\mathcal {A}}}_{\zeta})_{*} \beta$ is $(2\pi+ \big| {{{{\bf l}}(\gamma^{*})}\over{2}} \big| ) D_2 e^{-{\bf q} R}$-equivalent with the measure $K'\lambda$, for some $K'>0$, where $D_2$ is from the previous theorem. Since $\big| {{{{\bf l}}(\gamma^{*})}\over{2}} \big| \le {{R}\over{2}}+1$, we have that the measures $({{\mathcal {A}}}_{\zeta})_{*} \beta$ and $K'\lambda$ are $C_1 R e^{- {\bf q} R}$-equivalent, for some $C_1>0$. 
.1cm On the other hand, the measures $({{\mathcal {A}}}_{\zeta})_{*} \beta$ and $({{\mathcal {A}}}_{\zeta})_{*} {\widehat{\partial}}\mu$ are $D_1e^{-{{R}\over{8}} }$-equivalent. From Proposition \[prop-basic-1\] we conclude that the measures $({{\mathcal {A}}}_{\zeta})_{*} {\widehat{\partial}}\mu$ and $K'\lambda$ are $D_2 (R e^{- {\bf q} R}+e^{ -{{R}\over{8}} })$-equivalent, for every $\zeta \in {\mathbb {C}}$, and for some constant $D_2>0$. Again, since $\lambda$ is invariant under ${{\mathcal {A}}}_{\zeta}$ and from Proposition \[prop-basic-1\] we conclude that ${\widehat{\partial}}\mu$ is $D R e^{-{\bf q} R}$-symmetric, for some constant $D>0$ and assuming that ${\bf q} \le \frac{1}{8}$ (this assumption can be made without loss of generality). In particular, we have that the measures ${\widehat{\partial}}\mu$ and $({{\mathcal {A}}}_{1+i\pi})_{*} {\widehat{\partial}}\mu$ are $D R e^{- {\bf q} R}$-equivalent. .1cm Both measures ${\widehat{\partial}}\mu$ and $ ({{\mathcal {A}}}_{1+i\pi})_{*} {\widehat{\partial}}\mu$ are atomic (with finitely many atoms), so it follows from the definition that the measures ${\widehat{\partial}}\mu$ and $({{\mathcal {A}}}_{1+i\pi})_{*} {\widehat{\partial}}\mu$ are $D R e^{- {\bf q} R}$-equivalent if and only if a finite system of linear inequalities with integer coefficients has a real valued solution. Then the standard rationalisation procedure (see Proposition 4.2 in [@kahn-markovic] and [@calegari]) implies that this system of equations has a rational solution, so we may assume that that the measure $\mu$ from Theorem \[theorem-mes\] has rational weights. This proves the proposition. Proof of Theorem \[thm-lab\] ---------------------------- First we make several observations about an arbitrary measure $\nu \in {{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}})$. The measure $\nu$ is supported on finitely many skew pants $\Pi \in {{\bf {\Pi}}}$. Moreover, $\nu(\Pi)=\nu({{\mathcal {R}}}(\Pi))$, for every $\Pi \in {{\bf {\Pi}}}$. Let ${{\bf {\Pi}}}^{+}$ and ${{\bf {\Pi}}}^{-}$ be disjoint subsets of ${{\bf {\Pi}}}$, such that ${{\bf {\Pi}}}^{+} \cup {{\bf {\Pi}}}^{-}={{\bf {\Pi}}}$, and ${{\mathcal {R}}}({{\bf {\Pi}}}^{+})={{\bf {\Pi}}}^{-}$ (there are many such decompositions of ${{\bf {\Pi}}}$). Let $\nu^{+}$ and $\nu^{-}$ denote the restrictions of $\nu$ on the sets ${{\bf {\Pi}}}^{+}$ and ${{\bf {\Pi}}}^{-}$ respectively. Then ${\widehat{\partial}}\nu^{+}={\widehat{\partial}}\nu^{-}$ and ${\widehat{\partial}}\nu = 2 {\widehat{\partial}}\nu^{-}$ (this follows from the fact that ${\operatorname{foot}}(\Pi,\gamma^{*})={\operatorname{foot}}({{\mathcal {R}}}(\Pi),-\gamma^{*})$). Therefore if the measure ${\widehat{\partial}}\nu$ is $\delta$-symmetric then so are the measures ${\widehat{\partial}}\nu^{+}$ and ${\widehat{\partial}}\nu^{-}$. .1cm Let $\mu$ be the measure from Proposition \[proposition-mes\]. Then $\mu$ has rational weights. We multiply $\mu$ by a large enough integer and obtain the measure $\mu'$, such that the weights $\mu'(\Pi)$ are even numbers, $\Pi \in {{\bf {\Pi}}}$. Then ${\widehat{\partial}}\mu'$ and $ ({{\mathcal {A}}}_{1+i\pi})_{*} {\widehat{\partial}}\mu'$ are $D Re^{- {\bf q} R}$-equivalent. For simplicity we set $\mu=\mu'$. .1cm Since $\mu$ is invariant under reflection and the weights are even integers, we see that $\mu \in {\mathbb {N}}{{\bf {\Pi}}}$ is a ${{\mathcal {R}}}$-symmetric formal sum. 
Let ${\operatorname{lab}}:{{\mathcal {L}}}\to {{\bf {\Pi}}}^{*}$ denote the corresponding legal labeling (see the example at the beginning of this section). It remains to define an admissible involution $\sigma:{{\mathcal {L}}}\to {{\mathcal {L}}}$. .1cm Fix $\gamma^{*} \in \Gamma^{*}$. Let $X^{+} \subset {{\mathcal {L}}}$ such that $ a \in X^{+}$ if ${\operatorname{lab}}(a)=(\Pi,\gamma^{*})$, where $\Pi \in {{\bf {\Pi}}}^{+}$. Define $X^{-}$ similarly, and let $f^{+/-}:X^{+/-} \to {{\bf {\Pi}}}^{*}$ denote the corresponding restriction of the labeling map ${\operatorname{lab}}$ on the set $X^{+/-}$ (observe that $f^{+}={{\mathcal {R}}}\circ f^{-} \circ {{\mathcal {R}}}_{{{\mathcal {L}}}}$). .1cm Denote by $\alpha^{+}$ the restriction of ${\widehat{\partial}}\mu^{+}$ on ${N^{1}}(\sqrt{\gamma})$ (define $\alpha^{-}$ similarly). Observe that $\alpha^{+}=\alpha^{-}$. Then by the definition of ${{\mathcal {L}}}$, the measure $\alpha^{+/-}$ is the ${\widehat{\partial}}$ of the push-forward of the counting measure on $X^{+/-}$ by the map $f^{+/-}$. .1cm Define $g:X^{-} \to {N^{1}}(\sqrt{\gamma} )$ by $g={{\mathcal {A}}}_{1+i\pi} \circ f^{-}$. Then the measure $({{\mathcal {A}}}_{1+i \pi } )_{*} \alpha^{-}$ is the push-forward of the counting measure on $X^{-}$ by the map $g$. Since $\alpha^{+}$ and $({{\mathcal {A}}}_{1+i \pi})_{*} \alpha^{-}$ are $2 D R e^{- {\bf q} R}$-equivalent, by Theorem \[thm-Hall\] there is a bijection $h:X^{+} \to X^{-}$, such that ${{\operatorname {dis}}}(g(h(b)),f^{+}(b)) \le 2D R e^{- {\bf q} R}$, for any $b \in X^{+}$ (recall that ${{\operatorname {dis}}}$ denotes the Euclidean distance on ${N^{1}}(\sqrt{\gamma^{*}})$). .1cm We define $\sigma:X^{+} \cup X^{-} \to X^{+} \cup X^{-}$ by $\sigma(x)=h(x)$ for $x \in X^{+}$, $\sigma(x)=h^{-1}(x)$ for $x \in X^{-}$. The map $\sigma:(X^{+}\cup X^{-}) \to (X^{+}\cup X^{-})$ is an involution. By varying $\gamma^{*}$ we construct the involution $\sigma:{{\mathcal {L}}}\to {{\mathcal {L}}}$. It follows from the definitions that $\sigma$ is admissible and that the pair $({\operatorname{lab}},\sigma)$ satisfies the assumptions of Theorem \[thm-lab\]. Measures on skew pants and the frame flow ========================================= We start by outlining the construction of the measures from Theorem \[theorem-mes\]. Fix a sufficiently small number $\epsilon>0$ and let $r>>0$ denote any large enough real number. Set $R=2(r-\log{{4}\over{3}})$. We let ${{\bf {\Pi}}}_{\epsilon,R}$ be the set of skew pants $\Pi$ in ${{\bf {M}}^{3} }$ for which $|{{\bf {hl}}}(\delta)-\frac{R}{2}| < \epsilon$ for all $\delta \in \partial{\Pi}$. In this section we will construct a measure $\mu$ on ${{\bf {\Pi}}}_{D\epsilon,R}$ (for some universal constant $D>0$) and a measure $\beta_{\delta}$ on each ${N^{1}}(\sqrt{\delta})$ such that for $r$ large enough we have $$\left| K(\delta){{d\beta_{\delta} }\over{d {{\operatorname {Eucl}}}_{\delta} }}-1 \right| \le e^{-{\bf q} r},$$ and the measures ${\widehat{\partial}}\mu|_{{N^{1}}(\sqrt{\delta})}$ and $\beta_{\delta}$ are $Ce^{-{{r}\over{4}} }$ equivalent, where ${{\operatorname {Eucl}}}_{\delta}$ is the Euclidean measure on ${N^{1}}(\sqrt{\delta})$, the unique probability measure invariant under ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ action. .1cm Let ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ denote the set of (unit) 2-frames $F_p=(p,u,n)$ where $p \in { {\mathbb {H}}^{3}}$ and the unit tangent vectors $u$ and $n$ are orthogonal at $p$. 
By ${{\bf {g}}}_t$, $t \in {\mathbb {R} }$, we denote the frame flow that acts on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ and by $\Lambda$ the invariant Liouville measure on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$. We then define a bounded non-negative affinity function ${{\bf a}}={{\bf a}}_{\epsilon,r}:{{\mathcal {F}}}({ {\mathbb {H}}^{3}}) \times {{\mathcal {F}}}({ {\mathbb {H}}^{3}}) \to {\mathbb {R} }$ with the following properties (for $r$ large enough): 1. ${{\bf a}}(F_p,F_q)={{\bf a}}(F_q,F_p)$, for every $F_p,F_q \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$. 2. ${{\bf a}}(A(F_p),A(F_q))={{\bf a}}(F_p,F_q)$, for every $A \in {{\bf {PSL}}(2,{\mathbb {C}})}$. 3. If ${{\bf a}}(F_p,F_q)>0$, where $F_p=(p,u,n)$ and $F_q=(q,v,m)$, then $$|d(p,q)-r|<\epsilon,$$ $$\Theta(n {\mathbin{@}}q,m)<\epsilon,$$ $$\Theta(u,v(p,q))<Ce^{-{{r}\over{4}}}, \,\, \Theta(v,v(q,p))<Ce^{-{{r}\over{4}}},$$ where $\Theta(x,y)$ denotes the unoriented angle between vectors $x$ and $y$, and $v(p,q)$ denotes the unit vector at $p$ that is tangent to the geodesic segment from $p$ to $q$. Here $n {\mathbin{@}}q$ denotes the parallel transport of $n$ along the geodesic segment from $p$ (where $n$ is based) to $q$. 4. For every co-compact group $G < {{\bf {PSL}}(2,{\mathbb {C}})}$ we have $$\left|\sum\limits_{A \in G} {{\bf a}}(F_p,A(F_q))-{{1}\over{\Lambda({{\mathcal {F}}}({ {\mathbb {H}}^{3}}) / G)}} \right|<e^{-{\bf q}_{G} r}.$$ The last property will follow from the exponential mixing of the frame flow on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})/ G$. .1cm Now let $F_p=(p,u,n)$ and $F_q=(q,v,m)$ be two 2-frames in ${{\mathcal {F}}}({{\bf {M}}^{3} })={{\mathcal {F}}}({ {\mathbb {H}}^{3}})/{{\mathcal {G}}}$, where ${{\bf {M}}^{3} }$ is a closed hyperbolic 3-manifold and ${{\mathcal {G}}}$ is the corresponding Kleinian group. Let $\gamma$ be a geodesic segment in ${{\bf {M}}^{3} }$ between $p$ and $q$. We let ${\widetilde}{F}_p$ be an arbitrary lift of $F_p$ to ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, and let ${\widetilde}{F}_q$ be the lift of $F_q$ along $\gamma$. We let ${{\bf a}}_{\gamma}(F_p,F_q)={{\bf a}}({\widetilde}{F}_p,{\widetilde}{F}_q)$. By the properties $(1)$ and $(2)$ this is well defined. Moreover, for any $F_p,F_q \in {{\mathcal {F}}}({{\bf {M}}^{3} })$ $$\label{*} \left|\sum\limits_{\gamma} {{\bf a}}_{\gamma}(F_p,F_q)-{{1}\over{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }) ) }} \right|<e^{-{\bf q} r},$$ by property $(4)$. .1cm We define $\omega:{{\mathcal {F}}}({ {\mathbb {H}}^{3}}) \to {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ by $\omega(p,u,n)=(p,\omega(u),n)$ where $\omega(u)$ is equal to $u$ rotated around $n$ by ${{2\pi}\over{3}}$, using the right-hand rule. Observe that $\omega^{3}$ is the identity and we let $\omega^{-1}=\overline{\omega}$. To any frame $F$ we associate the tripod $T=(F,\omega(F),\omega^{2}(F))$ and likewise to any frame $F$ we associate the “anti-tripod” $\overline{T}=(F,\overline{\omega}(F),\overline{\omega} ^{2}(F) )$. We have similar definitions for frames in ${{\mathcal {F}}}({{\bf {M}}^{3} })$. .1cm Let the $\theta$-graph be the $1$-complex comprising three $1$-cells (called $h_0,h_1,h_2$), each connecting two $0$-cells (called $\underline{p}$ and $\underline{q}$). A connected pair of tripods is a pair of frames $F_p=(p,u,n)$, $F_q=(q,v,m)$ from ${{\mathcal {F}}}({{\bf {M}}^{3} })$, and three geodesic segments $\gamma_i$, $i=0,1,2$, that connect $p$ and $q$ in ${{\bf {M}}^{3} }$.
We abbreviate $\gamma=(\gamma_0,\gamma_1,\gamma_2)$ and we let $${{\bf b}}_{\gamma}(T_p,T_q)=\prod\limits_{i=0}^{2} {{\bf a}}_{\gamma_{i}}(\omega^{i}(F_p),\overline{\omega} ^{i}(F_q)).$$ We say $(T_p,T_q,\gamma)$ is a well connected pair of tripods along the triple of segments $\gamma$ if ${{\bf b}}_{\gamma}(T_p,T_q)>0$. .1cm For any connected pair of tripods $(T_p,T_q,\gamma)$ there is a continuous map from the $\theta$-graph to ${{\bf {M}}^{3} }$ that is obvious up to homotopy (map $\underline{p}$ to $p$ and $\underline{q}$ to $q$, and $h_i$ to $\gamma_i$). If $(T_p,T_q,\gamma)$ is a well connected pair of tripods then this map will be injective on the fundamental group $\pi_1(\theta\text{-graph})$. Moreover, the resulting pair of skew pants $\Pi$ has half-lengths $D\epsilon$ close to $\frac{R}{2}$ where $R=2(r-\log{{4}\over{3}})$ (then the cuff lengths of the skew pants $\Pi$ are close to $R$) and $D$ is a universal constant. Recall that the collection of skew pants whose half-lengths are $D\epsilon$ close to $\frac{R}{2}$ (for some large $R$ and fixed $\epsilon$) is called ${{\bf {\Pi}}}_{D\epsilon,R}$. .1cm We write $\Pi=\pi(T_p,T_q,\gamma)$, so $\pi$ maps well connected pairs of tripods to pairs of skew pants in ${{\bf {\Pi}}}_{D\epsilon,R}$. We define the measure ${\widetilde}{\mu}$ on well connected tripods by $$d{\widetilde}{\mu}(T_p,T_q,\gamma)={{\bf b}}_{\gamma}(T_p,T_q)\, d\lambda_{T}(T_p,T_q,\gamma),$$ where $\lambda_{T}(T_p,T_q,\gamma)$ is the product of the Liouville measure $\Lambda$ (for ${{\mathcal {F}}}({{\bf {M}}^{3} })$) on the first two terms, and the counting measure on the third term. The measure $\lambda_{T}$ is infinite but ${{\bf b}}_{\gamma}(T_p,T_q)$ has compact support, so ${\widetilde}{\mu}$ is finite. We define the measure $\mu$ on ${{\bf {\Pi}}}_{D\epsilon,R}$ by $\mu=\pi_{*} {\widetilde}{\mu}$. This is the measure from Theorem \[theorem-mes\]. .1cm It remains to construct the measure $\beta_{\delta}$ and show the $Ce^{-{{r}\over{4}}}$-equivalence of $\beta_{\delta}$ and ${\widehat{\partial}}\mu|_{{N^{1}}(\sqrt{\delta})}$. To any frame $F$ we associate the bipod $B=(F,\omega(F))$ and likewise to any frame $F$ we associate the “anti-bipod” $\overline{B}=(F,\overline{\omega}(F))$. We have similar definitions for frames in ${{\mathcal {F}}}({{\bf {M}}^{3} })$. We say that $(B_p,B_q,\gamma_0,\gamma_1)$ is a well connected pair of bipods along the pair of segments $\gamma_0$ and $\gamma_1$ if $${{\bf a}}_{\gamma_{0}}(F_p,F_q) {{\bf a}}_{\gamma_{1}}(\omega(F_p),\overline{\omega}(F_q))>0.$$ Then the closed curve $\gamma_0 \cup \gamma_1$ is homotopic to a closed geodesic in ${{\bf {M}}^{3} }$. Given a closed geodesic $\delta \in \Gamma$ we let $S_{\delta}$ be the set of well connected bipods $(B_p,B_q,\gamma_0,\gamma_1)$ such that $\gamma_0 \cup \gamma_1$ is homotopic to $\delta$. (Note that $S_{\delta}$ is an open subset of the space of connected bipods, which is the space of quadruples $(B_p,B_q,\gamma_0,\gamma_1)$, where $B_p$ and $B_q$ are bipods and $\gamma_0$ and $\gamma_1$ are geodesic segments in ${{\bf {M}}^{3} }$ connecting the points $p$ and $q$). The set $S_{\delta}$ of connected bipods carries the natural measure $\lambda_{B}$ which is the product of the Liouville measures on the first two terms and the counting measure on the third and fourth.
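To spell out the relation between the tripod and bipod affinities used above (a direct unwinding of the definitions, with nothing new): by the definition of ${{\bf b}}_{\gamma}$, $${{\bf b}}_{\gamma}(T_p,T_q)={{\bf a}}_{\gamma_{0}}(F_p,F_q)\,{{\bf a}}_{\gamma_{1}}(\omega(F_p),\overline{\omega}(F_q))\,{{\bf a}}_{\gamma_{2}}(\omega^{2}(F_p),\overline{\omega}^{2}(F_q)),$$ so if the tripods $T_p$ and $T_q$ are well connected along $\gamma=(\gamma_0,\gamma_1,\gamma_2)$ then, forgetting $\gamma_2$, the corresponding bipods $B_p$ and $B_q$ are well connected along the pair $(\gamma_0,\gamma_1)$; this is the content of the forgetting map $\chi$ introduced below.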
One can show that if $\epsilon$ is small in terms of the injectivity radius of ${{\bf {M}}^{3} }$ then for two bipods $B_p$ and $B_q$ in ${{\mathcal {F}}}({{\bf {M}}^{3} })$ there exists at most one pair of segments $(\gamma_0,\gamma_1)$ such that $(B_p,B_q,\gamma_0,\gamma_1)$ is a well connected pair of bipods and that $\gamma_0 \cup \gamma_1$ is homotopic to $\delta$. However, we do not use this. Next, we define the action of the torus ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ on $S_{\delta}$ that leaves the measure $\lambda_{B}$ invariant. .1cm Let ${\mathbb {T}}_{\delta}$ be the open solid torus cover associated to $\delta$ (so $\delta$ lifts to a closed geodesic ${\widetilde}{\delta}$ in ${\mathbb {T}}_{\delta}$). Given a pair of well connected bipods in $S_{\delta}$, each bipod lifts in a unique way to a bipod in ${{\mathcal {F}}}({\mathbb {T}}_{\delta})$ such that the pair of the lifted bipods is well connected in ${\mathbb {T}}_{\delta}$. We denote by ${\widetilde}{S}_{\delta}$ the set of such lifts, so ${\widetilde}{S}_{\delta}$ is in one-to-one correspondence with $S_{\delta}$. There is a natural action of the torus ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ on both ${N^{1}}(\delta)$ ($={N^{1}}({\widetilde}{\delta})$) and on ${{\mathcal {F}}}({\mathbb {T}}_{\delta})$, and hence on ${\widetilde}{S}_{\delta}$ as well. Since ${\widetilde}{S}_{\delta}$ and $S_{\delta}$ are in one-to-one correspondence we have the induced action of ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ on $S_{\delta}$. This action leaves invariant the measure $\lambda_{B}$ on $S_{\delta}$. For either choice of ${{\bf {hl}}}(\delta)$ there is a natural action of ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ on ${N^{1}}(\sqrt{\delta})$ via ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+{{\bf {hl}}}(\delta){\mathbb {Z}})$. We define in Section 4.7 a map ${{\bf {f}}}_{\delta}:S_{\delta} \to {N^{1}}(\sqrt{\delta})$ with two important properties. The first one is that ${{\bf {f}}}_{\delta}$ is equivariant with respect to the action of ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$. The second property is as follows. Let $C_{\delta}$ be the set of well connected tripods $(T_p,T_q,\gamma)$ for which $\gamma_0 \cup \gamma_1$ is homotopic to $\delta$, and let $\chi:C_{\delta} \to S_{\delta}$ be the forgetting map, so $\chi(T_p,T_q,\gamma_0,\gamma_1,\gamma_2)=(B_p,B_q,\gamma_0,\gamma_1)$. Then for any pair of well connected tripods $T=(T_p,T_q,\gamma) \in C_{\delta}$ $$\label{f-foot} |{{\bf {f}}}_{\delta}(\chi (T) )-{\operatorname{foot}}_{\delta}(\pi(T))|<Ce^{-{{r}\over{4}}},$$ where $\pi(T)$ is the skew pants defined above (recall that the map ${\operatorname{foot}}_{\delta}(\Pi)$ that associates the foot to a pair of marked skew pants $(\Pi,\delta)$, $\delta \in \partial{\Pi}$, was defined in Section 3). In other words, the map ${{\bf {f}}}_{\delta}$ predicts feet of the skew pants $\pi(T)$ (just by knowing the pair of well connected bipods $\chi(T)$) up to an error of $Ce^{-{{r}\over{4}}}$. This $Ce^{-{{r}\over{4}}}$ comes from the property $(3)$ of the affinity function ${{\bf a}}$ defined above. .1cm There are two more natural measures on $S_{\delta}$. The first is $\chi_{*}({\widetilde}{\mu}|_{C_{\delta}})$. 
The second is $\nu_{\delta}$ defined on $S_{\delta}$ by $$d\nu_{\delta}(B_p,B_q,\gamma_0,\gamma_1)={{\bf a}}_{\gamma_{0}}(F_p,F_q){{\bf a}}_{\gamma_{1}}(\omega(F_p),\overline{\omega}(F_q))\, d\lambda_{B}(F_p,F_q,\gamma_0,\gamma_1),$$ where we recall that $\lambda_{B}(F_p,F_q,\gamma_0,\gamma_1)$ is the product of the Liouville measure on the first two terms and the counting measure on the last two. The two measures satisfy the fundamental inequality $$\label{**} \left| {{d \chi_{*}({\widetilde}{\mu}|_{C_{\delta}}) }\over{ d\nu_{\delta}(B_p,B_q,\gamma_0,\gamma_1)}}-{{1}\over{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))}} \right|<Ce^{-{\bf q} r},$$ because the total affinity between $\omega^{2}(F_p)$ and $\overline{\omega}^{2}(F_q)$ (summing over all positive connections $\gamma_2$) is exponentially close to ${{1}\over{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))}}$ by the inequality (\[\*\]) above. Moreover, since $\lambda_{B}$ and the product ${{\bf a}}_{\gamma_{0}}(F_p,F_q){{\bf a}}_{\gamma_{1}}(\omega(F_p),\overline{\omega}(F_q))$ are both invariant under the action of ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$, we see that $\nu_{\delta}$ is also invariant under the action of ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$. Therefore $({{\bf {f}}}_{\delta})_{*}\nu_{\delta}$ is as well, because ${{\bf {f}}}_{\delta}$ is ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ equivariant. It follows that $({{\bf {f}}}_{\delta})_{*}(\nu_{\delta})=K_{\delta}{{\operatorname {Eucl}}}_{\delta}$, for some constant $K_{\delta}$. Therefore, by (\[\*\*\]) $$\label{***} \left|{{ d({{\bf {f}}}_{\delta})_{*}({\widetilde}{\mu}|_{C_{\delta}} ) }\over{ d K'_{\delta}{{\operatorname {Eucl}}}_{\delta} }}-1 \right|<Ce^{-{\bf q}r},$$ where $K'_{\delta}=K_{\delta}/\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))$. .1cm This measure $({{\bf {f}}}_{\delta} )_{*} ({\widetilde}{\mu} |_{C_{\delta}} )$ is our desired measure $\beta_{\delta}$; it is $Ce^{-{{r}\over{4}}}$-equivalent to the measure ${\widehat{\partial}}\mu|_{{N^{1}}(\sqrt{\delta} ) }$ because the latter measure is just $({\operatorname{foot}}_{\delta} )_{*} \pi_{*}({\widetilde}{\mu}|_{C_{\delta}} )$, and as we already said $$|{{\bf {f}}}_{\delta}(\chi(T))-{\operatorname{foot}}_{\delta}(\pi(T))|<Ce^{-{{r}\over{4}}},$$ for every tripod $T$ in $C_{\delta}$. We define ${{\bf a}}(F_p, F_q)$ in Section 4.4, and we prove that the skew pants $\pi(T_p, T_q, \gamma) \in {{\bf {\Pi}}}_{D\epsilon,R}$ (for some universal constant $D>0$) when ${{\bf b}}_\gamma(T_p, T_q) > 0$ in Sections 4.5 and 4.6, using preliminaries developed in Sections 4.1 and 4.2. We define ${{\bf {f}}}_\delta$ and prove its properties in Section 4.7, using preliminaries developed in Section 4.3. Finally we prove the remaining estimates in Section 4.8. The Chain Lemma --------------- Let $T^{1}({ {\mathbb {H}}^{3}})$ denote the unit tangent bundle. Elements of $T^{1}({ {\mathbb {H}}^{3}})$ are pairs $(p,u)$, where $p \in { {\mathbb {H}}^{3}}$ and $u \in T^{1}_p({ {\mathbb {H}}^{3}})$. For $u,v \in T^{1}_p({ {\mathbb {H}}^{3}})$ we let ${{\bf {\Theta}}}(u,v)$ denote the unoriented angle between $u$ and $v$. The function ${{\bf {\Theta}}}$ takes values in the interval $[0,\pi]$. For $a,b \in { {\mathbb {H}}^{3}}$ we let $v(a,b) \in T^{1}_a({ {\mathbb {H}}^{3}})$ denote the unit vector at $a$ that points toward $b$.
If $v \in T^{1}_a({ {\mathbb {H}}^{3}})$ then $v {\mathbin{@}}b \in T^{1}_b({ {\mathbb {H}}^{3}})$ denotes the vector parallel transported to $b$ along the geodesic segment connecting $a$ and $b$. By $(a,b,c)$ we denote the hyperbolic triangle with vertices $a,b,c \in { {\mathbb {H}}^{3}}$. For two points $a,b \in { {\mathbb {H}}^{3}}$, we let $|ab|=d(a,b)$. \[prop-tra-1\] Let $a,b,c \in { {\mathbb {H}}^{3}}$ and $v \in T^{1}_a({ {\mathbb {H}}^{3}})$. Then the inequalities $${{\bf {\Theta}}}(v {\mathbin{@}}b {\mathbin{@}}c {\mathbin{@}}a,v)\le \text{Area}(abc) \le |bc|,$$ hold, where $\text{Area}(abc)$ denotes the hyperbolic area of the triangle $(a,b,c)$. It follows from the Gauss-Bonnet theorem that the inequality ${{\bf {\Theta}}}(v {\mathbin{@}}b {\mathbin{@}}c {\mathbin{@}}a,v)\le \text{Area}(abc)$ holds for every $v \in T^{1}_a({ {\mathbb {H}}^{3}})$. Moreover, if $v$ is in the plane of the triangle $(a,b,c)$, then the equality ${{\bf {\Theta}}}(v {\mathbin{@}}b {\mathbin{@}}c {\mathbin{@}}a,v)= \text{Area}(abc)$ holds. .1cm We now prove that in every hyperbolic triangle the length of a side is greater than the area of the triangle, that is we prove $|bc| \ge \text{Area}(abc)$. Consider the geodesic ray that starts at $b$ and that contains $a$, and let $a' \in \partial{{{\mathbb {H}}^{2}}}$ be the point where this ray hits the ideal boundary. Then the triangle $(a,b,c)$ is contained in the triangle $(a',b,c)$, so it suffices to show that $\text{Area}(a',b,c) \le |bc|$. Thus we may assume that the vertex $a$ is a point on $\partial{{{\mathbb {H}}^{2}}}$. Considering the standard model of the upper half plane ${{\mathbb {H}}^{2}}=\{z \in {\mathbb {C}}: {\operatorname{Im}}(z)>0 \}$, we can assume that $a=\infty$ and that the geodesic segment $(bc)$ lies on the unit circle $\{z \in {\mathbb {C}}:|z|=1 \}$. By the first part of the proposition we know that $\text{Area}(abc)$ is equal to $\alpha$, where $\alpha$ is the unoriented angle between the Euclidean lines $l_b$ and $l_c$, where $l_b$ contains $0$ and $b$ and $l_c$ contains $0$ and $c$ ($0 \in {\mathbb {C}}$ denotes the origin). Since $b$ and $c$ lie on the unit circle we have that $\alpha$ is also equal to the Euclidean length of the arc of the unit circle between $b$ and $c$. On the other hand, the hyperbolic length of this arc (which is the geodesic segment $(bc)$ between $b$ and $c$ in the hyperbolic metric) is strictly larger than $\alpha$ because the density of the hyperbolic metric is $y^{-1}|dz|$, which is greater than $1$ on the unit circle. We have $$\text{Area}(abc) \le \alpha \le |bc|,$$ which proves the proposition. The following claim will be used in the proof of Theorem \[thm-chain\] below. \[claim-1\] Let $a,b,c \in { {\mathbb {H}}^{3}}$. Then the inequality $${{\bf {\Theta}}}( v(c,a), v(b,a) {\mathbin{@}}c ) \le {{\bf {\Theta}}}( v(a,b),v(a,c) ) + \text{Area}(abc) \le {{\bf {\Theta}}}(v(a,b),v(a,c)) + |bc|$$ holds. 
We have $$\begin{aligned} {{\bf {\Theta}}}( v(c,a), v(b,a) {\mathbin{@}}c ) &= {{\bf {\Theta}}}( v(c,a) {\mathbin{@}}a, v(b,a) {\mathbin{@}}c {\mathbin{@}}a ) \\ &= {{\bf {\Theta}}}( -v(a,c), v(b,a) {\mathbin{@}}c {\mathbin{@}}a ) \\ &\le {{\bf {\Theta}}}( -v(a,c), -v(a,b))+ {{\bf {\Theta}}}(-v(a,b),v(b,a) {\mathbin{@}}c {\mathbin{@}}a) \\ &= {{\bf {\Theta}}}(v(a,c), v(a,b))+ {{\bf {\Theta}}}(-v(a,b) {\mathbin{@}}b, v(b,a) {\mathbin{@}}c {\mathbin{@}}a {\mathbin{@}}b) \\ &= {{\bf {\Theta}}}(v(a,c), v(a,b))+ {{\bf {\Theta}}}(v(b,a), v(b,a) {\mathbin{@}}c {\mathbin{@}}a {\mathbin{@}}b).\end{aligned}$$ By the previous proposition we have ${{\bf {\Theta}}}(v(b,a), v(b,a) {\mathbin{@}}c {\mathbin{@}}a {\mathbin{@}}b) \le \text{Area}(abc) \le |bc|$, and we are finished. The following two propositions are elementary and follow from the $\cosh$ rule for hyperbolic triangles. \[prop-tra-2\] Let $(a,b,c)$ be a hyperbolic triangle such that $|ab|=l_1$ and $|bc|=\eta$. Then for $l_1$ large and $\eta$ small enough, the inequality ${{\bf {\Theta}}}(v(a,b),v(a,c)) \le D \eta e^{-l_{1}}$ holds, for some constant $D>0$. \[prop-tra-3\] Let $(a,b,c)$ be a hyperbolic triangle and set $|ab|=l_1$, $|cb|=l_2$ and $|ac|=l$. Let $\eta=\pi-{{\bf {\Theta}}}(v(b,a),v(b,c))$. Then for $l_1$ and $l_2$ large we have 1. $|(l-(l_1+l_2))+\log 2 -\log(1+\cos\eta) | \le D e^{- 2\min\{l_{1},l_{2} \} }$, for any $0 \le \eta \le {{\pi}\over{2}}$, 2. $|l-(l_1+l_2)| \le D \eta$, for $\eta$ small, 3. ${{\bf {\Theta}}}(v(a,c),v(a,b)) \le D \eta e^{-l_{1}}$, for $\eta$ small, 4. ${{\bf {\Theta}}}(v(c,a),v(c,b)) \le D \eta e^{-l_{2}}$, for $\eta$ small, for some constant $D>0$. The following theorem (the “Chain Lemma”) allows us to estimate the geometry of a segment that is formed from a chain of long segments that nearly meet at their endpoints. It will be used in Section 4.2 to estimate the complex length of a closed geodesic formed from a closed chain of such segments. \[thm-chain\] Suppose $a_i,b_i \in { {\mathbb {H}}^{3}}$, $i=1,...,k$, and 1. $|a_i b_i| \ge Q$, 2. $|b_i a_{i+1}| \le \epsilon$, 3. ${{\bf {\Theta}}}(v(b_i,a_i) {\mathbin{@}}a_{i+1},-v(a_{i+1},b_{i+1})) \le \epsilon$. Suppose also that $n_i \in T^{1}_{a_{i}}({ {\mathbb {H}}^{3}})$ is a vector at $a_i$ normal to $v(a_i,b_i)$ and $${{\bf {\Theta}}}(n_i {\mathbin{@}}b_i,n_{i+1} {\mathbin{@}}b_i) \le \epsilon.$$ Then for $\epsilon$ small and $Q$ large, and for some constant $D>0$, $$\label{s-1} \left| |a_1 b_k|-\sum\limits_{i=1}^{k} |a_ib_i| \right| \le k D \epsilon,$$ $$\label{s-2} {{\bf {\Theta}}}(v(a_1,b_k),v(a_1,b_1))<kD\epsilon e^{-Q},\,\, \text{and} \,\,\, {{\bf {\Theta}}}(v(b_k,a_1),v(b_k,a_k))<kD\epsilon e^{-Q},$$ $$\label{s-2-1} {{\bf {\Theta}}}(v(a_1,a_k),v(a_1,b_1))<2kD\epsilon e^{-Q},\,\, \text{if} \,\, k>1 ,$$ $$\label{s-3} {{\bf {\Theta}}}(n_k,n_1 {\mathbin{@}}a_k) \le 5k \epsilon.$$ We can think of the sequence of geodesic segments from $a_i$ to $b_i$ as forming an “$\epsilon$-chain”, and we can think of the broken segment connecting $a_1, b_1, a_2, b_2, \ldots, a_k, b_k$ as the concatenation of the $\epsilon$-chain, and the geodesic segment from $a_1$ to $b_k$ (or $a_k$) as the geodesic representative of the concatenation. The Chain Lemma thus describes the relationship between the concatenation of an $\epsilon$-chain and its geodesic representative, and also estimates the discrepancy between parallel transport along the concatenation and transport along its geodesic representative. We argue by induction. Suppose that the statement is true for some $k \ge 1$.
We need to prove the above inequalities for $k+1$. .1cm We first prove the inequalities (\[s-2\]) and (\[s-2-1\]). By the triangle inequality we have $${{\bf {\Theta}}}(v(a_1,b_k),v(a_1,b_{k+1})) \le {{\bf {\Theta}}}(v(a_1,b_k),v(a_1,a_{k+1}))+{{\bf {\Theta}}}(v(a_1,a_{k+1}),v(a_1,b_{k+1})).$$ By Proposition \[prop-tra-2\] we have ${{\bf {\Theta}}}(v(a_1,b_k),v(a_1,a_{k+1}))\le D_1\epsilon e^{-Q}$, where $D_1$ is the constant from Proposition \[prop-tra-2\]. By (\[pom\]) and Proposition \[prop-tra-3\] we have ${{\bf {\Theta}}}(v(a_1,a_{k+1}),v(a_1,b_{k+1}))\le 2D_2\epsilon e^{-Q}$, where $D_2$ is from Proposition \[prop-tra-3\]. Together this shows $${{\bf {\Theta}}}(v(a_1,b_k),v(a_1,b_{k+1}))\le D\epsilon e^{-Q}.$$ Then by the triangle inequality and the induction hypothesis we have $$\begin{aligned} {{\bf {\Theta}}}(v(a_1,b_{k+1}),v(a_1,b_1 ) ) & \le {{\bf {\Theta}}}(v(a_1,b_k),v(a_1,b_1 ) ) + {{\bf {\Theta}}}(v(a_1,b_k),v(a_1,b_{k+1})) \notag \\ & \le kD\epsilon e^{-Q}+D\epsilon e^{-Q}= (k+1)D\epsilon e^{-Q} \notag ,\end{aligned}$$ which proves the first inequality in (\[s-2\]). The second one follows by symmetry. The inequality (\[s-2-1\]) follows from (\[s-2\]) and Proposition \[prop-tra-2\]. .1cm Next, we prove the inequality (\[s-1\]). By the triangle inequality we have $${{\bf {\Theta}}}(v(a_1,a_{k+1}),v(a_1,b_k)) \le {{\bf {\Theta}}}(v(a_1,a_{k+1}),v(a_1,b_1) ) + {{\bf {\Theta}}}(v(a_1,b_1),v(a_1,b_k)),$$ and then applying (\[s-2\]) and (\[s-2-1\]) we get $${{\bf {\Theta}}}(v(a_1,a_{k+1}),v(a_1,b_k)) \le 2(k+1)D \epsilon e^{-Q} +kD \epsilon e^{-Q}< \epsilon$$ for $Q$ large enough. Then by Claim \[claim-1\] we have $${{\bf {\Theta}}}(v(a_{k+1},a_1),v(b_k,a_1) {\mathbin{@}}a_{k+1}) \le {{\bf {\Theta}}}(v(a_1,a_{k+1}),v(a_1,b_k))+|b_k a_{k+1}| \le 2 \epsilon.$$ Combining this inequality with the assumption (3) of the theorem, and by the triangle inequality we obtain ${{\bf {\Theta}}}(v(a_{k+1},a_1),-v(a_{k+1},b_{k+1}) ) \le 3\epsilon$. Therefore the inequality $$\label{pom} \pi-{{\bf {\Theta}}}(v(a_{k+1},a_1),v(a_{k+1},b_{k+1}) ) \le 3\epsilon$$ holds (observe that the same inequality holds for all $1 \le i \le k$). .1cm It follows from Proposition \[prop-tra-3\] and (\[pom\]) that $\big| |a_1a_{k+1}|+|a_{k+1} b_{k+1}|- |a_1b_{k+1}| \big| \le 3D_1\epsilon$, where $D_1$ is the constant from Proposition \[prop-tra-3\]. Since by the triangle inequality $$\big| |a_1b_k|-|a_1a_{k+1}| \big|\le \epsilon,$$ we obtain $$\big| |a_1b_k|+|a_{k+1} b_{k+1}|- |a_1b_{k+1}| \big| \le D\epsilon.$$ This proves the induction step for the inequality (\[s-1\]). .1cm It remains to prove (\[s-3\]). 
Using the induction hypothesis and the assumptions in the statement of this theorem, we obtain the following string of inequalities $$\begin{aligned} {{\bf {\Theta}}}(n_{k+1} , n_1 {\mathbin{@}}a_{k+1} ) &= {{\bf {\Theta}}}(n_{k+1} {\mathbin{@}}b_k , n_1 {\mathbin{@}}a_{k+1} {\mathbin{@}}b_k ) \\ &\le {{\bf {\Theta}}}(n_{k+1} {\mathbin{@}}b_k , n_k {\mathbin{@}}b_k ) + {{\bf {\Theta}}}(n_k {\mathbin{@}}b_k , n_1 {\mathbin{@}}a_{k+1} {\mathbin{@}}b_k) \\ &\le \epsilon + {{\bf {\Theta}}}(n_k {\mathbin{@}}b_k , n_1 {\mathbin{@}}a_{k+1} {\mathbin{@}}b_k) \\ &\le \epsilon + {{\bf {\Theta}}}(n_k {\mathbin{@}}b_k , n_1 {\mathbin{@}}a_k {\mathbin{@}}b_k ) + {{\bf {\Theta}}}(n_1 {\mathbin{@}}a_k {\mathbin{@}}b_k, n_1 {\mathbin{@}}a_{k+1} {\mathbin{@}}b_k ) \\ &\le (5k + 1)\epsilon + {{\bf {\Theta}}}(n_1 {\mathbin{@}}a_k {\mathbin{@}}b_k , n_1 {\mathbin{@}}a_{k+1} {\mathbin{@}}b_k) \\ &\le (5k + 1)\epsilon + {{\bf {\Theta}}}(n_1 {\mathbin{@}}a_k {\mathbin{@}}b_k , n_1 {\mathbin{@}}b_k ) + {{\bf {\Theta}}}(n_1 {\mathbin{@}}b_k , n_1 {\mathbin{@}}a_{k+1} {\mathbin{@}}b_k). \end{aligned}$$ By (\[pom\]) we have $${{\bf {\Theta}}}(n_1 {\mathbin{@}}a_k {\mathbin{@}}b_k , n_1 {\mathbin{@}}b_k ) \le 3\epsilon,$$ and by Claim \[claim-1\] we have $${{\bf {\Theta}}}(n_1 {\mathbin{@}}b_k , n_1 {\mathbin{@}}a_{k+1} {\mathbin{@}}b_k) \le \epsilon.$$ Combining these estimates gives $${{\bf {\Theta}}}(n_{k+1} , n_1 {\mathbin{@}}a_{k+1} ) \le (5k + 5)\epsilon,$$ which proves the induction step for (\[s-3\]). Corollaries of the Chain Lemma ------------------------------ For $X \in {{\bf {PSL}}(2,{\mathbb {C}})}$ we write $X(z)={{az+b}\over{cz+d}}$, where $ad-bc=1$. The following proposition will provide the bridge between the Chain Lemma and Lemma \[lemma-chain\]. \[prop-M\] Let $p,q \in { {\mathbb {H}}^{3}}$ and $A \in {{\bf {PSL}}(2,{\mathbb {C}})}$ be such that $A(p)=q$. Suppose that for every $u \in T^{1}_{p}({ {\mathbb {H}}^{3}})$ we have ${{\bf {\Theta}}}(A(u),u {\mathbin{@}}q)\le \epsilon$. Then for $\epsilon$ small enough and $d(p,q)$ large enough, and for some constant $D>0$ we have 1. the transformation $A$ is loxodromic, 2. $|{{\bf l}}(A)-d(p,q)|\le D\epsilon$, 3. if $\text{axis}(A)$ denotes the axis of $A$ then $d(p,\text{axis}(A)),d(q,\text{axis}(A) ) \le D\epsilon$. We may assume that the points $p$ and $q$ lie on the geodesic that connects $0$ and $\infty$, such that $q$ is the point with coordinates $(0,0,1)$ in ${ {\mathbb {H}}^{3}}$, and $p$ is $(0,0,x)$ for some $0<x<1$. Let $B \in {{\bf {PSL}}(2,{\mathbb {C}})}$ be given by $B(z)=Kz$, where $\log K=d(p,q)$. Since $K$ is a positive number it follows that for every $u \in T^{1}_p({ {\mathbb {H}}^{3}})$ the identity $B(u)=u {\mathbin{@}}q$. .1cm Let $A=C \circ B$, where $C \in {{\bf {PSL}}(2,{\mathbb {C}})}$ fixes the point $(0,0,1)\in { {\mathbb {H}}^{3}}$. It follows that for every $u \in T^{1}_{(0,0,1)}({ {\mathbb {H}}^{3}})$ we have ${{\bf {\Theta}}}(u,C(u))\le \epsilon$. This implies that for some $a,b,c,d \in {\mathbb {C}}$, $ad-bc=1$, we have $$C(z)={{az+b}\over{cz+d}},$$ and $|a-1|,|b|,|c|,|d-1| \le D_1\epsilon$, for some constant $D_1>0$. Then $$A(z)={{a\sqrt{K}z+{{b}\over{\sqrt{K}}} }\over{\sqrt{K}cz+ {{d}\over{\sqrt{K}}} }},$$ and we find $${{\operatorname {tr}}}(A)=a\sqrt{K}+{{d}\over{\sqrt{K}}},$$ where ${{\operatorname {tr}}}(A)$ denotes the trace of $A$. 
Since $|a-1|, |d-1| \le D_1\epsilon$ we see that for $K$ large enough the real part of the trace ${{\operatorname {tr}}}(A)$ is a positive number $>2$, which shows that $A$ is loxodromic. On the other hand, ${{\operatorname {tr}}}(A)=2\cosh({{{{\bf l}}(A)}\over{2}})$. This shows that $|{{\bf l}}(A)-\log K| \le D_2\epsilon$, for some constant $D_2>0$. .1cm Let $z_1,z_2 \in \overline{{\mathbb {C}}}$ denote the fixed points of $A$. We find $$z_{1,2}={{ (a-{{d}\over{K}}) \pm \sqrt{ ( a-{{d}\over{K}} )^{2}+{{4bc}\over{K}} } }\over{2c }}.$$ Then for $K$ large enough we have $$|z_1| \le \epsilon ,\,\, |z_2| \ge {{3}\over{\epsilon}}.$$ This shows that $d(q,\text{axis}(A))=d((0,0,1),\text{axis}(A)) \le D_3 \epsilon$, for some constant $D_3>0$. The inequality $d(p,\text{axis}(A) ) \le D_3 \epsilon$ follows by symmetry. The following lemma is a corollary of Theorem \[thm-chain\] and the previous proposition. It provides an estimate for the complex length of the closed geodesic that is freely homotopic to the concatenation of a closed chain of geodesic segments. \[lemma-chain\] Let $a_i,b_i \in { {\mathbb {H}}^{3}}$, $i \in {\mathbb {Z}}$, be such that 1. $|a_i b_i| \ge Q$, 2. $|b_i a_{i+1}| \le \epsilon$, 3. ${{\bf {\Theta}}}(v(b_i,a_i) {\mathbin{@}}a_{i+1},-v(a_{i+1},b_{i+1})) \le \epsilon$. Suppose also that $n_i \in T^{1}_{a_{i}}({ {\mathbb {H}}^{3}})$ is a vector at $a_i$ normal to $v(a_i,b_i)$ and $${{\bf {\Theta}}}(n_i {\mathbin{@}}b_i,n_{i+1} {\mathbin{@}}b_i) \le \epsilon.$$ Suppose there exist $A \in {{\bf {PSL}}(2,{\mathbb {C}})}$ and $k>0$ such that $A(a_i)=a_{i+k}$, $A(b_i)=b_{i+k}$, and $A(n_i)=n_{i+k}$, $i \in {\mathbb {Z}}$. Then for $\epsilon$ small and $Q$ large, $A$ is a loxodromic transformation and $$\label{A-1} \left| {{\bf l}}(A)-\sum\limits_{i=0}^{k} |a_ib_i| \right| \le kD \epsilon,$$ for some constant $D>0$. Moreover, $a_i,b_i \in {{\mathcal {N}}}_{D k \epsilon}(\text{axis}(A))$, where ${{\mathcal {N}}}_{D k \epsilon}(\text{axis}(A)) \subset { {\mathbb {H}}^{3}}$ is the $D k \epsilon$ neighbourhood of $\text{axis}(A)$. We can think of taking $a_i,b_i \in { {\mathbb {H}}^{3}}/ A$ (or even in some hyperbolic $3$-manifold $N$), and $i \in {\mathbb {Z}}/ k{\mathbb {Z}}$. We must then describe the geodesic segments from $a_i$ to $b_i$ which we will use to determine $v(b_i,a_i)$ and $n_i {\mathbin{@}}b_i$, and so forth. (As long as the injectivity radius of $N$ is greater than $\epsilon$, there are unique choices of geodesic segments from $b_i$ to $a_{i+1}$ with length less than $\epsilon$.) We then think of this sequence of segments as a “closed $\epsilon$-chain” and $\text{axis}(A)/A$ as its geodesic representative. Let $v_0=v(a_0,b_0)$. Observe that $A(v_0)=v(a_{k},b_{k})$. First we show that the inequality ${{\bf {\Theta}}}(A(v_0),v_0 {\mathbin{@}}a_k) \le 4\epsilon$ holds for $Q$ large enough.
Recall that for $Q$ large enough the inequality (\[pom\]) holds (see the proof of Theorem \[thm-chain\]), that is, we have $$\pi-{{\bf {\Theta}}}(v(a_{k},a_0),v(a_{k},b_{k}) ) \le 3\epsilon.$$ Since ${{\bf {\Theta}}}(v(a_{k},a_0),-v(a_{k},b_{k}) )= {{\bf {\Theta}}}(v(a_0,a_k),v(a_{k},b_{k}) {\mathbin{@}}a_0 )$, we have $${{\bf {\Theta}}}(v(a_0,a_k),v(a_{k},b_{k}) {\mathbin{@}}a_0 ) \le 3\epsilon.$$ On the other hand, it follows from (\[s-2-1\]) that for $Q$ large enough we have $${{\bf {\Theta}}}(v(a_0,a_k),v(a_{0},b_{0}) ) \le \epsilon,$$ so by the triangle inequality we obtain $$\begin{aligned} {{\bf {\Theta}}}(v(a_0,b_0),v(a_{k},b_{k}) {\mathbin{@}}a_0 ) &\le {{\bf {\Theta}}}(v(a_0,a_k),v(a_{0},b_{0}))+ {{\bf {\Theta}}}(v(a_0,a_k),v(a_{k},b_{k}) {\mathbin{@}}a_0 ) \\ &\le 4\epsilon.\end{aligned}$$ Since $v(a_k,b_k) {\mathbin{@}}a_0 {\mathbin{@}}a_k=v(a_k,b_k)$ we find ${{\bf {\Theta}}}(v(a_0,b_0),v(a_{k},b_{k}) {\mathbin{@}}a_0 )={{\bf {\Theta}}}(v_0 {\mathbin{@}}a_k, A(v_0) )$, so we have proved the inequality ${{\bf {\Theta}}}(v_0 {\mathbin{@}}a_k, A(v_0) ) \le 4 \epsilon$. Next, from (\[s-3\]) we find ${{\bf {\Theta}}}(n_{k},n_0 {\mathbin{@}}a_k) \le 5k \epsilon$. Since $v_0$ is normal to $n_0$, and the parallel transport preserves angles, it follows that $$\label{esta-1} {{\bf {\Theta}}}(u {\mathbin{@}}a_k,A(u)) \le 4kD \epsilon,\,\,\,\text{for every vector}\,\, u \in T^{1}_{a_{0}}({ {\mathbb {H}}^{3}}).$$ On the other hand, the inequality $$\label{esta-2} \left| d(a_0, A(a_0)) - \sum\limits_{i=0}^{k} |a_ib_i| \right| \le kD\epsilon,$$ follows from (\[s-1\]). The lemma now follows from Proposition \[prop-M\]. Preliminary propositions ------------------------ In this subsection we will prove two results in hyperbolic geometry (Lemmas \[lemma-ph-1\] and \[lemma-ph-2\]) that we will use in Section \[section-4.7\]. The following proposition is elementary. \[elem-0\] Let $\alpha$ be a geodesic in ${ {\mathbb {H}}^{3}}$ and let $p_1,p_2 \in { {\mathbb {H}}^{3}}$ be two points such that $d(p_1,p_2) \le C$ and $d(p_i,\alpha)\ge s$, $i=1,2$, for some constants $C,s>0$. Let $\eta_i$ be the oriented geodesic that contains $p_i$ and is normal to $\alpha$, and that is oriented from $\alpha$ to $p_i$. Then there exists a constant $D>0$, that depends only on $C$, such that $|{{\bf d}}_{\alpha}(\eta_1,\eta_2)| \le De^{-s}$. .1cm Let $\alpha, \beta$ be two oriented geodesics in ${ {\mathbb {H}}^{3}}$ such that $d(\alpha,\beta)>0$ and let $\gamma$ be their common orthogonal that is oriented from $\alpha$ to $\beta$. We observe that $\alpha$ and $\beta$ are mapped to $-\alpha$ and $-\beta$ respectively, by a $180$ degree rotation around $\gamma$. Let $t \in {\mathbb {R} }$ and let $q_0:{\mathbb {R} }\to \beta$ be a parametrisation by arc length such that $q_0(0)=\beta \cap \gamma$. Let $\delta(t)$ be the geodesic that contains $q_0(t)$ and is orthogonal to $\alpha$, and is oriented from $\alpha$ to $q_0(t)$. The following proposition follows from the symmetry of $\alpha$ and $\beta$ around $\gamma$. Recall that the complex distance is well defined $\pmod {2\pi i}$, so we can always choose a complex distance such that its imaginary part is in the interval $(-\pi,\pi]$. \[elem-1\] Assume that $\alpha \ne \beta$. Then ${{\bf d}}(q_0(t_1),\alpha)={{\bf d}}(q_0(t_2),\alpha)$ if and only if $|t_2|=|t_1|$.
Moreover, if for some $t \in {\mathbb {R} }$ we can choose the complex distance ${{\bf d}}_{\alpha}(\delta(-t),\delta(t))$ such that $$-\pi < {\operatorname{Im}}\big( {{\bf d}}_{\alpha}(\delta(-t),\delta(t)) \big)< \pi,$$ then $${{\bf d}}_{\alpha}(\delta(-t),\gamma)={{1}\over{2}}{{\bf d}}_{\alpha}(\delta(-t),\delta(t)).$$ .1cm Observe that if $\alpha$ and $\beta$ do not intersect we can always choose the complex distance ${{\bf d}}_{\alpha}(\delta(-t),\delta(t))$ such that $$-\pi < {\operatorname{Im}}\big( {{\bf d}}_{\alpha}(\delta(-t),\delta(t)) \big)< \pi.$$ .1cm Assuming the above notation we have the following. Let $s(t)=d(q_0(t),\alpha)$. Suppose that $d(\alpha,\beta) \le 1$. Then for $s(t)$ large enough we have $$\begin{aligned} &s(t+h)=s(t)+h+o(1), \,\, \text{as} \,\, t \to \infty \\ &s(t+h)=s(t)-h+o(1), \,\, \text{as}\,\, t \to -\infty \\\end{aligned}$$ for any $|h|\le {{s(t)}\over{2}}$. By the triangle inequality we have $$\begin{aligned} s(t) &= d(q_0(t),\alpha) \\ &\le d(q_0(t), q_0(0)) + d(q_0(0),\alpha) \\ &\le |t| + 1,\end{aligned}$$ since $ d(q_0(0),\alpha)=d(\alpha,\beta) \le 1$. That is, we have $$\label{in-1} s(t) \le |t|+1.$$ It follows from (\[in-1\]) that $s(t)$ large implies that $|t|$ is large. Recall the following formula (\[epstein\]) from Section 1. $$\sinh^{2}(d(q_0(t),\alpha))=\sinh^{2}(d(\alpha,\beta) ) \cosh^{2}(t)+ \sin^{2}({\operatorname{Im}}[{{\bf d}}_{\gamma}(\alpha,\beta)]) \sinh^{2}(t).$$ Combining this with (\[in-1\]) we get $$e^{2s(t)}=e^{2|t|}\big(\sinh^{2}(d(\alpha,\beta) )+ \sin^{2}({\operatorname{Im}}[{{\bf d}}_{\gamma}(\alpha,\beta)]) \big )+O(1),$$ which proves the proposition. .1cm We can define the foot of the geodesic $\beta$ on $\alpha$ as the normal to $\alpha$ pointing along $\gamma$. The lemma below estimates how the foot of $\beta$ on $\alpha$ moves when $\beta$ is moved (and $\beta$ is very close to $\alpha$). Let $\epsilon \in {\mathbb {D}}$ be a complex number and let $r>0$. Assume that $$\label{assumption} {{\bf d}}_{\gamma}(\alpha,\beta)=e^{-{{r}\over{2}}+\epsilon}.$$ Then there exists $\epsilon_0>0$ such that for every $|\epsilon|<\epsilon_0$, for every $r>1$, and for every $t \in {\mathbb {R} }$ we can choose the complex distance ${{\bf d}}_{\alpha}(\delta(-t),\delta(t))$ such that $$\label{d-1} -{{\pi}\over{4}} < {\operatorname{Im}}{{\bf d}}_{\alpha}(\delta(-t),\delta(t)) < {{\pi}\over{4}}.$$ .1cm Let $\beta_1$ be another geodesic with a parametrisation by arc length $q_1:{\mathbb {R} }\to \beta_1$. We let $\gamma_1$ denote the common orthogonal between $\alpha$ and $\beta_1$, that is oriented from $\alpha$ to $\beta_1$. We have \[lemma-ph-1\] Assume that $\alpha$ and $\beta$ satisfy (\[assumption\]). Let $C>0$ and suppose that for some $t_1,t'_1,t_2,t'_2 \in {\mathbb {R} }$, where $t_1<0<t_2$, we have 1. $d(q_1(t'_1),q_0(t_1)),d(q_1(t'_2),q_0(t_2)) \le C$, 2. $|d(q_0(t_1),\alpha)-d(q_0(t_2),\alpha)| \le C$, 3. $d(q_0(t_1),\alpha)>{{r}\over{4}}-C$. Then for $|\epsilon|<\epsilon_0$ and for $r$ large, we have $${{\bf d}}_{\alpha}(\gamma,\gamma_1) \le D e^{-{{r}\over{4}}},$$ for some constant $D>0$, where $D$ only depends on $C$. The constants $D_i$ defined below all depend only on $C$. From (\[assumption\]) we have $d(\alpha,\beta)<e^{-{{r}\over{2}}+1}$. Since $$d(q_1(t'_1),q_0(t_1)),d(q_1(t'_2),q_0(t_2)) \le C,$$ it follows that for $r$ large we have $d(\alpha,\beta_1)=o(1)$, and in particular we have $d(\alpha,\beta_1)<1$. By the triangle inequality we obtain $|d(q_1(t'_1),\alpha)-d(q_1(t'_2),\alpha)| \le D_1$.
Then it follows from the previous proposition that the inequalities $|t_2+t_1|,|t'_2+t'_1| \le D_2$ hold. This implies that $d(q_0(-t_1),q_1(-t'_1)) \le D_3$. .1cm Let $\delta_1(t)$ be the geodesic that contains $q_1(t)$ and is orthogonal to $\alpha$, and is oriented from $\alpha$ to $q_1(t)$. Now we apply Proposition \[elem-0\] and find that $$|{{\bf d}}_{\alpha}(\delta(-t_1),\delta_1(-t'_1))| \le D_4e^{-{{r}\over{4}}}.$$ Similarly $$|{{\bf d}}_{\alpha}(\delta(t_1),\delta_1(t'_1))| \le D_4e^{-{{r}\over{4}}}.$$ It follows from (\[d-1\]) and the above two inequalities that for $r$ large we can choose the complex distance ${{\bf d}}_{\alpha}(\delta_1(-t'_1),\delta_1(t'_1))$ such that $$-{{\pi}\over{3}} < {\operatorname{Im}}{{\bf d}}_{\alpha}(\delta_1(-t'_1),\delta_1(t'_1)) < {{\pi}\over{3}}.$$ In particular, we can choose the complex distances ${{\bf d}}_{\alpha}(\delta(-t_1),\delta(t_1))$ and ${{\bf d}}_{\alpha}(\delta_1(-t'_1),\delta_1(t'_1))$ such that the corresponding imaginary parts belong to the interval $(-\pi,\pi)$, and such that $$\big| {{\bf d}}_{\alpha}(\delta(-t_1),\delta(t_1)) - {{\bf d}}_{\alpha}(\delta_1(-t'_1),\delta_1(t'_1)) \big| \le 2D_4e^{-{{r}\over{4}}}.$$ The proof now follows from Proposition \[elem-1\] and the triangle inequality. \[lemma-ph-2\] Let $A \in {{\bf {PSL}}(2,{\mathbb {C}})}$ be a loxodromic transformation with the axis $\gamma$. Let $p,q \in (\partial{{ {\mathbb {H}}^{3}}} \setminus \text{endpoints}(A))$, and denote by $\alpha_1$ the oriented geodesic from $p$ to $q$, and by $\alpha_2$ the oriented geodesic from $q$ to $A(p)$. We let $\delta_j$ be the common orthogonal between $\gamma$ and $\alpha_j$, oriented from $\gamma$ to $\alpha_j$. Then $${{\bf d}}_{\gamma}(\delta_1,\delta_2)=(-1)^{j} {{{{\bf l}}(A)}\over{2}}+k\pi i,$$ for some $k \in \{0,1\}$ and some $j \in \{1,2\}$. Alternatively, we can think of $p$ and $q$ as points on the ideal boundary of ${ {\mathbb {H}}^{3}}/A$, and $\alpha_1$ and $\alpha_2$ as two geodesics from $p$ to $q$, such that $\alpha_1 \cdot (\alpha_2)^{-1}$ is freely homotopic to the core curve of the solid torus ${ {\mathbb {H}}^{3}}/ A$. Let $\alpha_3$ be the oriented geodesic from $A(p)$ to $A(q)$, and let $\delta_3$ be the common orthogonal between $\gamma$ and $\alpha_3$ (oriented from $\gamma$ to $\alpha_3$). Consider the right-angled hexagon $H_1$ with the sides $L_0=\gamma$, $L_1=\delta_1$, $L_2=\alpha_1$, $L_3=q$, $L_4=\alpha_2$ and $L_5=\delta_2$. Let $H_2$ be the right-angled hexagon with the sides $L'_0=\gamma$, $L'_1=\delta_3$, $L'_2=\alpha_3$, $L'_3=A(p)$, $L'_4=\alpha_2$ and $L'_5=\delta_2$. Note that $H_1$ is a degenerate hexagon since the common orthogonal between $\alpha_1$ and $\alpha_2$ has shrunk to a point on $\partial{{ {\mathbb {H}}^{3}}}$. The same holds for $H_2$. We note that the $\cosh$ formula is valid in degenerate right-angled hexagons and every such hexagon is uniquely determined by the complex lengths of its three alternating sides. .1cm Denote by $\sigma_k$ and $\sigma'_k$ the complex lengths of the sides $L_k$ and $L'_k$ respectively. By changing the orientations of the sides $L_{k}$ and $L'_k$ if necessary we can arrange that $\sigma_1=\sigma'_1$, $\sigma_5=\sigma'_5$ and $\sigma_3=\sigma'_3=0$ (see Section 2.2 in [@series]). This shows that the hexagons $H_1$ and $H_2$ are isometric modulo the orientations of the sides, and this implies the equality $\sigma_0=\sigma'_0$. 
On the other hand, changing orientations of the sides can change the complex length of a side by changing its sign and/or adding $\pi i$. This proves the lemma. The two-frame bundle and the well connected frames -------------------------------------------------- Let ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ denote the two-frame bundle over ${ {\mathbb {H}}^{3}}$. Elements of ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ are frames $F=(p,u,n)$, where $p \in { {\mathbb {H}}^{3}}$ and $u,n \in {T^{1}}_p({ {\mathbb {H}}^{3}})$ are two orthogonal vectors at $p$ (here ${T^{1}}({ {\mathbb {H}}^{3}})$ denotes the unit tangent bundle). The group ${{\bf {PSL}}(2,{\mathbb {C}})}$ acts naturally on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$. For $(p_i,u_i,n_i)$, $i=1,2$, we define the distance function ${{\mathcal {D}}}$ on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ by $${{\mathcal {D}}}((p_1,u_1,n_1),(p_2,u_2,n_2))=d(p_1,p_2)+{{\bf {\Theta}}}(u'_1,u_2)+{{\bf {\Theta}}}(n'_1,n_2),$$ where $u'_1, n'_1 \in {T^{1}}_{p_{2}}({ {\mathbb {H}}^{3}})$ are the parallel transports of $u_1$ and $n_1$ along the geodesic that connects $p_1$ and $p_2$. One can check that ${{\mathcal {D}}}$ is invariant under the action of ${{\bf {PSL}}(2,{\mathbb {C}})}$ (we do not claim that ${{\mathcal {D}}}$ is a metric on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$). By ${{\mathcal {N}}}_{\epsilon}(F) \subset {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ we denote the $\epsilon$ ball around a frame $F \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$. .1cm Recall the standard geodesic flow ${{\bf {g}}}_r:{T^{1}}({ {\mathbb {H}}^{3}}) \to {T^{1}}({ {\mathbb {H}}^{3}})$, $r \in {\mathbb {R} }$. The flow action extends naturally on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, that is, the map ${{\bf {g}}}_r:{{\mathcal {F}}}({ {\mathbb {H}}^{3}}) \to {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ is given by ${{\bf {g}}}_r(p,u,n)=(p_1,u_1,n_1)$, where $(p_1,u_1)={{\bf {g}}}_r(p,u)$ and $n_1$ is the parallel transport of the vector $n$ along the geodesic that connects $p$ and $p_1$. The flow ${{\bf {g}}}_r$ on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ is called the frame flow. The space ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ is equipped with the Liouville measure $\Lambda$ which is invariant under the frame flow, and under the ${{\bf {PSL}}(2,{\mathbb {C}})}$ action. Locally on ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, the measure $\Lambda$ is the product of the standard Liouville measure for the geodesic flow and the Lebesgue measure on the unit circle. .1cm Recall that ${{\bf {M}}^{3} }={ {\mathbb {H}}^{3}}/ {{\mathcal {G}}}$ denotes a closed hyperbolic three manifold, and ${{\mathcal {G}}}$ from now on denotes the corresponding Kleinian group. We identify the frame bundle ${{\mathcal {F}}}({{\bf {M}}^{3} })$ with the quotient ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})/ {{\mathcal {G}}}$. The frame flow acts on ${{\mathcal {F}}}({{\bf {M}}^{3} })$ by the projection. .1cm It is well known [@brin-gromov] that the frame flow is mixing on closed 3-manifolds of variable negative curvature. In the case of constant negative curvature the frame flow is known to be exponentially mixing. This was proved by Moore in [@moore] using representation theory (see also [@pollicott]). The proof of the following theorem follows from the spectral gap theorem for the Laplacian on the closed hyperbolic manifold ${{\bf {M}}^{3} }$ and Proposition 3.6 in [@moore] (we thank Livio Flaminio and Mark Pollicott for explaining this to us).
\[exp-mixing\] There exists a ${\bf q}>0$ that depends only on ${{\bf {M}}^{3} }$ such that the following holds. Let $\psi,\phi:{{\mathcal {F}}}({{\bf {M}}^{3} }) \to {\mathbb {R} }$ be two $C^{1}$ functions. Then for every $r \in {\mathbb {R} }$ the inequality $$\left| \Lambda({{\mathcal {F}}}({{\bf {M}}^{3} })) \int\limits_{{{\mathcal {F}}}({{\bf {M}}^{3} })} ({{\bf {g}}}^{*}_r \psi)(x) \phi(x) \, d\Lambda(x)- \int\limits_{{{\mathcal {F}}}({{\bf {M}}^{3} })} \psi(x)\, d\Lambda(x) \int\limits_{{{\mathcal {F}}}({{\bf {M}}^{3} })} \phi(x) \, d\Lambda(x)\right| \le Ce^{- {\bf q} |r| },$$ holds, where $C>0$ only depends on the $C^{1}$ norm of $\psi$ and $\phi$. In fact, one can replace the $C^{1}$ norm in the above theorem by the (weaker) Hölder norm (see [@moore]). For two functions $\psi,\phi:{{\mathcal {F}}}({{\bf {M}}^{3} }) \to {\mathbb {R} }$ we set $$(\psi,\phi)=\int\limits_{{{\mathcal {F}}}({{\bf {M}}^{3} })} \psi(x) \phi(x) \, d\Lambda(x).$$ .1cm From now on $r>>0$ denotes a large positive number that stands for the flow time of the frame flow. Also let $\epsilon>0$ denote a positive number that is smaller than the injectivity radius of ${{\bf {M}}^{3} }$. Then the projection map ${{\mathcal {F}}}({ {\mathbb {H}}^{3}}) \to {{\mathcal {F}}}({{\bf {M}}^{3} })$ is injective on every $\epsilon$ ball ${{\mathcal {N}}}_{\epsilon}(F) \subset {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$. .1cm Fix $F_0 \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ and let ${{\mathcal {N}}}_{\epsilon}(F_0) \subset {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ denote the $\epsilon$ ball around the frame $F_0$. Choose a $C^{1}$-function $f_{\epsilon}(F_0):{{\mathcal {F}}}({ {\mathbb {H}}^{3}}) \to {\mathbb {R} }$, that is positive on ${{\mathcal {N}}}_{\epsilon}(F_0)$, supported on ${{\mathcal {N}}}_{\epsilon}(F_0)$, and such that $$\label{nor} \int\limits_{{{\mathcal {F}}}({ {\mathbb {H}}^{3}})}f_{\epsilon}(F_0)(X) \, d\Lambda(X)=1.$$ For every $F \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ we define $f_{\epsilon}(F)$ by pulling back $f_{\epsilon}(F_0)$ by the corresponding element of ${{\bf {PSL}}(2,{\mathbb {C}})}$. For $F \in {{\mathcal {F}}}({{\bf {M}}^{3} })$ the function $f_{\epsilon}(F):{{\mathcal {F}}}({{\bf {M}}^{3} }) \to {\mathbb {R} }$ is defined accordingly (it is well defined since every ball ${{\mathcal {N}}}_{\epsilon}(F) \subset {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ embeds in ${{\mathcal {F}}}({{\bf {M}}^{3} })$). Moreover the equality (\[nor\]) holds for every $f_{\epsilon}(F)$. .1cm The following definition tells us when two frames in ${{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ are well connected. \[def-well-0\] Let $F_j=(p_j,u_j,n_j) \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, $j=1,2$, be two frames, and set ${{\bf {g}}}_{ {{r}\over{4}} } (p_j,u_j,n_j)=({\widehat}{p}_j,{\widehat}{u}_j,{\widehat}{n}_j)$. Define $${{\bf a}}_{{ {\mathbb {H}}^{3}}}(F_1,F_2)= \big( {{\bf {g}}}^{*}_{ {{r}\over{2}}}f_{\epsilon}( {\widehat}{p}_1,{\widehat}{u}_1,{\widehat}{n}_1 ), f_{\epsilon}({\widehat}{p}_2,-{\widehat}{u}_2,{\widehat}{n}_2) \big) .$$ We say that the frames $F_1$ and $F_2$ are $(\epsilon,r)$ well connected (or just well connected if $\epsilon$ and $r$ are understood) if ${{\bf a}}_{{ {\mathbb {H}}^{3}}}(F_1,F_2) >0$. The preliminary flow by time ${{r}\over{4}}$ to get $({\widehat}{p}_j,{\widehat}{u}_j,{\widehat}{n}_j)$ is used to get the estimates needed for Proposition \[prop-tripo-1\] and Proposition \[prop-dis\]. 
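Heuristically (a consistency check only, not used in the proofs): if ${{\bf a}}_{{ {\mathbb {H}}^{3}}}(F_1,F_2)>0$ then, up to errors of size $O(\epsilon)$ coming from the supports of the bump functions $f_{\epsilon}$, the points $p_1$ and $p_2$ lie on a common geodesic segment built from three consecutive arcs of lengths approximately ${{r}\over{4}}$, ${{r}\over{2}}$ and ${{r}\over{4}}$, so that $$d(p_1,p_2)={{r}\over{4}}+{{r}\over{2}}+{{r}\over{4}}+O(\epsilon)=r+O(\epsilon),$$ in agreement with property $(3)$ of the affinity function constructed in the outline.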
\[def-well\] Let $F_j=(p_j,u_j,n_j) \in {{\mathcal {F}}}({{\bf {M}}^{3} })$, $j=1,2$, be two frames and let $\gamma$ be a geodesic segment in ${{\bf {M}}^{3} }$ that connects $p_1$ and $p_2$. Let ${\widetilde}{p}_1 \in { {\mathbb {H}}^{3}}$ be a lift of $p_1$, and let ${\widetilde}{p}_2$ denote the lift of $p_2$ along $\gamma$. By ${\widetilde}{F}_j=({\widetilde}{p}_j,{\widetilde}{u}_j,{\widetilde}{n}_j) \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ we denote the corresponding lifts. Set ${{\bf a}}_{\gamma}(F_1,F_2)={{\bf a}}_{{ {\mathbb {H}}^{3}}}({\widetilde}{F}_1,{\widetilde}{F}_2)$. We say that the frames $F_1$ and $F_2$ are $(\epsilon,r)$ well connected (or just well connected if $\epsilon$ and $r$ are understood) along the segment $\gamma$, if ${{\bf a}}_{\gamma}(F_1,F_2) >0$. The function ${{\bf a}}_{\gamma}(F_1,F_2)$ is the affinity function from the outline above. .1cm Let $F_j=(p_j,u_j,n_j) \in {{\mathcal {F}}}({{\bf {M}}^{3} })$, $j=1,2$, and let ${{\bf {g}}}_{ {{r}\over{4}} } (p_j,u_j,n_j)=(p'_j,u'_j,n'_j)$. Define $${{\bf a}}(F_1,F_2)= \big( {{\bf {g}}}^{*}_{ -{{r}\over{2}}} f_{\epsilon}( p'_1,u'_1,n'_1), f_{\epsilon}(p'_2,-u'_2,n'_2 ) \big).$$ Then $${{\bf a}}(F_1,F_2)=\sum_{\gamma} {{\bf a}}_{\gamma}(F_1,F_2),$$ where $\gamma$ varies over all geodesic segments in ${{\bf {M}}^{3} }$ that connect $p_1$ and $p_2$ (only finitely many numbers ${{\bf a}}_{\gamma}(F_1,F_2)$ are non-zero). One can think of ${{\bf a}}(F_1,F_2)$ as the total probability that the frames $F_1$ and $F_2$ are well connected, and ${{\bf a}}_{\gamma}(F_1,F_2)$ represents the probability that they are well connected along the segment $\gamma$. The following lemma follows from Theorem \[exp-mixing\]. \[mixing\] Fix $\epsilon>0$. Then for $r$ large and any $F_1,F_2 \in {{\mathcal {F}}}({{\bf {M}}^{3} })$ we have $${{\bf a}}(F_1,F_2)= {{1}\over{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))}} (1+O(e^{-{\bf q} {{r}\over{2}} })),$$ where ${\bf q}>0$ is a constant that depends only on the manifold ${{\bf {M}}^{3} }$. The geometry of well connected bipods ------------------------------------- There is a natural order-three homeomorphism $\omega:{{\mathcal {F}}}({ {\mathbb {H}}^{3}}) \to {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$ given by $\omega(p,u,n)=(p,\omega(u),n)$, where $\omega(u)$ is the vector in ${T^{1}}_p({ {\mathbb {H}}^{3}})$ that is orthogonal to $n$ and such that the oriented angle (measured anticlockwise) between $u$ and $\omega(u)$ is ${{2\pi}\over{3}}$ (the plane containing the vectors $u$ and $\omega(u)$ is oriented by the normal vector $n$). An equivalent way of defining $\omega$ is by the right-hand rule. The homeomorphism $\omega$ commutes with the ${{\bf {PSL}}(2,{\mathbb {C}})}$ action and it is well defined on ${{\mathcal {F}}}({{\bf {M}}^{3} })$ by the projection. The distance function ${{\mathcal {D}}}$ on ${{\mathcal {F}}}({{\bf {M}}^{3} })$ is invariant under $\omega$. To every $F_p=(p,u,n) \in {{\mathcal {F}}}({{\bf {M}}^{3} })$ we associate the bipod $B_p=(F_p,\omega(F_p))$ and the anti-bipod $\overline{B}_p=(F_p,\overline{\omega}(F_p))$ (we recall that $\overline{\omega}=\omega^{-1}$). We have the following definition. \[def-bipo\] Given two frames $F_p=(p,u,n) \in {{\mathcal {F}}}({{\bf {M}}^{3} })$, and $F_q=(q,v,m) \in {{\mathcal {F}}}({{\bf {M}}^{3} })$, let $B_p$ and $B_q$ denote the corresponding bipods. Let $\gamma=(\gamma_0,\gamma_1)$ be a pair of geodesic segments in ${{\bf {M}}^{3} }$, each connecting the points $p$ and $q$.
We say that the bipods $B_p$ and $B_q$ are $(\epsilon,r)$ well connected along the pair of segments $\gamma$, if the pairs of frames $F_p$ and $F_q$, and $\omega(F_p)$ and $\overline{\omega}(F_q)$, are $(\epsilon,r)$ well connected along the segments $\gamma_0$ and $\gamma_1$ respectively. \[lemma-bipo\] Let $F_p=(p,u,n)$ and $F_q=(q,v,m)$ be two frames in ${{\bf {M}}^{3} }$. Suppose that the corresponding bipods $B_p$ and $B_q$ are $(\epsilon,r)$-well connected along a pair of geodesic segments $\gamma_0$ and $\gamma_1$ that connect $p$ and $q$ in ${{\bf {M}}^{3} }$, that is we assume ${{\bf a}}_{\gamma_{0}}(F_p,F_q)>0$ and likewise ${{\bf a}}_{\gamma_{1}}(\omega(F_p),\overline{\omega}(F_q) )>0$. Then for $r$ large, the closed curve $\gamma_0 \cup \gamma_1$ is homotopic to a closed geodesic $\delta \in \Gamma$, and the following inequality holds $$\big| {{\bf l}}(\delta)- 2r + 2\log {{4}\over{3}} \big| \le D\epsilon,$$ for some constant $D>0$. Moreover, $$d(p,\delta), \, d(q,\delta) \le \log \sqrt{3} +D\epsilon.$$ We define, for $i=0,1$, $F_{{\widehat}{p}_{i}} =({\widehat}{p}_i,{\widehat}{u}_i,{\widehat}{n}_i)={{\bf {g}}}_{\frac{r}{4}}(\omega^{i}(F_p))$. Likewise, we let $F_{{\widehat}{q}_{i}} =({\widehat}{q}_i,{\widehat}{v}_i,{\widehat}{m}_i)={{\bf {g}}}_{\frac{r}{4}}(\overline{\omega}^{i}(F_q))$. Because $\omega^{i}(F_p)$ and $\overline{\omega}^{i}(F_q)$ are well connected, we can find $F_{p'_{i}} \in {{\mathcal {N}}}_{\epsilon}(F_{{\widehat}{p}_{i}})$, and $F_{q'_{i}} \in {{\mathcal {N}}}_{\epsilon}(F_{{\widehat}{q}_{i}})$ such that ${{\bf {g}}}_{\frac{r}{2}}(p'_i,u'_i,n'_i)=(q'_i,-v'_i,m'_i)$. Moreover, there is a homotopy condition that is satisfied, namely that the concatenation of the $\epsilon$-chain $$\left( {{\bf {g}}}_{[0,\frac{r}{4}]}(p_i,u_i), {{\bf {g}}}_{[0,\frac{r}{2}]}(p'_i,u'_i),{{\bf {g}}}_{[0,\frac{r}{4}]}({\widehat}{q}_i,-{\widehat}{v}_i) \right)$$ is homotopic rel endpoints to $\gamma_i$. We let $\eta_p$ be the geodesic segment from ${\widehat}{p}_0$ to ${\widehat}{p}_1$ that is homotopic rel endpoints to $${{\bf {g}}}_{[0,\frac{r}{4}]}({\widehat}{p}_0,-{\widehat}{u}_0) \cdot {{\bf {g}}}_{[0,\frac{r}{4}]}(p,\omega(u)).$$ Then ${\widehat}{n}_0$ and ${\widehat}{n}_1$ are parallel along $\eta_p$ (because they are orthogonal to the plane of the immersed triangle we have formed), and the angle between $\eta_p$ and $-{\widehat}{u}_i$ (at ${\widehat}{p}_i$) is less than $De^{-\frac{r}{4}}$. Moreover, $$\left| {{\bf l}}(\eta_p)-\frac{r}{2}+\log \frac{4}{3} \right| \le De^{-\frac{r}{4}}.$$ We likewise define $\eta_q$ and make the same observation. We refer the reader to Figure \[fig:bipods\] for an illustration of our construction. The segments $\eta_p$, ${{\bf {g}}}_{[0,\frac{r}{2}]}(p'_1,u'_1)$, $\eta^{-1}_q$, ${{\bf {g}}}_{[0,\frac{r}{2}]}(q'_0,v'_0)$, form a closed $\epsilon$-chain, and we are therefore in a position to apply Lemma \[lemma-chain\]. We take $$(a_0,b_0,a_1,b_1,a_2,b_2,a_3,b_3)=({\widehat}{p}_0,{\widehat}{p}_1, p'_1,q'_1,{\widehat}{q}_1, {\widehat}{q}_0,q'_0,p'_0)$$ (and connect $a_i$ to $b_i$ by the aforementioned segments), and we let $(n_0,n_1,n_2,n_3)=({\widehat}{n}_0,n'_1,m'_1,m'_0)$. We can easily verify that the hypotheses of Lemma \[lemma-chain\] are satisfied, and we conclude that $\gamma_0 \cup \gamma_1$ is freely homotopic to a closed geodesic $\delta$, and the following inequalities $$\big| {{\bf l}}(\delta)- 2r + 2\log {{4}\over{3}} \big| \le D\epsilon,$$ and $$d({\widehat}{p}_i,\delta), \, d({\widehat}{q}_i,\delta) \le D\epsilon$$ hold.
It follows that the projection of $p$ onto $\eta_p$ is exponentially close to $\delta$, and therefore $$d(p,\delta), \, d(q,\delta) \le \log \sqrt{3} +D\epsilon.$$ The geometry of well connected tripods -------------------------------------- Let $P,P_1,P_2 \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$. We call $P$ the reference frame and $P_1,P_2$ the moving frames. Let $F_1 \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, and let $r$ be large. Then the frame $F_2=L(F_1,P_1,P_2,r)$ is defined as follows: .1cm Let ${\widetilde}{F}_1={{\bf {g}}}_{ {{r}\over{4}} }(F_1)$. Let ${\widehat}{F}_1 \in {{\mathcal {N}}}_{\epsilon}({\widetilde}{F}_1)$ denote the frame such that for some $M_1 \in {{\bf {PSL}}(2,{\mathbb {C}})}$ we have $M_1(P)={\widetilde}{F}_1$ and $M_1(P_1)={\widehat}{F}_1$. Set ${{\bf {g}}}_{ {{r}\over{2}} }({\widehat}{F}_1)=({\widehat}{q},-{\widehat}{v},{\widehat}{m})$, and ${\widehat}{F}_2=({\widehat}{q},{\widehat}{v},{\widehat}{m})$. Let ${\widetilde}{F}_2$ denote the frame such that for some $M_2 \in {{\bf {PSL}}(2,{\mathbb {C}})}$ we have $M_2(P)={\widetilde}{F}_2$ and $M_2(P_2)={\widehat}{F}_2$. Set ${{\bf {g}}}_{ -{{r}\over{4}} }({\widetilde}{F}_2)=F_2=(q,v,m)$. Observe that the frame $F_2$ only depends on $F_1$, $P_1$, $P_2$, and $r$. .1cm Recall from Section 3 that $\Pi^{0}$ denotes an oriented topological pair of pants equipped with a homeomorphism $\omega_{0}:\Pi^{0} \to \Pi^{0}$, of order three that permutes the cuffs. By $\omega^{i}_0(C)$, $i=0,1,2$, we denote the oriented cuffs of $\Pi^0$. For each $i=0,1,2$, we choose $\omega^{i}_0(c) \in \pi_1(\Pi^0)$ to be an element in the conjugacy class that corresponds to the cuff $\omega^i_0(C)$, such that $\omega^{0}_0(c) \omega^{1}_0(c) \omega^{2}_0(c)={{\operatorname {id}}}$. .1cm Fix a frame $P \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, and fix six frames $P^{j}_i \in {{\mathcal {N}}}_{\epsilon}(P)$, $i=0,1,2$, $j=1,2$, where ${{\mathcal {N}}}_{\epsilon}(P)$ is the $\epsilon$ neighbourhood of $P$. Denote by $(P^{j}_i)$ the corresponding six-tuple of frames. We define the representation $$\rho(P^{j}_i): \pi_1(\Pi^0) \to {{\bf {PSL}}(2,{\mathbb {C}})}$$ as follows: .1cm Choose a frame $F^{0}_1=(p,u,n) \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, and let $F_2=L(F^{0}_1,P^{1}_0,P^{2}_0,r)$. Denote by $F^{j}_1$, $j=1,2$, the frames given by $\omega(F^{1}_1)=L(\omega^{-1}(F_2),P^{2}_1,P^{1}_1,r)$, and $\omega^{2}(F^{2}_1)=L(\omega^{-2}(F_2),P^{2}_2,P^{1}_2,r)$. Let $A_i \in {{\bf {PSL}}(2,{\mathbb {C}})}$ be given by $A_0(F^{0}_1)=F^{1}_1$, $A_1(F^{1}_1)=F^{2}_1$, and $A_2(F^{2}_1)=F^{0}_1$. Observe that $A_2 A_1 A_0={{\operatorname {id}}}$. We define $\rho(P^{j}_i)=\rho$ by $\rho(\omega^{i}_0(c))=A_i$. Up to conjugation in ${{\bf {PSL}}(2,{\mathbb {C}})}$, the representation $\rho(P^{j}_i)$ depends only on the six-tuple $(P^{j}_i)$ and $r$. Observe that if $P^{j}_i=P$, for all $i,j$, then ${ {\mathbb {H}}^{3}}/ \rho(P^{j}_i)$ is a planar pair of pants all three of whose cuffs have equal length, and the half-lengths of the cuffs that correspond to this representation are positive real numbers. We will use the following lemma to show that the skew pants that corresponds to a pair of well connected tripods (see the definition below) is indeed in ${{\bf {\Pi}}}_{D\epsilon,R}$ for some universal constant $D>0$. \[lemma-tripo-0\] Fix a frame $P \in {{\mathcal {F}}}({ {\mathbb {H}}^{3}})$, and fix $P^{j}_i \in {{\mathcal {N}}}_{\epsilon}(P)$, $i=0,1,2$, $j=1,2$. Set $\rho(P^{j}_i)=\rho$.
Then $$\big| {{\bf {hl}}}(\omega^{i}_0(C)) - r + \log {{4}\over{3}} \big| \le D\epsilon,$$ for some constant $D>0$, where ${{\bf {hl}}}(\omega^{i}_0(C))$ denotes the half lengths that correspond to the representation $\rho$. In particular, the transformation $\rho(\omega^{i}_0(c))$ is loxodromic. It follows from Lemma \[lemma-bipo\] that $$\big| {{\bf l}}(\omega^{i}_0(C)) - 2r + 2\log {{4}\over{3}} \big| \le D\epsilon,$$ where ${{\bf l}}(\omega^{i}_0(C))$ denotes the cuff length of $\rho(\omega^{i}_0(c))=A_i$. .1cm We have ${{\bf {hl}}}(\omega^{i}(C))={{{{\bf l}}(\omega^{i}(C) )}\over{2}}+k \pi i$, for some $k \in \{0,1\}$. It remains to show that $k=0$. .1cm Let $t \in [0,1]$, and let $P^{j}_i(t)$ be a continuous path in ${{\mathcal {N}}}_{\epsilon}(P)$, such that $P^{j}_i(1)=P^{j}_i$, and $P^{j}_i(0)=P$. Set $\rho_t=\rho(P^{j}_i(t))$. Then for each $t$ we obtain the corresponding number $k(t) \in \{0,1\}$. Since $k(0)=0$ and since $k(t)$ is continuous we have $k(1)=k=0$. .3cm To every frame $F \in {{\mathcal {F}}}({{\bf {M}}^{3} })$ we associate the tripod $T=\omega^{i}(F)$, $i=0,1,2$ and the anti-tripod $\overline{T}=\overline{\omega}^{i}(F)$, $i=0,1,2$, where $\overline{\omega}=\omega^{-1}$. \[def-trip\] Given two frames $F_p=(p,u,n)$ and $F_q=(q,v,m)$ in ${{\mathcal {F}}}({{\bf {M}}^{3} })$, let $T_p=\omega^{i}(F_p)$ and $T_q=\omega^{i}(F_q)$, $i=0,1,2$, be the corresponding tripods. Let $\gamma=(\gamma_0,\gamma_1,\gamma_2)$, be a triple of geodesic segments in ${{\bf {M}}^{3} }$, each connecting the points $p$ and $q$. We say that the pair of tripods $T_p$ and $T_q$ is well connected along $\gamma$, if each pair of frames $\omega^{i}(F_p)$ and $\omega^{-i}(F_q)$ is well connected along the segment $\gamma_i$. .1cm Next we show that to every pair of well connected tripods we can naturally associate a skew pants in the sense of Definition \[def-skew\]. Recall from Section 3 that $\Pi^{0}$ denotes an oriented topological pair of pants equipped with a homeomorphism $\omega_{0}:\Pi^{0} \to \Pi^{0}$, of order three that permutes the cuffs. By $\omega^{i}_0(C)$, $i=0,1,2$, we denote the oriented cuffs of $\Pi^0$. For each $i=0,1,2$, we choose $\omega^{i}_0(c) \in \pi_1(\Pi^0)$ to be an element in the conjugacy class that corresponds to the cuff $\omega^i_0(C)$, such that $\omega^{0}_0(c) \omega^{1}_0(c) \omega^{2}_0(c)={{\operatorname {id}}}$. .1cm Let $a,b \in \Pi^{0}$ be the fixed points of the homeomorphism $\omega_0$. Let $\alpha_0 \subset \Pi^{0}$ be a simple arc that connects $a$ and $b$, and set $\omega^{i}_0(\alpha_0)=\alpha_i$. The union of two different arcs $\alpha_i$ and $\alpha_j$ is a closed curve in $\Pi^0$ homotopic to a cuff. One can think of the union of these three segments as the spine of $\Pi^0$. Moreover, there is an obvious projection from $\Pi^0$ to the spine $\alpha_0\cup \alpha_1 \cup \alpha_2$, and this projection is a homotopy equivalence. .1cm Let $T_p=(p,\omega^{i}(u),n)$ and $T_q=(q,\omega^{i}(v),m)$, $i=0,1,2$, be two tripods in ${{\mathcal {F}}}({{\bf {M}}^{3} })$ and $\gamma=(\gamma_0,\gamma_1,\gamma_2)$ a triple of geodesic segments in ${{\bf {M}}^{3} }$ each connecting the points $p$ and $q$. One constructs a map $\phi$ from the spine of $\Pi^0$ to ${{\bf {M}}^{3} }$ by letting $\phi(a)=p$, $\phi(b)=q$, and by letting $\phi:\alpha_i \to \gamma_i$ be any homeomorphism. By precomposing this map with the projection from $\Pi^0$ to its spine we get a well defined map $\phi:\Pi^0 \to {{\bf {M}}^{3} }$. 
By $\rho(T_p,T_q,\gamma):\pi_1(\Pi^0) \to {{\mathcal {G}}}$ we denote the induced representation of the fundamental group of $\Pi^0$.

.1cm

In principle, the representation $\rho(T_p,T_q,\gamma)$ can be trivial. However, if the tripods $T_p$ and $T_q$ are well connected along $\gamma$, we prove below that the representation $\rho(T_p,T_q,\gamma)$ is admissible (in the sense of Definition \[def-adm\]) and that the conjugacy class $[\rho(T_p,T_q,\gamma)]$ is a skew pants in terms of Definition \[def-skew\].

\[lemma-tripo\] Let $T_p$ and $T_q$ be two tripods that are well connected along a triple of segments $\gamma$ and set $\rho= \rho(T_p,T_q,\gamma)$. Then $$\big| {{\bf {hl}}}(\omega^{i}_0(C)) - r + \log {{4}\over{3}} \big| \le D\epsilon,$$ for some constant $D>0$. In particular, the conjugacy class of transformations $\rho(\omega^{i}_0(c))$ is loxodromic.

Observe that there exist $P^{j}_i \in {{\mathcal {N}}}_{\epsilon}(P)$ such that $\rho(P^{j}_i)=\rho(T_p,T_q,\gamma)$. The lemma follows from Lemma \[lemma-tripo-0\].

Recall that ${{\bf {\Pi}}}_{\epsilon,R}$ is the set of skew pants whose half-lengths are $\epsilon$ close to $\frac{R}{2}$, and that $R=2(r-\log \frac{4}{3} )$. If we write $\pi(T_p,T_q,\gamma)=[\rho(T_p,T_q,\gamma)]$, then by Lemma \[lemma-tripo\], $\pi$ maps well connected pairs of tripods to skew pants in ${{\bf {\Pi}}}_{D\epsilon,R}$.

.1cm

Let $T_p$ and $T_q$ be two tripods that are well connected along a triple of segments $\gamma=(\gamma_0,\gamma_1,\gamma_2)$. Set $${{\bf b}}_{\gamma}(T_p,T_q)=\prod\limits_{i=0}^{i=2} {{\bf a}}_{\gamma_{i}}\big(\omega^{i}(F_p), \omega^{-i}(F_q)\big).$$ Observe that two tripods $T_p$ and $T_q$ are $(\epsilon,r)$ well connected along a triple of geodesic segments $\gamma$ if and only if ${{\bf b}}_{\gamma}(T_p,T_q)>0$.

.1cm

We define the space of well connected tripods as the space of all triples $(T_p,T_q,\gamma)$ such that the tripods $T_p$ and $T_q$ are well connected along $\gamma$. It follows from the exponential mixing statement that, for $r$ large enough, for any two tripods $T_p$ and $T_q$ there exists at least one triple of segments $\gamma$ so that $T_p$ and $T_q$ are well connected along $\gamma$ (in fact, it can be shown that there will be many such segments). We define the measure ${\widetilde}{\mu}$ on the set of well connected tripods by $$\label{wt-mu} d{\widetilde}{\mu}(T_p,T_q,\gamma)={{\bf b}}_{\gamma}(T_p,T_q)\, d\lambda_{T}(T_p,T_q,\gamma),$$ where $\lambda_{T}(T_p,T_q,\gamma)$ is the product of the Liouville measure $\Lambda$ (for ${{\mathcal {F}}}({{\bf {M}}^{3} })$) on the first two terms, and the counting measure on the third term. The measure $\lambda_{T}$ is infinite (since there are infinitely many geodesic segments between any two points $p,q \in {{\bf {M}}^{3} }$) but ${{\bf b}}_{\gamma}(T_p,T_q)$ has compact support (that is, only finitely many such triples of connections $\gamma$ are “good”), so ${\widetilde}{\mu}$ is finite. Recall that $R=2(r-\log \frac{4}{3} )$ (see the discussion after Lemma \[lemma-tripo\] above). We define the measure $\mu$ on ${{\bf {\Pi}}}_{D\epsilon,R}$ by $\mu=\pi_{*} {\widetilde}{\mu}$. This is the measure from Theorem \[theorem-mes\]. It follows from the construction that this measure is invariant under the involution ${{\mathcal {R}}}:{{\bf {\Pi}}}\to {{\bf {\Pi}}}$ (see Section 3 for the definition), that is $\mu \in {{\mathcal {M}}}^{{{\mathcal {R}}}}_0({{\bf {\Pi}}})$.
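As a sanity check (this is merely the estimate of Lemma \[lemma-tripo\] rewritten in the normalization $R=2(r-\log {{4}\over{3}})$, and it is not used elsewhere), for every cuff of a skew pants in the support of $\mu$ we have $$\Big| {{\bf {hl}}}(\omega^{i}_0(C)) - \frac{R}{2} \Big| = \Big| {{\bf {hl}}}(\omega^{i}_0(C)) - r + \log \frac{4}{3} \Big| \le D\epsilon,$$ which is exactly the condition defining membership in ${{\bf {\Pi}}}_{D\epsilon,R}$.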
.1cm

In order to prove Theorem \[theorem-mes\] it remains to construct the corresponding measure $\beta \in {{\mathcal {M}}}_0({N^{1}}(\sqrt{\Gamma}))$ and prove the stated properties.

The “predicted foot” map ${{\bf {f}}}_{\delta}$ {#section-4.7}
------------------------------------------------

By $F_p=(p,u,n)$ and $F_q=(q,v,m)$ we continue to denote two frames in ${{\mathcal {F}}}({{\bf {M}}^{3} })$. Suppose the frames $\omega^{i}(F_p)$ and $\overline{\omega}^{i}(F_q)$ are well connected along the geodesic segments $\gamma_i$, $i=0,1$. In our terminology this means that the bipods $B_p$ and $B_q$ are well connected along the segments $\gamma_0$ and $\gamma_1$. Let $\delta_{2} \in \Gamma$ denote the closed geodesic in ${{\bf {M}}^{3} }$ freely homotopic to $\gamma_{0} \cup \gamma_{1}$. We now associate the “geometric feet” to $(B_p,B_q,\gamma_0,\gamma_1)$.

We first define the geodesic ray $\alpha_p:[0,\infty) \to {{\bf {M}}^{3} }$ by $\alpha_p(0)=p$, $\alpha'_p(0)=\overline{\omega}(u)$, and we likewise define the geodesic ray $\alpha_q:[0,\infty) \to {{\bf {M}}^{3} }$ by $\alpha_q(0)=q$, $\alpha'_q(0)=\omega(v)$. Then for $t \in [0,\infty)$, and $i=0,1$, we let $\beta^{t}_i$ be the geodesic segment homotopic relative endpoints to the piecewise geodesic arc $(\alpha_p[0,t])^{-1} \cdot \gamma_{i} \cdot \alpha_q[0,t]$. (The endpoints of both segments $\beta^{t}_{0}$ and $\beta^{t}_{1}$ are $\alpha_p(t)$ and $\alpha_q(t)$, and $\beta^{0}_{i}=\gamma_i$). We let $\beta^{\infty}_i$ be the limiting geodesic of $\beta^{t}_i$, when $t \to \infty$. For each $t>0$ and $i=0,1$, there is an obvious choice of common orthogonal from $\delta_{2}$ to $\beta^{t}_i$, which varies continuously with $t \in [0,\infty]$. We let $f^{t}_{i} \in {N^{1}}(\delta_{2})$ be the foot of this common orthogonal at $\delta_{2}$, and we let $f_i=f^{\infty}_i$.

For a closed geodesic $\delta \in \Gamma$, let ${\mathbb {T}}_{\delta}$ denote the solid torus whose core curve is $\delta$. As an alternative point of view, we can lift $\gamma_0 \cup \gamma_1$ to a closed curve in the solid torus ${\mathbb {T}}_{\delta_{2}}$ (there is a unique such lift to a closed curve in ${\mathbb {T}}_{\delta_{2}}$). We can then lift $F_p$ and $F_q$, and also $\alpha_p[0,\infty]$ and $\alpha_q[0,\infty]$, where $\alpha_p(\infty), \alpha_q(\infty) \in \partial{{\mathbb {T}}_{\delta_{2}}}$. Then we define $\beta^{t}_i$ (and $\beta^{\infty}_i$) as before and there will be unique common orthogonals from (the lift of) $\delta_{2}$ to $\beta^{t}_{i}$, $t \in [0,\infty]$.

By Lemma \[lemma-ph-2\] we see that ${{\bf d}}_{\delta_{2}}(f_0,f_1)={{\bf {hl}}}(\delta_{2})$, so $f_0$ and $f_1$ represent the same point in ${N^{1}}(\sqrt{\delta_{2}})$. Therefore we have defined the mapping $$(B_p,B_q,\gamma_0,\gamma_1) \mapsto {{\bf {f}}}_{\delta_{2}}(B_p,B_q,\gamma_0,\gamma_1) \in {N^{1}}(\sqrt{\delta_{2}}),$$ on the set of all well connected bipods such that $\gamma_0 \cup \gamma_1$ is homotopic to $\delta_{2}$. We think of the vector ${{\bf {f}}}_{\delta_{2}}(B_p,B_q,\gamma_0,\gamma_1) \in {N^{1}}(\sqrt{\delta_{2}})$ as the geometric foot of $(B_p,B_q,\gamma_0,\gamma_1)$.

Assume now that we are given a third geodesic segment $\gamma_2$ between $p$ and $q$ (also known as the third connection) such that the tripods $T_p$ and $T_q$ are well connected along the triple of segments $\gamma=(\gamma_0,\gamma_1,\gamma_2)$.
Above, we have defined the skew pants $\Pi=\pi(T_p,T_q,\gamma)$, such that $\partial{\Pi}=\delta_0+\delta_1 +\delta_2$, where $\delta_i$ is homotopic to $\gamma_{i-1} \cup \gamma_{i+1}$ (using the convention $\gamma_i=\gamma_{i+3}$). Let $h_i \in {N^{1}}(\delta_2)$, $i=0,1$, denote the foot of the common orthogonal from $\delta_2$ to $\delta_i$. Recall that since ${{\bf d}}_{\delta_{2}}(h_0,h_1)={{\bf {hl}}}(\delta_2)$ the projections of $h_0$ and $h_1$ to ${N^{1}}(\sqrt{\delta_{2}})$ agree, and as before we let ${\operatorname{foot}}_{\delta_{2}}(\Pi) \in {N^{1}}(\sqrt{\delta_{2}})$ denote this projection. We say that ${\operatorname{foot}}_{\delta_{2}}(\Pi)$ is the foot of the skew pants $\Pi$ on the cuff $\delta_2$.

We will now verify that on ${N^{1}}(\sqrt{\delta_{2}})$ we have ${{\bf d}}_{\delta_{2}}(f_0,h_1)={{\bf d}}_{\delta_{2}}(f_1,h_0)=O(e^{-\frac{r}{4}})$. This will imply that the pairs $\{h_0,h_1\}$ and $\{f_0,f_1 \}$ project to vectors in ${N^{1}}(\sqrt{\delta_{2}})$ that are $e^{-\frac{r}{4}}$ close.

\[prop-tripo-1\] With the above notation we have that for $r$ large and $\epsilon$ small the inequalities $${{\operatorname {dis}}}(f_0,h_1), {{\operatorname {dis}}}(f_1,h_0) \le De^{-{{r}\over{4}} }$$ hold for some universal constant $D>0$.

Assume that we are given a skew pants $\Pi=\pi(T_p,T_q,\gamma)$, where $\gamma=(\gamma_0,\gamma_1,\gamma_2)$ is a triple of good connections. Recall that $\delta_i$ is a cuff of $\Pi$ that is homotopic to $\gamma_{i-1} \cup \gamma_{i+1}$. Then for $i=0,1$, the geodesics $\delta_2$ and $\delta_i$ (or more precisely the appropriate lifts of $\delta_2$ and $\delta_i$ to the solid torus cover corresponding to $\delta_{2}$) satisfy (\[assumption\]). On the other hand, since $\gamma_2$ is a good connection, it follows from the definition of a good connection between two frames that for some universal constant $E>0$ the segment $\beta^{\frac{r}{4}}_{0}$ (considered in the solid torus cover ${\mathbb {T}}_{\delta_{2}}$) has its endpoints $E$-close to $\delta_1$. Similarly the segment $\beta^{\frac{r}{4}}_{1}$ has its endpoints $E$-close to $\delta_0$. The inequality ${{\operatorname {dis}}}(f_0,h_1), {{\operatorname {dis}}}(f_1,h_0) \le De^{-{{r}\over{4}} }$ now follows from Lemma \[lemma-ph-1\].

.1cm

For each skew pants $\Pi=\pi(T_p,T_q,\gamma)$ we let $${{\bf {f}}}_{\delta_{2}}(\Pi)={{\bf {f}}}_{\delta_{2}}(T_p,T_q,\gamma)= {{\bf {f}}}_{\delta_{2}}(B_p,B_q,\gamma_0,\gamma_1).$$ That is, we have defined the map $(\Pi,\delta^{*}) \mapsto {{\bf {f}}}_{\delta}(\Pi,\delta) \in {N^{1}}(\sqrt{\delta})$ on the set of all marked skew pants ${{\bf {\Pi}}}^{*}_{D\epsilon,R}$ that contain the geodesic $\delta$ in their boundary. Recall that we have already defined the mapping $(\Pi,\delta^{*}) \mapsto {\operatorname{foot}}_{\delta}(\Pi,\delta) \in {N^{1}}(\sqrt{\delta})$. We have

\[prop-dis\] Let $(\pi(T_p,T_q,\gamma),\delta^{*}) \in {{\bf {\Pi}}}^{*}$. Then for $r$ large and $\epsilon$ small we have $${{\bf d}}\big({\operatorname{foot}}_{\delta}(\pi(T_p,T_q,\gamma)),{{\bf {f}}}_{\delta}(T_p,T_q,\gamma)\big) \le De^{-{{r}\over{4}} },$$ for some constant $D>0$.

This follows from Proposition \[prop-tripo-1\].

Given skew pants $\Pi=\pi(T_p,T_q,\gamma)$, the new foot ${{\bf {f}}}_{\delta_{2}}(T_p,T_q,\gamma)$ “predicts” the location of the old foot ${\operatorname{foot}}_{\delta_{2}}(\Pi)$ (up to an exponentially small error in $r$) without knowing the third connection $\gamma_2$.
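For the reader's convenience, here is the one-line deduction behind the last proposition; it uses only the (standard) observation that the projection ${N^{1}}(\delta_{2})\to {N^{1}}(\sqrt{\delta_{2}})$ does not increase distances. Since ${\operatorname{foot}}_{\delta_{2}}(\Pi)$ is represented by $h_1$ and ${{\bf {f}}}_{\delta_{2}}(T_p,T_q,\gamma)$ is represented by $f_0$, Proposition \[prop-tripo-1\] gives $${{\bf d}}\big({\operatorname{foot}}_{\delta_{2}}(\Pi),\,{{\bf {f}}}_{\delta_{2}}(T_p,T_q,\gamma)\big) \le {{\operatorname {dis}}}(f_0,h_1) \le De^{-{{r}\over{4}} }.$$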
The proof of Theorem \[theorem-mes\] ------------------------------------- Fix $\delta \in \Gamma$. For a given measure $\alpha$ on ${N^{1}}(\sqrt{\Gamma})$ we let $\alpha_{\delta}$ denote the restriction of $\alpha$ on ${N^{1}}(\sqrt{\delta})$. It remains to construct the measure $\beta$ on ${N^{1}}(\sqrt{\Gamma})$ from Theorem \[theorem-mes\] and estimate the Radon-Nikodym derivative of $\beta_{\delta}$ with respect to the Euclidean measure on ${N^{1}}(\sqrt{\delta})$. Recall that $(B_p,B_q,\gamma_0,\gamma_1)$ is a well connected pair of bipods along the pair of segments $\gamma_0$ and $\gamma_1$ if $${{\bf a}}_{\gamma_{0}}(F_1,F_2) {{\bf a}}_{\gamma_{1}}(\omega(F_1),\overline{\omega}(F_2))>0.$$ We define the set $S_{\delta}$ by saying that $(F_p,F_q,\gamma_0,\gamma_1) \in S_{\delta}$ if $(B_p,B_q,\gamma_0,\gamma_1)$ is a well connected pair of bipods along a pair of segments $\gamma_0$ and $\gamma_1$ such that $\gamma_0 \cup \gamma_1$ is homotopic to $\delta$. In the previous subsection we have defined the map $${{\bf {f}}}_{\delta}:S_{\delta} \to {N^{1}}(\sqrt{\delta}).$$ Recall that that the bundle ${N^{1}}(\sqrt{\delta})$ has the natural ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ action by isometries. Now, we define the action of the torus ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ on $S_{\delta}$ so that the map ${{\bf {f}}}_{\delta}$ becomes equivariant with respect to the torus actions on $S_{\delta}$ and ${N^{1}}(\sqrt{\delta})$, that is for each $\tau \in {\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ we have $$\label{vazno} {{\bf {f}}}_{\delta}(\tau+(B_p,B_q,\gamma_0,\gamma_1))=\tau+{{\bf {f}}}_{\delta}(B_p,B_q,\gamma_0,\gamma_1),$$ where $\tau+(B_p,B_q,\gamma_0,\gamma_1)$ denotes the new element of $S_{\delta}$ (obtained after applying the action by $\tau$ to $(B_p,B_q,\gamma_0,\gamma_1)$). Let ${\mathbb {T}}_{\delta}$ be the open solid torus cover associated to $\delta$ (so $\delta$ has a unique lift to a closed geodesic in ${\mathbb {T}}_{\delta}$ which we denote by ${\widehat}{\delta}(\delta)$). Given a pair of well connected bipods in $S_{\delta}$, each bipod lifts in a unique way to a bipod in ${{\mathcal {F}}}({\mathbb {T}}_{\delta})$ such that the pair of the lifted bipods is well connected in ${\mathbb {T}}_{\delta}$. We denote by ${\widetilde}{S}_{\delta}$ the set of such lifts, so ${\widetilde}{S}_{\delta}$ is in one-to-one correspondence with $S_{\delta}$. We observe that the group of automorphisms of the solid torus ${\mathbb {T}}_{\delta}$ is isomorphic to the group of isomorphisms of the unit normal bundle ${N^{1}}(\delta)$, that is in turn isomorphic to ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ which acts on both ${N^{1}}(\delta)$ and on ${{\mathcal {F}}}^{2}({\mathbb {T}}_{\delta})$ so as to map ${\widetilde}{S}_{\delta}$ to itself. Since ${\widetilde}{S}_{\delta}$ and $S_{\delta}$ are in one-to-one correspondence we have the induced action of ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ on $S_{\delta}$. The equivariance (\[vazno\]) follows from the construction. Let $C_{\delta}$ be the space of well connected tripods $(T_p,T_q,\gamma)$, where $\gamma=(\gamma_0,\gamma_1,\gamma_2)$, such that $\gamma_0 \cup \gamma_1$ is homotopic to $\delta$. Let $\chi:C_{\delta} \to S_{\delta}$ be the forgetting map (the term forgetting map refers to forgetting the third connection $\gamma_2$), so $\chi(T_p,T_q,\gamma_0,\gamma_1,\gamma_2)=(B_p,B_q,\gamma_0,\gamma_1)$. 
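Note that $\chi$ indeed takes values in $S_{\delta}$ (here we assume, as in the construction of the affinity functions, that ${{\bf a}}_{\gamma}(\cdot,\cdot)\ge 0$): if the tripods are well connected along $\gamma$ then $${{\bf b}}_{\gamma}(T_p,T_q)=\prod_{i=0}^{2} {{\bf a}}_{\gamma_{i}}\big(\omega^{i}(F_p),\omega^{-i}(F_q)\big)>0 \quad \Longrightarrow \quad {{\bf a}}_{\gamma_{0}}(F_p,F_q)\, {{\bf a}}_{\gamma_{1}}\big(\omega(F_p),\overline{\omega}(F_q)\big)>0,$$ so $(B_p,B_q,\gamma_0,\gamma_1)$ is a well connected pair of bipods, and $\gamma_0\cup\gamma_1$ is homotopic to $\delta$ by the definition of $C_{\delta}$.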
It follows from Proposition \[prop-dis\] that for any triple $T=(T_p,T_q,\gamma) \in C_{\delta}$ of well connected tripods we have $$\label{f-foot-new} |{{\bf {f}}}_{\delta}(\chi (T))-{\operatorname{foot}}_{\delta}(\pi(T))|<Ce^{-{{r}\over{4}}},$$ where $\pi(T)=\pi(T_p,T_q,\gamma)$ is the corresponding skew pants.

.3cm

Next, we define the measure $\nu_{\delta}$ on $S_{\delta}$ by $$d\nu_{\delta}(B_p,B_q,\gamma_0,\gamma_1)={{\bf a}}_{\gamma_{0}}(F_p,F_q){{\bf a}}_{\gamma_{1}}(\omega(F_p),\overline{\omega}(F_q))\, d\lambda_{B}(B_p,B_q,\gamma_0,\gamma_1),$$ where $\lambda_B$ is the measure on $S_{\delta}$ defined as the product of the Liouville measures on the first two terms and the counting measure on the other two terms.

We make two observations. The first one is that $\lambda_{B}$ is invariant under the ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ action on $S_{\delta}$. The second one is as follows. Let $\tau \in {\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$, and for $(B_p,B_q,\gamma_0,\gamma_1) \in S_{\delta}$ we let $$(B_{p(\tau)},B_{q(\tau)},\gamma_0(\tau),\gamma_1(\tau))=\tau+(B_p,B_q,\gamma_0,\gamma_1)$$ denote the corresponding element of $S_{\delta}$. It follows from the definition of the affinity functions that $${{\bf a}}_{\gamma_{0}}(F_p,F_q){{\bf a}}_{\gamma_{1}}(\omega(F_p),\overline{\omega}(F_q))={{\bf a}}_{\gamma_{0}(\tau)}(F_{p(\tau)},F_{q(\tau)}){{\bf a}}_{\gamma_{1}(\tau)}(\omega(F_{p(\tau)}),\overline{\omega}(F_{q(\tau)})),$$ for any $\tau$. These two observations show that the measure $\nu_{\delta}$ is invariant under the ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ action on $S_{\delta}$. Since the map ${{\bf {f}}}_{\delta}$ is equivariant with respect to the ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ actions (see $(\ref{vazno})$), it follows from the above two observations that the measure $({{\bf {f}}}_{\delta})_{*} \nu_{\delta}$ is invariant under the ${\mathbb {C}}/(2\pi i {\mathbb {Z}}+l(\delta){\mathbb {Z}})$ action on ${N^{1}}(\sqrt{\delta})$. Therefore, the measure $({{\bf {f}}}_{\delta})_{*} \nu_{\delta}$ is equal to a multiple of the Euclidean measure ${{\operatorname {Eucl}}}_{\delta}$ on ${N^{1}}(\sqrt{\delta})$. We write $$\label{eqos-1} ({{\bf {f}}}_{\delta})_{*} \nu_{\delta}=E_{\delta} {{\operatorname {Eucl}}}_{\delta},$$ for some constant $E_{\delta} \ge 0$.

The other natural measure on $S_{\delta}$ is defined as follows. Let $\chi:C_{\delta} \to S_{\delta}$ be the forgetting map (defined above). Recall that ${\widetilde}{\mu}$ is the measure (defined by $(\ref{wt-mu})$ above) on the space of well connected tripods given by $$d{\widetilde}{\mu}(T_p,T_q,\gamma)={{\bf b}}_{\gamma}(T_p,T_q)\, d\lambda_{T}(T_p,T_q,\gamma),$$ where $\lambda_{T}(T_p,T_q,\gamma)$ is the product of the Liouville measure $\Lambda$ (for ${{\mathcal {F}}}({{\bf {M}}^{3} })$) on the first two terms, and the counting measure on the third term. Then we get a new measure on $S_{\delta}$ by $\chi_{*} ({\widetilde}{\mu}|_{C_{\delta}})$, where ${\widetilde}{\mu}|_{C_{\delta}}$ is the restriction of ${\widetilde}{\mu}$ to the set $C_{\delta}$.
The two measures satisfy $$\begin{aligned} \left| \frac{d \chi_{*}({\widetilde}{\mu}|_{C_{\delta}})} { d\nu_{\delta}} \right| &= \sum_{\gamma_{2}} {{\bf a}}_{\gamma_{2}}(\omega^{2}(F_p),\overline{\omega}^{2}(F_q)) \\ &={{\bf a}}(\omega^{2}(F_p),\overline{\omega}^{2}(F_q)).\end{aligned}$$ But by the exponential mixing we have $${{\bf a}}(\omega^{2}(F_p),\overline{\omega}^{2}(F_q))= \frac{1}{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))} (1+ O(e^{-{\bf q} r}) ),$$ so we find that for some constant $C=C(\epsilon,{{\bf {M}}^{3} })>0$ we have $$\left| \frac{d \chi_{*}({\widetilde}{\mu}|_{C_{\delta}})} { d\nu_{\delta}}- \frac{1}{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))} \right|<Ce^{-{\bf q} r},$$ which implies $$\label{**} \frac{1}{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))} (1-Ce^{-{\bf q} r} )\nu_{\delta} \le \chi_{*}( {\widetilde}{\mu}|_{C_{\delta}} ) \le \frac{1}{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))} (1+Ce^{-{\bf q} r} ) \nu_{\delta}.$$ Applying the mapping $({{\bf {f}}}_{\delta})_*$ and using (\[eqos-1\]), we obtain $$\frac{E_{\delta}}{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))} (1-Ce^{-{\bf q} r} ){{\operatorname {Eucl}}}_{\delta} \le ({{\bf {f}}}_{\delta})_{*}(\chi_{*}( {\widetilde}{\mu}|_{C_{\delta}})) \le \frac{E_{\delta}}{\Lambda({{\mathcal {F}}}({{\bf {M}}^{3} }))} (1+Ce^{-{\bf q} r} ) {{\operatorname {Eucl}}}_{\delta}.$$

.3cm

We let $$\beta_{\delta}= ({{\bf {f}}}_{\delta})_{*}(\chi_{*}( {\widetilde}{\mu}|_{C_{\delta}})).$$ It follows that the Radon-Nikodym derivative of $\beta_{\delta}$ satisfies the desired inequality from Theorem \[theorem-mes\]. On the other hand, it follows from (\[f-foot-new\]) that $\beta_{\delta}$ and ${\widehat{\partial}}\mu|_{{N^{1}}(\sqrt{\delta}) }$ are $O(e^{-\frac{r}{4}})$-equivalent. This completes the proof.
--- abstract: | The goal of this paper is to define a certain [*Chow weight structure*]{} ${{w_{Chow}}}$ for the category ${DM(S)}$ of Voevodsky’s motivic complexes with integral coefficients (as described by Cisinski and Deglise) over any excellent finite-dimensional separated scheme $S$. Our results are parallel to (though substantially weaker than) the corresponding ’rational coefficient’ statements proved by D. Hebert and the author. As an immediate consequence of the existence of ${{w_{Chow}}}$, we obtain certain (Chow)-weight spectral sequences and filtrations for any (co)homology of $S$-motives. author: - 'Mikhail V. Bondarko [^1]' title: On weights for relative motives with integral coefficients --- Introduction {#introduction .unnumbered} ============ The goal of this paper is to prove that the [*Chow weight structure*]{} ${{w_{Chow}}}$ (as introduced in [@bws] and in [@bzp] for Voevodsky’s motives over a perfect field $k$) can also be defined (somehow) for the category ${DM(S)}$ of motives with integral coefficients over any excellent separated finite dimensional base scheme $S$. As was shown in [@bws], the existence of ${{w_{Chow}}}$ yields several nice consequences. In particular, there exist [*Chow-weight*]{} spectral sequences and filtrations for any (co)homological functor $H:{DM(S)}\to {\underline{A}}$. Now we list the contents of the paper. More details can be found at the beginnings of sections. In §\[sprem\] we recall some basic properties of countable homotopy colimits in triangulated categories, Voevodsky’s motives, and weight structures. Most of the statements of the section are contained in [@neebook], [@degcis], and [@bws]; yet we also prove some new results. In §\[swchow\] we define ${{w_{Chow}}}$ and study its properties. In particular, we study the ’functoriality’ of ${{w_{Chow}}}$ (with respect to the functors of the type $f^*,f_*,f^!,f_!$, for $f$ being a quasi-projective morphism of schemes). More ’nice’ properties of ${{w_{Chow}(S)}}$ can be verified for $S$ being a regular scheme over a field. We also describe the relation of ${{w_{Chow}}}(S)$ with its ’rational’ analogue for ${DM_{{{\mathbb{Q}}}}(S)}$. The author is deeply grateful to prof. F. Deglise for his very helpful explanations. For a category $C,\ A,B\in{Obj}C$, we denote by $C(A,B)$ the set of $C$-morphisms from $A$ into $B$. For categories $C,D$ we write $C\subset D$ if $C$ is a full subcategory of $D$. For a category $C,\ X,Y\in{Obj}C$, we say that $X$ is a [*retract*]{} of $Y$ if ${\operatorname{id}}_X$ can be factorized through $Y$ (if $C$ is triangulated or abelian, then $X$ is a retract of $Y$ whenever $X$ is its direct summand). The Karoubization of $B$ is the category of ’formal images’ of idempotents in $B$ (so $B$ is embedded into an idempotent complete category). For an additive $D\subset C$ the subcategory $D$ is called [*Karoubi-closed*]{} in $C$ if it contains all retracts of its objects in $C$. The full subcategory of $C$ whose objects are all retracts of objects of $D$ (in $C$) will be called the [*Karoubi-closure*]{} of $D$ in $C$. $M\in {Obj}C$ will be called compact if the functor $C(M,-)$ commutes with all small coproducts that exist in $C$. In this paper (in contrast with [@bws]) we will only consider compact objects in those categories that are closed with respect to arbitrary small coproducts. ${\underline{C}}$ below will always denote some triangulated category; usually it will be endowed with a weight structure $w$ (see Definition \[dwstr\] below). 
We will use the term ’exact functor’ for a functor of triangulated categories (i.e. for a functor that preserves the structures of triangulated categories). We will call a covariant (resp. contravariant) additive functor $H:{\underline{C}}\to {\underline{A}}$ for an abelian ${\underline{A}}$ [*homological*]{} (resp. [*cohomological*]{}) if it converts distinguished triangles into long exact sequences. For $f\in{\underline{C}}(X,Y)$, $X,Y\in{Obj}{\underline{C}}$, we will call the third vertex of (any) distinguished triangle $X\stackrel{f}{\to}Y\to Z$ a cone of $f$; recall that distinct choices of cones are connected by (non-unique) isomorphisms. We will often specify a distinguished triangle by two of its morphisms. For a set of objects $C_i\in{Obj}{\underline{C}}$, $i\in I$, we will denote by ${\langle}C_i{\rangle}$ the smallest strictly full triangulated subcategory containing all $C_i$; for $D\subset {\underline{C}}$ we will write ${\langle}D{\rangle}$ instead of ${\langle}{Obj}D{\rangle}$. We will call the Karoubi-closure of ${\langle}C_i{\rangle}$ in ${\underline{C}}$ the [*triangulated category generated by $C_i$*]{}. For $X,Y\in {Obj}{\underline{C}}$ we will write $X\perp Y$ if ${\underline{C}}(X,Y)={\{0\}}$. For $D,E\subset {Obj}{\underline{C}}$ we will write $D\perp E$ if $X\perp Y$ for all $X\in D,\ Y\in E$. For $D\subset {\underline{C}}$ we will denote by $D^\perp$ the class $$\{Y\in {Obj}{\underline{C}}:\ X\perp Y\ \forall X\in D\}.$$ Sometimes we will denote by $D^\perp$ the corresponding full subcategory of ${\underline{C}}$. Dually, ${}^\perp{}D$ is the class $\{Y\in {Obj}{\underline{C}}:\ Y\perp X\ \forall X\in D\}$. We will say that some $C_i$, $i\in I$, [*weakly generate*]{} ${\underline{C}}$ if for $X\in{Obj}{\underline{C}}$ we have: ${\underline{C}}(C_i[j],X)={\{0\}}\ \forall i\in I,\ j\in{{\mathbb{Z}}}\implies X=0$ (i.e. if $\{C_i[j]\}^\perp$ contains only zero objects). Below by default all schemes will be excellent separated of finite Krull dimension. We will call a morphism $X\to S$ quasi-projective if it factorizes through an embedding of $X$ into ${\mathbb{P}}^N(S)$ (for a large enough $N$). We will call a scheme (satisfying these conditions) [*pro-smooth*]{} if it can be presented as the limit of a filtering projective system of smooth varieties over a field $K$ such that the connecting morphisms are smooth, affine, and dominant. Note that here we can assume $K$ to be perfect, since for any field its spectrum is the limit of a ’smooth affine dominant’ projective system of smooth varieties over the corresponding prime field. Besides, any pro-smooth scheme is regular since a filtered inductive limit of regular rings is regular if it is Noetherian. We will sometimes need certain stratifications of a (regular) scheme $S$. Recall that a stratification ${\alpha}$ is a presentation of $S$ as $\cup S_l^{\alpha}$ where $S_l^{\alpha}$, $l\in L$ ($L$ is a finite set), are pairwise disjoint locally closed subschemes of $S$. Omitting ${\alpha}$, we will denote by $j_l:S_l^{\alpha}\to S$ the corresponding immersions. We do not demand the closure of each $S_l^{\alpha}$ to be the union of strata; we will only assume that $S$ ’can be glued from $S_l^{\alpha}$ step by step’. 
This means: there exists a rooted binary tree $T^{{\alpha}}$ whose leaf nodes are exactly the $S_l^{\alpha}$, such that any parent node $X$ of $T^{{\alpha}}$ has exactly two child nodes, and the union of the leaf node descendants of the left child node of $X$ is open in the union of all the leaf node descendants of $X$ (see <http://en.wikipedia.org/wiki/Binary_tree> for the corresponding definitions). We will say that a stratification ${\alpha}$ is the union of stratifications $\beta$ and $\gamma$ if the roots of the trees for $\beta$ and $\gamma$ are the child nodes for the root for ${\alpha}$. Below we will only consider stratifications that we call [*very regular*]{}; this means that there exists a tree $T^{\alpha}$ as above such that for any node $X$ of $T^{{\alpha}}$ the union of the leaf node descendants of $X$ (this includes $X$ if $X$ is a leaf node itself) is regular.

Below we will identify a Zariski point (of a scheme $S$) with the spectrum of its residue field. We will call a morphism $f:Y\to H$ quasi-projective if there exists a quasi-compact immersion $Y\to {\mathbb{P}}^N(H)$.

Preliminaries: relative motives and weight structures {#sprem}
=====================================================

In §\[scolim\] we recall the theory of countable (filtered) homotopy colimits and study certain related notions. In §\[sbrmot\] we recall some of the basic properties of motives over $S$ (as considered in [@degcis]; we also deduce certain results that were not stated in ibid. explicitly). In §\[sws\] we recall some basics of the theory of weight structures (as developed in [@bws]); we also prove some new lemmas on the subject.

Countable homotopy colimits in triangulated categories and big extension-closures {#scolim}
----------------------------------------------------------------------------------

We recall the basics of the theory of countable (filtered) homotopy colimits in triangulated categories (as introduced in [@bockne]; some more details can be found in [@neebook] and in §4.2 of [@bws]). We will only apply the results of this paragraph to triangulated categories closed with respect to arbitrary small coproducts; so we will not mention this restriction below (though this does not really decrease the generality of our results).

\[dcoulim\] Suppose that we have a sequence of objects $Y_i$ (starting from some $j\in{{\mathbb{Z}}}$) and maps $\phi_i:Y_{i}\to Y_{i+1}$. We consider the map $d: D\to D$, where $D=\coprod_{i\ge j} Y_i$ and the $i$-th component of $d$ is ${\operatorname{id}}_{Y_i}-\phi_i$ (we can define it since this component can be easily factorized as the composition $Y_i\to Y_i\bigoplus Y_{i+1}\to D$). Denote a cone of $d$ by $Y$. We will write $Y=\operatorname{\varinjlim}Y_i$ and call $Y$ the [*homotopy colimit*]{} of $Y_i$; we will not consider any other (homotopy) colimits in this paper.

\[rcoulim\] 1\. Note that these homotopy colimits are not really canonical and functorial in $Y_i$ since the choice of a cone is not canonical. They are only defined up to non-canonical isomorphisms.

2\. By Lemma 1.7.1 of [@neebook] the homotopy colimit of $Y_{i_j}$ is the same for any subsequence of $Y_i$. In particular, we can discard any (finite) number of first terms in $Y_i$.

3\. By Lemma 1.6.6 of [@neebook] the homotopy colimit of $X\stackrel{{\operatorname{id}}_X}{\to}X\stackrel{{\operatorname{id}}_X}{\to} X\stackrel{{\operatorname{id}}_X}{\to} X\stackrel{{\operatorname{id}}_X}{\to}\dots$ is $X$. Hence we obtain that $\operatorname{\varinjlim}X_i\cong X$ if for $i\gg 0$ all $\phi_i$ are isomorphisms and $X_i\cong X$.
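In other words (this is just a restatement of Definition \[dcoulim\], with $D=\coprod_{i\ge j} Y_i$), the homotopy colimit is given by a distinguished triangle $$D\stackrel{d}{\to} D\to \operatorname{\varinjlim}Y_i\to D[1], \qquad d|_{Y_i}={\operatorname{id}}_{Y_i}-\phi_i \ \text{(composed with the embedding } Y_i\bigoplus Y_{i+1}\to D\text{)}.$$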
We also recall the behaviour of colimits under (co)representable functors. \[coulim\] 1. For any $C\in{Obj}{\underline{C}}$ we have a natural surjection $ {\underline{C}}(Y,C)\to \operatorname{\varprojlim}{\underline{C}}(Y_i,C)$. 2\. This map is bijective if all $\phi_i[1]^*: {\underline{C}}(Y_{i+1}[1],C)\to {\underline{C}}(Y_i[1],C)$ are surjective for all $i\gg 0$. 3\. If $C$ is compact then ${\underline{C}}(C,Y)= \operatorname{\varinjlim}{\underline{C}}(C,Y_i)$. Below we will also need a new (simple) piece of homological algebra: the definition of a [*strongly extension-closed*]{} class of objects, and some properties of this notion. \[dses\] 1\. $D\subset {Obj}{\underline{C}}$ will be called [*extension-closed*]{} if it contains $0$ and for any distinguished triangle $A\to B\to C$ in ${\underline{C}}$ we have: $A,C\in D\implies B\in D$ (hence it is also [*strict*]{}, i.e. contains all objects of $C$ isomorphic to its elements). We will call the smallest extension-closed subclass of objects of ${\underline{C}}$ that contains a class $C'\subset {Obj}{\underline{C}}$ the extension-closure of $C'$. 2\. $C\subset {Obj}{\underline{C}}$ will be called [*strongly extension-closed*]{} if it contains $0$, and if for any $\phi_i:Y_{i}\to Y_{i+1}$ such that $Y_0\in C$, $\operatorname{\operatorname{Cone}}(\phi_i)\in C$ for all $i\ge 0$, we have $\operatorname{\varinjlim}_{i\ge 0} Y_i\in C$ (i.e. $C$ contains all possible cones of the corresponding distinguished triangle; note that those are isomorphic). 3\. The smallest strongly extension-closed Karoubi-closed class of objects of ${\underline{C}}$ that contains a class $C'\subset {Obj}{\underline{C}}$ and is closed with respect to arbitrary (small) coproducts will be called the big extension-closure of $C'$. Now we verify certain properties of these notions. \[lbes\] 1. \[iseses\] Let $C$ be strongly extension-closed. Then it is also extension-closed. 2. \[isesperp\] Suppose that for $C',D\subset {Obj}{\underline{C}}$, we have $ C'\perp D$. Then for the big extension-closure $C$ of $C'$ we also have $C\perp D$. 3. \[isescperp\] Suppose that for $C',D\subset {Obj}{\underline{C}}$, all objects of $D$ are compact, we have $D\perp C'$. Then $D$ is also orthogonal to the big extension-closure $C$ of $C'$. 4. \[isefun\] For an exact functor $F:{\underline{C}}\to {{\underline{D}}}$, any $C'\subset {Obj}{\underline{C}}$, and its extension-closure $C$ we have: the class $F(C)$ is contained in the extension-closure of $F(C')$ in ${{\underline{D}}}$. 5. \[isesfun\] Suppose that an exact functor $F:{\underline{C}}\to {{\underline{D}}}$ of triangulated categories commutes with arbitrary coproducts. Then for any $C'\subset {Obj}{\underline{C}}$ and its big extension-closure $C$ the class $F(C)$ is contained in the big extension-closure of $F(C')$ in ${{\underline{D}}}$. \[iseses\]. It suffices to note for any distinguished triangle $X\to Y\to Z$ the object $Y$ is the colimit of $X\stackrel{f}{\to} Y\stackrel{{\operatorname{id}}_Y}{\to} Y \stackrel{{\operatorname{id}}_Y}{\to} Y \stackrel{{\operatorname{id}}_Y}{\to} Y\to \dots$; the cone of $f$ is $Z$, whereas the cone of ${\operatorname{id}}_Y$ is $0$. \[isesperp\]. Since for any $d\in D$ the functor ${\underline{C}}(-,d)$ converts arbitrary coproducts into products, it suffices to verify (for any $d\in D$): if for $Y_i$ as in Definition \[dses\](2) we have $Y_0\perp d$, $\operatorname{\operatorname{Cone}}(\phi_i)\perp d$ for all $i\ge 0$, then $\operatorname{\varinjlim}Y_i\perp d$. 
Now, for any $i\ge 0$ we have a long exact sequence $$\dots \to {\underline{C}}(Y_{i+1}[1],d) \to {\underline{C}}(Y_i[1],d) \to {\underline{C}}(\operatorname{\operatorname{Cone}}(\phi_i),d) (={\{0\}}) \to {\underline{C}}(Y_{i+1},d) \to {\underline{C}}(Y_i,d)\to \dots$$ Hence ${\underline{C}}(Y_{i+1}[1],d)$ surjects onto $ {\underline{C}}(Y_i[1],d)$, whereas the obvious induction yields that ${\underline{C}}(Y_j,d)={\{0\}}$ for any $j\ge 0$. Then Lemma \[coulim\](2) yields ${\underline{C}}(\operatorname{\varinjlim}Y_i,d)\cong \operatorname{\varprojlim}{\underline{C}}(Y_i,d)={\{0\}}$.

\[isescperp\]. Note that for any $d\in D$ the functor ${\underline{C}}(d,-)$ converts arbitrary coproducts into coproducts. So, similarly to the reasoning above, it suffices to fix some $d\in D$ and prove that $d\perp \operatorname{\varinjlim}_{i\ge 0}Y_i$ if $d\perp Y_0$ and $d\perp \operatorname{\operatorname{Cone}}(\phi_i)$ for all $i\ge 0$. The half-exact sequence ${\underline{C}}(d,Y_i)\to {\underline{C}}(d,Y_{i+1})\to {\underline{C}}(d, \operatorname{\operatorname{Cone}}(\phi_i))$ yields that $d\perp Y_j$ for all $j\ge 0$. It remains to apply Lemma \[coulim\](3).

\[isefun\], \[isesfun\]. Obvious from the definitions given above.

\[rperp\] Assertion \[isescperp\] of the Lemma immediately implies the following simple fact: for any class $D$ of compact objects of ${\underline{C}}$ the class $D^{\perp}$ is strongly extension-closed, Karoubi-closed, and also closed with respect to arbitrary coproducts. Indeed, it suffices to note that $D$ is orthogonal to the big extension-closure of $D^{\perp}$.

On relative Voevodsky's motives with integral coefficients (after Cisinski and Deglise) {#sbrmot}
---------------------------------------------------------------------------------------

We list some properties of the triangulated categories $DM(-)$ of relative motives (as considered by Cisinski and Deglise).

\[pcisdeg\] Let $X,Y$ be (excellent separated finite dimensional) schemes; let $f:X\to Y$ be a finite type morphism.

1. \[imotcat\] For any $X$ a tensor triangulated category ${DM(X)}$ with the unit object ${{\mathbb{Z}}}_X$ is defined; it is closed with respect to arbitrary small coproducts. ${DM(X)}$ is the category of [*Voevodsky's motivic complexes*]{} over $X$, as described (and thoroughly studied) in §11 of [@degcis].

2. \[iadd\] ${DM}(X\sqcup Y)={DM}(X)\bigoplus {DM}(Y)$; ${DM}(\emptyset)={\{0\}}$.

3. \[imotgen\] The (full) subcategory ${DM_c(X)}\subset {DM(X)}$ of compact objects is tensor triangulated, and ${{\mathbb{Z}}}_X\in {Obj}{DM_c(X)}$. ${DM_c(X)}$ weakly generates ${DM(X)}$.

4. \[imotfun\] For any $f$ of finite type the following functors are defined: $f^*: {DM}(Y) \leftrightarrows {DM(X)}:f_*$ and $f_!: {DM(X)}\leftrightarrows {DM(Y)}:f^!$; $f^*$ is left adjoint to $f_*$ and $f_!$ is left adjoint to $f^!$. We call these the [**motivic image functors**]{}. Any of them (when $f$ varies) yields a $2$-functor from the category of (separated finite-dimensional excellent) schemes with morphisms of finite type to the $2$-category of triangulated categories. Besides, the functors $f^*$ and $f_!$ preserve compact objects (i.e. they could be restricted to the subcategories ${DM_c}(-)$) and arbitrary (small) coproducts in ${DM}(-)$.

5. \[iexch\] For a Cartesian square of finite type morphisms $$\begin{CD} Y'@>{f'}>>X'\\ @VV{g'}V@VV{g}V \\ Y@>{f}>>X \end{CD}$$ we have $g^*f_!\cong f'_!g'{}^*$ and $g'_*f'{}^!\cong f^!g_*$ if $g$ is smooth or if $f$ is smooth projective.

6.
\[itate\] For any $X$ there exists a Tate object ${{\mathbb{Z}}}(1)\in{Obj}{DM_c(X)}$; tensoring by it yields exact Tate twist functors $-(1)$ on ${DM_c(X)}\subset {DM(X)}$. Both of these functors are auto-equivalences; we will denote the corresponding inverse functors by $-(-1)$. Tate twists commute with all motivic image functors mentioned (up to an isomorphism of functors) and with arbitrary (small) direct sums. Besides, for $X={\mathbb{P}}^1(Y)$ there is a functorial isomorphism $f_!({{\mathbb{Z}}}_{{\mathbb{P}}^1(Y)})\cong {{\mathbb{Z}}}_Y\bigoplus {{\mathbb{Z}}}_Y(-1)[-2]$.

7. \[icompgen\] ${DM_c(X)}$ is the triangulated subcategory of ${DM}(X)$ generated by all $u_!{{\mathbb{Z}}}_U(n)$ for all $u$ being the compositions of finite étale morphisms with open embeddings and smooth projective morphisms, $n\in {{\mathbb{Z}}}$.

8. \[iupstar\] $f^*$ is symmetric monoidal; $f^*({{\mathbb{Z}}}_Y)={{\mathbb{Z}}}_X$.

9. \[ipur\] $f_*\cong f_!$ if $f$ is proper; $f^!(-)\cong f^*(-)(s)[2s]$ if $f$ is smooth quasi-projective (everywhere) of relative dimension $s$.

10. \[iglup\] Let $i:Z\to X$ be a closed immersion, $U=X\setminus Z$; $j:U\to X$ is the complementary open immersion. Then the compositions $j^*i_*$, $i^*j_!$, and $i^!j_*$ are zero, whereas the adjunction transformation $j^*j_*\to 1_{{DM}(U)}$ is an isomorphism of functors.

11. \[iglu\] In addition to the assumptions of the previous assertion let $Z,X$ be regular. Then the motivic image functors yield [*gluing data*]{} for ${DM}(-)$ (in the sense of §1.4.3 of [@bbd]; see also Definition 8.2.1 of [@bws]). That means that (in addition to the statements given by the previous assertions) the following statements are also valid.

\(i) $i_*$ is a full embedding; $j^*=j^!$ is isomorphic to the localization (functor) of ${DM(X)}$ by $i_*({DM}(Z))$. Hence the adjunction transformations $i^*i_*\to 1_{{DM}(Z)}\to i^!i_!$ and $1_{{DM}(U)}\to j^!j_!$ are isomorphisms of functors.

\(ii) For any $M\in {Obj}{DM(X)}$ the pairs of morphisms $$\label{eglu} j_!j^!(M)(=j_!j^*(M)) \to M \to i_*i^*(M)(\cong i_!i^*M)$$ and $i_!i^!(M) \to M \to j_*j^*(M)$ can be completed to distinguished triangles (here the connecting morphisms come from the adjunctions of assertion \[imotfun\]).

12. \[icont\] Let $S$ be a scheme which is the limit of an essentially affine (filtering) projective system of schemes $S_{\beta}$ (for ${\beta}\in B$) such that the connecting morphisms are dominant. Then ${DM_c(S)}$ is isomorphic to the $2$-colimit of the categories ${DM_c}(S_{\beta})$. For these isomorphisms all the connecting functors are given by the corresponding motivic inverse image functors; see the next assertion and Remark \[ridmot\] below.

13. \[icontp\] In the setting of the previous assertion for some ${\beta}_0\in B$ we denote the corresponding morphisms $S\to S_{{\beta}_0}$ and $S_{{\beta}}\to S_{{\beta}_0}$ (when the latter is defined) by $p_{{\beta}_0}$ and $p_{{\beta},{\beta}_0}$, respectively. Let $M\in {DM_c}(S_{{\beta}_0})$, $N\in {DM}(S_{{\beta}_0})$. Then ${DM(S)}(p_{{\beta}_0}^*M, p_{{\beta}_0}^*N)=\operatorname{\varinjlim}_{{\beta}} {DM}(S_{{\beta}})(p_{{\beta},{\beta}_0}^*M, p_{{\beta},{\beta}_0}^*N)$.

Almost all of these properties of Voevodsky's motivic complexes are stated in (part B of) the Introduction of ibid.; the proofs are mostly contained in §11 and §10 of ibid. In particular, see Theorem 11.4.5 of loc.
cit.; note that a quasi-projective $f$ mentioned in assertion \[ipur\] can be presented as the composition of an open embedding with a smooth projective morphism. The functors preserving compact motives are treated in §4.2 of ibid. Our assertions \[iglup\] and \[iglu\] were proved in §2.3 of ibid. Assertion \[icontp\] is given by formula (4.3.4.1) of ibid. $f^*$ and $f_!$ respect coproducts since they admit right adjoints. So, we will only prove those assertions that are not stated in ibid. (explicitly). Assertion \[icompgen\] is an immediate consequence of Theorem 11.1.13 of [@degcis]. One only should unwind the definitions involved. In loc. cit. the generators were given by Tate twists of the motives $M_X(U)$ for all smooth $u$. Now, the (’Nisnevich’) Mayer-Vietoris distinguished triangle (see §11.1.9(1) of loc. cit.) allows us to consider only generators $U$ as above (and also connected); in this case we can replace $M_X(U)$ by a Tate twist of $u_*{{\mathbb{Z}}}_U$. \[ridmot\] In [@degcis] the functor $g^*$ was constructed for any morphism $g$ not necessarily of finite type; it commutes with arbitrary coproducts and preserves compact objects. Besides, for such a $g$ and any smooth projective $f:X'\to X$ we have an isomorphism $g^*f_*\cong f'_*g'{}^*$ (for the corresponding $f'$ and $g'$; cf. parts \[iexch\] and \[ipur\] of the proposition). Note also: if $g$ is a pro-open immersion, then one can also define $g^!=g^*$. So, one can also define $j_K^!$ that commutes with arbitrary coproducts. The system of these functors satisfy the second assertion in part \[iexch\] of the proposition (for a finite type $g$). 4\. In [@degcis] properties of motives with rational coefficients (those were called Beilinson motives) were studied in great detail (see Appendix C of ibid., it is no wonder that it is easier to deal with rational coefficients than with integral ones). When treating pro-smooth schemes, we will need the following statement. \[lega43\] Let $X=\operatorname{\varinjlim}_{{\beta}\in B} X_{\beta}$, be pro-smooth, $Y$ is a regular scheme quasi-projective over $X$. Then there exist a ${\beta}_0\in B$ and a quasi-projective $Y_{{\beta}_0}/X_{{\beta}_0}$ such that: $Y\cong Y_{{\beta}_0}\times_{X_{{\beta}_0}}X$ and for all ${\beta}\ge {\beta}_0$ the schemes $Y_{\beta}=Y_{{\beta}_0}\times_{X_{{\beta}_0}}X_{\beta}$ are regular. Let $Y$ be closed in an open subscheme $U$ of ${\mathbb{P}}^n(X)$ (for some $n>0$). By Proposition 8.6.3 of [@ega43], there exists a ${\beta}_1\in B$, an open $U_{{\beta}_1}\subset X_{{\beta}_1}$, and a closed $Y_{{\beta}_1}\subset U_{{\beta}_1}$ such that $Y=Y_{{\beta}_1}\times_{X_{{\beta}_1}}X$. Hence it suffices to consider the case when $Y=Y_{{\beta}_1}\times_{X_{{\beta}_1}}X$, $Y_{{\beta}_1}$ is closed in $X_{{\beta}_1}$. The question is whether (in the situation described) we can choose a ${\beta}_0\ge {\beta}_1$ such that all the corresponding $Y_{\beta}$ are regular. Since the connecting morphisms of $Y_{\beta}$ are smooth, it suffices to find a ${\beta}_0$ such that $Y_{{\beta}_0}$ is regular. Moreover, we obtain that the preimages of the singularity loci $S_{\beta}$ of $Y_{\beta}$ with respect to the projections $p_{\beta}:Y\to Y_{\beta}$ form a projective system of closed subschemes of $Y$ (i.e. that they ’decrease’). Since $Y$ is noetherian, it suffices to verify: there cannot exist a Zariski point $P$ of $Y$ that belongs to all of $p_{\beta}{^{-1}}(S_{\beta})$. Assume the converse. Aapplying loc. cit. 
again we obtain that $P$ is the preimage of a Zariski point $P_0\in Y_{{\beta}_0}$ for a certain ${\beta}_0\ge {\beta}_1$. Then we obtain that the local equations that characterize $Y_{{\beta}_0}$ in $X_{{\beta}_0}$ at the point $P_0$ do not yield a regular sequence; hence the same is true for $Y$ in $X$ at the point $P$. Thus $Y$ is not regular at $P$, and we obtain a contradiction.

The following statements follow from Proposition \[pcisdeg\] easily. The limit argument that we use in the proof of assertion 2 of the Lemma is closely related to Remark 6.6 of [@lesm].

\[l4onepo\] 1\. Let $S=\cup S_l^{\alpha}$, $l\in L$, be a (very regular) stratification (see the Notation section). Then any $M\in {Obj}{DM(S)}$ belongs to the extension-closure (see Definition \[dses\](1)) of $j_{l!}j_l^*(M)$. In particular, ${{\mathbb{Z}}}_S$ belongs to the extension-closure of $j_!({{\mathbb{Z}}}_{S_l^{\alpha}})$.

2\. For any pro-smooth $S$ (see the Notation) and a smooth quasi-projective $x:X\to S$ we have: ${DM(S)}(x_!{{\mathbb{Z}}}_X[-r](-q),{{\mathbb{Z}}}_S) = {\{0\}}$ if $r>2q$.

1\. We prove the assertion by induction on the number of strata. In the case when $\#L=1$ all the assertions are obvious. Now let $\#L>1$. By definition, ${\alpha}$ is the union of two very regular stratifications $\beta$ and $\gamma$ of $U$ and $Z\subset S$, respectively, such that $S,U,Z$ are regular and $Z$ is closed in $S$. We denote the open immersion $U \to S$ by $j$ and the (closed) immersion $Z\to S$ by $i$. By the inductive assumption, both $j_!j^*(M)$ and $i_!i^*M$ belong to the envelope in question (here we use the $2$-functoriality of $(-)^*$ and $(-)_!$). Hence the distinguished triangle (\[eglu\]) yields the result.

2\. Certainly, we can assume that $X$ is connected (see Proposition \[pcisdeg\](\[iadd\])). We present $x$ as the composition $X\stackrel{i}{\to}Y \stackrel{u}{\to} P\stackrel{p}{\to} S$ where $i$ is a closed embedding of pro-smooth schemes, $u$ is an open embedding, and $p$ is a smooth projective morphism of relative dimension $d$ with connected domain. Applying Proposition \[pcisdeg\](\[imotfun\], \[ipur\],\[iupstar\]) we obtain: $$\begin{gathered} {DM(S)}(x_!{{\mathbb{Z}}}_X[-r](-q),{{\mathbb{Z}}}_S) ={DM(S)}(p_!u_!i_!{{\mathbb{Z}}}_X[-r](-q),{{\mathbb{Z}}}_S)\cong {DM}(P)(u_!i_!{{\mathbb{Z}}}_X,p^!{{\mathbb{Z}}}_S(q)[r])\\ \cong{DM}(P)(u_!i_!{{\mathbb{Z}}}_X,{{\mathbb{Z}}}_P(q+d)[r+2d]) \cong {DM}(Y)(i_!{{\mathbb{Z}}}_X,u^!{{\mathbb{Z}}}_P(q+d)[r+2d])\cong {DM}(Y)(i_!{{\mathbb{Z}}}_X, {{\mathbb{Z}}}_Y(q+d)[r+2d]).\end{gathered}$$ Now, by Lemma \[lega43\] we can assume that $i$ is the filtered projective limit of closed embeddings $i_{\beta}: X_{\beta}\to Y_{\beta}$ of smooth varieties over a perfect field (whereas the connecting morphisms of the system are smooth affine dominant). Proposition \[pcisdeg\](\[iexch\], \[icont\]) yields that it suffices to prove the vanishing of ${DM}(Y_{\beta})(i_{{\beta}!}{{\mathbb{Z}}}_{X_{\beta}}, {{\mathbb{Z}}}_{Y_{\beta}}(q+d)[r+2d])$ for all ${\beta}\in B$. Example 11.2.3 of ibid. easily yields that the latter group is isomorphic to the motivic cohomology group $H^{r+2d-2c,q+d-c}_{\cal M}(X_{\beta})$. It vanishes since $r+2d-2c>2q+2d-2c$ (where $c$ is the codimension of $X_{\beta}$ in $Y_{\beta}$; see the notation and the calculations in loc. cit.).
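For the reader's convenience, the vanishing used in the last step can be written out via the standard comparison of motivic cohomology of smooth varieties over a perfect field with Bloch's higher Chow groups (we invoke this comparison only to justify the vanishing above): $$H^{r+2d-2c,q+d-c}_{\cal M}(X_{\beta})\cong CH^{q+d-c}\big(X_{\beta},\,2(q+d-c)-(r+2d-2c)\big)=CH^{q+d-c}(X_{\beta},\,2q-r)={\{0\}},$$ since higher Chow groups vanish in negative (homological) degree and $2q-r<0$ by assumption.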
Weight structures: short reminder and a new existence lemma {#sws} ----------------------------------------------------------- \[dwstr\] I A pair of subclasses ${\underline{C}}_{w\le 0},{\underline{C}}_{w\ge 0}\subset{Obj}{\underline{C}}$ will be said to define a weight structure $w$ for ${\underline{C}}$ if they satisfy the following conditions: \(i) ${\underline{C}}_{w\ge 0},{\underline{C}}_{w\le 0}$ are additive and Karoubi-closed in ${\underline{C}}$ (i.e. contain all ${\underline{C}}$-retracts of their objects). \(ii) [**Semi-invariance with respect to translations.**]{} ${\underline{C}}_{w\le 0}\subset {\underline{C}}_{w\le 0}[1]$, ${\underline{C}}_{w\ge 0}[1]\subset {\underline{C}}_{w\ge 0}$. \(iii) [**Orthogonality.**]{} ${\underline{C}}_{w\le 0}\perp {\underline{C}}_{w\ge 0}[1]$. \(iv) [**Weight decompositions**]{}. For any $M\in{Obj}{\underline{C}}$ there exists a distinguished triangle $$\label{wd} B\to M\to A\stackrel{f}{\to} B[1]$$ such that $A\in {\underline{C}}_{w\ge 0}[1],\ B\in {\underline{C}}_{w\le 0}$. II The category ${{\underline{Hw}}}\subset {\underline{C}}$ whose objects are ${\underline{C}}_{w=0}={\underline{C}}_{w\ge 0}\cap {\underline{C}}_{w\le 0}$, ${{\underline{Hw}}}(Z,T)={\underline{C}}(Z,T)$ for $Z,T\in {\underline{C}}_{w=0}$, will be called the [*heart*]{} of $w$. III ${\underline{C}}_{w\ge i}$ (resp. ${\underline{C}}_{w\le i}$, resp. ${\underline{C}}_{w= i}$) will denote ${\underline{C}}_{w\ge 0}[i]$ (resp. ${\underline{C}}_{w\le 0}[i]$, resp. ${\underline{C}}_{w= 0}[i]$). IV We denote ${\underline{C}}_{w\ge i}\cap {\underline{C}}_{w\le j}$ by ${\underline{C}}_{[i,j]}$ (so it equals ${\{0\}}$ for $i>j$). V We will call ${\underline{C}}^b=\cup_{i\in {{\mathbb{Z}}}} {\underline{C}}_{w\le i}\cap \cup_{i\in {{\mathbb{Z}}}} {\underline{C}}_{w\ge i}$ the class of [*bounded*]{} objects of ${\underline{C}}$. We will say that $w$ is bounded if ${\underline{C}}^b={Obj}{\underline{C}}$. Besides, we will call $\cup_{i\in {{\mathbb{Z}}}} {\underline{C}}_{w\le i}$ the class of [*bounded above*]{} objects. VI $w$ will be called [*non-degenerate from above*]{} if $\cap_l {\underline{C}}^{w\ge l}={\{0\}}.$ VII Let ${\underline{C}}$ and ${\underline{C}}'$ be triangulated categories endowed with weight structures $w$ and $w'$, respectively; let $F:{\underline{C}}\to {\underline{C}}'$ be an exact functor. $F$ will be called [*left weight-exact*]{} (with respect to $w,w'$) if it maps ${\underline{C}}_{w\le 0}$ to ${\underline{C}}'_{w'\le 0}$; it will be called [*right weight-exact*]{} if it maps ${\underline{C}}_{w\ge 0}$ to ${\underline{C}}'_{w'\ge 0}$. $F$ is called [*weight-exact*]{} if it is both left and right weight-exact. VIII We call a category $\frac A B$ a [*factor*]{} of an additive category $A$ by its (full) additive subcategory $B$ if ${Obj}{\bigl(}\frac A B{\bigl)}={Obj}A$ and $(\frac A B)(M,N)= A(M,N)/(\sum_{O\in {Obj}B} A(O,N) \circ A(M,O))$. \[rstws\] 1\. A weight decomposition (of any $M\in {Obj}{\underline{C}}$) is (almost) never canonical; still we will sometimes denote (any choice of) a pair $(B,A)$ coming from in (\[wd\]) by $(w_{\le 0}M,w_{\ge 1}M)$. For an $l\in {{\mathbb{Z}}}$ we denote by $w_{\le l}M$ (resp. $w_{\ge l}M$) a choice of $w_{\le 0}(M[-l])[l]$ (resp. of $w_{\ge 1}(M[1-l])[l-1]$). We will call (any choices of) $(w_{\le l}M,w_{\ge l}M)$ [*weight truncations*]{} of $M$. 2\. 
A simple (and yet useful) example of a weight structure comes from the stupid filtration on the homotopy categories of cohomological complexes $K(B)$ for an arbitrary additive $B$. In this case $K(B)_{w\le 0}$ (resp. $K(B)_{w\ge 0}$) will be the class of complexes that are homotopy equivalent to complexes concentrated in degrees $\ge 0$ (resp. $\le 0$). The heart of this weight structure is the Karoubi-closure of $B$ in $K(B)$. in the corresponding category. 3\. In the current paper we use the ’homological convention’ for weight structures; it was previously used in [@hebpo], [@wildic], and [@brelmot], whereas in [@bws] and in [@bger] the ’cohomological convention’ was used. In the latter convention the roles of ${\underline{C}}_{w\le 0}$ and ${\underline{C}}_{w\ge 0}$ are interchanged i.e. one considers ${\underline{C}}^{w\le 0}={\underline{C}}_{w\ge 0}$ and ${\underline{C}}^{w\ge 0}={\underline{C}}_{w\le 0}$. So, a complex $X\in {Obj}K(B)$ whose only non-zero term is the fifth one has weight $-5$ in the homological convention, and has weight $5$ in the cohomological convention. Thus the conventions differ by ’signs of weights’; $K(B)_{[i,j]}$ is the class of retracts of complexes concentrated in degrees $[-j,-i]$. Now we recall those properties of weight structures that will be needed below (and that can be easily formulated). We will not mention more complicated matters (weight spectral sequences and weight complexes) here; instead we will just formulate the corresponding ’motivic’ results below. \[pbw\] Let ${\underline{C}}$ be a triangulated category. 1. \[iext\] Let $w$ be a weight structure for ${\underline{C}}$. Then ${\underline{C}}_{w\le 0}$, ${\underline{C}}_{w\ge 0}$, and ${\underline{C}}_{w=0}$ are extension-closed (see Definition \[dses\](1)). Besides, for any $M\in {\underline{C}}_{w\le 0}$ we have $w_{\ge 0}M\in {\underline{C}}_{w=0}$ (for any choice of $w_{\ge 0}M$). 2. \[iwefun\] For ${\underline{C}},w$ as in the previous assertions weight decompositions are [*weakly functorial*]{} i.e. any ${\underline{C}}$-morphism of objects has a (non-unique) extension to a morphism of (any choices of) their weight decomposition triangles. 3. \[iadjco\] A composition of left (resp. right) weight-exact functors is left (resp. right) weight-exact. 4. \[iadj\] Let ${\underline{C}}$ and ${{\underline{D}}}$ be triangulated categories endowed with weight structures $w$ and $v$, respectively. Let $F: {\underline{C}}\leftrightarrows {{\underline{D}}}:G$ be adjoint functors. Then $F$ is left weight-exact whenever $G$ is right weight-exact. 5. \[iloc\] Let $w$ be a weight structure for ${\underline{C}}$; let ${{\underline{D}}}\subset {\underline{C}}$ be a triangulated subcategory of ${\underline{C}}$. Suppose that $w$ yields a weight structure $w_{{{\underline{D}}}}$ for ${{\underline{D}}}$ (i.e. ${Obj}{{\underline{D}}}\cap {\underline{C}}_{w\le 0}$ and ${Obj}{{\underline{D}}}\cap {\underline{C}}_{w\ge 0}$ give a weight structure for ${{\underline{D}}}$). Then $w$ also induces a weight structure on ${\underline{C}}/{{\underline{D}}}$ (the localization i.e. the Verdier quotient of ${\underline{C}}$ by the Karoubi-closure of ${{\underline{D}}}$) in the following sense: the Karoubi-closures of ${\underline{C}}_{w\le 0}$ and ${\underline{C}}_{w\ge 0}$ (considered as classes of objects of ${\underline{C}}/{{\underline{D}}}$) give a weight structure $w'$ for ${\underline{C}}/{{\underline{D}}}$ (note that ${Obj}{\underline{C}}={Obj}{\underline{C}}/{{\underline{D}}}$). 
Besides, there exists a full embedding $\frac {{{\underline{Hw}}}}{{{\underline{Hw}}}_{{{\underline{D}}}}}\to {{\underline{Hw}}}'$; ${{\underline{Hw}}}'$ is the Karoubi-closure of $\frac{{{\underline{Hw}}}}{{{\underline{Hw}}}_{{{\underline{D}}}}}$ in ${\underline{C}}/{{\underline{D}}}$. Moreover, assume that the embedding of $i:{{\underline{D}}}\to {\underline{C}}$ possesses both a left and a right adjoint i.e. that ${{\underline{D}}}\stackrel{i_*}{\to}{\underline{C}}\stackrel{j^*}{\to}{\underline{C}}/{{\underline{D}}}$ is a part of gluing data (so that there also exist $j_*$ and $j^!$ and these six functors satisfy the conditions of Proposition \[pcisdeg\](\[iglup\], \[iglu\]); see Chapter 9 of [@neebook]). Then all objects of ${{\underline{Hw}}}'$ come from ${{\underline{Hw}}}$ (i.e. we do not need a Karoubi-closure here). 6. \[igluws\] Let ${{\underline{D}}}\stackrel{i_*}{\to}{\underline{C}}\stackrel{j^*}{\to}{{\underline{E}}}$ be a part of gluing data (as described in the previous assertion). Then for any pair of weight structures on ${{\underline{D}}}$ and ${{\underline{E}}}$ (we will denote them by $w_{{\underline{D}}}$ and $w_{{\underline{E}}}$, respectively) there exists a weight structure $w$ for ${\underline{C}}$ such that both $i_*$ and $j^*$ are weight-exact (with respect to the corresponding weight structures). Besides, $i^!$ and $j_*$ are right weight-exact (with respect to the corresponding weight structures); $i^*$ and $j_!$ are left weight-exact. Moreover, ${\underline{C}}_{w\ge 0}=C_1=\{M\in {Obj}{\underline{C}}:\ i^!(M)\in {{\underline{D}}}_{w_{{\underline{D}}}\ge 0} ,\ j^*(M)\in {{\underline{E}}}_{w_{{\underline{E}}}\ge 0} \}$ and ${\underline{C}}_{w\le 0}=C_2=\{M\in {Obj}{\underline{C}}:\ i^*(M)\in {{\underline{D}}}_{w_{{\underline{D}}}\le 0} ,\ j^*(M)\in {{\underline{E}}}_{w_{{\underline{E}}}\le 0} \}$. Lastly, $C_1$ (resp. $C_2$) is the envelope of $j_!({{\underline{E}}}_{w\le 0})\cup i_*({{\underline{D}}}_{w\le 0})$ (resp. of $ j_*({{\underline{E}}}_{w\ge 0})\cup i_*({{\underline{D}}}_{w\ge 0})$). 7. \[igluwsn\] In the setting of assertion \[igluws\], the weight structure $w$ described is the only weight structure for ${\underline{C}}$ such that the functors $i_!,j_!,j^*$ and $i^*$ are left weight-exact (we will say that $w$ is [*glued from*]{} $w_{{\underline{D}}}$ and $w_{{\underline{E}}}$). Most of the assertions were proved in [@bws] (pay attention to Remark \[rstws\](3)!); some more precise information can be found in (the proof of) Proposition 1.2.3 of [@brelmot]. It only remains to note that the ’moreover part’ of assertion \[iloc\] generalizes Theorem 1.7 of [@wildic]. The proof of loc. cit. carries over to our (abstract) setting without any changes (note here that the existence of the natural transformation $j_!\to j_*$ used there is an immediate consequence of the adjunctions given by the gluing data setting). \[rlift\] Part \[iloc\] of the proposition can be re-formulated as follows. 
If $i_*:{{\underline{D}}}\to {\underline{C}}$ is an embedding of triangulated categories that is weight-exact (with respect to certain weight structures for ${{\underline{D}}}$ and ${\underline{C}}$), and an exact functor $j^*:{\underline{C}}\to {{\underline{E}}}$ realizes ${{\underline{E}}}$ as the localization of ${\underline{C}}$ by $i_*({{\underline{D}}})$, then there exists a unique weight structure $w'$ for ${{\underline{E}}}$ such that $j^*$ is weight-exact; ${{\underline{Hw}}}_{{{\underline{E}}}}$ is the Karoubi-closure of $\frac {{{\underline{Hw}}}} {i_*({{\underline{Hw}}}_{{{\underline{D}}}})}$ (with respect to the natural functor $\frac {{{\underline{Hw}}}}{i_*({{\underline{Hw}}}_{{{\underline{D}}}})}\to {{\underline{E}}}$). Now we prove a certain statement on the existence of weight structures (as none of the existence results stated in §4 of [@bws] are sufficient for our purposes); it is a slight reformulation of the main result of [@paukcomp]. \[pnews\] Let ${\underline{C}}$ be a triangulated category that is closed with respect to all coproducts; let $C'\subset {\underline{C}}$ be a (proper!) set of compact objects such that $C'\subset C'[1]$. Then the classes $C_1=C'^{\perp}[-1]$ and $C_2$, the big extension-closure of $C'$, yield a weight structure for ${\underline{C}}$ (i.e. there exists a $w$ such that ${\underline{C}}_{w\ge 0}=C_1$, ${\underline{C}}_{w\le 0}=C_2$). Obviously, $(C_1,C_2)$ are Karoubi-closed in ${\underline{C}}$, $C_1[1]\subset C_1$, $C_2\subset C_2[1]$. Besides, $C_2\perp C_1[1]$ by Lemma \[lbes\](\[isesperp\]). It remains to verify that any $M\in {Obj}{\underline{C}}$ possesses a weight decomposition with respect to $(C_1,C_2)$. We apply (a certain modification of) the method used in the proof of Theorem 4.5.2(I) of [@bws] (cf. also the construction of crude cellular towers in §I.3.2 of [@marg]). We construct a certain sequence of $M_k$ for $k\ge 0$ by induction in $k$ starting from $M_0=M$. Suppose that $M_k$ (for some $k\ge 0$) is already constructed; then we take $P_k=\coprod_{(c,f):\,c\in C',f\in {\underline{C}}(c,M_k)}c$; $M_{k+1}$ is a cone of the morphism $\coprod_{(c,f):\,c\in C',f\in {\underline{C}}(c,M_k)}f:P_k\to M_k$. Now we ’assemble’ the $P_k$. The compositions of the morphisms $h_k:M_{k}\to M_{k+1}$ given by this construction yield morphisms $g_i:M\to M_i$ for all $i\ge 0$. Besides, the octahedral axiom of triangulated categories immediately yields $\operatorname{Cone}(h_k)\cong P_k[1]$. Now we complete $g_k$ to distinguished triangles $B_k\stackrel{b_k}{\to}M \stackrel{g_k}{\to}M_k$. The octahedral axiom yields the existence of morphisms $s_i:B_i\to B_{i+1}$ that are compatible with the $b_k$ and such that $\operatorname{Cone}(s_i)\cong P_i$ for all $i\ge 0$. We consider $B=\operatorname{\varinjlim}B_k$; by Lemma \[coulim\](1) the $(b_k)$ lift to a certain morphism $b: B\to M$. We complete $b$ to a distinguished triangle $B\stackrel{b}{\to} M\stackrel{a}{\to} A\stackrel{f}{\to} B[1]$. This triangle will be our candidate for a weight decomposition of $M$. First we note that $B_0=0$; since $\operatorname{Cone}(s_i)\cong P_i$ we have $B\in C_2$ by the definition of the latter. It remains to prove that $A\in C_1[1]$, i.e. that $C'\perp A$. For a $c\in C'$ we should check that ${\underline{C}}(c,A)={\{0\}}$. 
The long exact sequence $$\dots \to {\underline{C}}(c,B)\to {\underline{C}}(c,M)\to {\underline{C}}(c,A)\to {\underline{C}}(c, B[1])\to {\underline{C}}(c,M[1])\to\dots$$ translates this into: ${\underline{C}}(c,-)(b)$ is surjective and ${\underline{C}}(c,-)(b[1])$ is injective. Now, by Lemma \[coulim\](3), ${\underline{C}}(c,B)\cong\operatorname{\varinjlim}{\underline{C}}(c,B_i)$ and ${\underline{C}}(c,B[1])\cong\operatorname{\varinjlim}{\underline{C}}(c,B_i[1])$. Hence the long exact sequences $$\dots \to {\underline{C}}(c,B_k)\to {\underline{C}}(c,M)\to {\underline{C}}(c,M_k)\to {\underline{C}}(c, B_k[1])\to {\underline{C}}(c,M[1])\to\dots$$ yield: it suffices to verify that $\operatorname{\varinjlim}{\underline{C}}(c,M_k) ={\{0\}}$ (note here that the $h_k$ are compatible with the $s_k$). Lastly, ${\underline{C}}(c,P_k)$ surjects onto ${\underline{C}}(c,M_k)$; hence the group ${\underline{C}}(c,M_k)$ dies in ${\underline{C}}(c,M_{k+1})$ for any $k\ge 0$ and we obtain the result. Much interesting information on $w$ can be obtained from (the dual to) Theorem 2.2.6 of [@bger]. Our main results: the construction and properties of the Chow weight structure {#swchow} ============================================================================== In §\[schowunr\] we define a certain weight structure ${{w_{Chow}}}$ for ${DM(S)}$ and study its properties. In particular, we describe the ’functoriality’ of ${{w_{Chow}}}$ (with respect to functors of the type $f^*,f_*,f^!$, and $f_!$, for $f$ being a quasi-projective morphism of schemes). In §\[srat\] we discuss the relation of our ’integral weights’ with the ones for motives with rational coefficients. In §\[sappl\] we briefly describe some (immediate) consequences of our results. The main results {#schowunr} ----------------- \[dwchow\] For a scheme $S$ we denote by $C'=C'(S)$ the set of all $p_!({{\mathbb{Z}}}_P)(r)[2r-q]$, where $p:P\to S$ runs through all quasi-projective morphisms with regular domain, $r\in {{\mathbb{Z}}}$, $q\ge 0$. Note that we can assume $C'$ to be a (proper) set; its objects are compact in ${DM(S)}$. Then we denote by ${{w_{Chow}}}={{w_{Chow}}}(S)$ the weight structure corresponding to $S$ by Proposition \[pnews\] (i.e. ${DM(S)}_{{{w_{Chow}}}\ge 0}=C'^\perp[-1]$, ${DM(S)}_{{{w_{Chow}}}\le 0}$ is the big extension-closure of $C'$). We prove the main properties of ${{w_{Chow}}}$. \[twchowa\] I The functor $-(n)[2n](=\otimes {{\mathbb{Z}}}(n)[2n]):{DM(S)}\to{DM(S)}$ is weight-exact with respect to ${{w_{Chow}}}$ for any $S$ and any $n\in {{\mathbb{Z}}}$. II Let $f:X\to Y$ be a quasi-projective morphism of schemes. 1\. $f_!$ is left weight-exact, $f^!$ is right weight-exact. Moreover, $f^*$ is left weight-exact and $f_*$ is right weight-exact if $X$ and $Y$ are regular. 2\. Suppose moreover that $f$ is smooth. Then $f^*$ and $f^!$ are also weight-exact. 3\. Moreover, $f^*$ is weight-exact for $f$ being a (filtered) projective limit of smooth quasi-projective morphisms such that the corresponding connecting morphisms are dominant smooth affine. III Let $i:Z\to X$ be a closed immersion of regular schemes; let $j:U\to X$ be the complementary open immersion. 1\. ${Chow}(U)$ is the factor (in the sense of Definition \[dwstr\](VIII)) of ${Chow}(X)$ by $i_*({Chow}(Z))$. 2. For $M\in {Obj}{DM(X)}$ we have: $M\in {DM(X)}_{{{w_{Chow}}}\ge 0}$ (resp. $M\in {DM(X)}_{{{w_{Chow}}}\le 0}$) whenever $j^!(M)\in {DM}(U)_{{{w_{Chow}}}\ge 0}$ and $i^!(M)\in {DM}(Z)_{{{w_{Chow}}}\ge 0}$ (resp. 
$j^*(M)\in {DM}(U)_{{{w_{Chow}}}\le 0}$ and $i^*(M)\in {DM}(Z)_{{{w_{Chow}}}\le 0}$). IV Let $S=\cup S_l^{\alpha}$ be a very regular stratification (of a regular $S$); let $j_l:S_l^{\alpha}\to S$ denote the corresponding immersions. Then for $M\in {Obj}{DM(S)}$ we have: $M\in {DM(S)}_{{{w_{Chow}}}\ge 0}$ (resp. $M\in {DM(S)}_{{{w_{Chow}}}\le 0}$) whenever $j_l^!(M)\in {DM}(S_l^{\alpha})_{{{w_{Chow}}}\ge 0}$ (resp. $j_l^*(M)\in {DM}(S_l^{\alpha})_{{{w_{Chow}}}\le 0}$) for all $l$. V For a regular $S$ the following statements are valid. 1\. Any object of ${DM_c(S)}$ is bounded above with respect to ${{w_{Chow}}}(S)$. 2\. ${{w_{Chow}}}(S)$ is non-degenerate from above. 3\. ${{\mathbb{Z}}}_S\in {DM(S)}_{{{w_{Chow}}}\le 0}$. 4\. If $S$ is also pro-smooth, then ${{\mathbb{Z}}}_S\in {DM(S)}_{{{w_{Chow}}}=0}$. I By Proposition \[pbw\](\[iadj\]) it suffices to verify that all $-(n)[2n]$ are left weight-exact. The latter is immediate from the definition of ${DM(S)}_{{{w_{Chow}}}\le 0}$ by Lemma \[lbes\](\[isesfun\]). II1. The statement for $f_!$ is obvious (here we apply Lemma \[lbes\](\[isesfun\]) again). Next, Proposition \[pbw\](\[iadj\]) yields the statement for $f^!$. In order to verify the remaining parts of the assertion it suffices to consider the case when $f$ is either smooth, or is a closed embedding of regular schemes. If $f$ is smooth then Proposition \[pcisdeg\](\[iexch\]) (together with Lemma \[lbes\](\[isesfun\])) yields that $f^*({DM}(Y)_{{{w_{Chow}}}\le 0})\subset {DM}(X)_{{{w_{Chow}}}\le 0}$. Applying Proposition \[pbw\](\[iadj\]) we also obtain the right weight-exactness of $f_*$. By loc. cit. we obtain: it remains to verify the left weight-exactness of $f^*$ when $f$ is a closed embedding. Denote by $j:U\to Y$ the open embedding complementary to $f$. It suffices to check (after making obvious reductions) that for $M=p_!({{\mathbb{Z}}}_P)$, where $p:P\to Y$ is smooth quasi-projective, $f^*M$ can be obtained from objects of the form $q_{i!}({{\mathbb{Z}}}_{Q_i})$, for some quasi-projective $q_i:Q_i\to X$ with $Q_i$ regular, by ’extensions’ (cf. Lemma \[lbes\](\[iseses\])). Now, choose a very regular stratification ${\alpha}$ of $P$ each of whose components is mapped by $p$ either to $X$ or to $U$. By Lemma \[l4onepo\](1), it suffices to verify that $f^*p_!j_{l!}^{\alpha}({{\mathbb{Z}}}_{P^l_{\alpha}})\in C'(X)$. Now, if $p\circ j_l$ factorizes through $U$, then $f^*p_!j_{l!}^{\alpha}({{\mathbb{Z}}}_{P^l_{\alpha}})$ factorizes through $f^*j_!=0$ (see Proposition \[pcisdeg\](\[iglup\])). On the other hand, if $p\circ j_l=f\circ p'$ (for some $p':P^l_{\alpha}\to X$) then $f^*p_!j_{l!}^{\alpha}({{\mathbb{Z}}}_{P^l_{\alpha}})\cong f^*f_!p'_!({{\mathbb{Z}}}_{P^l_{\alpha}})\cong p'_!({{\mathbb{Z}}}_{P^l_{\alpha}})\in C'(X)$. 2\. Obviously, we can assume that $f$ is equi-dimensional of relative dimension $s$. Then $f^!(-)\cong f^*(-)(s)[2s]$ by Proposition \[pcisdeg\](\[ipur\]). Hence both sides of this isomorphism are weight-exact by the combination of the previous assertions of our theorem (note also that we actually verified the left weight-exactness of $f^*$ for an arbitrary smooth $f$ in the proof of the previous assertion). 3\. Passing to the limit (using Remark \[ridmot\]) we prove the left weight-exactness of $f^*$. 
In order to verify its right weight-exactness (by the definition of ${{w_{Chow}}}$) for $X=\operatorname{\varprojlim}_{\beta}X_{\beta}$ (${\beta}\in B$), $Y_0=X$, $p_{{\beta}}:X_{\beta}\to X$, and $p_{{\beta},0}: X_{\beta}\to X_0$, we should check: for any $O\in C'(X)$, $N\in {DM}(Y)_{{{w_{Chow}}}\ge 1}$, we have $f^*M\perp O$. Since all the $p_{{\beta},0}^*$ are (right) weight-exact, we can replace $X$ by any $X_{\beta}$ in this statement; hence we may assume that $O=f^*N$ for some $N\in C'(Y)$ (by Lemma \[lega43\]). Then it remains to combine the previous assertion with Proposition \[pcisdeg\](\[icontp\]). III Since $i_*\cong i_!$ in this case, $i_*$ is weight-exact by assertion II1. $j^*$ is weight-exact by assertion II2. 1\. ${DM}(U)$ is the localization of ${DM}(X)$ by $i_*({DM}(Z))$ by Proposition \[pcisdeg\](\[iglu\]). Hence Proposition \[pbw\](\[iloc\]) yields the result (see Remark \[rlift\]). 2\. Proposition \[pcisdeg\](\[iglu\]) yields: ${{w_{Chow}}}(X)$ is exactly the weight structure obtained by ’gluing ${{w_{Chow}}}(Z)$ with ${{w_{Chow}}}(U)$’ via Proposition \[pbw\](\[igluws\]) (here we use part \[igluwsn\] of loc. cit.). Hence loc. cit. yields the result (note that $j^*=j^!$). IV The assertion can be easily proved by induction on the number of strata using assertion III2. V1. Immediate from Proposition \[pcisdeg\](\[icompgen\]). 2\. Loc.cit. also yields that for any $N\in {Obj}{DM(S)}$ there exists a non-zero morphism in ${DM(S)}(O[i],N)$ for some $i\in {{\mathbb{Z}}}$, $O\in C'(S)$. This is equivalent to assertion V2 (by the definition of ${{w_{Chow}}}$). 3\. We have ${{\mathbb{Z}}}_S\in C'(S)$; hence ${{\mathbb{Z}}}_S\in {DM(S)}_{{{w_{Chow}}}\le 0}$. 4. Let $S\cong \operatorname{\varprojlim}S_{\beta}$ (as in the definition of pro-smooth schemes). Assertion II3 (along with Proposition \[pcisdeg\](\[iupstar\])) yields: it suffices to verify the statement in question for one of $S_{\beta}$. Hence we may assume that $S$ is a smooth quasi-projective variety over a perfect field $K$. Moreover, by assertion II2 it suffices to consider the case $S={\operatorname{Spec}\,}K$. In this case the statement is immediate from Lemma \[l4onepo\](2). \[rexplwd\] 1\. We do not know whether a (general) compact motif always possesses a ’compact weight decomposition’, though this is always true for motives with rational coefficients (by the central results of [@hebpo] and [@brelmot]; cf. §\[srat\] below). This makes the search of ’explicit’ weight decompositions (even more) important. Note that the latter allow the calculation of [*weight filtrations*]{} and [*weight spectral sequences*]{} for cohomology; see Proposition \[pwss\] below. The problem here is that our results do not provide us with ’enough’ compact objects of ${DM(S)}_{{{w_{Chow}}}\ge 0}$; they do not yield any non-${{\mathbb{Q}}}$-linear objects of ${DM(S)}_{{{w_{Chow}}}\ge 0}$ (see §\[srat\]) at all unless $S$ is a scheme over a field. 2\. Still we describe an important case when an explicit weight decomposition is known. Adopt the notation and assumptions of Proposition \[pcisdeg\](\[iglu\]). Let $p_U:P_U\to U$ be a projective morphism such that $P_U$ is pro-smooth; denote $p_{U!}{{\mathbb{Z}}}_{P_U}$ by $N$. Then we have $ j_!(N[1])\in {DM(X)}_{{{w_{Chow}}}}\le 1$; hence $w_{\le 0}j_!(N[1])\in {DM(X)}_{{{w_{Chow}}}=0}$. Now assume that $P_U$ possesses a pro-smooth $X$-model i.e. that $P_U=P\times_X U$ for a projective $p:P\to X$, $P$ is pro-smooth. Then for $M=p_!{{\mathbb{Z}}}_P$ we have: $N=j^*(M)(=j^!(M))$. 
Since $M\in {DM(X)}_{{{w_{Chow}}}=0}$, the distinguished triangle $$i_*i^*(M)\to j_!(N)[1](=j_!j^!(M)[1]) \to M[1]$$ (given by Proposition \[pcisdeg\](\[iglu\])) yields a ${{w_{Chow}}}(X)$-weight decomposition of $j_!(N)[1]$ (note that both of its components are compact). Hence weight decompositions relate $P_U$ with $P$ when the latter exists; still they exist and are ’weakly functorial’ (see Proposition \[pbw\](\[iwefun\])) in $N$ in the general case also. Note that such an $X$-model for $P_U$ always exists if $X$ is a variety over a characteristic $0$ field (by Hironaka’s resolution of singularities); hence our methods yield a certain substitute for the resolution of singularities in a more general situation. This can be applied to the study of motivic cohomology of $P_U$ with integral coefficients via weight filtration (see Proposition \[pwss\](II1) below) for the cohomology of $j_!N$; see also Proposition 3.3.3 of [@brelmot] for more detail (in the setting of motives with rational coefficients). 3\. If $S$ is a variety over a field (or a pro-smooth scheme) there is a subcategory of ${DM(S)}$ (that also lies in ${DM_c(S)}$) such that our ${{w_{Chow}}}(S)$ restricts to a ’very explicit’ weight structure for it. This is the category of ’smooth motives’ considered in [@lesm]; by Corollary 6.14 of ibid. it equals the subcategory of ${DM(S)}$ generated by ’homological motives’ of smooth projective $P/S$ (and the heart of this restriction is given by retracts of these $M_S(P)$. One can also restrict ${{w_{Chow}}}$ to the subcategory of Tate motives inside these smooth ones; see Corollary 6.16 of ibid. Now we prove for a regular $S$ that positivity of objects of ${DM(S)}$ (with respect to ${{w_{Chow}}}$) can be ’checked at points’. Also, ’weights of compact objects are lower semi-continuous’. \[ppoints\] Let $M\in {Obj}{DM(S)}$. Denote by ${{\mathcal{S}}}$ the set of (Zariski) points of $S$. For a $K\in {{\mathcal{S}}}$ we will denote the corresponding morphism $K\to S$ by $j_K$. 1\. Let $S$ be regular. Then $M\in {DM(S)}_{{{w_{Chow}}}\ge 0}$ if and only if for any $K\in {{\mathcal{S}}}$ we have $j_K^!(M)\in {DM}(K)_{{{w_{Chow}}}\ge 0}$; 2. Let $K$ be a generic point of $S$, $M\in {Obj}{DM_c(S)}$. Suppose that $j_K^*(M)\in {DM(K)}_{{{w_{Chow}}}\le 0}$ Then there exists an open immersion $j:U\to S$, $K\in U$, such that $j^*(M)\in {DM}(U)_{{{w_{Chow}}}\le 0}$. 1\. Combining parts II1 and II3 of Theorem \[twchowa\] we obtain: if $M\in {DM(S)}_{{{w_{Chow}}}\ge 0}$ then $j_K^!(M)\in {DM}(K)_{{{w_{Chow}}}\ge 0}$ for any $K\in {{\mathcal{S}}}$. Now we prove the converse implication. We prove it via certain noetherian induction: we suppose that our assumption is true for motives over any regular subscheme of $S$ that is not dense in it. Let $M\in {Obj}{DM(S)}$ satisfy $j_K^!(M)\in {DM}(K)_{{{w_{Chow}}}\ge 0}$ for any $K\in {{\mathcal{S}}}$. We should check that $N\perp M$ for any $N\in C'(S)[-1]$. So, we choose some $g\in {DM(S)}(N,M)$ and prove that $g=0$. We choose a generic point $K$ of $S$. By part II3 of Theorem \[twchowa\] we have $j_K^*(N)\in {DM}(K)_{{{w_{Chow}}}\le -1}$; since $j_K^*=j_K^!$ we also have $j_K^*(N)\in {DM}(K)_{{{w_{Chow}}}\ge 0}$. Hence $j_{K}^*(g)=0$. By Proposition \[pcisdeg\](\[icontp\]) there exists an open immersion $j:U\to S$ ($K\in U$) such that $j^*(g)=0$. We choose a very regular stratification ${\alpha}$ of $S$ such that $U=S_{l_0}^{{\alpha}}$ is one of its components. By Lemma \[l4onepo\](1) it suffices to verify that $j_{l!}j_l^*(N)\perp M$ for any $l\in L$. 
Now, we have ${DM(S)}(j_{l!}j_l^*(N),M)\cong {DM}(S_{l}^{{\alpha}})(j_l^*N,j_l^!M)$. By the inductive assumption we have ${DM}(S_{l}^{{\alpha}})(j_l^*N,j_l^!M)={\{0\}}$ for any $l\neq l_0$ (since $j_l^*N\in {DM}(S_{l}^{{\alpha}})_{{{w_{Chow}}}\le -1}$), whereas for $l=l_0$ we have ${DM}(S_{l}^{{\alpha}})(j_l^*N,j_l^!M)={DM}(S_{l}^{{\alpha}})(j_l^*N,j_l^*M)={\{0\}}$. 2\. We consider a weight decomposition of $M$: $B\to M\stackrel{g}{\to} A\to B[1]$. We obtain that $j^*_K(g)=0$ (since $j^*_K(M)\perp {DM(K)}_{{{w_{Chow}}}\ge 0}[1]$). Again, by Proposition \[pcisdeg\](\[icontp\]) we obtain: there exists an open immersion $j:U\to S$ ($K\in U$) such that $j^*(g)=0$. Hence $j^*(M)$ is a retract of $j^*(B)$. Since $j^*(B)\in {DM}(U)_{{{w_{Chow}}}\le 0}$ and ${DM}(U)_{{{w_{Chow}}}\le 0}$ is Karoubi-closed in ${DM}(U)$, we obtain the result. Comparison with weights for Beilinson motives {#srat} ---------------------------------------------- In [@degcis] certain categories ${DM_{\Lambda}(S)}$ were constructed and studied for any coefficient ring $\Lambda \subset {{\mathbb{Q}}}$ ($1\in \Lambda$; actually any commutative ring with a unit is possible). In this paragraph we will consider only the categories of the type ${DM_{{{\mathbb{Q}}}}(S)}$ (along with ${DM(S)}$) since their properties are better understood. Yet note: it also makes some sense to invert only the positive residue field characteristics of the corresponding base fields (in $\Lambda$); see the second assertion in Proposition 11.1.5 of ibid. and [@bzp]. The properties of ${DM_{c,{{\mathbb{Q}}}}(S)}\subset {DM_{{{\mathbb{Q}}}}(S)}$ were stated in (part C of) the Introduction of ibid.; see also §2 of [@hebpo] and §1.1 of [@brelmot]. Here we only note that all the results on ${DM}(-)$ that were stated and proved above also hold for motives with rational coefficients. Besides, for any finite type morphism $f$ all the corresponding motivic image functors (i.e. $f^*$, $f_*$, $f_!$, and $f^!$) respect compactness of motives. Moreover, the analogue of Lemma \[l4onepo\](2) holds for any regular $S$ (i.e. pro-smoothness is not needed); Proposition \[pcisdeg\](\[iglu\]) holds for arbitrary closed embeddings. This made it possible to construct in §2.3 of [@brelmot] a certain bounded Chow weight structure for ${DM_{c,{{\mathbb{Q}}}}(S)}$ that extends to ${DM_{{{\mathbb{Q}}}}(S)}$. Combining Proposition 2.3.4(I2) and Proposition 2.3.5 of ibid. one can easily prove that this weight structure can be described similarly to Definition \[dwchow\] above (this was our motivation for giving such a definition). Note also: in the case when $S$ is [*reasonable*]{} (i.e. if there exists an excellent separated scheme $S_0$ of dimension at most $2$ such that $S$ is of finite type over $S_0$) then this weight structure also has another, simpler description (in terms of certain Chow motives over $S$ that yield its heart) given by Theorem 3.3 of [@hebpo] and Theorem 2.1.1 of [@brelmot]. So, we know much more about weights for motives with rational coefficients than about those with integral ones. Yet the results of the current paper can be somewhat useful for the study of motives and cohomology with integral coefficients. We note that we have natural comparison functors ${DM}(-)\otimes {{\mathbb{Q}}}\to {DM_{{{\mathbb{Q}}}}}(-)$; they commute with all functors of the type $f^*$, and are isomorphisms (and commute with ’everything else’) when restricted to regular base schemes (see Proposition 11.1.5 of [@degcis]). 
It easily follows that for any regular $S$ the comparison isomorphism ${DM}(S)\otimes {{\mathbb{Q}}}\to {DM_{{{\mathbb{Q}}}}}(S)$ is weight-exact (here we take $(({DM}(S)\otimes {{\mathbb{Q}}})_{{{w_{Chow}}}_\le 0}, ({DM}(S)\otimes {{\mathbb{Q}}})_{{{w_{Chow}}}_\ge 0})$ being the images of $({DM}(S)_{{{w_{Chow}}}_\le 0}, {DM}(S)_{{{w_{Chow}}}_\ge 0})$; we do not need Karoubizations). We obtain (in particular) that the weight filtrations and weight spectral sequences (see the next paragraph) for any (co)homology theory with rational coefficients defined via our ’current’ ${{w_{Chow}}}$ coincide with the ones defined using the ’rational version’ (whereas the latter is certainly easier to calculate). Applications {#sappl} ------------- First we recall that the embedding ${{\underline{Hw}}_{{Chow}(S)}}\to K({{\underline{Hw}}_{{Chow}(S)}})$ factorizes through a certain exact [*weight complex*]{} functor $t_S:{DM(S)}\to K({{\underline{Hw}}_{{Chow}(S)}})$ (similarly to Proposition 5.3.3 of [@bws], this follows from the existence of the Chow weight structure for ${DM(S)}$ along with the fact that it admits a differential graded enhancement; the latter property of ${DM(S)}$ can be easily verified since it is defined in terms of certain derived categories of sheaves over $S$). Now we discuss (Chow)-weight spectral sequences and filtrations for homology and cohomology of motives. We note that any weight structure yields certain weight spectral sequences for any (co)homology theory. \[pwss\] Let ${\underline{A}}$ be an abelian category. I Let $H:{DM(S)}\to {\underline{A}}$ be a homological functor; for any $r\in {{\mathbb{Z}}}$ denote $H\circ [r]$ by $H_r$. For an $M\in {Obj}{DM(S)}$ we denote by $(M^i)$ the terms of $t(M)$ (so $M^i\in {Obj}{{\underline{Hw}}_{{Chow}(S)}}$; here we can take any possible choice of $t(M)$). Then the following statements are valid. 1\. There exists a ([*Chow-weight*]{}) spectral sequence $T=T(H,M)$ with $E_1^{pq}= H_q(M^p)$; the differentials for $E_1T(H,M)$ come from $t(M)$. It converges to $H_{p+q}(M)$ if $M$ is bounded. 2\. $T(H,M)$ is ${DM(S)}$-functorial in $M$ (and does not depend on any choices) starting from $E_2$. II1. Let $H:{DM(S)}\to {\underline{A}}$ be any contravariant functor. Then for any $m\in {{\mathbb{Z}}}$ the object $(W^{m}H)(M)=\operatorname{\operatorname{Im}}(H({{w_{Chow}}}_{\ge m}M)\to H(M))$ does not depend on the choice of ${{w_{Chow}}}_{\ge m}M$; it is functorial in $M$. We call the filtration of $H(M)$ by $(W^{m}H)(M)$ its [*Chow-weight*]{} filtration. 2\. Let $H$ be cohomological. For any $r\in {{\mathbb{Z}}}$ denote $H\circ [-r]$ by $H^r$. Then the natural dualization of assertion I is valid. For any $M\in {Obj}{DM(S)}$ we have a spectral sequence with $E_1^{pq}= H^{q}(M^{-p})$; it converges to $H^{p+q}(M)$ if $M$ is bounded. Moreover, in this case the step of filtration given by ($E_{\infty}^{l,m-l}:$ $l\ge k$) on $H^{m}(M)$ equals $(W^k H^{m})(M)$ (for any $k,m\in {{\mathbb{Z}}}$). $T$ is functorial in $H$ and $M$ starting from $E_2$. I Immediate from Theorem 2.3.2 of ibid. II1. This is Proposition 2.1.2(2) of ibid. 2\. Immediate from Theorem 2.4.2 of ibid. \[rintel\] 1\. We obtain certain functorial [*Chow-weight*]{} spectral sequences and filtrations for any (co)homology of motives. In particular, we have them for étale and motivic (co)homology of motives. Note that these results that cannot be proved using ’classical’ (i.e. Deligne’s) methods, since the latter heavily rely on the degeneration of (an analogue of) $T$ at $E_2$. 2\. 
$T(H,M)$ can be naturally described in terms of the [virtual $t$-truncations]{} of $H$ (starting from $E_2$); see §2 of [@bger] and §4.3 of [@brelmot]. [1]{} Beilinson A., Bernstein J., Deligne P., Faisceaux pervers// Asterisque 100, 1982, 5–171. Bockstedt M., Neeman A., Homotopy limits in triangulated categories// Compositio Math. 86 (1993), 209–234. Bondarko M., Weight structures vs. $t$-structures; weight filtrations, spectral sequences, and complexes (for motives and in general)// J. of K-theory, v. 6, i. 03, pp. 387–504, 2010, see also <http://arxiv.org/abs/0704.4003> Bondarko M., Motivically functorial coniveau spectral sequences; direct summands of cohomology of function fields// Doc. Math., extra volume: Andrei Suslin’s Sixtieth Birthday (2010), 33–117. Bondarko M.V., $\mathbb{Z}[\frac{1}{p}]$-motivic resolution of singularities// Compositio Math., vol. 147, is. 5, 2011, 1434–1446. Bondarko M.V., Weights for Voevodsky’s motives over a base; relation with mixed complexes of sheaves, Int. Math. Res. Notes (2013), doi: 10.1093/imrn/rnt088, <http://imrn.oxfordjournals.org/content/early/2013/05/17/imrn.rnt088.short>, <http://arxiv.org/abs/1304.6059> Bondarko M.V., Gersten weight structures for motivic homotopy categories; direct summands of cohomology of function fields and coniveau spectral sequences, preprint. Cisinski D.-C., Deglise F., Triangulated categories of mixed motives, preprint, <http://arxiv.org/abs/0912.2110> , Dieudonné J., Grothendieck A., Eléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas IV, Troisiéme Partie, Publ. Math. IHES 28, 1966. Hébert D., Structures de poids à la Bondarko sur les motifs de Beilinson// Compositio Math., vol. 147, is. 5, 2011, 1447–1462. Levine M. Smooth motives, in: R. de Jeu, J.D. Lewis (eds.), Motives and Algebraic Cycles. A Celebration in Honour of Spencer J. Bloch, Fields Institute Communications 56, American Math. Soc. (2009), 175–231. Margolis H.R., Spectra and the Steenrod Algebra: Modules over the Steenrod algebra and the stable homotopy category, North-Holland Mathematical Library, 29. North-Holland Publishing Co., Amsterdam, 1983. xix+489 pp. Neeman A. Triangulated Categories. Annals of Mathematics Studies 148 (2001), Princeton University Press, viii+449 pp. Pauksztello D., A Note on Compactly Generated Co-t-Structures// Communications in Algebra vol. 40, iss. 2, 2012. Voevodsky V. Triangulated category of motives, in: Voevodsky V., Suslin A., and Friedlander E., Cycles, transfers and motivic homology theories, Annals of Mathematical studies, vol. 143, Princeton University Press, 2000, 188–238. Wildeshaus J., Motivic intersection complex, in: J.I. Burgos and J. Lewis (eds.), Regulators III, Proceedings of the conference held at the University of Barcelona, July 11–23, 2010, Contemp. math. (American Mathematical Society), v. 571, Providence, R.I.: American Mathematical Society (2012), 255–276. [^1]: The work is supported by RFBR (grants no. 12-01-33057 and 14-01-00393) and by the Saint-Petersburg State University research grant no. 6.38.75.2011.
--- abstract: 'We present a new high-resolution angle-resolved photoemission study of 1*T*-TiSe$_{2}$ in both, its room-temperature, normal phase and its low-temperature, charge-density wave phase. At low temperature the photoemission spectra are strongly modified, with large band renormalisations at high-symmetry points of the Brillouin zone and a very large transfer of spectral weight to backfolded bands. A theoretical calculation of the spectral function for an excitonic insulator phase reproduces the experimental features with very good agreement. This gives strong evidence in favour of the excitonic insulator scenario as a driving force for the charge-density wave transition in 1*T*-TiSe$_{2}$.' author: - 'H. Cercellier' - 'C. Monney' - 'F. Clerc' - 'C. Battaglia' - 'L. Despont' - 'M. G. Garnier' - 'H. Beck' - 'P. Aebi' - 'L. Patthey' - 'H. Berger' title: 'Evidence for an excitonic insulator phase in 1*T*-TiSe$_{2}$' --- Transition-metal dichalcogenides (TMDC’s) are layered compounds exhibiting a variety of interesting physical properties, mainly due to their reduced dimensionality [@AdvPhys24117]. One of the most frequent characteristics is a ground state exhibiting a charge-density wave (CDW), with its origin arising from a particular topology of the Fermi surface and/or a strong electron-phonon coupling [@clerc:155114]. Among the TMDC’s 1*T*-TiSe$_{2}$ shows a commensurate 2$\times$2$\times$2 structural distortion below 202 K, accompanied by the softening of a zone boundary phonon and with changes in the transport properties [@PhysRevB.14.4321; @PhysRevLett.86.3799]. In spite of many experimental and theoretical studies, the driving force for the transition remains controversial. Several angle-resolved photoelectron spectroscopy (ARPES) studies suggested either the onset of an excitonic insulator phase [@PhysRevB.61.16213; @PhysRevLett.88.226402] or a band Jahn-Teller effect [@PhysRevB.65.235101]. Furthermore, TiSe$_{2}$ has recently attracted strong interest due to the observation of superconductivity when intercalated with Cu [@Morosan2006]. In systems showing exotic properties, such as Kondo systems for example [@Adv_Phys_45_299_1996_Malterre], the calculation of the spectral function has often been a necessary and decisive step for the interpretation of the ARPES data and the determination of the ground state of the systems. In the case of 1*T*-TiSe$_{2}$, such a calculation for an excitonic insulator phase lacked so far. In this letter we present a high-resolution ARPES study of 1*T*-TiSe$_{2}$, together with theoretical calculations of the excitonic insulator phase spectral function for this compound. We find that the experimental ARPES spectra show strong band renormalisations with a very large transfer of spectral weight into backfolded bands in the low-temperature phase. The spectral function calculated for the excitonic insulator phase is in strikingly good agreement with the experiments, giving strong evidence for the excitonic origin of the transition. The excitonic insulator model was first introduced in the sixties, for a semi-conductor or a semi-metal with a very small indirect gap E$_{G}$ [@PhysRevLett.19.439; @RevModPhys.40.755; @PhysRev.158.462; @bronold:165107]. Thermal excitations lead to the formation of holes in the valence band and electrons in the conduction band. For low free carrier densities, the weak screening of the electron-hole Coulomb interaction leads to the formation of stable electron-hole bound states, called excitons. 
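As a rough orientation (a standard textbook estimate, not an ingredient of the model used below), the binding energy of such a Wannier-type exciton is controlled by the reduced electron-hole mass $\mu^{-1}=m_{e}^{-1}+m_{h}^{-1}$ and by the dielectric constant $\varepsilon$ of the host, $$E_{B}\simeq\frac{\mu e^{4}}{2\hbar^{2}\varepsilon^{2}}=\frac{\mu}{m_{0}\,\varepsilon^{2}}\times 13.6\ {\rm eV},$$ so that heavy band masses and weak screening can push $E_{B}$ up to the scale of a small (or negative) gap $E_{G}$.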
If the exciton binding energy E$_{B}$ is larger than the gap energy E$_{G}$, the system becomes unstable upon formation of excitons. This instability can drive a transition to a coherent ground state of condensed excitons, with a periodicity given by the spanning vector $\textbf{w}$ that connects the valence band maximum to the conduction band minimum. In the particular case of TiSe$_{2}$, there are three vectors ($\textbf{w}_{i}$, $i=1,2,3$) connecting the Se 4p-derived valence band maximum at the $\Gamma$ point to the three symmetry-equivalent Ti 3d-derived conduction band minima at the $L$ points of the Brillouin zone (BZ) (see inset of fig. \[fig1\]b)). Our calculations are based on the BCS-like model of Jérome, Rice and Kohn [@PhysRev.158.462], adapted for multiple $\textbf{w}_{i}$. The band dispersions for the normal phase have been chosen of the form $$\begin{aligned} \label{dispersions} \epsilon_{v}(\textbf{k})=\epsilon_{v}^{0}+\hbar^{2}\frac{k_{x}^{2}+k_{y}^{2}}{2m_{v}}+t_{v}\cos(\frac{2\pi k_{z}}{c})\nonumber \\ \epsilon_{c}^{i}(\textbf{k},\textbf{w}_{i})=\epsilon_{c}^{0}+\hbar^{2}\Big(\frac{(k_{x}-w_{ix})^{2}}{2m_{c}^{x}}+\frac{(k_{y}-w_{iy})^{2}}{2m_{c}^{y}}\Big)\nonumber\\ +t_{c}\cos\Big(\frac{2\pi(k_{z}-w_{iz})}{c}\Big)%\nonumber\end{aligned}$$ for the valence ($\epsilon_{v}$) and the three conduction ($\epsilon_{c}^{i}$) bands respectively, with $c$ the lattice parameter perpendicular to the surface in the normal (1$\times$1$\times$1) phase, $t_{v}$ and $t_{c}$ the amplitudes of the respective dispersions perpendicular to the surface and $m_{v}$, $m_{c}$ the effective masses. The parameters for equations \[dispersions\] were derived from photon energy dependent ARPES measurements carried out at the Swiss Light Source on the SIS beamline, using a Scienta SES-2002 spectrometer with an overall energy resolution better than 10 meV, and an angular resolution better than 0.5$^{\circ}$. The fit to the data gives for the Se 4p valence band maximum -20 $\pm$ 10 meV, and for the Ti 3d conduction band a minimum -40 $\pm$ 5 meV [@fits]. From our measurements we then find a semimetallic band structure with a negative gap (*i.e.* an overlap) E$_{G}$=-20 $\pm$ 15 meV for the normal phase of TiSe$_{2}$, in agreement with the literature [@PhysRevLett.55.2188]. The dispersions deduced from the ARPES data are shown in fig. \[fig1\]a) (dashed lines). Within this model the one-electron Green’s functions of the valence and the conduction bands were calculated for the excitonic insulator phase. For the valence band, one obtains $$\begin{aligned} G_{v}(\textbf{k},z)=\Big(z-\epsilon_{v}(\textbf{k})-\sum_{\textbf{w}_{i}}\frac{|\Delta|^{2}(\textbf{k},\textbf{w}_{i})}{z-\epsilon_{c}(\textbf{k}+\textbf{w}_{i})}\Big)^{-1} \ .\end{aligned}$$ This is a generalized form of the equations of Ref. [@PhysRev.158.462] for an arbitrary number of $\textbf{w}_{i}$. The order parameter $\Delta$ is related to the number of excitons in the condensed state at a given temperature. 
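To make the content of this Green’s function concrete, the following minimal numerical sketch (not the authors’ code) evaluates the corresponding valence-band spectral function $A_{v}(\textbf{k},\omega)=-\pi^{-1}\,{\rm Im}\,G_{v}(\textbf{k},\omega+i\Gamma)$ along an in-plane cut from $\Gamma$ towards one $L$ point, using the fit parameters of equations \[dispersions\] quoted in the corresponding footnote and the same $\Delta=0.05$ eV and 30 meV broadening used for the figures; keeping a single spanning vector, freezing the $k_{z}$ dispersions at their extremal values, and the numerical value chosen for $|\textbf{w}_{1}|$ are simplifying assumptions made only for this illustration.

    # Minimal sketch of A_v(k,omega) for the BCS-like excitonic-insulator model above.
    import numpy as np

    HB2_2M = 3.81                                  # hbar^2/(2 m_e) in eV * Angstrom^2
    eps_v0, m_v, t_v = -0.08, -0.23, 0.06          # valence-band fit parameters (eV, m_e, eV)
    eps_c0, m_cx, m_cy, t_c = -0.01, 5.5, 2.2, 0.03
    Delta, Gamma = 0.05, 0.030                     # order parameter and broadening (eV)
    w1 = 0.95                                      # |w_1| in A^-1 (illustrative value only)

    def eps_v(kx, ky):
        # valence band of Eq. (1), k_z frozen at its maximum (cosine term = +t_v)
        return eps_v0 + HB2_2M * (kx**2 + ky**2) / m_v + t_v

    def eps_c(qx, qy):
        # conduction band of Eq. (1), minimum at (w1, 0), k_z frozen at the L point (-t_c)
        return eps_c0 + HB2_2M * ((qx - w1)**2 / m_cx + qy**2 / m_cy) - t_c

    def A_v(kx, ky, omega):
        # spectral function of G_v with a single spanning vector w_1 retained
        z = omega + 1j * Gamma
        G = 1.0 / (z - eps_v(kx, ky) - Delta**2 / (z - eps_c(kx + w1, ky)))
        return -G.imag / np.pi

    k = np.linspace(0.0, w1, 200)                  # in-plane cut from Gamma towards L
    omega = np.linspace(-0.40, 0.10, 300)          # binding-energy window (eV)
    A = np.array([[A_v(kk, 0.0, ww) for kk in k] for ww in omega])

At $\Gamma$ this reproduces the near-degeneracy of the $-20$ meV valence-band maximum with the backfolded $-40$ meV conduction-band minimum, and the $\Delta$-induced hybridization that flattens and pushes down the V1 branch.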
For the conduction band, there is a system of equations describing the Green’s functions $G_{c}^{i}$ corresponding to each spanning vector vector $\textbf{w}_{i}$: $$\begin{aligned} \big[z-\epsilon_{c}^{i}(\mathbf{k}+\mathbf{w_{i}})\big]G_{c}^{i}(\mathbf{k}+\mathbf{w_{i}},z) =1+\Delta^{*}(\mathbf{k},\mathbf{w_{i}})\nonumber \\ \times \sum_{j}\frac{\Delta(\mathbf{k},\mathbf{w_{j}})G_{c}^{j}(\mathbf{k}+\mathbf{w_{j}},z)}{z-\epsilon_{v}(\mathbf{k})}\end{aligned}$$ This model and the derivation of the Green’s functions will be further described elsewhere [@Claude]. The spectral function calculated along several high-symmetry directions of the BZ is shown in fig. \[fig1\]a) for an order parameter $\Delta$=0.05 eV. Its value has been chosen for best agreement with experiment. The color scale shows the spectral weight carried by each band. For presentation purposes the $\delta$-like peaks of the spectral function have been broadened by adding a constant 30 meV imaginary part to the self-energy. In the normal phase (dashed lines), as previously described we consider a semimetal with a 20 meV overlap, with bands carrying unity spectral weight. In the excitonic phase, the band structure is strongly modified. The first observation is the appearance of new bands (labeled C1, V2 and C3), backfolded with the spanning vector $\textbf{w}=\Gamma L$. The C1, V2 and C3 branches are the backfolded replicas of branches C2, V3 and C4 respectively. In this new phase the $\Gamma$ and $L$ points are now equivalent, which means that the excitonic state has a 2$\times$2$\times$2 periodicity of purely electronic origin, as expected from theoretical considerations [@PhysRevLett.19.439; @PhysRev.158.462]. Another effect of exciton condensation is the opening of a gap in the excitation spectrum. This results in a flattening of the valence band near $\Gamma$ in the $\Gamma M$ direction (V1 branch) and in the $A\Gamma$ direction (V3 branch), and also an upward bend of the conduction band near $L$ and $M$ (C2 and C4 branches). It is interesting to notice that in the vicinity of these two points, the conduction band is split (arrows). This results from the backfolding of the $L$ points onto each other, according to the new periodicity of the excitonic state [@PhysRevB.18.2866]. The spectral weight carried by the bands is shown in fig. \[fig1\]b). The largest variations occur near the $\Gamma$, $L$ and $M$ points, where the band extrema in the normal phase are close enough for excitons to be created. Away from these points, the spectral weight decreases in the backfolded bands (C1, V2, C3) and increases in the others. The intensity of the V1 branch, for example, decreases by a factor of 2 when approaching $\Gamma$, whereas the backfolded C1 branch shows the opposite behaviour. Such a large transfer of spectral weight into the backfolded bands is a very uncommon and striking feature. Indeed, in most compounds with competing potentials (CDW systems, vicinal surfaces,...), the backfolded bands carry an extremely small spectral weight [@didiot:081404; @battaglia:195114; @J.Voit10202000]. In these systems the backfolding results mainly from the influence of the modified lattice on the electron gas, and the weight transfer is related to the strength of the new crystal potential component. Here, the case of the excitonic insulator is completely different, as the backfolding is an intrinsic property of the excitonic state. 
The large transfer of spectral weight is then a purely electronic effect, and turns out to be a characteristic feature of the excitonic insulator phase. Fig. \[fig2\] shows ARPES spectra recorded at a photon energy h$\nu$=31 eV as a function of temperature. At this photon energy, the normal emission spectra correspond to states located close to the $\Gamma$ point. For the sake of simplicity the description is in terms of the surface BZ high-symmetry points $\bar{\Gamma}$ and $\bar{M}$. The 250 K spectra exhibit the three Se 4p-derived bands at $\bar{\Gamma}$ and the Ti 3d-derived band at $\bar{M}$ widely described in the literature [@PhysRevLett.88.226402; @PhysRevB.65.235101; @PhysRevB.61.16213]. The thick dotted lines (white) are fits by equation \[dispersions\], giving for the topmost 4p band a maximum energy of -20 $\pm$ 10 meV, and for the Ti 3d a minimum energy of -40 $\pm$ 5 meV. The small overlap E$_{G}$=-20 $\pm$ 15 meV in the normal phase is consistent with the excitonic insulator scenario, as the exciton binding energy is expected to be close to that value. [@PhysRevLett.88.226402; @PhysRevB.61.16213]. The position of both band maxima in the occupied states is most probably due to a slight Ti overdoping of our samples [@PhysRevB.14.4321]. In our case, a transition temperature of 180 $\pm$ 10 K was found from different ARPES and scanning tunneling microscopy measurements, indicating a Ti doping of less than 1 %. On the 250 K spectrum at $\bar{\Gamma}$, the intensity is low near normal emission. This reduced intensity and the residual intensity at $\bar{M}$ around 150 meV binding energy (arrows) may arise from exciton fluctuations (see reduction of spectral weight near $\Gamma$ in the V1 branch in fig. 1b). Matrix elements do not appear to play a role as the intensity variation only depends very slightly on photon energy and polarization. In the 65 K data (fig. \[fig2\]b)), the topmost 4p band flattens near $\bar{\Gamma}$ and shifts to higher binding energy by about 100 meV (thin white, dotted line). This shift is accompanied by a larger decrease of the spectral weight near the top of the band. The two other bands (fine black lines) are only slightly shifted and do not appear to participate in the transition. In the $\bar{M}$ spectrum strong backfolded valence bands can be seen, and the conduction band bends upwards, leading to a maximum intensity located about 0.25 $\AA^{-1}$ from $\bar{M}$ (thin white dotted line). This observation is in agreement with Kidd *et al.* [@PhysRevLett.88.226402], although in their case the conduction band was unoccupied in the normal phase. The calculated spectral functions corresponding to the data of fig. \[fig2\] are shown in fig. \[fig3\], using the free-electron final state approximation with a 10 eV inner potential and a 4.6 eV work function (see inset). The effect of temperature was taken into account via the order parameter and the Fermi function. Only the topmost valence band was considered, as the other two are practically not influenced by the transition (see above, fig. 2). The behavior of this band is extremely well reproduced by the calculation. In the 65 K calculation the valence band is flattened near $\bar{\Gamma}$, and the spectral weight at this point is reduced to 44 $\%$, close to the experimental value of 35 $\%$. The agreement is very satisfying, considering that the calculation takes into account only the lowest excitonic state. 
The experimental features appear broader than in the calculation, but at finite temperatures one may expect the existence of excitons with non-zero momentum, leading to a spread of spectral weight away from the high-symmetry points. In the near-$\bar{M}$ spectral function, the backfolded valence band is strongly present in the 65 K calculation, with spectral weight comparable to that at $\bar{\Gamma}$ and to that of the conduction band at $\bar{M}$. The conduction band maximum intensity is located away from $\bar{M}$ as in the experiment. The small perpendicular dispersion of the free-electron final state causes an asymmetry of the intensity of the conduction band on each side of $\bar{M}$, which is also visible in fig. \[fig2\]. In our calculation, as opposed to the ARPES spectra, the conduction band is unoccupied and only the occupied tail of the peaks is visible. This difference may be simply due to the final state approximation used in the calculation, a slight shift of the chemical potential due to the transition, or to atomic displacements that would shift the conduction band [@PhysRevB.65.235101; @PhysRevLett.88.226402; @WhangboMyungHwan_ja00050a044]. Such atomic displacements, in terms of a band Jahn-Teller effect, were suggested as a driving force for the transition. However, the key point is that, although the lattice distortion may shift the conduction band, the very small atomic displacements ($\approx$ 0.02 $\AA$ [@PhysRevB.14.4321]) in 1$T$-TiSe$_{2}$ are expected to lead to a negligible spectral weight in the backfolded bands [@J.Voit10202000]. As an example, 1$T$-TaS$_2$, another CDW compound known for very large atomic displacements [@Spijkerman1997] (of order $>$ 0.1 $\AA$), introduces hardly detectable backfolding of spectral weight in ARPES. Clearly, an electronic origin is necessary for obtaining such strong backfolding in the presence of such small atomic displacements. Therefore, our results allow us to rule out a Jahn-Teller effect as the driving force for the transition of TiSe$_{2}$. Furthermore, the ARPES spectra also show evidence for the backfolded conduction band at the $\bar{\Gamma}$ point. Fig. \[fig4\] shows constant energy cuts around the Fermi energy, taken from the data of fig. \[fig2\]b and \[fig3\]b (arrows). In the ARPES data two slightly dispersive peaks, reproduced in the calculation, clearly cross the Fermi level. These features turn out to be the populated tail of the backfolded conduction band, whose centroid is located just above the Fermi level. To our knowledge no evidence for the backfolding of the conduction band has been put forward so far. In summary, by comparing ARPES spectra of 1*T*-TiSe$_{2}$ to theoretical predictions for an excitonic insulator, we have shown that the superperiodicity of the excitonic state with respect to the lattice results in a very large transfer of spectral weight into backfolded bands. This effect, clearly evidenced by photoemission, turns out to be a characteristic feature of the excitonic insulator phase, thus giving strong evidence for the existence of this phase in 1*T*-TiSe$_{2}$ and its prominent role in the CDW transition. Skillful technical assistance was provided by the workshop and electrical engineering team. This work was supported by the Fonds National Suisse pour la Recherche Scientifique through Div. II and MaNEP. [10]{} J. A. Wilson [*et al.*]{}, Adv. Phys. [**24**]{}, 117 (1975). F. Clerc [*et al.*]{}, Phys. Rev. B [**74**]{}, 155114 (2006). F. J. Di Salvo [*et al.*]{}, Phys. Rev. B [**14**]{}, 4321 (1976). M. 
Holt [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 3799 (2001). T. Pillo [*et al.*]{}, Phys. Rev. B [**61**]{}, 16213 (2000). T. E. Kidd [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 226402 (2002). K. Rossnagel [*et al.*]{}, Phys. Rev. B [**65**]{}, 235101 (2002). E. Morosan [*et al.*]{}, Nature Physics [**2**]{}, 544 (2006). D. Malterre [*et al.*]{}, Adv. Phys. [**45**]{}, 299 (1996). W. Kohn, Phys. Rev. Lett. [**19**]{}, 439 (1967). B. I. Halperin and T. M. Rice, Rev. Mod. Phys. [**40**]{}, 755 (1968). D. Jérome [*et al.*]{}, Phys. Rev. [**158**]{}, 462 (1967). F. X. Bronold and H. Fehske, Phys. Rev. B [**74**]{}, 165107 (2006). The fit parameters are : $\epsilon_{v}^{0}$=-0.08$\pm$0.005 eV, $m_{v}$=-0.23$\pm$0.02 $m_{e}$, where $m_{e}$ is the free electron mass, $t_{v}$=0.06$\pm$0.005 eV ; $\epsilon_{c}^{0}$=-0.01$\pm$0.0025 eV, $m_{c}^{x}$=5.5$\pm$0.2 $m_{e}$, $m_{c}^{y}$=2.2$\pm$0.1 $m_{e}$, $t_{c}$=0.03$\pm$0.0025 eV O. Anderson [*et al.*]{}, Phys. Rev. Lett. [**55**]{}, 2188 (1985). C. Monney [*et al.*]{}, to be published J. A. Wilson [*et al.*]{}, Phys. Rev. B [**18**]{}, 2866 (1978). C. Didiot [*et al.*]{}, Phys. Rev. B [**74**]{}, 081404(R) (2006). C. Battaglia [*et al.*]{}, Phys. Rev. B [**72**]{}, 195114 (2005). J. Voit [*et al.*]{}, Science [**290**]{}, 501 (2000). M. H. Whangbo and E. Canadell, J. Am. Chem. Soc. [**114**]{}, 9587 (1992). A. Spijkerman [*et al.*]{}, Phys. Rev. B [**56**]{}, 13757 (1997).
--- abstract: 'We report on the discovery of a faint ($M_V \sim -10.6 \pm 0.2$) dwarf spheroidal galaxy on deep F606W and F814W *Hubble Space Telescope* images of a Virgo intracluster field. The galaxy is easily resolved in our images, as our color magnitude diagram (CMD) extends $\gtrsim 1$ magnitude beyond the tip of the red giant branch (RGB). Thus, it is the deepest CMD for a small dwarf galaxy inside a cluster environment. Using the colors of the RGB stars, we derive a metal abundance for the dwarf of \[M/H\]$= -2.3 \pm 0.3$, and show that the metallicity dispersion is less than 0.6 dex at 95% confidence. We also use the galaxy’s lack of AGB stars and the absence of objects brighter than $M_{bol} \sim -4.1 \pm 0.2$ to show that the system is old ($t \gtrsim 10$ Gyr). Finally, we derive the object’s structural parameters, and show that the galaxy displays no obvious evidence of tidal threshing. Since the tip of the red giant branch distance ($(m-M)_0 = 31.23 \pm 0.17$ or $D = 17.6 \pm 1.4$ Mpc) puts the galaxy near the core of the Virgo cluster, one might expect the object to have undergone some tidal processing. Yet the chemical and morphological similarity between the dwarf and the dSph galaxies of the Local and M81 Group demonstrates that the object is indeed pristine, and not the shredded remains of a much larger galaxy. We discuss the possible origins of this galaxy, and suggest that it is just now falling into Virgo for the first time.' author: - 'Patrick R. Durrell, Benjamin F. Williams, Robin Ciardullo, John J. Feldmeier, Ted von Hippel, Steinn Sigurdsson, George H. Jacoby, Henry C. Ferguson, Nial R. Tanvir, Magda Arnaboldi, Ortwin Gerhard, J. Alfonso L. Aguerri, Ken Freeman, and Matt Vinciguerra' title: The Resolved Stellar Populations of a Dwarf Spheroidal Galaxy in the Virgo Cluster --- Introduction ============ An understanding of the processes that affect the formation, evolution, and sometimes destruction of dwarf galaxies is critical to our overall picture of galaxy formation. Since dwarf galaxies are the most common type of galaxy in the Universe, any model of galaxy formation is incomplete if we cannot understand these objects. Moreover, under the current paradigm of hierarchical structure formation [@kauf93; @cole00], massive galaxies are formed from the mergers of less massive objects. Thus, dwarf galaxies are an important building block of the universe, and an understanding of their properties will help illuminate the processes of galaxy formation more generally. The faintest examples of dwarf galaxies are the dwarf spheroidals (dSphs). Using the nomenclature of @ggh03, these are low-luminosity ($M_V \gtrsim -14$), low central surface brightness ($\mu_V \gtrsim 22$ mag arcsec$^{-2}$) objects with shallow radial profiles and stellar populations that are dominated by old and intermediate-aged stars[^1]. Most of what we know about dSphs comes from observations of $\sim 20$ galaxies in the low-density environment of the Local Group. From the data, it appears that dSphs are fairly homogeneous in nature, with small, but dense dark matter halos and low metallicities that are in proportion to their luminosity [for a review see @mateo98]. Much less is known about dSphs in the cluster environment. Surveys in Virgo and Fornax have successfully found large numbers of dE and bright dSph galaxies [@vcc; @fcc; @phil98; @th02; @sab03], but there is still considerable uncertainty in just how many of these lowest-luminosity objects exist [@hmi03]. 
The scarce information we do have on cluster dSph galaxies suggests that their properties are similar to those of Local Group dwarfs [@hmi05; @cald06]. If correct, then such a result is somewhat surprising, since the tidal forces of the cluster environment are expected to take their toll on these objects. Indeed, @bekki01 have shown that dSphs can be stripped of their outer stars in only a few Gyr. This would imply that both the morphology and chemistry of cluster dSphs should differ significantly from their Local Group counterparts. Until recently, abundance studies of cluster dSph galaxies (based on photometry of individual stars) have been impractical. For example, although @har98 were able to resolve the brightest red giant branch (RGB) stars in a Virgo cluster dE,N galaxy using a 32 ks exposure with the *Hubble Space Telescope’s WFPC2* camera, these data were limited to a single filter (F814W). However, with the advent of the *Advanced Camera for Surveys (ACS),* it is now possible to measure the broad-band colors of red giants in these systems. Indeed, @cald06 has recently used the *ACS* to resolve the stellar populations of four dE/dSph galaxies in the Virgo cluster, and obtain color-magnitude diagrams of their stars down to $I\sim 28$, making it the deepest study of individual stars in cluster dwarf galaxies. Their data appear to confirm that the dE/dSph galaxies of Virgo obey the same luminosity-metallicity relation as the dE/dSph galaxies of the Local Group. Here, we present an extremely deep F606W and F814W study of a faint ($M_V \sim -10.6$) dSph that was serendipitously found during a survey of intracluster stars in Virgo [@VICS1]. We use the data, which reach $I \sim 28.4$, to show that the galaxy is remarkably similar to undisturbed dSphs in the Local Group, both in morphology, and in stellar content. In Section 2, we describe both our data reductions and our artificial star experiments, and present a color-magnitude diagram of the dSph’s red giant branch stars. In Section 3, we use these data to derive the system’s distance as well as its stellar population. We show that the galaxy has a mean metallicity that is very low (\[M/H\] $\sim -2.3 \pm 0.3$), and is composed entirely of stars older than 8 to 10 Gyr. We also derive the galaxy’s structural parameters, and show that its central surface brightness and core radius are typical of dSphs in the Local Group. In Section 4, we compare the galaxy’s properties to those of Local Group objects, and attempt to investigate the effects of environment on its history. We show that the galaxy displays no obvious evidence of tidal disruption, and has a mean metallicity appropriate for its luminosity. We conclude by considering the possible origin of this object. Observations and Data Analysis ============================== Between 30 May 2005 and 7 June 2005 we used the *Advanced Camera for Surveys* on the *Hubble Space Telescope* to obtain deep F606W and F814W images of a Virgo intracluster field ($\alpha(2000) = $12:28:10.80, $\delta(2000) = $12:33:20.0, orientation 112.58 degrees). This field, which is projected $\sim 180$ kpc from M87 [assuming a mean Virgo distance $d= 16$ Mpc; @jbb04] was chosen to be near the cluster center, but away from any known object: the closest cataloged galaxy is the dwarf elliptical VCC 1051, which is $\sim 5\arcmin$ (24 kpc) to the northwest; the nearest large galaxy, M86, is over 170 kpc away. 
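(For reference, the angular scale assumed for these projected separations: at $d = 16$ Mpc, $1\arcsec$ subtends $16\times 10^{6}\,{\rm pc}/206265 \simeq 78$ pc, so the $\sim 5\arcmin$ offset from VCC 1051 corresponds to $\sim 23$ kpc and the $\sim 180$ kpc projected distance from M87 to $\sim 39\arcmin$.)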
Our F814W ($I$-band) data consisted of 22 exposures, with 26880 s of integration time; the F606W (wide $V$-band) observation included 52 exposures totaling 63440 s. These data were co-added, and re-sampled using the [multidrizzle]{}[^2] task within PyRAF[^3] [@koekemoer] to produce images with $0\farcs 03$ pixel$^{-1}$. The details of these reductions, and an image of the field illustrating its position in the cluster is given by @VICS1. Immediately after data acquisition, an inspection of our images revealed the presence of a previously unknown galaxy in the field. The object, whose center is located at $\alpha(2000) = $12:28:15.5, $\delta(2000) = $+12:33:37.0 (uncertainty $\sim 0\farcs 2$) is $\sim 15\arcsec$ in extent and clearly resolved into stars. A color image of the object is shown in Figure \[image\]. Our point source photometry was similar (but not identical to) that performed by @VICS1. To avoid dealing with variations in the *ACS* point spread function (PSF), we began by limiting our photometric reductions to a $2200 \times 1600$ pixel ($66\arcsec\times 48\arcsec$) region surrounding the galaxy. We then chose three bright, isolated stars on our F606W image, and four bright isolated stars on the F814W image to define the PSFs, and used DAOPHOT II and ALLSTAR to perform the photometry [@stet87; @stet92]. Two DAOPHOT II/ALLSTAR passes were performed to detect as many of the stars as possible. The F606W and F814W datasets were then merged with a 1.5 pixel matching criterion to create a preliminary object catalog of $\sim 1000$ objects. Finally, we used the DAOPHOT $\chi$ and $r_{-2}$ image parameters to remove background galaxies and severely blended stars from the list [@stet87; @kron80]. Our instrumental magnitudes were placed on the VEGAMAG system using the prescription and zero points provided by @sir05. To obtain the offset between our ALLSTAR magnitudes and the VEGAMAG system, we used the profiles of the PSF stars on each image; the rms scatter in this calibration was $\sim 0.02$ mag. We note that since the region under consideration is small (only $\sim 8\%$ of the entire *ACS* field), differential effects associated with charge transfer efficiency were negligible ($\sim 0.01-0.02$ mag), and were ignored. Artificial Star Experiments --------------------------- To ascertain the photometric uncertainties and incompleteness in our data, we used the traditional method of adding simulated stars of known brightness to the science frames and re-reducing them following the exact same procedures as for the original data. To each frame, we added 9000 stars (300 runs with 30 stars added per run) to an elliptical region centered on the dwarf galaxy, and re-executed our two-pass DAOPHOT II/ALLSTAR photometric procedure, including the rejection of non-stellar sources, and the merging of the datasets. By limiting the number of fake stars to 30 per run, we did not significantly alter the crowding conditions on the images; by defining the mean F606W$-$F814W color of our artificial stars to be roughly equal to that of the observed objects (F606W$-$F814W = 1.0), we ensured that our experiments adequately reproduced losses during the catalog merging and image classification processes. Finally, by defining the stars’ luminosity function to be a rising exponential between F814W = 22 and F814W = 29, we improved our statistics at the faint end of the luminosity function, where the uncertainties are largest. 
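As an illustration of how the artificial-star input list described above can be drawn (a sketch only, not the reduction scripts actually used; the exponential slope, random seed, and uniform positions are placeholders, and the restriction to the elliptical region around the dwarf is not enforced here), one may sample the rising exponential luminosity function between F814W = 22 and 29 by inverse-transform sampling and assign the fixed mean input color F606W$-$F814W = 1.0:

    # Illustrative sketch of the artificial-star input list: 300 runs of 30 stars each,
    # magnitudes following dN/dm ~ exp(a*m) between F814W = 22 and 29, fixed mean color.
    import numpy as np

    rng = np.random.default_rng(42)
    a, m1, m2 = 0.6, 22.0, 29.0            # exponential slope (assumed) and magnitude range

    def draw_magnitudes(n):
        """Inverse-CDF sampling of dN/dm proportional to exp(a*m) on [m1, m2]."""
        u = rng.uniform(size=n)
        return np.log(np.exp(a * m1) + u * (np.exp(a * m2) - np.exp(a * m1))) / a

    runs = []
    for run in range(300):                 # 300 runs x 30 stars = 9000 artificial stars
        f814 = draw_magnitudes(30)
        f606 = f814 + 1.0                  # mean input color F606W - F814W = 1.0
        x = rng.uniform(0, 2200, 30)       # positions within the 2200 x 1600 pixel cutout
        y = rng.uniform(0, 1600, 30)       # (membership in the galaxy ellipse not enforced)
        runs.append(np.column_stack([x, y, f606, f814]))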
The results of these experiments were the photometric completeness function $f(m)$, defined as the ratio of the stars recovered to stars added [*regardless of the magnitude that was actually measured,*]{} $\Delta(m)$, the mean shift in magnitude of the stars ($\Delta(m) = m_{out} - m_{in}$), and $\sigma(m)$, the dispersion about this mean. All of these functions were computed in 0.5 mag bins, to ease the computational requirements. To ascertain each filter’s limiting magnitude (defined as the 50% completeness fraction), we performed similar artificial star tests using a variety of F606W$-$F814W input colors. The results from these experiments are illustrated in Figure \[completeness\]. From these experiments, we obtained $m_{lim} = 29.1 \pm 0.1$ in F606W and $28.4 \pm 0.1$ in F814W, where the errors reflect both the rms scatter due to the use of different colors, and the uncertainty associated with varying levels of crowding within the galaxy. As expected, these limits are slightly brighter than those found by @VICS1, whose analysis focused on the uncrowded regions of the frame. Moreover, our classification criteria kept only those objects with the best photometric accuracy: obvious galaxies and extremely blended objects were removed from the catalog. As a result, even at F814W = 27, $\sim 10\%$ of the artificial stars were not recovered. Fortunately, since we are not making any inferences based solely on the absolute number of stars detected, these missing objects do not affect our analyses in any significant way. A color-magnitude diagram for our artificial stars is shown in Figure \[fakestars\]. The figure illustrates that the photometric errors increase with magnitude, so that by $m_{lim}$, the uncertainties are $\sim 0.2$ mag. The figure also shows that there is a small blueward shift at the faintest levels; this is due to the uneven depth of the two images. (At the faintest levels, our F814W photometry only detects objects whose photometric errors are in the positive direction.) Fortunately, down to the limiting magnitudes of the frames, this color shift is $< 0.1$ mag. These corrections, while small, will be accounted for in the following sections. Color Magnitude Diagram ----------------------- The color-magnitude diagram for all the stellar objects in our field is displayed in the left-hand panel of Figure \[cmd\]. The diagram shows that most objects lie in a sequence near F606W$-$F814W $\sim 1$ that extends up to F814W $\sim 27.1$. However, there is a significant amount of scatter about this line. This scatter comes from two sources: the RGB stars that pervade Virgo’s intracluster space [@ftv; @dur02; @VICS1], and, to a lesser extent, unresolved background galaxies. To minimize the effects of these contaminants, we identified a subsample of stars within the galaxy’s F814W $\sim 26.5$ mag arcsec$^{-2}$ isophote. This subsample, which is defined via an ellipse with semi-major axis $8\farcs 1$, an axis ratio of $b/a = 0.5$, and a position angle of $43^\circ$ (see Section 3.4), should have minimal contamination: based on star counts in the rest of our field, only $\sim 16$ out of the 171 objects brighter than $F814W=28.4$ should be contaminants. Thus, we can use the data from this 103 arcsec$^2$ region to form a “clean” color-magnitude diagram of the galaxy’s stars. The elliptical region that defines our subsample is shown in Figure \[image\]; the right-hand panel of Figure \[cmd\] shows its CMD. 
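A minimal sketch (an assumed implementation, not the code actually used) of the elliptical membership cut that defines this “clean” subsample, using the parameters just quoted (semi-major axis $8\farcs 1$, $b/a=0.5$, position angle $43^\circ$) and the $0\farcs 03$ pixel$^{-1}$ scale of the drizzled frames; the galaxy center (x0, y0) in pixel coordinates and the orientation convention of the position angle relative to the image axes are placeholders:

    # Sketch of the isophotal membership cut used to build the "clean" CMD.
    import numpy as np

    PIX = 0.03                                    # arcsec per pixel on the drizzled images
    A_MAJ, B_OVER_A, PA_DEG = 8.1, 0.5, 43.0      # ellipse parameters quoted in the text

    def in_isophote(x, y, x0, y0):
        """True if a star at pixel (x, y) lies inside the F814W ~ 26.5 mag arcsec^-2 ellipse."""
        dx, dy = (np.asarray(x) - x0) * PIX, (np.asarray(y) - y0) * PIX
        pa = np.deg2rad(PA_DEG)                   # PA convention w.r.t. the image axes assumed
        u =  dx * np.cos(pa) + dy * np.sin(pa)    # offset along the major axis (arcsec)
        v = -dx * np.sin(pa) + dy * np.cos(pa)    # offset along the minor axis (arcsec)
        return (u / A_MAJ) ** 2 + (v / (B_OVER_A * A_MAJ)) ** 2 <= 1.0

    # e.g. members = catalog[in_isophote(catalog['x'], catalog['y'], x0, y0)]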
Analysis ======== The CMD in Figure \[cmd\] displays the hallmarks of a red-giant branch population with F814W $> 27$. There are few stars brighter than this limit, and no evidence of any bright blue stars arising from a young stellar population. Since the data clearly extend more than 1 mag down the RGB, we can use the diagram to investigate both the distance and metallicity of the galaxy. TRGB Distance ------------- The tip of the red giant branch (TRGB) in the $I$-band has repeatedly been shown to be an excellent distance indicator for old, metal-poor populations. In stellar systems with \[M/H\] $< -1$, the TRGB is essentially independent of both age and metallicity [@lfm93; @sakai96; @fer00; @bell01; @mcc04; @mou05; @cald06], so all that is required is a measure of the bright-end truncation of the RGB and an estimate of foreground reddening [$E(B-V) = 0.025$; @schlegel98]. Our only limitation is the small size of the galaxy, which restricts the number of stars available for analysis. Although foreground/background contamination in our ‘dwarf-only’ CMD (right panel of Figure \[cmd\]) is small, to make our TRGB detection as unambiguous as possible we have further restricted our sample to objects with F606W$-$F814W $< 1.3$. Stars redder than this are not on the metal-poor RGB and are unlikely to be members of the galaxy. This makes very little difference to the analysis, since it excludes only 3 objects from consideration. Moreover, since the RGB tip for red, metal-rich stars is fainter than that for blue objects, eliminating these stars should not affect our distance determination. We note that the *ACS* F814W filter bandpass is very similar to that of the traditional $I$-band: according to @sir05, the transformation between these two systems is $I = F814W - 0.008$ (for stars with F606W$-$F814W=1) in the VEGAMAG system. Visual inspection of the Figure \[cmd\] shows a rather sharp transition at F814W $\sim I \sim 27.1$, albeit with a small number of stars. While the increased Poissonian noise from small numbers can lead to a possible systematic bias in $I_{TRGB}$, @mf95 have shown that with at least $\sim 100$ stars in the top magnitude of the RGB, such biases are small. Since our analysis includes $\sim 130$ objects, we are safely above this threshold. Another possible source of systematic error is image crowding: in dense fields, stellar blends can make the RGB tip appear brighter than it should be. However, in our case, this effect should not be important. Our artificial star experiments demonstrate that the mean magnitude shift at F814W $\sim 27$ is very small, less than 0.01 mag. This result, which is confirmed by additional artificial star experiments with F814W $< 27.1$, is due principally to our image classification criteria, which rejects the most heavily blended objects ( those most likely to bias the results towards brighter magnitudes). A quantitative estimate of the RGB tip location can be derived using the Sobel edge-detection filter of @lfm93. This technique, which was originally used on binned functions, has since been modified to work on discrete stellar samples by treating each object as a Gaussian distribution with a dispersion equal to the expected photometric error [@sakai96]. We note that more rigorous techniques have been employed to derive $I_{TRGB}$, such as the method of maximum-likelihood [@mendez02; @mou05], and the second derivative of the luminosity function [@cioni00]. 
However, maximum-likelihood methods are better for datasets where the RGB power-law slope is observed for over a magnitude. In our case, photometric incompleteness and image crowding make the use of the technique problematic. Moreover, the small number of stars limits the reliability of second derivative calculations. Because our background contamination is so small, the simpler Sobel edge-detection filter should be adequate for our purpose. To employ the Sobel-edge detection method, we treated each star as a Gaussian distribution with a mean equal to its observed magnitude (plus the small $\Delta m$ shift predicted by our artificial star experiments), and a dispersion, $\sigma(m)$, equal to the photometric uncertainty at that magnitude. We then co-added the Gaussians, to produce the luminosity function shown in the top panel of Figure \[sobel\]. Even with the large photometric errors ($\sigma \sim 0.07$ at F814W = 27), the sharp rise of the luminosity function is evident, as is a small contribution from brighter stars, which presumably arise from the galaxy’s AGB component (more on this below). To determine the location of the tip of the red giant branch, we applied to our continuous luminosity function the edge-detection algorithm $$E(m) = \Phi (m+ \sigma(m)) - \Phi(m - \sigma(m))$$ where $\sigma(m)$ is the photometric error defined via our artificial star experiments [@sakai96]. This function, which is displayed in the bottom panel of Figure \[sobel\], reveals a rather wide peak near F814W $\sim 27.2$. From the figure, the location of the RGB discontinuity is at $I_{TRGB} = 27.22 \pm 0.15$, where the uncertainty is based on the full-width-half-maximum of the distribution. This error is likely conservative [@sakai96], though it is similar to that expected based on the number of stars available [@mf95]. To derive the distance to the dwarf, we adopt $M_{I,TRGB}=-4.06 \pm 0.07$ (random error) as the absolute magnitude of the metal-poor RGB tip [@fer00]. This number is very similar to that of other recent determinations: the empirical calibration of @da90 yields $M_{I,TRGB}= -3.96 \pm 0.06$ for objects with \[Fe/H\]$=-2.3$, while a more recent determination by @bell01 gives $M_{I,TRGB} = -4.04 \pm 0.12$ for \[Fe/H\]$=-1.7$. (The metallicity dependence of $M_{I,TRGB}$ over this range is much smaller than these random errors.) Thus we derive $(m-M)_I=31.28 \pm 0.17$ for the dwarf. Assuming a foreground reddening of $E(B-V) = 0.025$ [@schlegel98] [and thus $A_{F814W} \sim A_I = 0.046$; @sir05], we derive a distance modulus of $(m-M)_0 = 31.23 \pm 0.17$, or $D = 17.6\pm 1.4$ Mpc (random error only). This distance is similar to the Cepheid, surface brightness fluctuation, and planetary nebula luminosity function distances to the core of Virgo [@jacoby90; @freedman01; @tonry01]. Although the three-dimensional structure of the Virgo cluster is complex, the line-of-sight depth of the Virgo core (as defined by the early-type galaxies) surrounding M87 is at least $\sim 2-3$ Mpc [@nt2000; @jbb04], and the distance to the object is consistent with being located near the center of the cluster. Metallicity ----------- As noted earlier, much of the color spread in the dSph’s RGB is due to photometric errors. Thus, to measure metallicity, our approach was to derive a [*mean*]{} abundance for the galaxy, and then attribute any additional scatter on the RGB to a dispersion in metallicity. 
Since the preceding analysis has shown that the galaxy contains no significant AGB population (also see Section 3.4), we can ignore the alternative that the color spread is due to the presence of young or intermediate-age stars. To derive the mean metallicity of the dSph galaxy, we compared the ridge line derived from the CMD in Figure \[cmd\] with the scaled-solar abundance isochrones of @gir05, which are the Padova isochrones [@gir00; @gir02] transformed directly to the *ACS/WFC* filter system. In addition, we used the observed fiducial sequences from @brown05, which are derived from photometry of Milky Way clusters covering a wide range in metallicity. Figure \[ridgeline\] compares the @gir05 12.5 Gyr sequence (shifted by $(m-M)_{\rm F814W}= 31.29$ and $E({\rm F606W}-{\rm F814W})=0.025$) to the observed colors of the galaxy’s RGB stars, both with and without the $\Delta m$ magnitude shifts predicted by our artificial star experiments. Also shown for comparison is the M92 fiducial sequence of @brown05. According to @brown05, this cluster has an \[Fe/H\] value of $-2.14$, but is enhanced in $\alpha$-process elements by \[$\alpha$/Fe\] $= +0.3$, thus implying $Z \sim 0.0003$. Consequently, its position in the color-magnitude diagram is consistent with the $Z=0.0004$ scaled-solar Padova isochrone. As Figure \[ridgeline\] illustrates, the dSph galaxy is extremely metal-poor: the colors of its RGB stars are bluer than the metal-poor M92 fiducial and very near the $Z=0.0001$ Padova isochrone. For a more quantitative estimate, we can use the mean color of the RGB at F814W $= 27.6\pm 0.1$ (or $M_{\rm F814W} = -3.7$ at the distance of the dwarf) and interpolate in the Padova models; this particular magnitude was chosen to be deep enough to adequately sample the RGB (not at the RGB tip), but not so faint as to be adversely affected by large photometric errors and photometric incompleteness. This procedure yields a mean metallicity for the system of \[M/H\] $= -2.3 \pm 0.3$, where the quoted error is that of the mean, not of the distribution. This error does not include possible systematic errors in the Padova models, but does include a $\pm 0.1$ dex uncertainty due to distance. The errors in our color calibration ($\sim 0.03$ mag) and the reddening ($\sim 0.02~$mag) contribute an additional $\pm 0.3$ dex uncertainty to our determination. To estimate an upper limit to the metallicity spread in the galaxy, we compared the observed width of the RGB to the expected dispersion derived from our artificial star experiments. To do this, we used the data displayed in the right-hand panel of Figure \[cmd\], while excluding the one extremely red (F606W$-$F814W = 2.06, F814W = 27.5) object from consideration. We then formed the sum $$\chi^2 = \sum_i { \left( c_{obs} - c_{iso} \right)^2 \over \sigma_c^2 }$$ where $c_{obs}$ is the observed F606W$-$F814W color of each star with $27.3 < {\rm F814W} < 28.0$, $c_{iso}$ is the predicted color for the star (given its observed F814W magnitude and a system metallicity of \[M/H\] $= -2.3$), and $\sigma_c$ is the expected color dispersion at magnitude F814W, derived from our artificial star experiments. The resulting value, $\chi^2/\nu = 1.16$ for 95 degrees of freedom, indicates that the true red giant branch is slightly wider than, but not inconsistent with, the distribution expected from a single-metallicity population. 
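As a minimal illustration of this test (a sketch with hypothetical variable names, not the exact procedure used), the reduced $\chi^2$ of the RGB colors about the isochrone ridge line can be evaluated as follows; an optional intrinsic color spread can be added in quadrature to probe the sensitivity of the statistic.

```python
import numpy as np

def rgb_width_chi2(c_obs, c_iso, sigma_c, sigma_int=0.0):
    """Reduced chi-square of the observed F606W-F814W colors about the
    isochrone ridge line.  sigma_c is the color dispersion expected from the
    artificial-star tests at each star's F814W magnitude; sigma_int is an
    optional intrinsic color spread added in quadrature."""
    chi2 = np.sum((c_obs - c_iso) ** 2 / (sigma_c ** 2 + sigma_int ** 2))
    nu = c_obs.size - 1   # assumed here: one fitted quantity, the mean abundance
    return chi2 / nu
```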
This same type of analysis also demonstrates that any additional (intrinsic) color dispersion on the red giant branch must be less than $\sigma_{{\rm F606W}-{\rm F814W}} \sim 0.09$ at the 95% confidence level. This limits the metallicity spread of our dSph galaxy to $\lesssim 0.6$ dex. AGB Stars and the Age of the Stellar Population ----------------------------------------------- The presence of stars brighter than the RGB tip has often been used to ascertain the existence of intermediate-age ($t < 10$ Gyr) stars: the more luminous the tip of the AGB, the younger the stellar population [@ma82; @arm93; @cald98]. Our CMD shows that our dwarf spheroidal galaxy contains very few stars brighter than F814W $\sim 27.1$. However, since the number of stars present in the galaxy is small, it is difficult to make precise measurements of the AGB phase of evolution. Moreover, as @VICS1 have demonstrated, our dwarf galaxy resides within a sea of intracluster stars, so any RGB stars foreground to the dwarf can masquerade as ‘brighter’ AGB objects. Nevertheless, it is possible to place a limit on the presence (or absence) of AGB stars in our galaxy. To do this, we used our derived distance to the dSph and the stars’ observed colors to estimate the bolometric magnitudes of the galaxy’s brightest objects. Specifically, we converted the stars’ F606W$-$F814W colors to $(V-I)$ using the color terms of @sir05, and applied the @da90 bolometric correction $$BC_I = 0.881 - 0.243 (V-I)_0$$ to generate values of $M_{bol}$ for all objects fainter than F814W $= 26.0$. This procedure is uncertain at the $\sim 0.2$ mag level: in addition to $\sim 0.05$ mag errors associated with the color transforms and bolometric correction, our distance to the galaxy is uncertain by $\sim 0.15$ mag. Nevertheless, this calculation is sufficiently accurate for our purpose. A histogram of the derived values of $M_{bol}$ for the brightest stars in the right-hand panel of Figure \[cmd\] is shown in Figure \[agb\]. Also plotted are the maximum attainable luminosities for AGB stars produced by 3, 5, 8, and 10 Gyr old populations [@rej06]. As the figure illustrates, our sample of stars contains two objects that may indicate the existence of a (small) number of intermediate-age objects. Both are sufficiently luminous ($\sim 0.8$ and $\sim 0.4$ mag brighter than the RGB tip) to exclude the possibility of image blending. However, both are also located on the very outskirts of the galaxy, where the likelihood of contamination is greatest. Since the density of intracluster stars (immediately surrounding the dwarf) with $26 < {\rm F814W} < 27$ is 36 arcmin$^{-2}$, we should expect our sample to contain $\sim 1$ contaminating source in this magnitude range. Thus, it is uncertain whether these two bright objects are actual members of the galaxy. Two of the three remaining stars with $-4.2 < M_{bol} < -4.0$ are projected near the center of the galaxy, and are thus likely to be associated with the dSph. If we apply the age-$M_{bol}$ relation of @rej06 to these objects, then the inferred age of the stellar population is more than 8-10 Gyr. This is similar to the ages of the old populations seen in Local Group dSphs [@mateo98]. We cannot exclude the possibility that these objects may be blends due to the more crowded environment; if this is indeed the case, it would strengthen the conclusion that the galaxy contains no stars with ages less than $\sim 10$ Gyr. 
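For concreteness, the bolometric magnitudes used above follow from simple arithmetic once the VEGAMAG magnitudes have been transformed to $V$ and $I$. The sketch below is illustrative only: the transformed, dereddened quantities are assumed as inputs, and the @da90 correction is applied at the adopted distance modulus.

```python
def m_bol(I0, VI0, dist_mod=31.23):
    """Bolometric magnitude of a candidate AGB star from its dereddened
    I-band magnitude I0 and color (V-I)_0, using the Da Costa & Armandroff
    (1990) bolometric correction BC_I = 0.881 - 0.243*(V-I)_0 and the
    TRGB distance modulus derived in Section 3.1."""
    bc_i = 0.881 - 0.243 * VI0
    return (I0 - dist_mod) + bc_i
```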
Surface Brightness Profile -------------------------- In order to determine the basic structural parameters for our dSph, we performed surface photometry on smoothed images of our frames. To do this, we began by increasing the per-pixel signal-to-noise of the F606W and F814W images by re-binning the data $3 \times 3$. We then smoothed each frame with a circular Gaussian filter to produce images with a $\sim 0 \farcs 6$ point-spread-function, and fit ellipses to the dSph’s contours using the [ellipse]{} task in IRAF/STSDAS [@busko96 based on the algorithms of @jedr87]. To facilitate these fits, the center of the galaxy was kept fixed for each contour, but the position angle and ellipticity were allowed to vary to produce the best results. In addition, to aid in the measurement of this low surface brightness object, models were also computed for three nearby background galaxies, whose light contaminates the dSph’s outer contours. By subtracting these models from the original image, we were able to improve the fits for the target galaxy. Finally, to compute the surface brightness of each fitted contour, a sky background was determined using the median pixel value derived within ‘empty’ regions of our frame. The resulting surface brightness profiles (out to a radius of $9 \farcs 3$ along the major axis) are plotted in Figure \[profile\] as a function of the geometric mean radius ($r=\sqrt{ab}$). Figure \[profile\] confirms that our galaxy is indeed a dwarf spheroidal. The central surface brightness of the galaxy is typical of Local Group dwarf spheroidals [@mateo98], with $\mu_0 = 24.54 \pm 0.03$ in F606W and $\mu_0 = 23.70 \pm 0.03$ in F814W. Moreover, as the dotted line in the figure demonstrates, the profile of the galaxy is well fit by a @king62 model. To derive this line, we convolved a series of @king62 profiles with the smoothed PSF, and fit the F814W data using all measurements with errors less than 0.3 mag. This procedure yields an excellent fit $\chi^2/\nu = 7.9/20 = 0.40$, with a core radius of $r_c = 2 \farcs 6 \pm 0\farcs 7$ and a tidal radius of $r_t = 10\arcsec \pm 3\arcsec$ where the uncertainties are derived via Monte Carlo simulations [see @VICS2]. At the distance of the dwarf, these measurements correspond to $r_c = 220 \pm 80$ pc and $r_t = 850 \pm 250$ pc, where the errors include the uncertainties in both the fit and the distance. We have also fit our surface brightness profiles with the commonly used @sersic profile [see also @gd05],  $\Sigma = \Sigma_0 e^{{-(r/r_0)}^{1/n}}$. Least squares fits to the F814W and F606W data yield shape parameters very close to the $n \sim 0.5$ value expected for isothermal distributions: for the F814W image, $\Sigma_0 = 23.70 \pm 0.03$ mag arcsec$^{-2}$, $r_0 = 3\farcs 16 \pm 0\farcs 16$ and $n = 0.61 \pm 0.12$, while the F606W profile has $\Sigma_0 = 24.52 \pm 0.03$ mag arcsec$^{-2}$, $r_0 = 3\farcs 34 \pm 0\farcs 06$ and $n = 0.59 \pm 0.12$. Both fits (also plotted in Figure \[profile\]) are only slightly worse than those for the King model ($\chi^2/\nu = 0.59$ and $0.63$ for the F814W and F606W fits, respectively), but are still a very good match to the data. We can also use the data of Figure \[profile\] to compute the integrated magnitude of our galaxy. If we sum the flux contained within each elliptical isophote, then the total magnitude of the dwarf spheroidal is F606W$_{tot} = 20.56 \pm 0.05$ and F814W$_{tot} = 19.86 \pm 0.05$. 
These values are consistent with the F606W $= 20.46$ and F814W $=19.74$ magnitudes obtained from simple photometry using an $11\arcsec$ circular aperture centered on the galaxy. To convert these magnitudes to $V$ and $I$, we used the color transformations of @sir05, and to obtain absolute magnitudes, we applied the distance and reddening values given in Section 3.1; no additional corrections to our integrated magnitudes were required as our profiles extend to very faint surface brightnesses. The data imply that the dSph galaxy has a total absolute magnitude of $M_V =-10.6 \pm 0.2$ (and $M_I=-11.4\pm 0.2$), where the error again includes both the uncertainties of the photometry and the distance. Comparison with Local Group dSph Galaxies ----------------------------------------- Detailed observations in the Local Group demonstrate that dSph galaxies possess a variety of star-formation histories and chemical abundances [@gw94; @mateo98]. This has been attributed to the effect of environment on their formation and evolution. If so, then the properties of such galaxies in dense clusters might be distinctly different from those of dSphs seen locally. Unfortunately, although a few dwarf galaxies have been studied in other nearby groups, such as the M81 system [@cald98; @kks05; @dacosta05], very little is known about dSphs in rich clusters. Figure \[comparison\] compares the observed central surface brightness, core radius, and metallicity of our Virgo dSph with the corresponding properties of dSph galaxies in the Local and M81 Groups. The most striking aspect of the figure is how normal our dSph appears. The galaxy’s structural parameters are similar to those of the Leo II and Ursa Minor dwarfs,  typical of objects present in the local neighborhood. This result is consistent with the studies of @dur97, @hmi03, and @cald06, who also found no significant structural differences between the dwarfs of the Local Group and those of Virgo and Fornax. For comparison, the Virgo results from @cald06 are also plotted in Figure \[comparison\]. We reach a similar conclusion from the metallicity measurement of our dSph galaxy. Dwarf elliptical and spheroidal galaxies in the Local Group obey a well-established luminosity-metallicity relation [@smith85; @cald92; @cald98; @cote00; @cald06], where the more luminous galaxies are (on average) more metal-rich. This result, which is usually attributed to increased mass loss in systems with small potential wells [@ds86], predicts the metallicity of our dSph galaxy fairly accurately. However, galaxies in clusters are expected to lose much of their mass to intracluster space via tidal encounters with other galaxies and with the cluster as a whole [@moore96; @moore98; @bekki01]. If our dSph had been exposed to these forces for any length of time, we would expect its metallicity to be more in line with that of a higher-luminosity object. This does not appear the case: if anything, our galaxy’s metallicity lies [*below*]{} the mean luminosity-metallicity relation. This suggests that most of the galaxy’s original stars are still bound to the system, or perhaps the luminosity-metallicity relation differs in the cluster environment. The structure of the galaxy appears to support this conclusion. Tidal features are often detected in dSph galaxies via photometric deviations from a @king62 profile [@ih95; @palma03; @maj05]. 
As Figure \[profile\] demonstrates, our Virgo dwarf is well-fit by a King model all the way down to $\mu_V \sim 27.5$ ($\mu_I \sim 26.5$), where the galaxy’s surface brightness merges into that of the intracluster background [@VICS1]. Moreover, a plot of ($x,y$) positions using only those stars with colors and magnitudes similar to those of the galaxy’s RGB objects ( $0.4 < {\rm F606W}-{\rm F814W} < 1.3$ and $26.8 < {\rm F814W} < 28.0$), reveals no obvious evidence for tidal distortion (Figure \[xyplot\]). We note that our observations are not deep enough to see any low-level tidal disruption of this dSph galaxy: the large streams seen in the Coma and Centaurus Clusters are only visible at surface brightnesses below $\mu_V \sim 26$ [@tm98; @gw98; @cal00], and the tails associated with the Local Group galaxies Carina and Ursa Minor are at $\mu_V \gtrsim 29$ [@palma03; @maj05]. Thus we cannot unambiguously state whether the dSph galaxy has undergone any tidal stripping. Nevertheless, the metallicity arguments above suggest that the observed dSph is not the low-luminosity remains of a much more massive system ( as expected in galaxy harassment scenarios). Tidal threshing is not the only process that could affect dwarf galaxies in the cluster environment. In the Local Group, ram pressure stripping may be a prime factor in the evolution of dSph galaxies [@lf83; @vdb94; @ggh03; @marco03]. Indeed, the N-body simulations of @mayer06 suggest that both tides and ram pressure are needed to explain the properties of Local Group objects. If true, then the effects of intracluster gas should be magnified in the denser regions of the Virgo Cluster core. If low-luminosity dwarfs can barely hold onto their gas (and enrich themselves) in the Local Group, then Virgo dSphs should easily be stripped by the Virgo intracluster medium. The result would be a dSph galaxy much like those observed in the Local Group (as suggested by the structural similarities), but with less chemical enrichment, and a very narrow red giant branch. Our dSph does show some evidence for enhanced pressure stripping: as Figure \[comparison\] shows, the galaxy’s mean metallicity is slightly less than that predicted from dSph observations in the Local and M81 Groups. However, our RGB photometry is not precise enough to test for self-enrichment. Local Group dwarfs typically have metallicity dispersions of $\gtrsim 0.3$ dex [@mateo98]. Our data, though consistent with a zero abundance spread, is best fit with an RGB color dispersion of $\sigma_{{\rm F606W}-{\rm F814W}} = 0.05$ ($\sim 0.4$ dex) and still admits a metallicity spread as great as $\sigma_{{\rm F606W}-{\rm F814W}} = 0.09$ ($\sim 0.6$ dex). Thus, while the data are suggestive, we cannot say for certain that the Virgo environment has had any additional effect on the dSph’s chemical properties. Deeper photometric studies of other Virgo dSph galaxies would be very useful in ascertaining any differences between the chemical evolution of low-luminosity dwarf galaxies in clusters and those in the group environment. Discussion ========== From the analyses in the previous section, it is clear that the dSph galaxy in our field lies in the Virgo cluster and appears relatively undisturbed by its surroundings. Its stellar population is very metal-poor (\[M/H\] $\sim -2.3$) and old, with no indication of stars younger than $\sim 10$ Gyr. In this sense it is similar to the purely old Local Group dSph galaxies of Ursa Minor and Draco. 
It is also similar to the dwarf galaxies recently observed by @cald06 in the Virgo Cluster core – their central surface brightnesses and metallicities are also included in Figure \[comparison\]. They, too, obey the luminosity-metallicity relation for small galaxies and appear undisturbed by the cluster environment. The appearance of such ‘pristine’ old, low-luminosity dwarf galaxies within the Virgo Cluster begs a question – how do these objects last so long in the cluster? Small galaxies should be easy prey for the tidal forces associated with the cluster environment [@moore98], and, indeed, the velocity distribution of Virgo dE galaxies supports the idea that at least some fraction of the cluster’s dwarfs are the stripped remains of larger, late-type systems [@con01]. Furthermore, it is plausible that many of Virgo’s metal-poor intracluster stars [@VICS1] originated in such galaxies. Nevertheless, it seems clear that some dwarfs remain intact; perhaps these are dwarfs that have recently fallen into the cluster, or have rather high M/L ratios. A weak constraint on the history of our dSph galaxy can be obtained from its observed tidal radius. From @king62, the limiting radius of the galaxy should be $$r_t = R_p \left( {M_g \over M_C (3 + \epsilon)} \right)^{1/3}$$ where $M_g$ is the mass of the galaxy, $M_C$, the mass of the cluster, $R_P$, the galaxy’s pericluster radius, and $\epsilon$, the eccentricity of the orbit. The mass of the galaxy can be obtained by analogy with Local Group objects: according to [@mateo98], dwarf galaxies with $M_V \sim -10.6$ have mass-to-light ratios of $M/L_V \sim 10$, implying $M_g \sim 1.5 \times 10^7 M_{\odot}$. The Virgo cluster mass is similarly obtainable from the x-ray observations of @nb95: their data give a core radius of 56 kpc, a mass per unit length of $12 \times 10^{10} M_{\odot}$ kpc$^{-1}$, and a total mass within 180 kpc (the projected cluster distance of the galaxy) of $2 \times 10^{13} M_{\odot}$. A value of $\epsilon \sim 0.5$ then excludes any orbit which extends into the inner $\sim 80$ kpc of the cluster. Of course, there are any number of ways to satisfy this condition. For instance, the dSph may just now be falling into Virgo for the first time. Cepheids, the planetary nebula luminosity function, the surface brightness fluctuation method, and the tip of the red giant branch, all place the core of Virgo at a distance of between 14.5 and 17 Mpc [@freedman01; @jacoby90; @tonry01; @jensen03; @jbb04; @cald06]. Our dSph distance estimate of $17.6 \pm 1.4$ Mpc is consistent with this range of values, but it also admits the possibility that the galaxy is slightly behind the main body of the cluster. If so, then the dSph’s location may associate it with the infalling M86 system, which is $\sim 3$ Mpc beyond M87 [@bohringer94; @jacoby90; @tonry01; @jbb04]. If this scenario is correct, then our dwarf galaxy formed long ago in a smaller cluster/group, and the shredding of the galaxy is only now about to begin. If the dwarf [*did*]{} originate and live much of its life in the cluster environment, then it would have to have a high mass to withstand the tidal forces of Virgo’s densest regions. This would require a mass-to-light ratio that is much larger than the ‘normal’ value of $M/L_V \sim 10$ observed in the Local Group. 
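Returning to the limiting-radius estimate above, the quoted $\sim 80$ kpc figure can be reproduced with a few lines of arithmetic. The sketch below is illustrative only: it approximates the @nb95 cluster mass interior to $R_p$ by the quoted mass per unit length times $R_p$ (an assumption that is reasonable well outside the 56 kpc core), and compares the predicted limiting radius with the lower bound on the measured tidal radius ($r_t - 1\sigma \approx 0.6$ kpc).

```python
def king_tidal_radius(R_p, M_g=1.5e7, mass_per_kpc=12e10, eccentricity=0.5):
    """Limiting (tidal) radius in kpc for a pericluster distance R_p in kpc,
    from King's formula, with the cluster mass inside R_p approximated as
    mass_per_kpc * R_p."""
    M_C = mass_per_kpc * R_p
    return R_p * (M_g / (M_C * (3.0 + eccentricity))) ** (1.0 / 3.0)

# Pericluster distances (kpc) compatible with the observed lower limit on r_t:
allowed = [R for R in range(10, 201, 5) if king_tidal_radius(R) >= 0.6]
# 'allowed' starts near R_p ~ 80 kpc, i.e. orbits reaching deeper into the
# cluster would truncate the galaxy below its observed size.
```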
CDM models do suggest that dwarf galaxies can form out of dark matter halos much more massive than dynamical measurements suggest ( $M \sim 10^{9 - 10} M_{\odot}$ [@stoehr02; @hayashi03; @kravtsov04], and such objects would be able to resist tidal disruption. However, galaxies with large dark matter halos would not easily be stripped of their ISM by ram pressure or galactic winds. @mb00 point out that while low mass ($M \sim 10^{6 - 7} M_{\odot}$) objects will rapidly lose their gas to the intracluster medium, more massive systems are likely to hold on to at least some of their interstellar medium. As a result, their stellar populations should be younger and more metal-rich than their lower-mass counterparts. While it is beyond the scope of this paper to investigate the precise masses and conditions under which early gas removal is expected, the old, metal-poor population of our dwarf spheroidal argues against the presence of a very massive dark halo. The existence of such a massive halo could, however, be tested via high-resolution spectroscopic observations with the next generation of large telescopes. Unfortunately, our lack of knowledge about the dwarf’s current location and orbital characteristics precludes our placing a strong constraint on its origin. While we have no data on the presence (or absence) of H I gas in the galaxy, that gas has certainly not created many new stars in the past $\sim 8$ Gyr. Thus, if gas were detected, it would not affect our basic conclusions. Summary ======= We report the discovery of a dwarf spheroidal galaxy (with $M_V=-10.6\pm 0.2$) on deep *ACS* F606W and F814W images of an intracluster field in the Virgo Cluster. The distance to the galaxy is $17.6\pm 1.4$ Mpc based on the location of its TRGB; this value is consistent with a location close to the core of the Virgo cluster, although we cannot rule out the possibility that this object is part of the M86 subcluster falling into Virgo from behind. The galaxy is clearly resolved into stars, and our observations extend more than a magnitude down its red giant branch. We find that the galaxy is composed entirely of old ($> 8-10$ Gyr), and is very metal-poor, with \[M/H\]$ = -2.3 \pm 0.3$. This metallicity is consistent with that expected for a galaxy of its luminosity; if anything, the galaxy lies slightly below the mean luminosity-metallicity relation. Thus, the object is not the remains of a larger galaxy that has been tidally stripped or harassed. Moreover, the dSph’s structural properties are similar to those derived for dwarf galaxies in the Local and M81 groups, suggesting that many of the same physical processes that govern the formation and evolution of local dSphs also apply to this Virgo Cluster object. Based on this similarity, we suggest that our dSph galaxy is likely on its initial infall into the center of the cluster. The authors would like to thank Tom Brown for assistance regarding dither patterns for the *ACS* observations, and Chris Palma for reading an earlier version of this paper. Support for this work was provided by NASA grant GO-10131 from the Space Telescope Science Institute and by NASA through grant No. NAG5-9377. JJF was supported by the NSF through grant AST-0302030, and TvH was supported by NASA under grant No. NAG5-13070 issued through the Office of Space Science. [*Facilities:*]{} Armandroff, T.E., Da Costa, G.S., Caldwell, N., & Seitzer, P. 1993, , 106, 986 Bekki, K., Couch, W.J., & Drinkwater, M.J. 2001, , 552, L105 Bellazzini, M., Ferraro, F.R., & Pancino, E. 
2001, , 556, 635 Binggeli, B., Sandage, A., & Tammann, G.A. 1985, , 90, 1681 Böhringer, H., Briel, U.G., Schwarz, R.A., Voges, W., Hartner, G., & Trümper, J. 1994, , 368, 828 Brown, T.M., Ferguson, H.C., Smith, E., Guhathakurta, P., Kimble, R.A., Sweigart, A.V., Renzini, A., Rich, R.M., & VandenBerg, D.A. 2005, , 130, 1693 Busko, I.C. 1996, ASP Conf. Ser. 101: Astronomical Data Analysis Software and Systems V, ed. G.H. Jacoby & J. Barnes (San Francisco: ASP), 139 Calcáneo-Roldán, C., Moore, B., Bland-Hawthorn, J., Malin, D., & Sadler, E.M. 2000, , 314, 324 Caldwell, N., Armandroff, T.E., Seitzer, P., & Da Costa, G.S. 1992, , 103, 840 Caldwell, N., Armandroff, T.E., Da Costa, G.S., & Seitzer, P. 1998, , 115, 535 Caldwell, N. 2006, , in press (astro-ph/0607476) Cioni, M.-R.L., van der Marel, R.P., Loup, C., & Habing, H.J. 2000, , 359, 601 Cole, S., Lacey, C.G., Baugh, C.M., & Frenk, C S. 2000, , 319, 168 Conselice, C.J., Gallagher, J.S., & Wyse, R.F.G. 2001, , 559, 791 Côté, P., Marzke, R.O., West, M.J., & Minniti, D. 2000, , 533, 869 Da Costa, G.S. 2005, IAU Colloq. 198: Near-Fields Cosmology with Dwarf Elliptical Galaxies, eds. H. Jerjen & B. Binggeli (Cambridge: Cambridge University Press), 35 Da Costa, G.S., & Armandroff, T.E. 1990, , 100, 162 Dekel, A., & Silk, J. 1986, , 303, 39 Durrell, P.R. 1997, , 113, 531 Durrell, P.R., Ciardullo, R., Feldmeier, J.J., Jacoby, G.H., & Sigurdsson, S. 2002, , 570, 119 Ferguson, H.C. 1989, , 98, 367 Ferguson, H.C., Tanvir, N.R., & von Hippel, T. 1998, , 391, 461 Ferrarese, L., Mould, J.R., Kennicutt, R.C., Huchra, J., Ford, H.C., Freedman, W.L., Stetson, P.B., Madore, B.F., Sakai, S., Gibson, B.K., Graham, J.A., Hughes, S.M., Illingworth, G.D., Kelson, D.D., Macri, L., Sebo, K., & Silbermann, N.A. 2000, , 529, 745 Freedman, W.L., Madore, B.F., Gibson, B.K., Ferrarese, L., Kelson, D.D., Sakai, S., Mould, J.R., Kennicutt, R.C. Jr., Ford, H.C., Graham, J.A., Huchra, J.P., Hughes, S.M.G., Illingworth, G.D., Macri, L.M., & Stetson, P.B. 2001, , 553, 47 Gallagher, J.S., & Wyse, R.F.G. 1994, , 106, 1225 Girardi, L., Bressan, A., Bertelli, G., & Chiosi, C. 2000, , 141, 371 Girardi, L., Bertelli, G., Bressan, A., Chiosi, C., Groenewegen, M.A.T., Marigo, P., Salasnich, B., & Weiss, A. 2002, , 391, 195 Girardi, L. 2005, private communication Graham, A.W., & Driver, S.P. 2005, , 22, 118 Grebel, E.K., Gallagher, J.S., & Harbeck, D. 2003, , 125, 1926 Gregg, M.D., & West, M.J. 1998, , 396, 549 Harbeck, D., Gallagher, J.S., Grebel, E.K., Koch, A., & Zucker, D.B., , 632, 159 Harris, W.E., Durrell, P.R., Pierce, M.J., & Secker, J. 1998, , 395, 45 Hayashi, E., Navarro, J.F., Taylor, J.E., Stadel, J., & Quinn, T. 2003, , 584, 541 Hilker, M., Mieske, S., & Infante, L. 2003, , 397, L9 Hilker, M., Mieske, S., & Infante, L. 2005, in IAU Colloq. 198, Near-Field Cosmology with Dwarf Elliptical Galaxies, eds. H. Jerjen & B. Binggeli (Cambridge: Cambridge University Press), 290 Irwin, M.J., & Hatzidimitriou, D. 1995, , 277, 1354 Jacoby, G.H., Ciardullo, R., & Ford, H.C. 1990, , 356, 332 Jedrzejewski, R.I. 1987, , 226, 747 Jensen, J.B., Tonry, J.L., Barris, B.J., Thompson, R.I., Liu, M.C., Rieke, M.J., Ajhar, E.A., & Blakeslee, J.P. 2003, , 583, 712 Jerjen, H., Binggeli, B., & Barazza, F.D. 2004, , 127, 771 Karachentsev, I.D., Karachentseva, V.E., & Sharina, M.E. 2005, in IAU Colloq. 198, Near-Field Cosmology with Dwarf Elliptical Galaxies, eds. H. Jerjen & B. Binggeli (Cambridge: Cambridge University Press), 295 Kauffmann, G., White, S.D.M., & Guideroni, B. 
1993, , 264, 201 King, I. 1962, , 67, 471 Koekemoer, A.M., Fruchter, A.S., Hook, R.N., & Hack, W. 2002, in The 2002 HST Calibration Workshop: Hubble after the Installation of the ACS and the NICMOS Cooling System, eds. S. Arribas, A. Koekemoer, & B. Whitmore (Baltimore: Space Telescope Science Institute), 337 Kravtsov, A.V., Gnedin, O.Y., & Klypin, A.A. 2004, , 609, 482 Kron, R.G. 1980, , 43, 305 Lee, M.G., Freedman, W.L., & Madore, B.F. 1993, , 417, 553 Lin, D.N.C., & Faber, S.M. 1983, , 266, L21 Madore, B.F., & Freedman, W.L. 1995, , 109, 1645 Majewski, S.R., Frinchaboy, P.M., Kunkel, W.E., Link, R., Muñoz, R.R., Ostheimer, J.C., Palma, C., Patterson, R.J., & Geisler, D. 2005, , 130, 2677 Marcolini, A., Brighenti, F., & D’Ercole, A. 2003, , 345, 1329 Mateo, M. 1998, , 36, 435 Mayer, L., Mastropietro, C. , Wadsley, J., Stadel, J., & Moore, B. 2006, , 369, 1021 McConnachie, A.W., Irwin, M.J., Ferguson, A.M.N., Ibata, R.A., Lewis, G.F., & Tanvir, N. 2004, , 350, 243 McConnachie, A.W., & Irwin, M.J. 2006, , 365, 1263 Méndez, B., Davis, M., Moustakas, J., Newman, J., Madore, B.F., & Freedman, W.L. 2002, , 124, 213 Moore, B., Katz, N., Lake, G., Dressler, A., & Oemler, A. 1996, , 379, 613 Moore, B., Lake, G., & Katz, N. 1998, , 495, 139 Mori, M., & Burkert, A. 2000, , 538, 559 Mouchine, M., Ferguson, H.C., Rich, R.M., Brown, T.M., & Smith, T.E. 2005, , 633, 810 Mould, J., & Aaronson, M. 1982, , 263, 629 Neilsen, E.H., & Tsvetanov, Z.I. 2000, , 536, 255 Nulsen, P.E.J., & Böhringer, H. 1995, , 274, 1093 Palma, C., Majewski, S.R., Siegel, M.H., Patterson, R.J., Ostheimer, J.C., & Link, R. 2003, , 125, 1352 Phillipps, S., Parker, Q.A., Schwartzenberg, J.M., & Jones, J.B. 1998, , 493, L59 Rejkuba, M., Da Costa, G.S., Jerjen, H., Zoccali, M., & Binggeli, B. 2006, , 448, 983 Sabatini, S., Davies, J., Scaramella, R., Smith, R., Baes, M., Linder, S.M., Roberts, S., & Tests, V. 2003, , 341, 981 Sakai, S., Madore, B.F., & Freedman, W.L. 1996, , 461, 713 Saviane, I, Held, E.V., & Piotto, G. 1996, , 315, 40 Sérsic, J.-L. 1968, Atlas de Galaxias Australes (Cordoba: Observatorio Astronomico) Smith, G.H. 1985, , 97, 1058 Schlegel, D.J., Finkbeiner, D.P., & Davis, M. 1998, , 500, 525 Sirianni, M., Jee, M.J., Benítez, N., Blakeslee, J.P., Martel, A.R., Meurer, G., Clampin, M., De Marchi, G., Ford, H.C., Gilliland, R., Hartig, G.F., Illingworth, G.D., Mack, J., & McCann, W.J. 2005, , 117, 1049 Stetson, P.B. 1987, , 99, 191 Stetson, P.B. 1992, in ASP Conf. Ser. 25: Astronomical Data Analysis Software and Systems I., eds. D.M. Worral, C. Biemesderfer, & J. Barnes (San Francisco: ASP), 297 Stoehr, F., White, S.D.M., Tormen, G., & Springel, V. 2002, , 335, 84 Tonry, J.L., Dressler, A., Blakeslee, J.P., Ajhar, E.A., Fletcher, A.B., Luppino, G.A., Metzger, M.R., & Moore, C.B. 2001, , 546, 681 Trentham, N., & Hodgkin, S. 2002, , 333, 423 Trentham, N., & Mobasher, B. 1998, , 293, 53 van den Bergh, S. 1994, , 428, 617 Williams, B.F., Ciardullo, R., Durrell, P.R., Feldmeier, J.J., Jacoby, G.H., Sigurdsson, S., Vinciguerra, M., von Hippel, T., Ferguson, H.C., Tanvir, N.R., Arnaboldi, M., Gerhard, O., Aguerri, J.A.L., & Freeman, K.C. 2006a, , in press (astro-ph/0610386) Williams, B.F., Ciardullo, R., Durrell, P.R., Vinciguerra, M., Feldmeier, J.J., Jacoby, G.H., Sigurdsson, S., von Hippel, T., Ferguson, H.C., Tanvir, N.R., Arnaboldi, M., Gerhard, O., Aguerri, J.A.L., & Freeman, K. 
2006b, , in press (astro-ph/0609211) [^1]: dSph galaxies are the faint end of the dwarf elliptical \[dE\] galaxy sequence, and when we note dE galaxies we are including those gas-poor dwarf galaxies more luminous than $M_V=-14$ [^2]: multidrizzle is a product of the Space Telescope Science Institute, which is operated by AURA for NASA. http://stsdas.stsci.edu/pydrizzle/multidrizzle [^3]: PyRAF is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.
--- abstract: 'With the renewed and growing interest in geometric continuity in mind, this article gives a general definition of geometrically continuous polygonal surfaces and geometrically continuous spline functions on them. Polynomial splines defined by $G^1$ gluing data in terms of rational functions are analyzed further. A general structure for a spline basis is defined, and a dimension formula is proved for spline spaces of bounded degree on polygonal surfaces made up of rectangles and triangles. Lastly, a comprehensive example is presented, and practical perspectives of geometric continuity are discussed. The overall objective of the paper is to put forward a modernized, practicable framework for modeling with geometric continuity.\ [**Keywords.**]{} Geometric continuity, Polynomial splines, Dimension.' author: - 'Raimundas Vidunas[^1]' bibliography: - 'cg-references.bib' title: Building geometrically continuous splines --- Introduction ============ There is growing interest in geometric continuity, especially in Computer Aided Geometric Design [@flex], [@beccari], [@Bercovier] and isogeometric analysis [@Kapl2014], [@Peters14]. Nevertheless, understanding of its technical details, applications, and potential appears to be variable and inconsistent among active researchers, perceptibly impeding communication and faster development of the subject. Substantially updating and improving the presentation in the author’s PhD thesis [@raimundas §6], this article aims to facilitate a more uniform understanding of the practical requisites of geometric continuity. Geometric continuity [@Boehm87], [@derose], [@PetersHandbook] is a general technique to produce visually smooth surfaces in Computer Aided Geometric Design (CAGD). A resulting surface is typically made of parametric patches in ${{\ensuremath{\mathbb{R}}}}^3$, with each patch defined by a (usually) polynomial map from a polygon in ${{\ensuremath{\mathbb{R}}}}^2$, in such a way that the patches fit each other continuously along the edges. [*Geometric continuity*]{} requires continuity of tangent planes (and possibly of curvature and higher osculating spaces) along the glued edges and around the formed vertices. The special case of [*parametric continuity*]{} occurs when the parametrizing polygons are situated next to each other in ${{\ensuremath{\mathbb{R}}}}^2$, forming a polygonal mesh. A whole surface in ${{\ensuremath{\mathbb{R}}}}^3$ is then parametrized by a single map from the polygonal mesh, and that single map is required to be $C^r$ continuous (for a chosen order $r{\geqslant}1$ of differential continuity). Parametric continuity is not adequate for modeling surfaces of arbitrary topology [@LoopDerose]. Direct geometric continuity conditions in ${{\ensuremath{\mathbb{R}}}}^3$ lead to complex models that are not simple to use and modify [@DeRose90], [@Peters96i], [@Che05]. The other approach [@derose], [@Hahn87] is to define a geometric data structure for gluing a set of polygons abstractly, in manifold-theoretic terms. A concrete surface in ${{\ensuremath{\mathbb{R}}}}^3$ is then viewed as a [*realization*]{} of an abstract polygonal manifold ${{\ensuremath{\mathcal{M}}}}$ by a map ${{\ensuremath{\mathcal{M}}}}\to{{\ensuremath{\mathbb{R}}}}^3$ defined by a triple $(f_1,f_2,f_3)$ of suitably defined [*geometrically continuous $G^r$ functions*]{} on ${{\ensuremath{\mathcal{M}}}}$. The actual goal is to define spaces of $G^r$ functions that are sufficiently rich to generate satisfactorily smooth surfaces in ${{\ensuremath{\mathbb{R}}}}^3$. 
Of particular interest are $G^r$ [*polynomial splines*]{}, that is, $G^r$ functions that restrict to polynomial functions on each polygon. Geometric continuity of abstract polygonal surfaces is customarily defined with reference to [*differential manifolds*]{} [@Wiki] so that the $G^r$ functions on the polygonal surfaces would correspond to the $C^r$ functions on a differential surface. The gluing data is then the transition maps between open neighborhoods of the glued polygonal edges [@derose], or the jet germs of the transition maps [@PetersHandbook], or the induced transformations of tangent spaces [@Hahn87]. In terms of direct $G^r$ gluing in ${{\ensuremath{\mathbb{R}}}}^3$, this corresponds to fixing the [*shape parameters*]{} [@piegl87] and working with induced linear relations for the coefficients of the component functions $(f_1,f_2,f_3)$. The geometric continuity conditions and theoretical grounding in differential geometry are well understood in principle [@Gregory89], [@PetersHandbook], [@Zheng95]. Sufficiency of the known constraints for smooth realizations has not yet been proved. In particular, Peters and Fan [@Peters2010] noticed a topological balancing restriction around [*crossing vertices*]{} (see Definition \[def:crossing\] and §\[sec:crossing\] here) in the context of $G^1$ gluing of rectangles. The first new result of this paper is a generalization of this restriction with Theorem \[cond:comp2\]. The restriction depends on the assumption of polynomial (or more generally, $C^2$) specializations of the $G^1$ functions on the polygons. Other contributions of this paper are: a general definition of $G^1$ polygonal surfaces and $G^1$ functions on them (in §\[sec:gsplines\]), a dimension formula for certain spaces of $G^1$ polynomial splines of bounded degree (in §\[eq:spdim\]), and an extensive, illuminating example in §\[sec:poctah\]. A general strategy for building a basis of $G^1$ splines of bounded degree is presented in §\[sec:generate\], and demonstrated on $G^1$ polygonal surfaces made up of rectangles and triangles (in §\[sec:boxdelta\], §\[sec:poctah\]). In addition, §\[sec:practical\] discusses the sufficiency of the defined spline spaces for smooth realization in ${{\ensuremath{\mathbb{R}}}}^3$. The purpose of the whole paper is to encourage a uniform level of familiarity and communication in the accelerating field of geometric continuity. There is no shortage of general definitions [@GuHeQin06], [@Tosun:2011], frameworks [@beccari], [@Vecchia:2008], and special constructions [@Prautzsch:1997], [@Reif:1995] deploying geometric continuity. But it becomes harder to distinguish their relative merits, levels of generality, the issues they address, and the triviality or prominence of their technical details. There is a lack of mutual comparisons and extended examples in the literature. Recent communication on the subject revealed patterns of superficial understanding of basic routines and an incongruent focus on purportedly important questions or on practicality. The author’s updated experience from writing [@raimundas] and [@VidunasAMS] thus seems worth sharing. We start in §\[sec:geoglue\] with a review of $G^r$ continuity conditions, updated with the restriction on crossing vertices that generalizes [@Peters2010]. This serves as preparation for our general definitions of $G^1$ polygonal surfaces and splines in §\[sec:gsplines\]. The focus is not on the most convenient theory building, but on a transparent overview of technical details and on grasping the full potential of geometric continuity. 
One underlying suggestion is that grounding in differential geometry should not be taken too seriously. Ultimately, a designer does not need to be deeply aware of conformity with differential geometry. The key objective for expansive applications is defining a spline space with preferably minimal gluing data, and estimating the quality of realizations in ${{\ensuremath{\mathbb{R}}}}^3$ by those splines. Section \[sec:freedom\] presents essential technical details for understanding a basis structure of spline spaces and computing their dimension. Section \[sec:boxdelta\] proves general formulas for the dimension of spline spaces on $G^1$ polygonal surfaces made up exclusively of rectangles and triangles. Section \[sec:poctah\] gives an extensive, illustrative example of a $G^1$ polygonal surface, and demonstrates computation of spline bases on it. Finally, Section \[sec:practical\] summarizes the results and offers a few practical perspectives on using geometric continuity.\ [**Acknowledgment.** ]{} This article originated as an alternative version to the work [@MVV15] with Bernard Mourrain and Nelly Villamizar. The author appreciates the rich discussions, shared literature, and revived interest in geometric continuity that provoked this work in full. Geometrically continuous gluing {#sec:geoglue} =============================== This section is devoted to defining the $G^r$ gluing data and restrictions on it, with more attention to the $r=1$ case. This is a preparation for defining the [*$G^1$ polygonal surfaces*]{} and [*$G^1$ spline functions*]{} on them in §\[sec:gsplines\]. The $G^1$ gluing data is generally clear from differential geometry and CAGD practice, though a comprehensive set of restrictions ensuring the possibility of $G^r$ gluing around vertices has not yet been formalized. We spell out different representations of geometric continuity conditions and full terminology, preparing for the inclusive definitions of §\[sec:gsplines\]. Building geometrically continuous surfaces by first defining some minimal gluing data (or an abstract surface, a polygonal complex) for patching a collection of polygons has a definite motivation from differential geometry. We recall a standard definition [@Warner83] of differential surfaces. Let $r$ denote a positive integer, and let ${K}$ denote a finite set. A $C^r$ [*differential surface*]{} is defined as a connected topological Hausdorff [@Wiki] manifold ${{\ensuremath{\mathcal{S}}}}$ with a collection $\{(V_k,\psi_k)\}_{k\in{K}}$ such that 1. $\{V_k\}_{k\in{K}}$ is an open covering of ${{\ensuremath{\mathcal{S}}}}$. 2. Each $\psi_k$ is a homeomorphism $\psi_k \!:\! U_k\!\to V_k$, where $U_k$ is an open set in ${{\ensuremath{\mathbb{R}}}}^2$. 3. For distinct $k,\ell\in{K}$ such that $V_{k,\ell}:=V_k\cap V_{\ell}$ is not an empty set, let $U_{k,\ell}:=\psi_k^{-1}(V_{k,\ell})$, $U_{\ell,k}:=\psi_\ell^{-1}(V_{k,\ell})$. Then the map $\psi_\ell^{-1}\circ\psi_k: U_{k,\ell}\to U_{\ell,k}$ is required to be a $C^r$-diffeomorphism. The collection $\{(V_k,\psi_k)\}_{k\in{K}}$ is a $C^r$ [*atlas*]{} on ${{\ensuremath{\mathcal{S}}}}$, and the maps $\psi_\ell^{-1}\circ\psi_k$ are called [*transition maps*]{}. A function $f:{{\ensuremath{\mathcal{S}}}}\to{{\ensuremath{\mathbb{R}}}}$ is [*$C^r$ continuous*]{} on $\cal S$ if for any $k\in{K}$ the function $f \circ \psi_{k}: U_{k} \to {{\ensuremath{\mathbb{R}}}}$ is $C^r$ continuous. In contrast to differential geometry, CAGD aims to build surfaces from [*closed polygons*]{} rather than from open sets. 
It is easy to underestimate the technical difficulty of properly defining the gluing data and $G^r$ functions, and of relating them to differential manifolds and $C^r$ functions. Given a collection of polygons and homeomorphisms between their edges, a topological surface and continuous functions on it are defined easily. \[def:g0complex\] Let ${K}_0,{K}_1$ denote finite sets. A [*$G^0$ polygonal surface*]{} $\cal M$ is a pair $(\{{\Omega}_k\}_{k\in {K}_0},\{\lambda_k\}_{k\in {K}_1})$ such that 1. $\{{\Omega}_k\}_{k\in {K}_0}$ is a collection of (possibly coinciding) convex closed polygons in ${{\ensuremath{\mathbb{R}}}}^2$. 2. $\{\lambda_k\}_{k\in {K}_1}$ is a collection of homeomorphisms $\lambda_k:\tau_{k}\to{\tau}'_{k}$ between pairs of polygonal edges $\tau_{k}\subset{\Omega}_{\ell_k}$, ${\tau}'_{k}\subset{{\Omega}}_{\ell'_k}$. 3. Each polygonal edge can be paired with at most one other edge, and it cannot be paired with itself. 4. The equivalence relation on the polygons generated by the incidences $\lambda_k$ of glued edges has exactly one equivalence class. A $G^0$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$ is called [*linear*]{} if all $\lambda_k$ are affine-linear homeomorphisms. Any $G^0$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$ has the structure of a topological surface, as the disjoint union of the polygons with some points identified into equivalence classes by the homeomorphisms $\lambda_k$. Condition [[*(iv)*]{}]{} means that this topological surface is connected. The identifications of polygonal edges and vertices give a combinatorial complex of [*polygons*]{}, [*edges*]{} and [*vertices*]{} (as [*faces*]{} of the complex [@Ziegler00]) on the topological surface. A common edge on ${{\ensuremath{\mathcal{M}}}}$ defined by $\lambda_k$ in [[*(ii)*]{}]{} will be denoted by $\tau_{k}\sim {\tau}'_{k}$. These edges are called [*interior edges*]{} of ${{\ensuremath{\mathcal{M}}}}$. The other edges are [*boundary edges*]{}. We use the set notation $\{P_1,\ldots,P_n\}$ to denote an equivalence class of identified polygonal vertices. A vertex is a [*boundary vertex*]{} if it is on a boundary edge, and it is an [*interior vertex*]{} otherwise. A [*continuous function*]{} on a $G^0$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$ is defined by assigning a continuous function $f_k$ to each polygon ${\Omega}_k$ ($k\in {K}_0$) so that their restrictions to the polygonal edges are compatible with the homeomorphisms $\lambda_k$ ($k\in {K}_1$). The set of continuous functions on ${{\ensuremath{\mathcal{M}}}}$ is denoted by $G^0({{\ensuremath{\mathcal{M}}}})$. Our intention is to define sets $G^r({{\ensuremath{\mathcal{M}}}})$ of $G^r$ [*geometrically continuous functions*]{} on a polygonal surface ${{\ensuremath{\mathcal{M}}}}$, such that a generic triple $(f_1,f_2,f_3)$ of functions from the same $G^r({{\ensuremath{\mathcal{M}}}})$ would give a map ${{\ensuremath{\mathcal{M}}}}\to{{\ensuremath{\mathbb{R}}}}^3$ whose image is a $G^r$ continuous surface in ${{\ensuremath{\mathbb{R}}}}^3$ as defined in CAGD [@Hahn87], [@Peters96i]. In particular, the $G^1$ continuous surfaces have [*tangent plane continuity*]{} of polygonal patches along the glued edges. We will refer to a map ${{\ensuremath{\mathcal{M}}}}\to{{\ensuremath{\mathbb{R}}}}^3$ with a $G^r$ continuous image as a [*$G^r$ smooth realization*]{} of ${{\ensuremath{\mathcal{M}}}}$. The described topological surface ${{\ensuremath{\mathcal{M}}}}$ should remain the underlying topological space. 
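To make Definition \[def:g0complex\] concrete for readers who prefer code, the following is a minimal, hypothetical encoding of a linear $G^0$ polygonal surface: each gluing homeomorphism $\lambda_k$ is recorded simply by the ordered pair of polygonal edges it identifies, and conditions [[*(iii)*]{}]{} and [[*(iv)*]{}]{} are checked directly. It is a sketch of the combinatorial data only, not a proposed implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    polygon: int       # index k of the polygon Omega_k
    vertices: tuple    # ordered pair of vertex indices within Omega_k

@dataclass
class G0Surface:
    polygons: list     # polygons[k] = list of vertex coordinates of Omega_k
    gluings: list      # pairs (Edge, Edge) identified by affine-linear maps lambda_k

    def satisfies_iii(self):
        # each edge is paired with at most one other edge, never with itself
        seen = set()
        for e1, e2 in self.gluings:
            if e1 == e2 or e1 in seen or e2 in seen:
                return False
            seen.update((e1, e2))
        return True

    def satisfies_iv(self):
        # the gluings must join all polygons into a single equivalence class
        parent = list(range(len(self.polygons)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for e1, e2 in self.gluings:
            parent[find(e1.polygon)] = find(e2.polygon)
        return len({find(k) for k in range(len(self.polygons))}) == 1
```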
To keep technical details simpler, we consider only linear polygonal surfaces and thoroughly analyze only the $r=1$ case. We will use the following technical definitions. \[def:c1cl\] By a [*$C^r$ function*]{} on a closed polygon ${\Omega}\subset{{\ensuremath{\mathbb{R}}}}^2$ we mean a continuous function on ${\Omega}$ that can be extended to a $C^r$ function on an open set containing ${\Omega}$. We denote the space of these functions by $C^r({\Omega})$. We could require the weaker condition of being a $C^r$ function on the interior of ${\Omega}$ and a continuous function on ${\Omega}$. However, in CAGD applications one typically allows the polygonal restrictions $g_1,g_2$ to be in a specific class of $C^\infty$ functions (on open neighborhoods of ${\Omega}_1$ or ${\Omega}_2$), such as polynomial, rational or trigonometric functions. \[def:stcoor\] Let ${\Omega}$ denote a convex closed polygon in ${{\ensuremath{\mathbb{R}}}}^2$, and let $\tau$ denote an edge of ${\Omega}$. By a [*coordinate system attached to $\tau$*]{} we mean a pair $(u,v)$ of linear functions on ${{\ensuremath{\mathbb{R}}}}^2$ such that: - $u$ attains the values $0$ and $1$ at the endpoints of $\tau$; - $v=0$ on the edge $\tau$, and $v>0$ on the interior of ${\Omega}$. If additionally $(v,u)$ is a coordinate system attached to the other edge of ${\Omega}$ at the endpoint $u=0$, the pair $(u,v)$ is called a [*standard coordinate system*]{} attached to $\tau$ or to the end vertex $u=0$. There are exactly two standard coordinate systems attached to $\tau$. \[def:inout\] Let ${\Omega}$ denote a convex closed polygon in ${{\ensuremath{\mathbb{R}}}}^2$. Let $\tau$ denote an edge of ${\Omega}$, and let $L\subset {{\ensuremath{\mathbb{R}}}}^2$ denote the line containing $\tau$. The [*inward side (from $\tau$, towards $\Omega$)*]{} is the open half-plane bounded by $L$ that contains the interior of $\Omega$. The [*outward side (from $\tau$, away from $\Omega$)*]{} is the other open half-plane bounded by $L$. Gluing two polygons {#sec:edgeglue} ------------------- There are two basic problems in constructing geometrically continuous surfaces: gluing of two patches along an edge, and gluing several patches around a vertex. The former is a relatively straightforward routine [@derose], [@Zheng95]. The analogy with differential geometry can be followed closely using transition maps. Simple pictures (from [@Hahn87], [@Boehm87], for example) can be useful in supplementing this discussion. Let ${\Omega}_1,{\Omega}_2$ denote two convex closed polygons in ${{\ensuremath{\mathbb{R}}}}^2$, and let ${\Omega}^0_{1}\subset{\Omega}_{1}$, ${\Omega}^0_{2}\subset{{\Omega}}_{2}$ denote their interiors. Let $\lambda:\tau_{1}\to{\tau}_{2}$ denote a linear homeomorphism between their edges $\tau_{1}\subset{\Omega}_{1}$, ${\tau}_{2}\subset{{\Omega}}_{2}$, and let ${{\ensuremath{\mathcal{M}}}}_0$ denote the $G^0$ polygonal surface $(\{\Omega_1,\Omega_2\},\{\lambda\})$. Let $U_1\supset \tau_1$, $U_2\supset \tau_2$ denote open neighborhoods of the two edges. Suppose we have a $C^r$ diffeomorphism $\psi_0:U_1\to U_2$ such that - [*(A1)*]{} It specializes to $\lambda:\tau_{1}\to{\tau}_{2}$ when restricted to the edge $\tau_1$; - [*(A2)*]{} It maps the interior part ${\Omega}^0_{1}\cap U_1$ of $U_1$ into the exterior $U_2\setminus {\Omega}_{2}$ of the polygon ${\Omega}_2$. We have a $C^r$ differential surface ${{\ensuremath{\mathcal{M}}}}_r$ by considering the disjoint union of ${\Omega}^0_{1}\cup U_1$ and ${\Omega}^0_{2}\cup U_2$ modulo the equivalence relation defined by the transition map $\psi_0$. 
The disjoint union of ${\Omega}^0_{1}$ and ${\Omega}^0_{2}$ is a proper subset of ${{\ensuremath{\mathcal{M}}}}_r$. The convexity assumption makes condition [[*(A2)*]{}]{} possible. A $C^r$ function on ${{\ensuremath{\mathcal{M}}}}_r$ is given by a pair $(f_1,f_2)$ of functions $f_1\in C^r({\Omega}_{1})$, $f_2\in C^r({\Omega}_{2})$ such that $$\label{eq:crcont} f_1=f_2\circ\psi_0 \qquad \mbox{on} \quad U_1.$$ We can define the [*geometrically continuous $G^r$ functions*]{} on ${{\ensuremath{\mathcal{M}}}}_0$ as pairs $(g_1,g_2)$ of functions $g_1\in C^0({\Omega}_1)$, $g_2\in C^0({\Omega}_2)$ such that $(g_1|_{{\Omega}_1},g_2|_{{\Omega}_2})$ is a restriction of a $C^r$ function on ${{\ensuremath{\mathcal{M}}}}_r$. We seek a more constructive definition. Let $(u_1,v_1)$, $(u_2,v_2)$ denote coordinate systems attached to $\tau_1$, $\tau_2$, respectively. We assume that $\lambda$ identifies the endpoints $u_1=0$ and $u_2=0$; then it identifies the endpoints $u_1=1$ and $u_2=1$. Let $\ell_1$ denote the line in ${{\ensuremath{\mathbb{R}}}}^2$ containing $\tau_1$. The transition map $\psi_0$ has Taylor expansions [@Wiki] in $u_1$, $v_1$ of order $r$ at each point $P\in\tau_1$. In particular, we can write the action of $\psi_0$ as $$\label{eq:thetat} {u_1\choose v_1} \mapsto {u_2\choose v_2}={u_1\choose 0}+ \sum_{k=1}^r \theta_k(u_1)v_1^k+\psi_r(u_1,v_1),$$ where each $\theta_k(u_1)$ is a $C^{r-k}$ map to ${{\ensuremath{\mathbb{R}}}}^2$ from the open subset $\ell_1\cap U_1$ of $\ell_1$, and $\psi_r(u_1,v_1)$ is a $C^r$ map $U_1\to {{\ensuremath{\mathbb{R}}}}^2$ that vanishes at each point of $\tau_1$ together with all derivatives of order ${\leqslant}r$. If $(f_1,f_2)$ is a $C^r$ function on ${{\ensuremath{\mathcal{M}}}}_r$, then $f_1$ has Taylor expansions in $u_1$, $v_1$ of order $r$ at each point $P\in\tau_1$, and similarly, $f_2$ has Taylor expansions in $u_2$, $v_2$ of order $r$ at each point $Q\in\tau_2$. Condition (\[eq:crcont\]) gives invertible linear transformations from the Taylor coefficients of $f_1$ at each $P\in\tau_1$ to the Taylor coefficients of $f_2$ at $Q=\lambda(P)$. The linear transformations are determined by the $r$-th order Taylor expansion of $\psi_0$ in (\[eq:thetat\]) at each $P\in\tau_1$. The order $r$ Taylor expansions at a single point can be interpreted as [*$r$-th order jets*]{} [@Wiki] of $\psi_0$, $f_1$ or $f_2$. The jets are equivalence classes of $C^r$ functions or maps, with the equivalence defined as having the same Taylor expansion of order $r$. The jets can be represented by polynomials (or polynomial vectors) in formal variables $\tilde{u},\tilde{v}$ of degree ${\leqslant}r$. The linear spaces of jets are denoted by $J^r_P(U_1,{{\ensuremath{\mathbb{R}}}}^2)$ for the $C^r$ functions $U_1\to{{\ensuremath{\mathbb{R}}}}^2$ at $P\in U_1$, etc. Multiplication and composition of jets are defined by the corresponding polynomial operations modulo the monomials in $\tilde{u},\tilde{v}$ of degree $r+1$. The relevant jets are denoted by $J^r_P(\psi_0)$, $J^r_P(f_1)$, $J^r_Q(f_2)$ for $P\in\tau_1$, $Q\in\tau_2$. Moreover, the jet $J^r_P(g)$ can be defined for a function $g\in C^0({\Omega}_1)$ at a boundary point $P\in{\Omega}_1$ if $g$ is extendible to a $C^r$ function in a neighborhood of $P$, because all Taylor coefficients are determined by the behavior of $g$ on ${\Omega}_1$. Now we can give a general definition of $G^r$ functions, more computable than (\[eq:crcont\]). \[eq:grfdef\] Suppose we have functions $g_1\in C^r({\Omega}_1)$, $g_2\in C^r({\Omega}_2)$. 
The pair $(g_1,g_2)$ is called a [*geometrically continuous $G^r$ function*]{} on ${{\ensuremath{\mathcal{M}}}}_0$ if $g_1|_{\tau_1}=g_2\circ \lambda$ and $$\label{eq:grjet} J^r_P(g_1)=J^r_Q(g_2)\circ J^r_P(\psi_0)$$ for all points $P\in\tau_1$ and $Q=\lambda(P)$. We denote the linear space of $G^r$ functions by $G^r({{\ensuremath{\mathcal{M}}}}_0,\psi_0)$. The space is independent of the term $\psi_r$ in (\[eq:thetat\]). To define the $G^r$ functions, it is thus enough to specify continuously varying $J^r_P(\psi_0)$ for each $P\in\tau_1$, following (\[eq:thetat\]) and the continuity restrictions on all $\theta_k(u_1)$ but ignoring $\psi_r(u_1,v_1)$. The technical term for this continuously varying family of jets is the [*jet bundle*]{} $J^r_{\tau_1}(\psi_0)$. Since CAGD typically uses $C^\infty$ restrictions $g_1,g_2$ (as remarked in Definition \[def:c1cl\]), it is natural to define the jet bundle $J^r_{\tau_1}(\psi_0)$ by equation (\[eq:thetat\]) with all $\theta_k(u_1)$ in the same or a similar class of $C^\infty$ functions. For example, if we want to define [*polynomial $G^r$ splines*]{}, we may choose all $\theta_k(u_1)$ to be continuous rational functions of $u_1$. We can always take $\psi_r(u_1,v_1)=0$. Condition [[*(A2)*]{}]{} leads to additional constraints on $J^r_P(\psi_0)$; see Proposition \[def:g1vf\] below. Dualization leads to a coordinate-independent (and at the same time, more computable) formulation of the geometric continuity condition. For that, we concentrate on the linear transformation of the Taylor coefficients of $g_1,g_2$ rather than on the transformation (\[eq:grjet\]) of whole jets. Up to a constant multiple, a Taylor coefficient is a differentiation functional $\partial^{j+k}/\partial u_i^j\partial v_i^k$ (of order $j+k{\leqslant}r$, with $i\in\{1,2\}$). Applying these differentiations with $i=1$ to (\[eq:crcont\]) we get expressions of $\partial^{j+k}/\partial u_1^j\partial v_1^k$ (of $f_1$) in terms of differentiations in $u_2,v_2$ (of $f_2$) of order ${\leqslant}j+k$. The same transformation of differential operators of order ${\leqslant}r$ is induced by (\[eq:grjet\]). Rather than transforming the space of functions, we seek to use the corresponding transformation of differentiation operators. We work out this correspondence for $r=1$. The [*tangent space*]{} $T_P$ of ${{\ensuremath{\mathbb{R}}}}^2$ at a point $P\in{{\ensuremath{\mathbb{R}}}}^2$ can be defined [@Wiki] as the linear space of [*derivations*]{}, or [*directional derivatives*]{}. For a vector $\,\overrightarrow{\!AB}$, the directional derivative $\partial_{AB}$ is defined as $$\label{eq:dirder} \partial_{AB} f(P) = \lim_{t\to 0} \frac{f(P+t\;\overrightarrow{\!AB})-f(P)}{t}.$$ If $A,B$ are the endpoints $u_1=0$, $u_1=1$ of $\tau_1$, then $\partial/\partial u_1=\partial_{AB}$. We then have $\partial/\partial v_1=\partial_{AC}$, where $C$ is the point $u_1=0$, $v_1=1$. With $r=1$, the $G^1$-structure defining jet bundle $J^1_{\tau_1}(\psi_0)$ can be written as $$\label{eq:jetb1} {u_1\choose v_1} \mapsto {u_2\choose v_2}={u_1+{{\ensuremath{\beta}}}(u_1)v_1\choose {{\ensuremath{\gamma}}}(u_1) v_1}$$ for some ${{\ensuremath{\beta}}}, {{\ensuremath{\gamma}}}\in C^0(\tau_1)$. This expression can be taken as a representative transition map in the equivalence class $J^1_{\tau_1}(\psi_0)$ of transition maps. 
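As an illustrative aside (not part of the mathematical development), the first-order data ${{\ensuremath{\beta}}}(u_1)$, ${{\ensuremath{\gamma}}}(u_1)$ of (\[eq:jetb1\]) can be read off from any concrete transition map by expanding it in $v_1$ as in (\[eq:thetat\]). The following Python (SymPy) sketch does this for a transition map of our own choosing; the map and all names in it are purely hypothetical.

```python
# Minimal sketch (our own example, not from the text): recover the r = 1
# gluing data beta(u_1), gamma(u_1) of (eq:jetb1) as the coefficients of v_1
# in the expansion (eq:thetat) of a concrete transition map psi_0.
import sympy as sp

u, v = sp.symbols('u v')         # u = u_1, v = v_1

# hypothetical transition map psi_0(u_1, v_1) = (u_2, v_2);
# it restricts to the identity on the edge v_1 = 0
u2 = u + 2*u*v
v2 = -v/(1 + v)

beta = sp.diff(u2, v).subs(v, 0)     # coefficient of v_1 in u_2:  2*u
gamma = sp.diff(v2, v).subs(v, 0)    # coefficient of v_1 in v_2:  -1
print(sp.simplify(beta), sp.simplify(gamma))   # gamma < 0, as condition (A2) requires
```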
If $(g_1(u_1,v_1),g_2(u_2,v_2))$ is a $G^1$ function on ${{\ensuremath{\mathcal{M}}}}_0$, then we have $\partial g_1/\partial u_1(u_1,0)\!=\partial g_2/\partial u_2(u_1,0)$ for all $u_1\in[0,1]$ as a consequence of the $G^0$ condition $g_1|_{\tau_1}=g_2\circ\lambda$, and $$\label{eq:g1func0} \frac{\partial g_1}{\partial v_1}(u_1,0) = {{\ensuremath{\beta}}}(u_1)\frac{\partial g_2}{\partial u_2}(u_1,0) +{{\ensuremath{\gamma}}}(u_1)\frac{\partial g_2}{\partial v_2}(u_1,0).$$ Defining a space of $G^1$ functions on ${{\ensuremath{\mathcal{M}}}}_0$ is thereby equivalent to giving a continuous family $\Theta$ of linear isomorphisms of the tangent spaces $T_P$ and $T_{\lambda(P)}$ for all $P\in\tau_1$, defined by $\partial/\partial u_1\mapsto\partial/\partial u_2$ and $$\label{eq:tbiso} \frac{\partial}{\partial v_1} \mapsto {{\ensuremath{\beta}}}(u_1)\frac{\partial}{\partial u_2} +{{\ensuremath{\gamma}}}(u_1)\frac{\partial}{\partial v_2}$$ with specified functions ${{\ensuremath{\beta}}}(u_1), {{\ensuremath{\gamma}}}(u_1)$. Each tangent space is isomorphic to ${{\ensuremath{\mathbb{R}}}}^2$ as a linear space. The vectors are directly identified as directional derivatives by (\[eq:dirder\]). We say that a non-zero derivative is [*parallel*]{} to an edge $\tau'$ if its differentiation direction is parallel to $\tau'$. Otherwise the derivative is called [*transversal*]{} [@Wiki] to $\tau'$. The tangent spaces form [*trivial tangent bundles*]{} [@Wiki] $\tau_1\times{{\ensuremath{\mathbb{R}}}}^2$, $\tau_2\times{{\ensuremath{\mathbb{R}}}}^2$, and $\Theta$ is an isomorphism of the tangent bundles (compatible with $\lambda$). The isomorphism $\Theta$ can be specified without a coordinate system, using only derivative bases and the linear function $u_1|_{\tau_1}$. Since the derivatives along $\tau_1\sim\tau_2$ are identified, it is enough to identify pairs of transversal derivatives at each $P\in\tau_1$ and $Q=\lambda(P)$. This leads to the characterization of geometric continuity in terms of [*transversal vector fields*]{} [@Wiki] along the edges, as in [@Hahn87 Corollary 3.3]. Condition [[*(A2)*]{}]{} is then easier to reflect as well (with reference to Definition \[def:inout\]), leading to the requirement ${{\ensuremath{\gamma}}}(u_1)<0$ on $\tau_1$. A thorough discussion of condition [[*(A2)*]{}]{} and topological degenerations to avoid in $G^1$ gluing is presented in §\[sec:topology\]. The following statement is the special case $r=1$ of [@Hahn87 Lemma 3.2]. The vector field $D_1$ can be chosen to be constant, like $\partial/\partial v_1$ in (\[eq:tbiso\]). \[def:g1vf\] Let $D_1$ denote a transversal $C^0$ vector field on $\tau_1$ pointing to the inward side (towards $\Omega_1$), and let $D_2$ denote a transversal $C^0$ vector field on $\tau_2$ pointing to the outward side (away from $\Omega_2$). Then the space of functions $(g_1,g_2)$ satisfying $$\label{eq:vfdat} g_1(P)=g_2(Q), \qquad D_1g_1(P)=D_2g_2(Q)$$ for any $P\in\tau_1$, $Q=\lambda(P)$ is the space $G^1({{\ensuremath{\mathcal{M}}}}_0,\varphi_0)$ for some diffeomorphism $\varphi_0$ between open neighborhoods of $\tau_1,\tau_2$. The vectors in $D_1$, $D_2$ are viewed as directional derivatives. The data in (\[eq:vfdat\]) defines isomorphisms $T_{P}\to T_{Q}$ of tangent spaces, continuously varying with $P$ and compatible with $\lambda:\tau_1\to\tau_2$. 
After attaching coordinate systems $(u_1,v_1)$, $(u_2,v_2)$ to $\tau_1$, $\tau_2$, the tangent bundle isomorphism can be characterized by transformation (\[eq:tbiso\]) with some continuous functions ${{\ensuremath{\beta}}}(u_1)$, ${{\ensuremath{\gamma}}}(u_1)$. We define $\varphi_0$ by (\[eq:jetb1\]). \[ex:joining\] Suppose that the two polygons of ${{\ensuremath{\mathcal{M}}}}_0$ are triangles ${\Omega}_1=A_1B_1C_1$, ${\Omega}_2=A_2B_2C_2$, that $\tau_1=A_1B_1$, $\tau_2=A_2B_2$, and that $\lambda$ maps $A_1$ to $A_2$. We attach [*standard*]{} coordinate systems $(u_1,v_1)$, $(u_2,v_2)$ to $\tau_1$, $\tau_2$, such that $u_1=0$, $u_2=0$ define the edges $A_1C_1$, $A_2C_2$, respectively. Consider a tangent bundle isomorphism $\Theta$ with $$\label{eq:exdev} {{\ensuremath{\beta}}}(u_1)=2u_1, \qquad {{\ensuremath{\gamma}}}(u_1)=-1$$ in (\[eq:tbiso\]). Since $u_1,\partial/\partial u_1$ are synonymous with $u_2,\partial/\partial u_2$ for $G^1$ functions, identification (\[eq:tbiso\]) can be rewritten as $$\left( \frac{\partial}{\partial v_1}-u_1\,\frac{\partial}{\partial u_1} \right) \mapsto -\left( \frac{\partial}{\partial v_2}-u_2\,\frac{\partial}{\partial u_2} \right).$$ In other words, for any $P\in A_1B_1$ and $Q=\lambda(P)$ the isomorphism $T_P\to T_{Q}$ is defined by $$\partial_{PC_1} \mapsto -\,\partial_{QC_2}$$ (together with the trivial $\partial_{A_1B_1} \mapsto \partial_{A_2B_2}$). The vector fields $\partial_{PC_1}$, $\partial_{QC_2}$ pointing “exactly" towards the third vertices $C_1$, $C_2$ are identified with the minus sign. This is a nice characterization of a tangent bundle isomorphism. In particular, the two end-points of $\tau_1\sim\tau_2$ are symmetric. If the coordinate systems $(u_1,v_1)$, $(u_2,v_2)$ are changed to the standard alternatives with $u_1=0$, $u_2=0$ on the edges $B_1C_1$, $B_2C_2$ (respectively), the coordinate change is $$\label{eq:trcoor} (u_k,v_k)\mapsto (1-u_k-v_k,v_k) $$ for $k \in\{1,2\}$, and the derivative changes are $$\label{eq:reders} \left( \frac{\partial}{\partial u_k},\frac{\partial}{\partial v_k} \right) \mapsto \left( -\frac{\partial}{\partial u_k}, \, \frac{\partial}{\partial v_k}-\frac{\partial}{\partial u_k}\right). $$ The $\Theta$-defining functions ${{\ensuremath{\beta}}}(u_1),{{\ensuremath{\gamma}}}(u_1)$ have the same expressions (\[eq:exdev\]) in the alternative coordinates and derivative bases. \[def:join\] Let $P_1\in {\Omega}_1$ denote an endpoint of $\tau_1$, and let $P_2=\lambda(P_1)\in\tau_2$. Let $\widehat{\tau}_1\subset {\Omega}_1$, $\widehat{\tau}_2\subset {\Omega}_2$ denote the other polygonal edges incident to $P_1$, $P_2$, respectively. The edge $\tau_1\sim\tau_2$ of ${{\ensuremath{\mathcal{M}}}}_0$ is called a [*joining edge*]{} at the vertex $\{P_1, P_2\}$ if the tangent space isomorphism $T_{P_1}\to T_{P_2}$ maps a derivative parallel to $\widehat{\tau}_1$ to a derivative parallel to $\widehat{\tau}_2$. If we choose the standard coordinates $(u_1,v_1)$, $(u_2,v_2)$ with $u_1=0$, $u_2=0$ at $P_1,P_2$ (respectively), the joining edge is characterized by ${{\ensuremath{\beta}}}(P_1)=0$ in (\[eq:tbiso\]). We call the edges $\widehat{\tau}_1$, $\widehat{\tau}_2$ [*opposite to each other*]{} at $\{P_1,P_2\}$. They are forced to continue smoothly into each other across the common vertex in smooth realizations of ${{\ensuremath{\mathcal{M}}}}_0$. In Example \[ex:joining\], the glued edge is a joining edge at both of its endpoints. 
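A reader wishing to double-check the computation in Example \[ex:joining\] can do so mechanically. The short SymPy sketch below (an illustration of ours, with ad hoc names) applies the identification (\[eq:tbiso\]) with the data (\[eq:exdev\]) to the vector field $\partial/\partial v_1-u_1\,\partial/\partial u_1$ and recovers $-(\partial/\partial v_2-u_2\,\partial/\partial u_2)$.

```python
# Sketch verifying Example [ex:joining]: with beta(u) = 2u, gamma(u) = -1,
# the identification (eq:tbiso) sends d/dv_1 - u_1 d/du_1 to
# -(d/dv_2 - u_2 d/du_2) along the glued edge (where u_1 = u_2 = u).
import sympy as sp

u = sp.symbols('u')
beta, gamma = 2*u, sp.Integer(-1)

def theta(a, b):
    """Image of a*d/du_1 + b*d/dv_1 under (eq:tbiso), returned as the
    coefficients of d/du_2 and d/dv_2."""
    return sp.expand(a + b*beta), sp.expand(b*gamma)

print(theta(-u, 1))   # (u, -1), i.e. u_2 d/du_2 - d/dv_2 = -(d/dv_2 - u_2 d/du_2)
```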
Gluing around a vertex {#sec:g1vertex} ----------------------- Another basic situation to consider is $G^r$ gluing of several polygons ${\Omega}_1,\ldots,{\Omega}_n$ around a common vertex. Let $P_1,\ldots,P_n$ denote their vertices, respectively, to be glued to the common vertex. For each polygon ${\Omega}_k$, let $\tau_k,\tau'_k$ denote the two edges incident to $P_k$. We denote ${K}=\{1,\ldots,n\}$, and label cyclically $P_{0}=P_n$, $\tau'_{0}=\tau'_n$. Assume that for each $k\in{K}$ the edge $\tau_k$ is glued to $\tau'_{k-1}$ so that the vertices $P_k$, $P_{k-1}$ are thereby identified. More explicitly, we assume linear homeomorphisms $\lambda_k:\tau_k\to\tau'_{k-1}$ such that $\lambda_k(P_k)=P_{k-1}$, and $C^r$ transition maps $\varphi_k$ between open neighborhoods of $\tau_k$ and $\tau'_{k-1}$ specializing to $\lambda_k$. Once local coordinates around each $P_k$ are fixed, we may replace the transition maps $\varphi_k$ by their jet bundles $J^r_{\tau_k}(\varphi_k)$ as a more concise form of the required data. Let ${{\ensuremath{\mathcal{M}}}}^*_0$ denote the $G^0$ polygonal surface defined by the polygons $\Omega_k$ and the homeomorphisms $\lambda_k$ with $k\in{K}$. Let ${{\ensuremath{\mathcal{P}}}}_0$ denote the common vertex $\{P_1,\ldots,P_n\}$. \[def:g1v\] Suppose that for each $k\in{K}$ we have a function $g_k\in C^0({\Omega}_k)$ extendible to a $C^r$ function on an open neighborhood of ${\Omega}_k$. Set $g_0=g_n$. The sequence $(g_1,\ldots,g_n)$ is called a [*geometrically continuous $G^r$ function*]{} on ${{\ensuremath{\mathcal{M}}}}^*_0$ if for each $k\in{K}$ the pair $(g_k,g_{k-1})$ is a $G^r$ function on the polygonal surface $(\{{\Omega}_k,{\Omega}_{k-1}\},\{\lambda_k\})$ as in §\[sec:edgeglue\], glued by $\varphi_k$ as well. In particular, (\[eq:grjet\]) becomes $$\label{eq:grvjet} J^r_{P_k}(g_k)=J^r_{P_{k-1}}(g_{k-1})\circ J^r_{P_k}(\varphi_k). $$ A natural restriction on the gluing data is $$\label{eq:grcjet} J^r_{P_n}(\varphi_1\circ\ldots\circ\varphi_n)(\tilde{u},\tilde{v})=(\tilde{u},\tilde{v})$$ in some (or any) local coordinates around $P_n$. This is a necessary condition for existence of satisfactorily many $G^r$ functions, because the equations (\[eq:grvjet\]) imply $$\label{eq:jetcomp} J^r_{P_n}(g_n)=J^r_{P_n}(g_n)\circ J^r_{P_n}(\varphi_1\circ\ldots\circ\varphi_n).$$ This is a restrictive functional equation on $g_n$ unless (\[eq:grcjet\]) is satisfied. Remark \[rm:fulltg\] below clarifies this more explicitly. For each $k\in {K}$, let us choose the standard coordinate system $(u_k,v_k)$ attached to $\tau_k$ with $u_k=0$ at $P_k$. Then $(v_k,u_k)$ is a standard coordinate system attached to $\tau'_k$. With these coordinates, it is straightforward to rewrite (\[eq:grcjet\]) as a composition of jets: $$\label{eq:grcjett} J^r_{P_1}(\varphi_1)\circ\ldots\circ J^r_{P_n}(\varphi_n)(u_n,v_n)=(u_n,v_n).$$ This form has two advantages. First, it involves composition of computationally more definite objects, that is, jets at single points rather than possibly transcendental transition maps. Secondly, it allows us to define the gluing data in terms of jet bundles rather than transition maps. 
For $r=1$, each jet bundle $J^1_{\tau_k}(\varphi_k)$ with $k\in {K}$ is determined by continuous functions ${{\ensuremath{\beta}}}_k(u_k)$, ${{\ensuremath{\gamma}}}_k(u_k)$, so that it acts like in (\[eq:jetb1\]): $$\label{eq:jetb2} {u_k\choose v_k} \mapsto {v_{k-1}\choose u_{k-1}} ={u_k+{{\ensuremath{\beta}}}_k(u_k)v_k\choose {{\ensuremath{\gamma}}}_k(u_k) v_k}.$$ Specialization to $P_k$ gives this transformation $J^1_{P_k}(\varphi_k)$ of $J^1$-jets in the standard coordinates: $$\label{eq:jetbp} \left( \begin{array}{c} 1 \\ u_{k-1} \\ v_{k-1} \end{array} \right) =\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & {{\ensuremath{\gamma}}}_k(0) \\ 0 & 1 & {{\ensuremath{\beta}}}_k(0) \end{array} \right) \left( \begin{array}{c} 1 \\ u_k \\ v_k \end{array} \right).$$ Ignoring the trivial transformation of the constant terms, condition (\[eq:grcjett\]) becomes $$\label{eq:jetbp1} \left( \begin{array}{cc} 0 & {{\ensuremath{\gamma}}}_1(0) \\ 1 & {{\ensuremath{\beta}}}_1(0) \end{array} \right) \,\cdots\, \left( \begin{array}{cc} 0 & {{\ensuremath{\gamma}}}_n(0) \\ 1 & {{\ensuremath{\beta}}}_n(0) \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right).$$ Note that the matrix product is non-commutative, hence we do not use the $\prod$-notation. The dual transformations of partial derivatives are similar: $$\label{eq:tgp} {\partial/\partial u_{k}\choose \partial/\partial v_{k}} =\left( \begin{array}{cc} 0 & 1 \\ {{\ensuremath{\gamma}}}_k(0) & {{\ensuremath{\beta}}}_k(0) \end{array} \right) {\partial/\partial u_{k-1} \choose \partial/\partial v_{k-1}}.$$ This is an expression of the induced isomorphism $T_{P_n}\to T_{P_{n-1}}$ of the tangent spaces. The transposed version of (\[eq:jetbp1\]) $$\label{eq:tsbp1} \left( \begin{array}{cc} 0 & 1 \\ {{\ensuremath{\gamma}}}_n(0) & {{\ensuremath{\beta}}}_n(0) \end{array} \right) \,\cdots\, \left( \begin{array}{cc} 0 & 1 \\ {{\ensuremath{\gamma}}}_1(0) & {{\ensuremath{\beta}}}_1(0) \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)$$ expresses the dual to (\[eq:grcjett\]) requirement that the composition must be the identity map. (See the same Remark \[rm:fulltg\].) The partial matrix products $$\label{eq:jetbpk} \left( \begin{array}{cc} 0 & 1 \\ {{\ensuremath{\gamma}}}_k(0) & {{\ensuremath{\beta}}}_k(0) \end{array} \right) \,\cdots\, \left( \begin{array}{cc} 0 & 1 \\ {{\ensuremath{\gamma}}}_1(0) & {{\ensuremath{\beta}}}_1(0) \end{array} \right) $$ with $k\in {K}\setminus \{n\}$ express the derivatives $\partial/\partial u_{k}$, $\partial/\partial v_{k}$ (of the component $g_k$ of presumed $G^1$ functions on ${{\ensuremath{\mathcal{M}}}}_0^*$) in terms of $\partial/\partial u_{n}$, $\partial/\partial v_{n}$ (of the component $g_n$). This allows us to represent the involved partial derivatives graphically as directional derivatives in the tangent space $T_{P_n}$; see Figure \[fig:g1v\][[*(a)*]{}]{}. The derivatives $\partial/\partial u_{k}$ and $\partial/\partial v_{k-1}$ are automatically identified, and the convex sectors $\widehat{{\Omega}}_k$ bounded by consecutive vectors $\vb_{k-1}$, $\vb_k$ represent derivatives in the polygonal corners $P_k$. It is not automatic that the fan [@Wiki] of tangent sectors will cover ${{\ensuremath{\mathbb{R}}}}^2$ exactly once, as we discuss in §\[sec:topology\]. This schematic picture can visually reveal features and quality of the gluing data for CAGD. 
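The compatibility condition (\[eq:tsbp1\]) is easy to test numerically or symbolically for candidate end values. The following sketch (our own utility, not part of the construction) multiplies the $2\times 2$ factors and compares the product with the identity matrix; the sample values are chosen only for illustration.

```python
# Sketch: check the vertex condition (eq:tsbp1) for given end values
# beta_k(0), gamma_k(0).  The sample data below are ours.
import sympy as sp

def vertex_product(betas, gammas):
    """Product of the factors in (eq:tsbp1): the k = 1 factor acts first
    (rightmost), the k = n factor last (leftmost)."""
    M = sp.eye(2)
    for b, g in zip(betas, gammas):              # k = 1, ..., n
        M = sp.Matrix([[0, 1], [g, b]]) * M      # left-multiply by the k-th factor
    return sp.simplify(M)

# sample vertex with n = 4: all beta_k(0) = 0 and
# gamma_1(0)*gamma_3(0) = gamma_2(0)*gamma_4(0) = 1
print(vertex_product([0, 0, 0, 0], [-2, -1, sp.Rational(-1, 2), -1]))  # identity
```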
As explained in §\[rm:gconstr\], we can start constructing good gluing data on a settled topological manifold ${{\ensuremath{\mathcal{M}}}}$ by choosing a sector partition as in Figure \[fig:g1v\][[*(a)*]{}]{} around each vertex, drawing directional derivatives on the sector boundaries, and copying relations between derivative vectors to the values ${{\ensuremath{\beta}}}_k(0)$, ${{\ensuremath{\gamma}}}_k(0)$ to be interpolated to a global $G^1$ gluing structure. $$\begin{picture}(160,144)(-72,-72) \put(0,0){\line(1,0){71}} \put(0,0){\line(0,1){71}} \put(0,0){\line(-2,1){64}} \put(0,0){\line(-3,-2){60}} \put(0,0){\line(1,-4){17}} \thicklines \put(0,0){\vector(1,0){50}} \put(0,0){\vector(0,1){50}} \put(0,0){\vector(-2,1){40}} \put(0,0){\vector(-3,-2){38}} \put(0,0){\vector(1,-4){11}} \put(2,6){$P_n$} \put(-30,46){$\widehat{{\Omega}}_1$} \put(34,52){$\widehat{{\Omega}}_n,{\Omega}_n$} \put(40,-40){$\widehat{{\Omega}}_{n-1}$} \put(-54,-5){$\widehat{{\Omega}}_2$} \put(-22,-42){$\cdots$} \put(-70,-70){{{\it (a)}}} \put(2,33){$\vb_0\!=\!\vb_{\!n}$} \put(-29,17){$\vb_1$} \put(31,-8){$\vb_{\!n-1}$} \end{picture} \qquad\;\; \begin{picture}(160,150)(-75,-75) \put(0,0){\line(1,0){71}} \put(0,0){\line(0,1){71}} \put(0,0){\line(-3,2){60}} \put(0,0){\line(-3,-1){68}} \put(0,0){\line(-1,-4){17}} \thicklines \put(0,0){\vector(1,0){50}} \put(0,0){\vector(0,1){50}} \put(0,0){\vector(-3,2){40}} \put(0,0){\vector(-3,-1){38}} \put(0,0){\vector(-1,-4){11}} \put(2,5){$P_1$} \put(-34,50){$\widehat{{\Omega}}_2$} \put(32,40){$\widehat{{\Omega}}_1,{\Omega}_1$} \put(-38,-41){$\widehat{{\Omega}}_n$} \put(-55,2){$\vdots$} \put(-70,-70){{{\it (b)}}} \put(3,33){$\vb_1$} \put(-4,-32){$\vb_n$} \put(31,-8){$\vb_0$} \end{picture}$$ \[ex:hahnsym\] A simple way to choose the relations between derivatives along polygonal edges at the vertex ${{\ensuremath{\mathcal{P}}}}_0=\{P_1,\ldots,P_n\}$ is to choose $n$ vectors of the same length, and evenly space them at the angle $2\pi/n$. The gluing data then has $$\label{eq:hsym} {{\ensuremath{\beta}}}_k(0) = 2\cos{\small \frac{2\pi}n}, \qquad {{\ensuremath{\gamma}}}_k(0)=-1$$ for all $k\in N$. In the context of gluing rectangles, Hahn [@Hahn87 §8] proposed to choose this symmetric $G^1$ (or even $G^r$) gluing structure at all vertices of a linear polygonal surface ${{\ensuremath{\mathcal{M}}}}\supset{{\ensuremath{\mathcal{M}}}}_0^*$, and straightforwardly interpolate the restricted values ${{\ensuremath{\beta}}}_k(0)$. In particular, if our $\tau_k\sim\tau'_{k-1}$ connects ${{\ensuremath{\mathcal{P}}}}_0$ to an interior vertex on ${{\ensuremath{\mathcal{M}}}}$ of valency $m$, Hahn’s proposal is to take ${{\ensuremath{\gamma}}}_k(u_1)=-1$ and let ${{\ensuremath{\beta}}}_k(u_1)\in C^r(\tau_1)$ interpolate $${{\ensuremath{\beta}}}_k(0)= 2\cos{\small \frac{2\pi}n}, \qquad {{\ensuremath{\beta}}}_k(1)= -2\cos{\small \frac{2\pi}m},$$ with all derivatives of order ${\leqslant}r$ at both end-points set to zero. The minus sign in ${{\ensuremath{\beta}}}_k(1)$ appears because the transformation of standard coordinates and derivatives is $$\label{eq:rectuv} (u_k,v_k)\mapsto (1-u_k,v_k), \qquad \left( \frac{\partial}{\partial u_k},\frac{\partial}{\partial v_k} \right) \mapsto \left( -\frac{\partial}{\partial u_k}, \, \frac{\partial}{\partial v_k}\right).$$ For comparison, if ${\Omega}_k$ is a triangle (and ${\Omega}_{k-1}$ is a rectangle) then the change of standard coordinates and derivatives is (\[eq:trcoor\])–(\[eq:reders\]). 
Then we have to interpolate to $${{\ensuremath{\beta}}}_k(1)= 1-2\cos{\small \frac{2\pi}m}.$$ If ${\Omega}_{k-1}$ is a triangle as well, then we have to transform $(v_{k-1},u_{k-1})$ by (\[eq:trcoor\])–(\[eq:reders\]) also. The interpolation is then to $${{\ensuremath{\beta}}}_k(1)= 2-2\cos{\small \frac{2\pi}m},$$ like in Example \[ex:joining\]. In the context of $G^1$ gluing, linear interpolation of the requisite values ${{\ensuremath{\beta}}}_k(0)$, ${{\ensuremath{\beta}}}_k(1)$ of the symmetric gluing around interior vertices (of a linear polygonal surface ${{\ensuremath{\mathcal{M}}}}$) is often used in CAGD. Let us refer to this kind of gluing data as [*linear Hahn gluing*]{}. \[def:crossing\] Suppose that $n=4$ polygons are being glued around the common vertex ${{\ensuremath{\mathcal{P}}}}_0\in{{\ensuremath{\mathcal{M}}}}_0^*$. The vertex is called [*a crossing vertex*]{} if all four edges $\tau_1\sim\tau'_4$, $\tau_2\sim\tau'_1$, $\tau_3\sim\tau'_2$, $\tau_4\sim\tau'_3$ are joining edges (respectively) at $\{P_1,P_4\}$, $\{P_1,P_2\}$, $\{P_2,P_3\}$, $\{P_3,P_4\}$, as in Definition \[def:join\]. Since we use standard coordinate systems, the characterizing condition is ${{\ensuremath{\beta}}}_1(0)={{\ensuremath{\beta}}}_2(0)={{\ensuremath{\beta}}}_3(0)={{\ensuremath{\beta}}}_4(0)=0$. Then condition (\[eq:tsbp1\]) leads to $$\label{eq:qbetas} {{\ensuremath{\gamma}}}_1(0){{\ensuremath{\gamma}}}_3(0)=1,\qquad {{\ensuremath{\gamma}}}_2(0){{\ensuremath{\gamma}}}_4(0)=1.$$ The special case ${{\ensuremath{\gamma}}}_1(0)={{\ensuremath{\gamma}}}_2(0)={{\ensuremath{\gamma}}}_3(0)={{\ensuremath{\gamma}}}_4(0)=-1$ gives Hahn’s symmetric gluing (\[eq:hsym\]). \[rm:fulltg\] If conditions (\[eq:jetcomp\]), (\[eq:tsbp1\]) are not satisfied, there will be directional derivatives in $T_{P_{n}}$ that always evaluate the component $g_n$ of any $G^1$ function of Definition \[def:g1v\] to zero. This is clearly the case for the directions corresponding to the eigenvalues $\neq 1$ of the matrix product in (\[eq:jetbp1\]). If the matrix product is $1\ 1\choose 0\ 1$, then the eigenvector direction will give zero derivatives. Then we do not have a “full” tangent space, that is, enough functionals on the Taylor coefficients. There should be two linearly independent derivatives at any vertex of our polygonal surfaces acting non-trivially on [considered]{} spaces of $G^1$ functions. The next subsection formalizes an additional condition caused by this guiding principle, generalizing a relevant obstruction in [@Peters2010]. Restriction on crossing vertices {#sec:crossing} -------------------------------- It may seem that linear Hahn gluing always gives spline spaces $G^r({{\ensuremath{\mathcal{M}}}})$ that allow smooth realizations of the surface pieces such as ${{\ensuremath{\mathcal{M}}}}^*_0\subset{{\ensuremath{\mathcal{M}}}}$ around their common vertices. Rather unexpectedly, Peters and Fan [@Peters2010] noticed that even if the gluing data satisfies (\[eq:jetbp1\]) there are feasibility issues of smooth realization around crossing vertices. Their context was linear Hahn gluing of rectangles. The noticed necessary restriction appears to be topological, but we generalize it to a local obstruction. \[def:balance\] Let ${{\ensuremath{\mathcal{P}}}}=\{P_1,P_2,P_3,P_4\}$ denote a crossing vertex of a polygonal surface ${{\ensuremath{\mathcal{M}}}}$. 
A pair of opposite edges $\tau^*_1$, $\tau^*_2$ at ${{\ensuremath{\mathcal{P}}}}$ is called [*balanced*]{} if (at least) one of the following conditions holds: - the other endpoints of $\tau^*_1$, $\tau^*_2$ are vertices in ${{\ensuremath{\mathcal{M}}}}$ of the same order; - the other endpoint of $\tau^*_1$ or $\tau^*_2$ is on the boundary of ${{\ensuremath{\mathcal{M}}}}$. The crossing vertex ${{\ensuremath{\mathcal{P}}}}$ is called [*balanced*]{} if both pairs of opposite edges are balanced. The observation in [@Peters2010] is this: if ${{\ensuremath{\mathcal{M}}}}$ is a linear polygonal surface of rectangles, and the linear Hahn gluing is used, then a smooth realization of ${{\ensuremath{\mathcal{M}}}}$ by polynomial splines exits only if all its crossing vertices are balanced. It turns out that the obstruction is essentially local after all. It arrises because of differentiability of the gluing data and assumed polynomial restrictions. Here is a generalization of the Peters–Fan obstruction, in the local setting of §\[sec:g1vertex\]. \[cond:comp2\] Let ${{\ensuremath{\mathcal{M}}}}^*_0$ denote a polygonal surface as in §$\ref{sec:g1vertex}$ with $n=4$. Suppose that the interior vertex ${{\ensuremath{\mathcal{P}}}}_0$ is a crossing vertex, and that the gluing data $({{\ensuremath{\beta}}}_k,{{\ensuremath{\gamma}}}_k)$ for $k\in N=\{1,2,3,4\}$ are differentiable functions on the edges $\tau_k$. Let $G^1_2({{\ensuremath{\mathcal{M}}}}^*_0)$ denote the linear space of $G^1$ functions $(g_1,g_2,g_3,g_4)$ on ${{\ensuremath{\mathcal{M}}}}_0^*$ such that $g_k\in C^2({\Omega}_k)$ for $k\in N$. A smooth realization of ${{\ensuremath{\mathcal{M}}}}^*_0$ by functions in $G^1_2({{\ensuremath{\mathcal{M}}}}^*_0)$ exists if and only if $$\begin{aligned} \label{eq:ddv1} {{\ensuremath{\beta}}}'_1(0)+\frac{{{\ensuremath{\gamma}}}'_2(0)}{{{\ensuremath{\gamma}}}_2(0)} &=& -{{\ensuremath{\gamma}}}_1(0) \left({{\ensuremath{\beta}}}'_3(0)+\frac{{{\ensuremath{\gamma}}}'_4(0)}{{{\ensuremath{\gamma}}}_4(0)}\right),\\ \label{eq:ddv2} {{\ensuremath{\beta}}}'_2(0)+\frac{{{\ensuremath{\gamma}}}'_3(0)}{{{\ensuremath{\gamma}}}_3(0)} &=& -{{\ensuremath{\gamma}}}_2(0) \left({{\ensuremath{\beta}}}'_4(0)+\frac{{{\ensuremath{\gamma}}}'_1(0)}{{{\ensuremath{\gamma}}}_1(0)}\right). \end{aligned}$$ We differentiate (\[eq:tbiso\]) with respect to $u_1$ and adjust the index notation, to get this transformation of the mixed derivative for the presumed $g_k$: $$\frac{\partial^2}{\partial u_k\partial v_k} \mapsto {{\ensuremath{\beta}}}'(u_k)\frac{\partial}{\partial u_{k-1}} +{{\ensuremath{\beta}}}(u_k)\frac{\partial^2}{\partial u_{k-1}^2} +{{\ensuremath{\gamma}}}'(u_k)\frac{\partial}{\partial v_{k-1}} +{{\ensuremath{\gamma}}}(u_k)\frac{\partial^2}{\partial u_{k-1}\partial v_{k-1}}.$$ Transformation (\[eq:tgp\]) of derivatives at $u_1=0$ is extended to $$\label{eq:trE3} \left( \begin{array}{c} \partial/\partial u_{k}\\ \partial/\partial v_{k} \\ \!\partial^2/\partial u_{k}\partial v_{k}\!\! \end{array} \right) =\left( \begin{array}{ccc} 0 & 1 & 0 \\ {{\ensuremath{\gamma}}}_k(0) & 0 & 0 \\ {{\ensuremath{\gamma}}}'_k(0) & {{\ensuremath{\beta}}}'_k(0) & {{\ensuremath{\gamma}}}_k(0) \end{array} \right) \left( \begin{array}{c} \partial/\partial u_{k-1}\\ \partial/\partial v_{k-1} \\ \!\partial^2\!/\partial u_{k-1}\partial v_{k-1}\!\!\! 
\end{array} \right) \!.$$ Following (\[eq:tsbp1\]) and keeping in mind (\[eq:qbetas\]), we compute the matrix product $$\left( \begin{array}{ccc} 0 & 1 & 0 \\ {{\ensuremath{\gamma}}}_4(0) & 0 & 0 \\ {{\ensuremath{\gamma}}}'_4(0) & {{\ensuremath{\beta}}}'_4(0) & {{\ensuremath{\gamma}}}_4(0) \end{array} \right) \cdots \left( \begin{array}{ccc} 0 & 1 & 0 \\ {{\ensuremath{\gamma}}}_1(0) & 0 & 0 \\ {{\ensuremath{\gamma}}}'_1(0) & {{\ensuremath{\beta}}}'_1(0) & {{\ensuremath{\gamma}}}_1(0) \end{array} \right) =\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ H_1 & H_2 & 1 \end{array} \right) \,$$ with $$\begin{aligned} H_1 & = {{\ensuremath{\beta}}}_4'(0)+{{\ensuremath{\gamma}}}_3(0){{\ensuremath{\gamma}}}'_1(0)+{{\ensuremath{\gamma}}}_4(0)\big({{\ensuremath{\beta}}}_2'(0)+{{\ensuremath{\gamma}}}_1(0){{\ensuremath{\gamma}}}'_3(0)\big),\\ H_2 & = {{\ensuremath{\beta}}}_3'(0)+{{\ensuremath{\gamma}}}_2(0){{\ensuremath{\gamma}}}'_4(0)+{{\ensuremath{\gamma}}}_3(0)\big({{\ensuremath{\beta}}}_1'(0)+{{\ensuremath{\gamma}}}_4(0){{\ensuremath{\gamma}}}'_2(0)\big).\end{aligned}$$ Since the derivative $\partial^2g_n/\partial u_{n}\partial v_{n}$ is well defined, we have $$\label{eq:singproof} \left( H_1\frac{\partial}{\partial u_{n}}+H_2\frac{\partial}{\partial v_{n}}\right) g_n=0.$$ If the differential operator is not zero, it gives a directional derivative (in terms of Figure \[fig:g1v\] [[*(a)*]{}]{}, towards the interior of one of the polygons) that annihilates any function in $G^1_2({{\ensuremath{\mathcal{M}}}}^*_0)$. Any realization of ${{\ensuremath{\mathcal{M}}}}^*_0$ would be singular at ${{\ensuremath{\mathcal{P}}}}_0$ in that direction. Therefore we must have $H_1=H_2=0$, leading to (\[eq:ddv1\])–(\[eq:ddv2\]) after applying (\[eq:qbetas\]). In the Peters–Fan setting [@Peters2010], we have all ${{\ensuremath{\gamma}}}_k(0)=-1$, ${{\ensuremath{\gamma}}}_k'(0)=0$. The restrictions (\[eq:ddv1\])–(\[eq:ddv2\]) are then simply $$\label{eq:quadrsym} {{\ensuremath{\beta}}}'_1(0)={{\ensuremath{\beta}}}'_3(0), \qquad {{\ensuremath{\beta}}}'_2(0)={{\ensuremath{\beta}}}'_4(0).$$ With the linear Hahn gluing of rectangles, the derivatives ${{\ensuremath{\beta}}}'_k(0)$ are determined by the order of the neighboring vertices. The mentioned balancing condition follows. \[ex:pfcontra\] This example shows that a smooth realization may exist with the equations (\[eq:ddv1\])–(\[eq:ddv2\]) not fulfilled, if the $C^2$-differentiability condition is dropped. Let us assume that the four polygons of ${{\ensuremath{\mathcal{M}}}}^*_0$ are the rectangles $\Omega_1=[0,1]^2$, $\Omega_2=[-1,0]\times [0,1]$, $\Omega_3=[-1,0]^2$, $\Omega_4=[0,1]\times [-1,0]$, and set $$\begin{aligned} \textstyle {{\ensuremath{\beta}}}_1(u_1)=\frac12 u_1, \hspace{25pt} & \quad {{\ensuremath{\beta}}}_2(u_2)={{\ensuremath{\beta}}}_3(u_3)={{\ensuremath{\beta}}}_4(u_4)=0,\\ \textstyle {{\ensuremath{\gamma}}}_1(u_1)=-1+\frac12 u_1, & \quad {{\ensuremath{\gamma}}}_2(u_2)={{\ensuremath{\gamma}}}_3(u_3)={{\ensuremath{\gamma}}}_4(u_4)=-1.\end{aligned}$$ Conditions (\[eq:ddv1\])–(\[eq:ddv2\]) are not satisfied. Using the global coordinates $x=u_1=-v_2=-u_3=v_4$, $y=v_1=u_2=-v_3=-u_4$, consider these functions on ${{\ensuremath{\mathcal{M}}}}^*_0$: $$\begin{aligned} \textstyle G_1 = \big(x-\frac14 x^2,x,x,x-\frac14 h(x,y)\big),\qquad G_2 = \big(y+\frac14 x^2,y,y,y+\frac14 h(x,y)\big),\end{aligned}$$ with $h(x,y)=(x+y)^2$ for $x+y{\geqslant}0$ and $h(x,y)=0$ for $x+y{\leqslant}0$. We have $h\in C^1({{\ensuremath{\mathbb{R}}}}^2)$ and $G_1,G_2\in G^1({{\ensuremath{\mathcal{M}}}}^*_0)$. 
The functions $(G_1,G_2,1)$ define a smooth embedding of ${{\ensuremath{\mathcal{M}}}}^*_0$ into ${{\ensuremath{\mathbb{R}}}}^3$. Lemma \[cond:comp2\] does not apply because the restrictions of $G_1,G_2$ to $\Omega_4$ are not $C^2$ functions, so $G_1,G_2\notin G^1_2({{\ensuremath{\mathcal{M}}}}^*_0)$. On the level of polynomial specializations of $G_1,G_2$, the common vertex is not a crossing vertex as $\Omega_4$ is split along $u_4=v_4$ for them. Topological restrictions {#sec:topology} ------------------------ Degenerations of geometrically continuous gluing reveal subtle differences between the differential geometry setting and CAGD objectives. In particular, condition [[*(A2)*]{}]{} is not reflected in the transformations (\[eq:tbiso\]), (\[eq:tgp\]) and constraints (\[eq:grcjett\]), (\[eq:tsbp1\]). As we mentioned before Proposition \[def:g1vf\], condition [[*(A2)*]{}]{} leads to the requirement that ${{\ensuremath{\gamma}}}(u_1)$ in (\[eq:jetb1\]), (\[eq:tbiso\]) must be a negative function on $\tau_1$. The particular condition ${{\ensuremath{\gamma}}}(u_1)<0$ depends on our consistent use of positive coordinates on the polygons. In loose intuitive terms, the coordinate function $v_1$ on ${\Omega}_1$ [*should be*]{} negative on ${\Omega}^0_2$, so it [*should “project"*]{} to $v_2$ with a negative coefficient. Equivalently, the standard derivatives such as $\partial/\partial v_1$, $\partial/\partial v_2$ always give transversal vector fields pointing towards the interior sides, in contrast to Proposition \[def:g1vf\]. If one takes ${{\ensuremath{\gamma}}}(u_1)>0$ then the implied transition map $\psi_0:U_1\to U_2$ identifies interior points in $\Omega_1\cup U_1$ and $\Omega_2\cup U_2$ of the two polygons of §\[sec:edgeglue\]. This is not desirable in CAGD, as we want the underlying topological surface to be as in Definition \[def:g0complex\]. One can formally define the space $G^1({{\ensuremath{\mathcal{M}}}}_0)$ with ${{\ensuremath{\gamma}}}(u_1)>0$ and use it to model surfaces with a sharp edge, so that “smooth" realizations in ${{\ensuremath{\mathbb{R}}}}^3$ would have polygonal patches meeting at the angle $0$ rather than the properly continuous angle $\pi$. But then $G^1({{\ensuremath{\mathcal{M}}}}_0)$ should not be identified with the $C^1$ functions on ${{\ensuremath{\mathcal{M}}}}_r$ of §\[sec:edgeglue\] (with $r=1$). For example, the first component of a $C^1$ function $(f_1,f_2)$ on ${{\ensuremath{\mathcal{M}}}}_r$ then determines $f_2$ uniquely on $\Omega_2\cup U_2$. Most strikingly, the algebraic conditions (\[eq:grcjett\])–(\[eq:tsbp1\]) of geometric continuity around vertices allow the tangent sectors in Figure \[fig:g1v\][[*(a)*]{}]{} to overlap in a “winding up” fashion, beyond the angle $2\pi$. If this happens, a realization of ${{\ensuremath{\mathcal{M}}}}^*_0$ (of §\[sec:g1vertex\]) in ${{\ensuremath{\mathbb{R}}}}^3$ by $G^1$ functions will have the interior vertex ${{\ensuremath{\mathcal{P}}}}_0$ as a branching point, with the polygonal patches winding up and forming a self-intersecting surface in ${{\ensuremath{\mathbb{R}}}}^3$ around the image of ${{\ensuremath{\mathcal{P}}}}_0$. This “winding up” degeneracy can even happen at a boundary vertex (of a large polygonal surface ${{\ensuremath{\mathcal{M}}}}$), where a sequence of polygons $\Omega_1,\ldots,\Omega_n$ is glued around a common vertex $\{P_1,\ldots,P_n\}$ as in Figure \[fig:g1v\][[*(b)*]{}]{}, without $\lambda_1:\tau_1\to\tau'_n$. 
All tangent spaces $T_{P_k}$ can be transformed to $T_{P_1}$ by following the partial matrix products $$\label{eq:jetbpk2} \left( \begin{array}{cc} 0 & 1 \\ {{\ensuremath{\gamma}}}_k(0) & {{\ensuremath{\beta}}}_k(0) \end{array} \right) \,\cdots\, \left( \begin{array}{cc} 0 & 1 \\ {{\ensuremath{\gamma}}}_2(0) & {{\ensuremath{\beta}}}_2(0) \end{array} \right) \qquad \mbox{with } k\in \{2,\ldots,n\}$$ as in (\[eq:jetbpk\]), and draw a similar set of sectors in $T_{P_1}$. Normally, the union of sectors should be a proper subset of ${{\ensuremath{\mathbb{R}}}}^2$, but the algebraic conditions do not prevent them from overlapping beyond the angle $2\pi$. For both interior and boundary vertices, it is enough to require that no other sector intersects an initial sector $\widehat{\Omega}_1\subset T_{P_1}$ or $\widehat{\Omega}_n\subset T_{P_n}$. By an affine linear transformation, we can have $(1\ 0)$ and $(0\ 1)$ as the generating vectors of the initial sector. Then the initial sector coincides with the first quadrant. The condition of not intersecting the first quadrant is formulated as follows. \[th:sectors\] Let $M$ denote a $2\times 2$ matrix with real entries. Let ${\Omega}$ denote a convex sector in ${{\ensuremath{\mathbb{R}}}}^2$ from the origin, generated by the two row vectors of $M$. Then ${\Omega}$ intersects the first quadrant if and only if there are two positive entries in a row of $M$, or on the main diagonal. Let ${\Omega}_0$ denote the first quadrant. If ${\Omega}$ and ${\Omega}_0$ intersect, then either (at least) one of the boundaries of ${\Omega}$ lies inside ${\Omega}_0$, or ${\Omega}_0$ lies completely inside ${\Omega}$. In the former case, we have a row of positive entries. In the latter case, the diagonal entries are positive by the convexity assumption. Here is a formulation of restrictions to prevent the “winding up" tangent spaces and locally self-intersecting realizations in ${{\ensuremath{\mathbb{R}}}}^3$: - For an interior vertex, we require that every partial matrix product (\[eq:jetbpk\]) has a non-positive entry on each row and on the main diagonal. - For a boundary vertex of valency $n$, we require that every matrix product (\[eq:jetbpk2\]) has a non-positive entry on each row and on the main diagonal, and the bottom row of the product with $k=n$ should not be $(1\ 0)$. In some applications, one may wish the stronger condition on boundary vertices that the total angle of the tangent sectors is less than $\pi$. Then the matrix products in (\[eq:jetbpk2\]) should not have negative entries in the second column. As we see, the unstated CAGD-oriented requirement to fit the polygons tightly leads to significant deviations from the differential geometry setting. Even the common sense requirement [[*(iii)*]{}]{} of Definition \[def:g0complex\] can be viewed in this light. Conditions of this type are absent in differential geometry, where all gluing is determined by the equivalence relation (of points on the coordinate open sets in ${{\ensuremath{\mathbb{R}}}}^2$) defined by the transition maps. Only with [[*(B1)*]{}]{}, [[*(B2)*]{}]{} satisfied do we have the intended topological space ${{\ensuremath{\mathcal{M}}}}$ of the corresponding differential surface. Only then can we talk sensibly about “full" tangent spaces at the common vertices such as ${{\ensuremath{\mathcal{P}}}}$ in ${{\ensuremath{\mathcal{M}}}}_0$, defined by the equivalence isomorphisms $T_P\to T_{\lambda(P)}$. 
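The criterion of Lemma \[th:sectors\] and the “winding up” restrictions above are straightforward to automate. Here is a small NumPy sketch (ours, purely illustrative) that tests whether the sector generated by the rows of a $2\times2$ matrix meets the open first quadrant, and applies the test to the partial products (\[eq:jetbpk\]); the sample values describe a symmetric gluing with $n=4$.

```python
# Sketch of the test in Lemma [th:sectors] applied to the partial matrix
# products (eq:jetbpk).  Sample data: n = 4, beta_k(0) = 0, gamma_k(0) = -1.
import numpy as np

def sector_meets_first_quadrant(M):
    """True iff a row of M, or its main diagonal, has two positive entries."""
    M = np.asarray(M, dtype=float)
    row_positive = any((M[i] > 0).all() for i in range(2))
    diag_positive = M[0, 0] > 0 and M[1, 1] > 0
    return row_positive or diag_positive

def partial_products(betas, gammas):
    """The partial products (eq:jetbpk) for k = 1, ..., n-1."""
    M, out = np.eye(2), []
    for b, g in zip(betas, gammas):
        M = np.array([[0.0, 1.0], [g, b]]) @ M
        out.append(M.copy())
    return out[:-1]

betas, gammas = [0.0, 0.0, 0.0, 0.0], [-1.0, -1.0, -1.0, -1.0]
print([sector_meets_first_quadrant(P) for P in partial_products(betas, gammas)])
# [False, False, False]: no sector returns into the initial one, no winding up
```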
The topological constraints could be dropped in some applications, for example, when modeling analytical surfaces with branching points, or surfaces with sharp wing-like “interior" edges, or with winding-up boundary. In these specific applications, one should extend Definition \[def:crossing\], modify Theorem \[cond:comp2\] so as to allow winding up of $8, 12, \ldots$ polygons along joining edges, or a combination (with any even number of polygons) of winding up and “reflection" back from contra-[[*(A2)*]{}]{} sharp edges. Apart from this consideration, the topological constraints do not affect our algebraic dimension count in §\[sec:dformula\]. Geometrically continuous splines {#sec:gsplines} ================================ We are ready to define a $G^1$ version of polygonal surfaces of Definition \[def:g0complex\]. The following data structure should be useful in most applications. Allowing T-splines [@He:2006] in an elegant way would be an important adjustment. \[def:g1complex\] Let ${K}_0,{K}_1$ denote finite sets. A [*(geometrically continuous) $G^1$ polygonal surface*]{} is a collection $(\{{\Omega}_k\}_{k\in {K}_0},\{(\lambda_k,\Theta_k)\}_{k\in {K}_1})$ such that 1. The pair $(\{{\Omega}_k\}_{k\in {K}_0},\{(\lambda_k)\}_{k\in {K}_1})$ is a linear $G^0$ polygonal surface of Definition \[def:g0complex\]. 2. Each $\Theta_k$ is an isomorphism of tangent (or jet) bundles on the polygonal edges glued by $\lambda_k:\tau_{k}\to{\tau}'_{k}$, compatible with $\lambda_k$ and condition [[*(A2)*]{}]{}. Concretely, $\Theta_k$ can be given by one of the following objects (with adaptation of their local context in §\[sec:geoglue\]): - a transition map satisfying [[*(A1)*]{}]{} and [[*(A2)*]{}]{}; - a jet bundle isomorphism as in (\[eq:jetb1\]), with ${{\ensuremath{\gamma}}}(u_1)<0$; - a tangent bundle isomorphism as in (\[eq:tbiso\]), with ${{\ensuremath{\gamma}}}(u_1)<0$; - an identification of transversal vector fields as in Proposition \[def:g1vf\]. 3. At each interior vertex, the following restrictions must be satisfied: - one of the equivalent conditions (\[eq:grcjet\]), (\[eq:grcjett\]), (\[eq:jetbp1\]), (\[eq:tsbp1\]); - conditions (\[eq:ddv1\])–(\[eq:ddv2\]); - condition [[*(B1)*]{}]{}. 4. At each boundary vertex, condition [[*(B2)*]{}]{} must be satisfied. A [*$G^1$ function*]{} on the $G^1$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$ of Definition \[def:g1complex\] is a collection $(g_\ell)_{\ell\in {K}_0}$ with $g_\ell\in C^1({\Omega}_\ell)$, such that for any $k\in {K}_1$ and the defined gluing $(\lambda_k:\tau_1\to\tau_2,\Theta_k)$ of polygonal edges $\tau_1\subset\Omega_i$, $\tau_2\subset\Omega_j$ (with $i,j\in {K}_0$) the functions $g_i,g_j$ are related as $$g_i|_{\tau_1}=g_j\circ \lambda_k, \qquad (D_1g_i)|_{\tau_1} = (D_2g_j) \circ \lambda_k.$$ Here we assume the gluing data $\Theta_k$ is given in terms of transversal vector fields $D_1:\tau_1\to{{\ensuremath{\mathbb{R}}}}^2$, $D_2:\tau_2\to{{\ensuremath{\mathbb{R}}}}^2$ as in Proposition \[def:g1vf\]. Alternatively, a [*$G^1$ function*]{} on ${{\ensuremath{\mathcal{M}}}}$ is a continuous function on the underlying $G^0$ polygonal surface, such that the corresponding conditions (\[eq:g1func0\]) hold on all interior edges $\tau_1\sim\tau_2$, where $(u_1,v_1)$, $(u_2,v_2)$ are coordinate systems attached to $\tau_1,\tau_2$ and matched by $\lambda_k:\tau_1\to\tau_2$, and ${{\ensuremath{\beta}}}(u_1),{{\ensuremath{\gamma}}}(u_1)$ represent the gluing data in those coordinates. 
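In software, the data of Definition \[def:g1complex\] can be stored quite directly. The following Python sketch shows one possible layout; the class and field names are our own invention and only indicate how the polygons, the identifications $\lambda_k$ and the pairs $({{\ensuremath{\beta}}}_k,{{\ensuremath{\gamma}}}_k)$ of (\[eq:jetb1\]) might be kept together.

```python
# A possible (hypothetical) data layout for a G^1 polygonal surface:
# polygons, G^0 edge identifications, and gluing data (beta_k, gamma_k).
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class EdgeGluing:
    polygons: Tuple[int, int]        # indices of the two glued polygons
    edges: Tuple[int, int]           # local edge indices, fixing lambda_k
    beta: Callable[[float], float]   # beta_k(u) of (eq:jetb1)
    gamma: Callable[[float], float]  # gamma_k(u) of (eq:jetb1), gamma_k(u) < 0

@dataclass
class G1PolygonalSurface:
    polygons: List[List[Tuple[float, float]]]      # vertex lists in R^2
    gluings: List[EdgeGluing] = field(default_factory=list)

# two triangles glued along one edge with the data of Example [ex:joining]
surface = G1PolygonalSurface(
    polygons=[[(0, 0), (1, 0), (0, 1)], [(0, 0), (1, 0), (0, 1)]],
    gluings=[EdgeGluing((0, 1), (0, 0), beta=lambda u: 2*u, gamma=lambda u: -1.0)],
)
```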
Note that the compatibility conditions of §\[sec:g1vertex\], §\[sec:crossing\] apply to the gluing data, not to the splines. \[def:g1splines\] A $G^1$ polygonal surface is called [*rational*]{} if for any $k\in{K}_1$ the gluing data $\Theta_k$ is defined by rational functions ${{\ensuremath{\beta}}}_k(u_k),{{\ensuremath{\gamma}}}_k(u_k)\in {{\ensuremath{\mathbb{R}}}}(u_k)$ in (\[eq:jetb1\]) or (\[eq:tbiso\]) with respect to some (or any) coordinate systems $(u_k,v_k)$ attached to the glued edges. In particular, we have $$\label{eq:edgeabc} {{\ensuremath{\beta}}}_k(u_k)=\frac{b_k(u_k)}{a_k(u_k)}, \qquad {{\ensuremath{\gamma}}}_k(u_k)=\frac{c_k(u_k)}{a_k(u_k)}$$ for some polynomials $a_k(u_k),b_k(u_k),c_k(u_k)\in{{\ensuremath{\mathbb{R}}}}[u_k]$ without a common factor. A $G^1$ function on a rational $G^1$ polygonal surface is a [*(polynomial) spline*]{} if all restrictions $g_\ell$ ($\ell\in {K}_0$) to the polygons are polynomial functions. \[rm:parametric\] The special case of constant ${{\ensuremath{\beta}}}_k(u_k),{{\ensuremath{\gamma}}}_k(u_k)$ is equivalent to [*$C^1$ parametric continuity*]{}, as termed in CAGD. In the context of §\[sec:edgeglue\], formula (\[eq:jetb1\]) then gives an affine-linear map that transforms ${\Omega}_1$ to a polygon ${\Omega}^*_1$ next to ${\Omega}_2$, with the edge $\tau_1$ mapped onto $\tau_2$. The $G^1$ functions on ${{\ensuremath{\mathcal{M}}}}_0$ then correspond to the $C^1$ functions on ${\Omega}^*_1\cup{\Omega}_2$. Definitions \[def:g0complex\], \[def:g1complex\] allow surfaces of arbitrary topology, including non-orientable surfaces. To introduce an orientation, one can fix an ordering on the standard coordinate systems (of Definition \[def:stcoor\]) on each edge and keep the orderings of coordinates compatible by properly relating them on the edges of each common polygon and around every common vertex. If ${{\ensuremath{\mathcal{M}}}}$ is an orientable surface with boundary, an orientation of ${{\ensuremath{\mathcal{M}}}}$ gives orientations of the boundary components as well. In [@Peters:2008 Chapter 3], orientation and [*rigid embeddings*]{} of polygons into ${{\ensuremath{\mathbb{R}}}}^2$ allow to resolve the topological restriction [[*(A2)*]{}]{} automatically. By itself, orientation is not relevant to the topological issues of §\[sec:topology\]. In particular, all triangles (or rectangles) are linearly equivalent in any orientation via the barycentric (or tensor product) coordinates presented in §\[sec:polyfaces\] here. As discused in §\[sec:topology\], the negative sign of ${{\ensuremath{\gamma}}}_k(u)$ is determined by our choice of positive coordinates on each polygon. Definitions \[def:g0complex\], \[def:g1complex\] allow edges to connect a vertex of ${{\ensuremath{\mathcal{M}}}}$ to itself, via sequences of glued polygonal vertices. For example, one can define $G^1$ surfaces without a boundary from a single rectangle, by identifying their opposite edges in classical ways [@cltopology] to form a torus, a Klein bottle or a projective space. Their $G^1$ gluing data can be even defined as in Remark \[rm:parametric\], utilizing parametric continuity along both obtained interior edges up to affine translations and reflections. Similar surfaces with a single vertex and made up of two triangles are presented in [@raimundas Examples 6.16–6.18, 6.23, 6.31]. The edges in the simple polygonal surfaces ${{\ensuremath{\mathcal{M}}}}_0$, ${{\ensuremath{\mathcal{M}}}}^*_0$ of §\[sec:edgeglue\], §\[sec:g1vertex\] connect different topological vertices. 
\[ex:bercovier\] In [@Bercovier], the $C^1$ structure of a mesh of quadrilaterals in ${{\ensuremath{\mathbb{R}}}}^2$ is translated to a $G^1$ gluing structure on a set of rectangles (or copies of the square $[0,1]^2$) via bilinear parametrizations of the quadrilaterals. The bilinear parametrizations are unique up to the symmetries of the tensor product coordinates of the rectangles (see §\[sec:polyfaces\] here). Given two adjacent quadrilaterals $O_1P_1Q_1R_1$, $O_1P_1Q_2R_2$, the identification of $C^1$ functions on their union with $G^1$ functions on two rectangles (glued along the pre-images of $O_1P_1$) gives rational $G^1$ gluing data with polynomials $a_k$, $b_k$, $c_k$ in (\[eq:edgeabc\]) of degree ${\leqslant}1, {\leqslant}2, {\leqslant}1$. Analysis of the explicit Bézier coefficients in [@Bercovier (29)] reveals that the polynomial $b_k$ is linear iff the [*twist vectors*]{} $\,\overrightarrow{\!O_1P_1}+\,\overrightarrow{\!Q_1R_1}$, $\,\overrightarrow{\!O_1P_1}+\,\overrightarrow{\!Q_2R_2}$ are linearly dependent. The polynomial $a_k$ is a constant iff $\,\overrightarrow{\!R_2Q_2}$ is parallel to $\,\overrightarrow{\!O_1P_1}$. In particular, we cannot have constant $a_k,c_k$ and quadratic $b_k$ in this construction. The functions ${{\ensuremath{\beta}}}_k$, ${{\ensuremath{\gamma}}}_k$ are polynomials of degree 1 iff the twist $\,\overrightarrow{\!O_1P_1}+\,\overrightarrow{\!Q_2R_2}$ is zero. If both twist vectors are zero, then ${{\ensuremath{\gamma}}}_k$ is a constant and ${{\ensuremath{\beta}}}_k$ is a linear polynomial. It is commonly agreed (in [@flex] as well) that constructions with zero twists lead to unsatisfactory spline spaces. \[eq:octahed\] The octahedral example in [@VidunasAMS] is constructed from 8 triangles, combinatorially glued in the same way as the faces of the Platonic octahedron. The polygonal surface has 12 edges and 6 vertices. Topologically, it is a compact, orientable surface of genus $0$, homeomorphic to a sphere. All vertices are crossing vertices, and the gluing data on each edge is the same symmetric gluing of Example \[ex:joining\]. A corresponding structure of a $C^1$ differential surface is given in [@VidunasAMS §4], with all transition maps (around the edges and vertices) being fractional-linear rational functions. A dimension formula for spline spaces of bounded degree is recalled in Example \[ex:platonic\] here. Generally, the rational gluing data with constant $\gamma(u)=\gamma$ and linear $\beta(u)=\lambda u+\eta$ is realized by the fractional-linear transition map $$\label{eq:frlintr} (u,v)\mapsto \left( \frac{u+\eta\,v}{1-\lambda \,v}, \frac{\gamma\,v}{1-\lambda\,v} \right).$$ In fact, any rational gluing data can be realized by a rational transition map. A corresponding transition map for (\[eq:edgeabc\]) is $$(u,v)\mapsto \left( \frac{a_k(u)\,u+\eta(u)\,v}{a_k(u)-\lambda(u)\,v}, \frac{c_k(u)\,v}{a_k(u)-\lambda(u)\,v} \right)$$ for any expression $b_k(u)=\lambda(u)\,u+\eta(u)$. One can add $O(v^2)$ terms in the numerators or the denominator. The composition of these transition maps around a vertex is the identity only if they are all fractional-linear maps as in (\[eq:frlintr\]). Constructing a polygonal surface {#rm:gconstr} -------------------------------- The initial step of constructing a $G^1$ polygonal surface is choosing the topology and incidence complex [@Ziegler00] of the underlying $G^0$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$. 
For a surface of genus $0$ with boundary, one can simply use parametric continuity of a polygonal mesh in ${{\ensuremath{\mathbb{R}}}}^2$, or start with a polygonal mesh. For a closed surface of genus $0$, one can start with a convex polyhedron in ${{\ensuremath{\mathbb{R}}}}^3$ and copy the incidence relations of its edges and facets. For surfaces of genus $g$, one can start with a polygonal subdivision of a regular $4g$-gon, copy its edge identifications and append parallel identifications of the opposite edges [@cltopology]. For non-orientable surfaces, one can similarly start with an appropriate regular polygon and append defining identifications of the opposite edges. With the polygonal $G^0$ structure defined, constructing flexible rational $G^1$ gluing data on it is not a trivial task. The main flow of the article is independent of this discussion. For most polygonal structures, we can proceed as follows: - For each inner vertex, choose a fan of tangent sectors as in Figure \[fig:g1v\][[*(a)*]{}]{}, and choose vectors $\vb_1,\ldots,\vb_n=\vb_0$ on each boundary between the sectors. Here $n$ is the valency of the vertex under consideration. For $k\in\{1,\ldots,n\}$, the linear relation between three consecutive vectors $$\label{eq:vects} \vb_{k-1} = {{\ensuremath{\beta}}}'_{k} \vb_k + {{\ensuremath{\gamma}}}'_k \vb_{k+1} \qquad \mbox{or}\qquad \vb_{k+1} = -\frac{{{\ensuremath{\beta}}}'_{k}}{{{\ensuremath{\gamma}}}'_k} \, \vb_k + \frac1{{{\ensuremath{\gamma}}}'_k} \,\vb_{k-1}$$ will determine the end values (such as ${{\ensuremath{\beta}}}_k(0)={{\ensuremath{\beta}}}'_k$, ${{\ensuremath{\gamma}}}_k(0)={{\ensuremath{\gamma}}}'_k$ or ${{\ensuremath{\beta}}}_k(0)=-{{\ensuremath{\beta}}}'_k/{{\ensuremath{\gamma}}}'_k$, ${{\ensuremath{\gamma}}}_k(0)=1/{{\ensuremath{\gamma}}}'_k$) of the eventual gluing data along the edges. The symmetric gluing of Example \[ex:hahnsym\] is a possibility. The specific end-values refer to particular standard coordinate systems attached to the polygonal vertices. The set of crossing vertices is defined by this step. (A small computational sketch of this step is given after the list below.) - For each boundary vertex, choose a partial subdivision of ${{\ensuremath{\mathbb{R}}}}^2$ into sectors around the origin as in Figure \[fig:g1v\][[*(b)*]{}]{} keeping in mind condition [[*(B2)*]{}]{}. The linear relations as in (\[eq:vects\]) complete the specification of the end values of the eventual gluing data along all interior edges. The subdivisions of Figure \[fig:g1v\][[*(b)*]{}]{} can be chosen keeping in mind the adjacent vertices, especially noting expected interpolation values at the adjacent crossing vertices. - For each interior edge, adjust the decided end-values of the respective ${{\ensuremath{\beta}}}_k$, ${{\ensuremath{\gamma}}}_k$ to a common coordinate system attached to the edge. Examples of the adjustments are given in (\[eq:trcoor\])–(\[eq:reders\]) and (\[eq:rectuv\]). Generally, the applicable transformations of standard coordinates at both end-points of $\tau_1\sim\tau_2$ have the form (for $i\in\{1,2\}$) $$(u_i,v_i)\mapsto (1-u_i-r_iv_i,q_iv_i), \qquad \mbox{with } q_i>0.$$ We have $q_i=1$ if the two adjacent edges to $\tau_i$ end up at the same “height" $v_i$ from $\tau_i$. This is automatic for rectangles and triangles. 
The general adjustment is $$\label{eq:genadj} \big({{\ensuremath{\beta}}}(u_1),{{\ensuremath{\gamma}}}(u_1)\big) \mapsto \left( \frac{r_1}{q_1}-\frac{r_2}{q_1}\, {{\ensuremath{\gamma}}}(1-u_1)-\frac{{{\ensuremath{\beta}}}(1-u_1)}{q_1}, \, \frac{q_2}{q_1} \,{{\ensuremath{\gamma}}}(1-u_1) \right).$$ - For each interior edge, interpolate the adjusted values of the ${{\ensuremath{\gamma}}}_k$-functions at their end-vertices by fractional-linear functions. Linear interpolations are possible, though generally $1/{{\ensuremath{\gamma}}}_k$ has as many reasons to be a polynomial as ${{\ensuremath{\gamma}}}_k$. - For each interior edge connecting non-crossing vertices, interpolate the adjusted values of the ${{\ensuremath{\beta}}}_k$-functions by fractional-linear functions. If ${{\ensuremath{\gamma}}}_k$ has a denominator, then the respective ${{\ensuremath{\beta}}}_k$ should have the same denominator. If there are no crossing vertices, this defines $G^1$ gluing data with linear or fractional-linear functions ${{\ensuremath{\beta}}}_k,{{\ensuremath{\gamma}}}_k$. If there are crossing vertices, it is tricky to cope with the conditions of Theorem \[cond:comp2\], especially if there are edges connecting crossing vertices. It is hard to avoid quadratic interpolation for ${{\ensuremath{\beta}}}_k$ then. By a [*crossing rim*]{} on a $G^1$ polygonal surface we mean a sequence ${{\ensuremath{\mathcal{E}}}}_0,{{\ensuremath{\mathcal{E}}}}_1,\ldots,{{\ensuremath{\mathcal{E}}}}_n$ of edges and a sequence ${{\ensuremath{\mathcal{P}}}}_1,\ldots,{{\ensuremath{\mathcal{P}}}}_n$ of crossing vertices such that ${{\ensuremath{\mathcal{E}}}}_{j-1}$, ${{\ensuremath{\mathcal{E}}}}_j$ are [*opposite*]{} edges at ${{\ensuremath{\mathcal{P}}}}_j$ (as in Definition \[def:join\]) for $j\in\{1,\ldots,n\}$. It is a [*maximal crossing rim*]{} if the edges ${{\ensuremath{\mathcal{E}}}}_0$, ${{\ensuremath{\mathcal{E}}}}_n$ both have a non-crossing end-point. It is a [*crossing equator*]{} if ${{\ensuremath{\mathcal{E}}}}_0={{\ensuremath{\mathcal{E}}}}_n$. The procedure of constructing a complete $G^1$ polygonal surface can be continued as follows: - For each maximal crossing rim, pick an edge ${{\ensuremath{\mathcal{E}}}}_j$ on it and define its ${{\ensuremath{\beta}}}_k$ by linear or fractional-linear interpolation as in [[*(C5)*]{}]{}. Then subsequently (and possibly in both directions from ${{\ensuremath{\mathcal{E}}}}_j$), choose respective ${{\ensuremath{\beta}}}_k$ on the next adjacent edge by interpolating the two adjusted end-values and a derivative value determined by (\[eq:ddv1\]). Each ${{\ensuremath{\beta}}}_k$ can be a rational function with the same denominator as the respective ${{\ensuremath{\gamma}}}_k$ and a quadratic (at worst) numerator. - For every edge on a crossing equator, choose the derivative values ${{\ensuremath{\beta}}}'_k(0)$, ${{\ensuremath{\beta}}}'_k(1)$ and construct ${{\ensuremath{\beta}}}_k$ by cubic interpolation of the numerator (while keeping the denominator as in the respective ${{\ensuremath{\gamma}}}_k$). On a crossing equator with more than one edge, we may start with a fractional-linear interpolation on one edge ${{\ensuremath{\mathcal{E}}}}_j$ as in [[*(C6)*]{}]{}, apply quadratic interpolations on the subsequent edges, and use the cubic interpolation only on the last edge before returning to ${{\ensuremath{\mathcal{E}}}}_j$. 
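Step [[*(C1)*]{}]{} amounts to solving small linear systems, as the following sketch illustrates (with our own function names and test data): given the sector boundary vectors around an interior vertex, the end values are obtained from (\[eq:vects\]), and evenly spaced vectors reproduce the symmetric values (\[eq:hsym\]).

```python
# Sketch of step (C1): solve (eq:vects) for the end values beta'_k, gamma'_k
# from the sector boundary vectors v_1, ..., v_n = v_0 (indices cyclic).
import numpy as np

def end_values(vectors):
    vs = [np.asarray(v, dtype=float) for v in vectors]
    n, out = len(vs), []
    for k in range(n):
        A = np.column_stack((vs[k], vs[(k + 1) % n]))   # columns v_k, v_{k+1}
        bk, gk = np.linalg.solve(A, vs[k - 1])          # v_{k-1} = bk*v_k + gk*v_{k+1}
        out.append((bk, gk))
    return out

n = 5
vs = [(np.cos(2*np.pi*k/n), np.sin(2*np.pi*k/n)) for k in range(n)]
print(end_values(vs))   # each pair approximately (2*cos(2*pi/5), -1), as in (eq:hsym)
```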
To analyze feasibility of at most quadratic interpolations ${{\ensuremath{\beta}}}_k$ in [[*(C7)*]{}]{}, note that a quadratic function $h(x)$ satisfies $$\label{eq:quadh} h'(0)+h'(1)=2h(1)-2h(0).$$ We have a fractional-linear ${{\ensuremath{\gamma}}}_k=c_k/a_k$ on some edge of the equator. We use the same denominator for ${{\ensuremath{\beta}}}_k$. Starting with a quadratic polynomial $h={{\ensuremath{\beta}}}_ka_k$ interpolating the right values $h(0)=0$, $h(1)=r_1a_k(1)-r_2c_k(1)$ and freely undetermined $h'(0)$, we find the value $h'(1)$ by (\[eq:quadh\]), transform it by (\[eq:genadj\]) and use (\[eq:ddv1\]) to determine the subsequent interpolation value $h'(0)$ for the next edge. Repeating this routine along the edges of the equator, we obtain a linear equation for the original undetermined $h'(0)$. The linear equation might be degenerate, leading either to a free choice of this $h'(0)$ or no possibility for completing the quadratic $G^1$ data. It is thus even unclear whether cubic $\beta_k$ on crossing equators can be avoided. A choice of quadratic ${{\ensuremath{\beta}}}_k$ is easily possible if all $q_i=1$, ${{\ensuremath{\gamma}}}_k=-1$. Then all interpolation values ${{\ensuremath{\beta}}}_k'(0)$ are transformed and related by (\[eq:ddv1\]) invariantly on all edges of an equator, hence the initial value $h'(0)$ can be chosen freely. The target ${{\ensuremath{\gamma}}}_k=-1$ restricts Step [[*(C1)*]{}]{}. We are led to solving (\[eq:tsbp1\]) with all ${{\ensuremath{\gamma}}}_i(0)=-1$. Since the determinant condition holds automatically, we get $2\times2-1=3$ equations for the values ${{\ensuremath{\beta}}}_i(0)$ at each vertex. Let us denote them by ${{\ensuremath{\beta}}}_i$ for short. If $n=3$, there is only Hahn’s symmetric choice. If $n=4$, we must have a pair of joining edges: $$\label{bt1n4} {{\ensuremath{\beta}}}_1={{\ensuremath{\beta}}}_3=0,\quad {{\ensuremath{\beta}}}_4=-{{\ensuremath{\beta}}}_2, \quad\mbox{or}\quad {{\ensuremath{\beta}}}_2={{\ensuremath{\beta}}}_4=0,\quad {{\ensuremath{\beta}}}_3=-{{\ensuremath{\beta}}}_1.$$ For $n=5$, a Gröbner basis [@Wiki] gives the equations $$\label{bt1n5} {{\ensuremath{\beta}}}_4=1-{{\ensuremath{\beta}}}_1{{\ensuremath{\beta}}}_2, \qquad {{\ensuremath{\beta}}}_5=1-{{\ensuremath{\beta}}}_2{{\ensuremath{\beta}}}_3, \qquad {{\ensuremath{\beta}}}_1+{{\ensuremath{\beta}}}_3=1+{{\ensuremath{\beta}}}_1{{\ensuremath{\beta}}}_2{{\ensuremath{\beta}}}_3.$$ The equation system for $n=6$ is $$\begin{aligned} \label{bt1n6} {{\ensuremath{\beta}}}_5={{\ensuremath{\beta}}}_1+{{\ensuremath{\beta}}}_3-{{\ensuremath{\beta}}}_1{{\ensuremath{\beta}}}_2{{\ensuremath{\beta}}}_3, \ {{\ensuremath{\beta}}}_6={{\ensuremath{\beta}}}_2+{{\ensuremath{\beta}}}_4-{{\ensuremath{\beta}}}_2{{\ensuremath{\beta}}}_3{{\ensuremath{\beta}}}_4, \ {{\ensuremath{\beta}}}_1{{\ensuremath{\beta}}}_2+{{\ensuremath{\beta}}}_1{{\ensuremath{\beta}}}_4+{{\ensuremath{\beta}}}_3{{\ensuremath{\beta}}}_4=2+{{\ensuremath{\beta}}}_1{{\ensuremath{\beta}}}_2{{\ensuremath{\beta}}}_3{{\ensuremath{\beta}}}_4. \end{aligned}$$ The solutions give convenient alternatives to (\[eq:hsym\]). The relations (\[eq:quadh\]), (\[eq:genadj\]), (\[eq:ddv1\]), (\[eq:tsbp1\]) appear to be manageable with any fixed constants $\gamma_k$, $q_i$. In Step [[*(C1)*]{}]{}, one can prescribe the constants $-\gamma_k>0$ as ratios of two numbers attached to the polygonal sides of the edge. The product of the ratios $-\gamma_k$ around each vertex must necessarily evaluate to $1$ as the determinant in (\[eq:tsbp1\]). 
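The $n=5$ relations above can be confirmed by a routine symbolic computation. In the SymPy sketch below (an illustration of ours), the first two relations of (\[bt1n5\]) are substituted into the matrix product of (\[eq:tsbp1\]) and the third relation then reduces the product to the identity.

```python
# Sketch: with gamma_i(0) = -1, substitute the relations (bt1n5) into the
# matrix product of (eq:tsbp1) for n = 5 and verify that it is the identity.
import sympy as sp

b1, b2, b3 = sp.symbols('b1 b2 b3')
b4 = 1 - b1*b2
b5 = 1 - b2*b3

M = sp.eye(2)
for bi in (b1, b2, b3, b4, b5):                 # factors k = 1, ..., 5
    M = sp.Matrix([[0, 1], [-1, bi]]) * M

# impose the remaining relation b1 + b3 = 1 + b1*b2*b3 by eliminating b3
b3_sol = sp.solve(sp.Eq(b1 + b3, 1 + b1*b2*b3), b3)[0]
print(sp.simplify(M.subs(b3, b3_sol)))          # Matrix([[1, 0], [0, 1]])
```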
It is worth mentioning here that $G^1$ gluing data can be defined by specifying two independent splines “around" each edge, via the corresponding syzygies, $M^1$-spaces of forthcoming §\[eq:edgespace\] and Lemma \[lm:syzygy\] [[*(iii)*]{}]{}. One can even choose freely two global independent splines (or more, if locally supported) and define $G^1$ glueing data to have them in the spline space. Example \[ex:pfcontra\] was constructed this way. Degrees of freedom {#sec:freedom} ================== The main mathematical result of this article is the dimension formula (in Theorem \[th:dimform\]) for spline spaces on rational $G^1$ polygonal surfacees made up of rectangles and triangles. We prepare for this result by taking a closer look at rational $G^1$ gluing along edges and around vertices. This section adds the assumption of [*rational*]{} $G^1$ gluing to the basic setting of §\[sec:edgeglue\]–§\[sec:crossing\]. Further refinement to the context of rectangles and triangles is done in §\[sec:boxdelta\]. The general structure of a spline basis in §\[sec:generate\] must be a good guidance for counting the dimension of spaces of (polynomial or rational) splines on any rational $G^1$ surface. The big example in §\[sec:poctah\] can be preliminarily read after §\[sec:vertexv\], if the Bernstein-Bézier bases of polynomials on rectangles and triangles are familiar (described here in §\[sec:polyfaces\]). Jet evaluation maps {#sec:jeteval} ------------------- We view ${{\ensuremath{\mathbb{R}}}}[u,v]$ as the space of polynomial functions on ${{\ensuremath{\mathbb{R}}}}^2$ in some coordinates $u,v$. At a polygonal vertex $P$, let $u_P,v_P$ denote linear functions such that and define the edges incident to $P$. Let $M(P)$ denote the linear space of bilinear functions in $u_P,v_P$. We naturally identify $M(P)\cong{{\ensuremath{\mathbb{R}}}}[u_P,v_P]/(u_P^2,v_P^2)$ and define the linear map $$\label{eq:wv} W_P:{{\ensuremath{\mathbb{R}}}}[u,v]\to M(P)$$ by following the quotient homomorphism of ${{\ensuremath{\mathbb{R}}}}[u_P,v_P]/(u_P^2,v_P^2)$. This map can be viewed as a partial evaluation of the jet $f\mapsto J_P^{2}f$ (or second order Taylor series) at $P$. We refer to the elements in $M(P)$ as [*$J^{1,1}$-jets*]{}. For a polygonal edge $\tau$ and a coordinate system $(u_\tau,v_\tau)$ attached to it, let $M(\tau)$ similarly denote the space of polynomials in ${{\ensuremath{\mathbb{R}}}}[u_\tau,v_\tau]$ that are linear in $v_\tau$. As a space of functions on ${{\ensuremath{\mathbb{R}}}}^2$, $M(\tau)$ does not depend on the coordinate system. Using $M(\tau)\cong {{\ensuremath{\mathbb{R}}}}[u_\tau,v_\tau]/(v_\tau^2)$, we define the linear map $$\label{eq:we} W_\tau:{{\ensuremath{\mathbb{R}}}}[u,v]\to M(\tau) $$ as the quotient homomorphism of ${{\ensuremath{\mathbb{R}}}}[u_\tau,v_\tau]/(v_\tau^2)$. This represents taking the linear in $v_1$ Taylor approximation of $f\in{{\ensuremath{\mathbb{R}}}}[u,v]$ (around the line containing $\tau$). If $P$ is an end vertex of $\tau$, let $$\label{eq:wev} W_{P,\tau}:M(\tau) \to M(P), $$ denote the specialization of $W_P$ onto $M(\tau)$. The coordinates $(u_\tau,v_\tau)$, $(u_P,v_P)$ are related by a simple linear transformation, especially when $(u_\tau,v_\tau)$ is a standard coordinate system attached to $\tau$ with $P$ at $u_\tau=0$. \[eq:thorough\] Let ${{\ensuremath{\mathcal{M}}}}$ be a $G^1$ polygonal surface. The [*support*]{} of a spline is the set of polygons where it specializes to non-zero functions. 
A spline on ${{\ensuremath{\mathcal{M}}}}$ [*thoroughly vanishes*]{} along an edge $\tau_1\sim\tau_2$ if it is mapped to $0$ by $W_{\tau_1}$ and $W_{\tau_2}$. In other words, the spline and its first order derivatives then vanish on the edge. It is enough to require vanishing in one of the spaces $W_{\tau_1}$ or $W_{\tau_2}$. A spline [*thoroughly vanishes*]{} at a vertex $\{P_1,\ldots,P_n\}$ if it is mapped to $0$ by all $W_{P_k}$ for $k\in\{1,\ldots,n\}$. Here it is not enough to require vanishing in one of the spaces, since the standard mixed derivatives $\partial^2/\partial u_i\partial v_i$ might be independent. Splines and syzygies {#eq:edgespace} -------------------- Consider $G^1$ gluing $(\lambda,\Theta_0)$ of two polygons ${\Omega}_1$, ${\Omega}_2$ along their edges , as in §\[sec:edgeglue\], but with the additional assumption of rational gluing data. We have the coordinates $(u_1,v_1)$, $(u_2,v_2)$ as $(u_{\tau_1},v_{\tau_1})$, $(u_{\tau_2},v_{\tau_2})$, and concrete rational functions ${{\ensuremath{\beta}}}(u_1),{{\ensuremath{\gamma}}}(u_1)$ in (\[eq:tbiso\]). Let $a(u_1),b(u_1),c(u_1)$ denote polynomials that express $${{\ensuremath{\beta}}}(u_1)=\frac{b(u_1)}{a(u_1)}, \qquad {{\ensuremath{\gamma}}}(u_1)=\frac{c(u_1)}{a(u_1)} $$ as in . We are looking for polynomial splines. The differentiability condition (\[eq:dirder\]) becomes $$\label{eq:syzygy} a(u_1)A(u_1)+b(u_1)B(u_1)+c(u_1)C(u_1)=0,$$ where $$\label{eq:syzd} A(u_1)=-\frac{\partial g_1}{\partial v_1}(u_1,0), \quad B(u_1)=\frac{\partial g_1}{\partial u_1}(u_1,0), \quad C(u_1)=\frac{\partial g_2}{\partial v_2}(u_1,0).$$ In algebraic terms, the triple $(A(u_1),B(u_1),C(u_1))$ is a [*syzygy*]{} [@CoxSheaUse §5.1], [@Wiki] between the polynomials $a(u_1), b(u_1), c(u_1)$. Let ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ denote the linear space of syzygies between $(a,b,c)$. As we recall in Lemma \[lm:syzygy\], this is a free module over the ring ${{\ensuremath{\mathbb{R}}}}[u_1]$. The [*degree*]{} of a syzygy $(A,B,C)$ is $\max(\deg A,\deg B,\deg C)$. Conversely, given a syzygy $(A,B,C)$ we construct a $G^1$ spline $(g_1, g_ 2)$ on ${{\ensuremath{\mathcal{M}}}}_0$ by $$\begin{aligned} \label{eq:edgeint1} g_1(u_1,v_1)=c_0+\int_0^{u_1} \! B(t)dt-v_1A(u_1)+v_1^2E_1(u_1,v_1),\\ \label{eq:edgeint2} g_2(u_2,v_2)=c_0+\int_0^{u_2} \! B(t)dt+v_2C(v_2)+v_2^2E_2(u_2,v_2),\end{aligned}$$ where $c_0\in{{\ensuremath{\mathbb{R}}}}$ is any constant, and $E_1,E_2$ are any polynomials. If $g_1\in\ker W_{\tau_1}$, $g_2\in\ker W_{\tau_2}$, then $(g_1,g_2)$ is a spline on ${{\ensuremath{\mathcal{M}}}}_0$ with the corresponding syzygy (\[eq:syzd\]) being the zero vector. These splines have only the last terms in (\[eq:edgeint1\])–(\[eq:edgeint2\]). The spline space decomposes $$\label{eq:s1comp} S^1({{\ensuremath{\mathcal{M}}}}_0)=\ker W_{\tau_1}\oplus \ker W_{\tau_2}\oplus M^1(\tau_1,\tau_2),$$ where $M^1(\tau_1,\tau_2)$ denotes the space of $G^1$ splines in $M(\tau_1)\oplus M(\tau_2)$. The latter splines can be considered to have $E_1=E_2=0$ in (\[eq:edgeint1\])–(\[eq:edgeint2\]). The correspondence between the splines and syzygies is given by the linear map $$\label{eq:mzz} \psi:M^1(\tau_1,\tau_2)\to {{\ensuremath{\mathcal{Z}}}}(a,b,c)$$ defined by (\[eq:syzd\]). Its kernel are the constant splines. The space $S^1({{\ensuremath{\mathcal{M}}}}_0)$ will be understood when ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ or $M^1(\tau_1,\tau_2)$ are known. 
\[lm:syzygy\] For polynomials $a, b, c \in{{\ensuremath{\mathbb{R}}}}[u_1]$ with $\gcd(a,b,c)=1$ we have the following facts: 1. ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ is a free ${{\ensuremath{\mathbb{R}}}}[u_1]$-module of rank $2$. 2. Let $d=\max(\deg a,\deg b,\deg c)$. There is a syzygy in ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ of degree $\mu{\leqslant}d/2$. There is a pair of generators of degrees $\mu$ and $d-\mu$. 3. A pair of generators $(A_1,B_1,C_1)$, $(A_2,B_2,C_2)$ of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ satisfies $$\label{eq:syzcp} h_0 \cdot (a,b,c) = (B_1C_2-B_2C_1, C_1A_2-C_2A_1, A_1B_2-A_2B_1).$$ for some $h_0\in{{\ensuremath{\mathbb{R}}}}\setminus\{0\}$. These are standard facts [@CoxSheaUse §6.4] in the theory of parametrization of curves by [*moving lines*]{}, their [*$\mu$-bases*]{}. We prove [[*(iii)*]{}]{} here, since the formulation is somewhat stronger. Let $Z_1,Z_2$ denote the two generators, respectively. They are linearly independent over the field ${{\ensuremath{\mathbb{R}}}}(u_1)$. The syzygy relation (\[eq:syzygy\]) is orthogonality in the 3-dimensional linear space over ${{\ensuremath{\mathbb{R}}}}(u_1)$. Hence relation (\[eq:syzcp\]) holds with $h_0\in{{\ensuremath{\mathbb{R}}}}(u_1)$. Since $a,b,c$ are coprime, $h_0\in{{\ensuremath{\mathbb{R}}}}[u_1]$. If $\deg h_0>0$, then $Z_1,Z_2$ are linearly dependent in ${{\ensuremath{\mathbb{R}}}}[u_1]/(h_0)$. There is then an ${{\ensuremath{\mathbb{R}}}}[u_1]$-linear combination $Z_3=h_1Z_1+h_2Z_2$ with $\deg h_1,\deg h_2<\deg h_0$ and all components of $Z_3$ divisible by $h_0$. The syzygy $Z_3/h_0$ would not be an ${{\ensuremath{\mathbb{R}}}}[u_1]$ combination $Z_1$, $Z_2$, contradicting the assumption that they generate ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$. Evaluations at an edge vertex {#sec:vertexv} ----------------------------- Keeping the same context of gluing two edges $\tau_1$, $\tau_2$, let $P_1\in\tau_1,P_2\in\tau_2$ denote their endpoints $u_1=0$, $u_2=0$. Let $\pi_1:M^1(\tau_1,\tau_2)\to M(\tau_1)$, $\pi_2:M^1(\tau_1,\tau_2)\to M(\tau_2)$ denote the projections from $$\label{eq:mtt} M^1(\tau_1,\tau_2)\subset M^1(\tau_1)\oplus M^1(\tau_2).$$ The evaluation maps (of the $J^{1,1}$-jets at $P_1$, $P_2$) $$\label{eq:wps} W_{P_1,\tau_1}\circ\pi_1: M^1(\tau_1,\tau_2)\to M(P_1), \ W_{P_2,\tau_2}\circ\pi_2: M^1(\tau_1,\tau_2)\to M(P_2)$$ are important for constructing splines on a larger rational $G^1$ polygonal surface. The images in $M(P_1)$, $M(P_2)$ of splines in the similar $M^1$-spaces on adjacent edges will have to match in order to form global splines. The matching is most direct if $(u_1,v_1)$, $(u_2,v_2)$ are standard coordinates attached to $\tau_1$, $\tau_2$. Let $$\label{eq:w0} W_0: M^1(\tau_1,\tau_2)\to M(P_1)\oplus M(P_2)$$ denote the combined evaluation map $(W_{P_1,\tau_1}\circ\pi_1)\oplus(W_{P_2,\tau_2}\circ\pi_2)$. We can denote (\[eq:wps\]) shorter: $W_{P_1,\tau_1}\circ\pi_1=\widehat{\pi}_1\circ W_0$ and $W_{P_2,\tau_2}\circ\pi_2=\widehat{\pi}_2\circ W_0$, where $\widehat{\pi}_1$, $\widehat{\pi}_2$ are the projections from $M(P_1)\oplus M(P_2)$. \[lm:edgew\] - The maps $\widehat{\pi}_1\circ W_0$, $\widehat{\pi}_2\circ W_0$ are surjective. - The dimension of ${\mathrm{im\ }}W_0$ equals $4$ if the edge $\tau_1\sim\tau_2$ is joining at the vertex $\{P_1,P_2\}$, and the dimension is $5$ otherwise. We assume standard coordinates $(u_1,v_1)$, $(u_2,v_2)$ attached to $\tau_1$, $\tau_2$, with $u_1=0$, $u_2=0$ at $P_1,P_2$, respectively. 
Then (\[eq:edgeint1\])–(\[eq:edgeint2\]) can be followed directly to conclude that a syzygy $$(a'+a''u_1+\ldots,b'+b''u_1+\ldots,c'+c''u_1+\ldots)$$ leads to a spline evaluated by $W_0$ to $$(c_0+b'u_1-a'v_1-a''u_1v_1, c_0+b'u_2+c'v_2+c''u_2v_2).$$ If $Z_1=(A_1,B_1,C_1)$, $Z_2=(A_2,B_2,C_2)$ are generators of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$, then at $u_1=0$ by Lemma \[lm:syzygy\] [[*(iii)*]{}]{}. Explicit computation of the $\pi_1\circ W_0$ images of the splines corresponding to $Z_1,Z_2,u_1Z_1$ gives then the whole $M(P_1)$, including the constants ${{\ensuremath{\mathbb{R}}}}\subset M(P_1)$. Surjectivity of $\pi_2\circ W_0$ follows similarly, from $B_1C_2-B_2C_1\neq 0$ at $u_1=0$. For any spline $(g_1,g_2)\in M^1(\tau_1,\tau_2)$, the first order jet $J^1_{P_2}g_2$ is determined by $J^1_{P_1}g_1$ by the jet isomorphism (\[eq:jetb1\]) at $u_1=0$. Only the $u_2v_2$ term might be independent from $\widehat{\pi}_1\circ W_0(g_1)$. To establish the (in)dependency, we look for a non-zero spline with $\widehat{\pi}_1\circ W_0(g_1)=0$. We can only use the syzygies $u_1Z_1,u_1Z_2$ and produce $(0,b(0)u_2v_2)\in W_0$, with $A_1C_2-A_2C_1$ proportional to $b(u_1)$. We have $b(0)=0$ exactly in the joining edge case. Separation of edge vertices {#sec:serverts} --------------------------- In the same context of gluing two edges $\tau_1$, $\tau_2$, let $Q_1\in\tau_1,Q_2\in\tau_2$ denote the other endpoints $u_1=1$, $u_2=1$. We have a similar map $W_1:M^1(\tau_1,\tau_2)\to M(Q_1)\oplus M(Q_2)$ as $W_0$ in (\[eq:w0\]). \[def:sepspline\] A [*separating spline*]{} in $M^1(\tau_1,\tau_2)$ has the property that it attains different values at the end vertices $\{P_1,P_2\}$ and $\{Q_1,Q_2\}$. An [*offset spline*]{} is a separating spline that has the first order and the mixed derivatives $\partial^2/\partial u_iv_i$ equal to $0$ at both end-vertices. An offset spline is mapped to constant $J^{1,1}$-jets by both $W_0$, $W_1$. For example, let $Z_0=(A,B,C)$ be a syzygy with a non-zero function $B(u_1)$. Then a spline corresponding by (\[eq:edgeint1\])–(\[eq:edgeint2\]) to the syzygy $B(u_1)\,Z_0$ is separating, because $\int_0^1 B(t)^2dt\neq 0$. A spline corresponding to the syzygy $u_1^2(1-u_1)^2\,B(u_1)\,Z_0$ is then an offset spline. \[lm:separate\] The image of the combined map $$\label{eq:mmmm} W_0\oplus W_1: M^1(\tau_1,\tau_2)\to M(P_1)\oplus M(P_2) \oplus M(Q_1)\oplus M(Q_2)$$ is ${\mathrm{im\ }}W_0\oplus {\mathrm{im\ }}W_1$. Let $Z'_1=(A_1,B_1,C_1)$, $Z'_2=(A_2,B_2,C_2)$ denote generators of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$. Both $B_1$, $B_2$ cannot be zero functions because of Lemma \[lm:syzygy\] [[*(iii)*]{}]{} and $a(u_1)\neq 0$. Therefore we have a separating spline $B_1Z'_1$ or $B_2Z'_2$. Using it and the constant $1$ spline, we can linearly reduce any target element of ${\mathrm{im\ }}W_0\oplus {\mathrm{im\ }}W_1$ to a pair of $J^{1,1}$-jets without the constant terms. Let $M_O^1(\tau_1,\tau_2)\subset M^1(\tau_1,\tau_2)$ denote the subspace of splines with $c_0=0$ in (\[eq:edgeint1\])–(\[eq:edgeint2\]). Then the restriction $$\label{eq:mz} \psi_O:M_{O}^1(\tau_1,\tau_2)\to {{\ensuremath{\mathcal{Z}}}}(a,b,c)$$ of $\psi$ onto $M_O^1(\tau_1,\tau_2)$ is an isomorphism. Let $F_1$ denote an offset spline in $M_O^1(\tau_1,\tau_2)$. There is a linear combination of $\psi_O^{-1}((1-u_1)^2 Z'_1)$ and $F_1$ that thoroughly vanishes at $\{Q_1,Q_2\}$. There is a similar linear combination of $\psi_O^{-1}((1-u_1)^2 Z'_2)$ and $F_1$. 
We get two splines that can achieve any values of two first order derivatives at $\{P_1,P_2\}$ and map to $0$ in $ {\mathrm{im\ }}W_1$. Similarly, there are linear combinations of $\psi_O^{-1}(u_1(1-u_1)^2 Z'_1)$ and $\psi_O^{-1}(u_1(1-u_1)^2 Z'_2)$ with $F_1$ that realize 1 or 2 degrees of freedom for the mixed derivatives at $\{P_1,P_2\}$, depending on whether there is joining at this vertex. This gives the full ${\mathrm{im\ }}W_0$. The same argument holds for ${\mathrm{im\ }}W_1$, with $F_1$ replaced by an offset spline vanishing at $\{Q_1,Q_2\}$, and $\psi_O^{-1}((1-u_1)^2 Z'_1)$ replaced by $\psi_O^{-1}(u_1^2 Z'_1)$, etc. To derive dimension formulas and linear bases for $G^1$ splines of bounded degree on polygonal surfaces, we need to estimate how high the degree has to be so that Lemma \[lm:separate\] holds for the bounded degree subspaces of all edge spaces $M^1(\tau_1,\tau_2)$. \[def:sseparate\] Following [@raimundas Definitions 2.14, 2.24], we say that a subspace of $M^1(\tau_1,\tau_2)$ [*separates vertices*]{} if it has separating splines and the constant splines. Thereby it has splines that evaluate to any prescribed values at the two end-vertices. The subspace [*strongly separates vertices*]{} if additionally the first order derivatives in ${\mathrm{im\ }}\!W_0\,\oplus\, {\mathrm{im\ }}\!W_1$ have the maximal freedom. The subspace [*completely separates vertices*]{} if Lemma \[lm:separate\] applies to the restricted ${\mathrm{im\ }}\!W_0\,\oplus\, {\mathrm{im\ }}\!W_1$. A necessary condition of complete separation is existence of an offset spline. Degrees of freedom at vertices {#sec:aroundv} ------------------------------ Consider now the setting of §\[sec:g1vertex\] of gluing polygons ${\Omega}_1,\ldots,{\Omega}_n$ around the common vertex ${{\ensuremath{\mathcal{P}}}}_0=\{P_1,\ldots,P_n\}$. We additionally assume rational $G^1$ gluing data along the edge pairs $\tau_k\subset{\Omega}_k,\tau'_{k-1}\subset{\Omega}_{k-1}$ for $k\in{K}=\{1,\ldots,n\}$, and look for polynomial splines. We extend the notation $P_0=P_n$, $\tau'_0=\tau_{n}$ to $P_{n+1}=P_1$, $\tau_{n+1}=\tau_{1}$. Analogous to the map $W_0$ in (\[eq:w0\]), we are interested in the linear natural map $$\label{eq:wpo0v} W_{{{\ensuremath{\mathcal{P}}}}_0}: S^1({{\ensuremath{\mathcal{M}}}}^*_0)\to M(P_1)\oplus\cdots\oplus M(P_n)$$ and the dimension of its image. This map factors through $$\label{eq:mmms} M^1(\tau_1,\tau'_{n})\oplus M^1(\tau_2,\tau'_1)\oplus\cdots\oplus M^1(\tau_n,\tau'_{n-1}).$$ For $k\in K$, let $\pi_k$ denote the projection to $M(P_k)$ from the direct sum in (\[eq:wpo0v\]). Then $\pi_k\circ W_{{{\ensuremath{\mathcal{P}}}}_0}$ factors through $$\label{eq:mms} M^1(\tau_k,\tau'_{k-1}) \oplus M^1(\tau_{k+1},\tau'_{k}).$$ This factorization determines the relations between $M(P_k)$ and $M(P_{k-1})$, $M(P_{k+1})$. The kernel of $W_{{{\ensuremath{\mathcal{P}}}}_0}$ consists of the splines $(g_1,\ldots,g_n)$ with each . For $k\in K$ let us denote $\crossing_k=1$ if the edge $\tau_k\sim \tau'_{k-1}$ is joining at $\{P_k,P_{k-1}\}$, and $\crossing_k=0$ otherwise. \[prop:Hg\] Let us denote $\crossing_+=1$ if ${{\ensuremath{\mathcal{P}}}}_0$ is a crossing vertex, and let otherwise. Suppose that conditions $(\ref{eq:jetbp1})$ and [[*(B1)*]{}]{} are satisfied. If ${{\ensuremath{\mathcal{P}}}}_0$ is a crossing vertex, suppose that $(\ref{eq:ddv1})$–$(\ref{eq:ddv2})$ are satisfied. 
Then $$\dim {\mathrm{im\ }}W_{{{\ensuremath{\mathcal{P}}}}_0} = 3+n-\sum_{k=1}^n \crossing_k + \crossing_{+}.$$ The projections $\pi_k\circ W_{{{\ensuremath{\mathcal{P}}}}_0}$ are surjective. Suppose first that all edges are joining at ${{\ensuremath{\mathcal{P}}}}_0$. Then ${{\ensuremath{\mathcal{P}}}}_0$ is a crossing vertex by condition [[*(B1)*]{}]{}. Let us choose an arbitrary $J^{1,1}$-jet in $M(P_4)$. To get corresponding jets in $M(P_3),M(P_2),M(P_1)$, we apply the transformations dual to (\[eq:trE3\]): $$\label{eq:trE3a} \left( \begin{array}{c} 1 \\ u_{k-1}\\ v_{k-1} \\ u_{k-1} v_{k-1}\!\!\! \end{array} \right) =\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & {{\ensuremath{\gamma}}}_k(0) & {{\ensuremath{\gamma}}}'_k(0) \\ 0 & 1 & 0 & {{\ensuremath{\beta}}}'_k(0) \\ 0 & 0 & 0 & {{\ensuremath{\gamma}}}_k(0) \end{array} \right) \left( \begin{array}{c} 1 \\ u_{k}\\ v_{k} \\ u_{k} v_{k} \end{array} \right).$$ These transformations are consistent with the relations in the images of the $W_0$-maps $M^1(\tau_k,\tau'_{k-1})\to M(P_k)\oplus M(P_{k-1})$ of Lemma \[lm:edgew\] for $k=4,3,2$. Conditions (\[eq:ddv1\])–(\[eq:ddv2\]) ensure consistency in the image of as well. We have thus a spline vector in (\[eq:mmms\]) mapping to the formed jets in $M(P_1),\ldots,M(P_4)$. The lift from (\[eq:mmms\]) to $S^1({{\ensuremath{\mathcal{M}}}}^*_0)$ is possible because (for each $k\in \{1,2,3,4\}$) the splines in both components in (\[eq:mms\]) match at $M(P_k)$. It follows that $\pi_4\circ W_{{{\ensuremath{\mathcal{P}}}}_0}$ and the other projections are surjective, and that $\dim {\mathrm{im\ }}W_{{{\ensuremath{\mathcal{P}}}}_0} = 4$. If there is an edge that is not joining at ${{\ensuremath{\mathcal{P}}}}_0$, we may assume it to be $\tau_1\sim \tau'_n$. Let us choose an arbitrary jet in $M(P_n)$. Its $J^{1}$ part is transformed by the linear maps in (\[eq:jetbp1\]) to unique $J^1$ parts in the other $M(P_k)$ consistent with the images of the $W_0$-maps $M^1(\tau_k,\tau'_{k-1})\to M(P_k)\oplus M(P_{k-1})$. Dependency of the $u_{n-1}v_{n-1}$ term (of the jet in $M(P_{n-1})$) on the $u_nv_n$ term is determined by whether $\crossing_{n}=1$ or $0$. Dependency of the next $u_kv_k,u_{k+1}v_{k+1}$ terms is similarly determined by $\crossing_{k+1}$. Once the $u_1v_1$ term is set, there is no cyclical restriction on the $u_nv_n$ term. Hence we have $n-\sum_{k=1}^n \crossing_k$ degrees of freedom for the $u_kv_k$ terms. The dimension formula follows, together with surjectivity of $\pi_n\circ W_{{{\ensuremath{\mathcal{P}}}}_0}$ and the other projections. \[prop:Hgb\] Suppose we have a boundary vertex ${{\ensuremath{\mathcal{P}}}}_0$ defined with the current specifications but without the gluing identification of $\tau_1$ and $\tau'_n$. Consider the map $W_{{{\ensuremath{\mathcal{P}}}}_0}$ as in $(\ref{eq:wpo0v})$. Then the projections $\pi_k\circ W_{{{\ensuremath{\mathcal{P}}}}_0}$ are surjective, and $$\dim {\mathrm{im\ }}W_{{{\ensuremath{\mathcal{P}}}}_0} = 3+n-\sum_{k=2}^n \crossing_k.$$ The proof is similar to the second part of the proof of Lemma \[prop:Hg\]. Generators of spline spaces {#sec:generate} --------------------------- Now we consider a rational $G^1$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$, possibly with many interior edges and vertices, and the space $S^1({{\ensuremath{\mathcal{M}}}})$ of splines on it. The splines defined by polynomials of bounded degree will form linear spaces of finite dimension. 
We are interested in counting these dimensions and constructing bases of these spaces. For each interior edge ${{\ensuremath{\mathcal{E}}}}$ let ${{\ensuremath{\mathcal{M}}}}_{{\ensuremath{\mathcal{E}}}}$ denote the polygonal surface as in §\[sec:edgeglue\], §\[sec:vertexv\] with a single interior edge $\tau_1\sim\tau_2$ that uses the glueing data $\Theta_k$ of ${{\ensuremath{\mathcal{E}}}}$. By the construction in §\[sec:edgeglue\], the end-points of ${{\ensuremath{\mathcal{M}}}}_{{\ensuremath{\mathcal{E}}}}$ are distinct even if ${{\ensuremath{\mathcal{E}}}}$ connects a vertex of ${{\ensuremath{\mathcal{M}}}}$ with itself. The projection map $$\label{eq:edgeproj} \pi_{{\ensuremath{\mathcal{E}}}}: S^1({{\ensuremath{\mathcal{M}}}}) \to M^1(\tau_1,\tau_2)$$ is certainly not surjective when ${{\ensuremath{\mathcal{E}}}}$ connects a vertex with itself. However, any element of as in Lemma \[lm:separate\] will lift to a spline in $S^1({{\ensuremath{\mathcal{M}}}})$ of the same degree, with the support on $\Omega_1\cup\Omega_2$ and thoroughly vanishing on all edges other than $\tau_1,\tau_2$. One can take $E_1=E_2=0$ in (\[eq:edgeint1\])–(\[eq:edgeint2\]). Lemma \[lm:separate\] implies that $$\label{eq:edge0} \mbox{codim}\; \ker (W_0\oplus W_1)= \dim {\mathrm{im\ }}W_0+\dim {\mathrm{im\ }}W_1$$ inside $M^1(\tau_1,\tau_2)$. By Lemma \[eq:wps\], $\dim {\mathrm{im\ }}W_k\in\{4,5\}$ for $k=1,2$. For a boundary edge $\tau$, we have the similar projection $$\label{eq:edgeprojb} \pi_\tau: S^1({{\ensuremath{\mathcal{M}}}}) \to M(\tau)$$ by following (\[eq:we\]). Let $P,Q$ denote the end-points of $\tau$. With reference to (\[eq:wev\]), any element of $\ker W_{P,\tau}\cap \ker W_{Q,\tau}\subset M(\tau)$ similarly lifts to a spline in $S^1({{\ensuremath{\mathcal{M}}}})$, thoroughly vanishing on all edges expect $\tau$. The co-dimension of these splines inside $M(\tau)$ equals $8$. The splines that are mapped to zero by all projections (\[eq:edgeproj\]), (\[eq:edgeprojb\]) are the splines that throughly vanish on all edges. For each polygon ${\Omega}$, let $L_{{\Omega}}$ denote the product of linear equations $l_\tau$ of all edges $\tau$ of ${\Omega}$. Then any polynomial multiple of $L_{{\Omega}}^2$ (as a function on ${\Omega}$) lifts to spline in $S^1({{\ensuremath{\mathcal{M}}}})$ with the support on $\Omega$. We described all splines that thoroughly vanish at all vertices. To combine and lift the $J^{1,1}$-jet spaces $W(P)$ at the polygonal vertices to global splines in $S^1({{\ensuremath{\mathcal{M}}}})$, consider an interior vertex ${{\ensuremath{\mathcal{P}}}}=\{P_1,\ldots,P_n\}$ of ${{\ensuremath{\mathcal{M}}}}$. Let ${{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}}$ denote the polygonal surface as in §\[sec:g1vertex\], §\[sec:aroundv\], with a single interior vertex ${{\ensuremath{\mathcal{P}}}}_0$ of valency $n$ and the same gluing data as on the edges of ${{\ensuremath{\mathcal{M}}}}$ incident to ${{\ensuremath{\mathcal{P}}}}$. The boundary vertices of ${{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}}$ are all distinct from each other and from ${{\ensuremath{\mathcal{P}}}}_0$ by the construction in §\[sec:g1vertex\]. We have the projection map $$\label{eq:vertproj} \pi_{{\ensuremath{\mathcal{P}}}}: S^1({{\ensuremath{\mathcal{M}}}}) \to S^1({{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}_0}) $$ that we compose with $W_{{{\ensuremath{\mathcal{P}}}}_0}$ in (\[eq:wpo0v\]). 
Let $S^1_0({{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}_0})$ denote the subspace of $S^1({{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}_0})$ of splines that throughly vanish on the boundary edges. The image of $S^1_0({{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}_0})$ under $W_{{{\ensuremath{\mathcal{P}}}}_0}$ is the same as the whole image of $W_{{{\ensuremath{\mathcal{P}}}}_0}$, because the spline pieces $(g_k,g_{k-1})$ in each $M^1(\tau_k,\tau'_{k-1})$ can be modified without changing their interface image in $M(P_k)\oplus M(P_{k-1})$ to thorough vanishing at the respective boundary vertex by multiplying the corresponding syzygies by $(1-u_1)^2$ and using offset splines. An element $h\in {\mathrm{im\ }}W_{{{\ensuremath{\mathcal{P}}}}_0}$ is lifted to a global spline in $S^1({{\ensuremath{\mathcal{M}}}})$ as follows. First we lift $h$ to a spline $(g_1,\ldots,g_n)$ in the subspace $S^1_0({{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}_0})$ as just described. Each polygon $\Omega$ of ${{\ensuremath{\mathcal{M}}}}$ incident to ${{\ensuremath{\mathcal{P}}}}$ may be matched with several polygons of ${{\ensuremath{\mathcal{M}}}}^*_{{{\ensuremath{\mathcal{P}}}}_0}$, corresponding to those $P_k\in {{\ensuremath{\mathcal{P}}}}$ that are vertices of $\Omega$. We assign to this $\Omega$ a polynomial restriction that equals the sum of the corresponding components $g_k$. In this sum, the $J^{1,1}$-jet in any $M(P_k)$ is non-zero only in one term, giving consistency with $h\in {\mathrm{im\ }}W_{{{\ensuremath{\mathcal{P}}}}_0}$. The $J^{1,1}$-jets are properly related by the transformations like (\[eq:trE3a\]). If there is an edge $\tau'_1\sim\tau'_2$ of ${{\ensuremath{\mathcal{M}}}}$ that connects ${{\ensuremath{\mathcal{P}}}}$ with itself, the spline image in $M(\tau'_1)\oplus M(\tau'_2)$ will be the sum of two terms corresponding to the endpoints of $\tau'_1\sim\tau'_2$, both in ${{\ensuremath{\mathcal{P}}}}$. Assigning zero polynomials to the polygons of ${{\ensuremath{\mathcal{M}}}}$ that are not incident to ${{\ensuremath{\mathcal{P}}}}$ completes a lift of $h$ to a spline in $S^1({{\ensuremath{\mathcal{M}}}})$. In summary, the spline space $S^1({{\ensuremath{\mathcal{M}}}})$ can be generated by the splines of these kinds: - At each interior vertex ${{\ensuremath{\mathcal{P}}}}$, one can choose finitely many splines realizing the degrees of freedom enumerated in Lemma \[prop:Hg\], and evaluating to $0$ on all polygons not incident to ${{\ensuremath{\mathcal{P}}}}$. Particularly among these splines: 1. There is a spline with the value 1 and the other jet terms (i.e., the first order derivatives and the mixed derivatives) equal to $0$ at ${{\ensuremath{\mathcal{P}}}}$. Its projections to the spaces $M^1(\tau_k,\tau'_{k-1})$ of incident edges $\tau_k\sim\tau'_{k-1}$ are offset splines evaluating to 1 at $P_k\in{{\ensuremath{\mathcal{P}}}}$ and to 0 at the other end-vertex. 2. There are 2 independent splines that vanish at ${{\ensuremath{\mathcal{P}}}}$ and have (some) non-zero first order derivatives at ${{\ensuremath{\mathcal{P}}}}$. Their standard mixed derivatives might be forced to $0$ if there are no edges joining at ${{\ensuremath{\mathcal{P}}}}$. 3. With reference to Lemma \[prop:Hg\], there are $n-\sum e_k+e_+$ independent splines that vanish with the first order derivatives at ${{\ensuremath{\mathcal{P}}}}$. They have some mixed derivatives non-zero. 
- At a boundary vertex, one can similarly choose finitely many splines realizing the degrees of freedom enumerated in Lemma \[prop:Hgb\]. - At an interior edge $\tau_1\sim \tau_2$, we have the splines lifted from $\ker(W_0\oplus W_1)$ as described between the formulas (\[eq:edgeproj\]) and (\[eq:edge0\]). These splines vanish on all polygons not containing $\tau_1$ or $\tau_2$, and thoroughly vanish on all other edges. - At a boundary edge $\tau$, we have the space of splines similarly vanishing on the polygons not containing $\tau$, etc. The co-dimension of this space inside $M(\tau)$ equals $8$, as mentioned above. - On each polygon $\Omega$, we have the splines that are polynomial multiples of $L_{\Omega}^2$ on $\Omega$ and zero on the other polygons. This break up offers a general strategy for counting the dimension of spline spaces of bounded degree and constructing their bases. Section \[eq:spdim\] demonstrates this on the example of $G^1$ polygonal surfaces made up of rectangles and triangles. The splines in [[*(D3)*]{}]{}–[[*(D5)*]{}]{} can be chosen from distinct subpaces of the direct sum $R[u,v]$-copies over the set of polygons. The subspaces (one for each edge and for each polygon) are defined by vanishing everywhere except in one $M^1$-projection or on one polygon. A set of generating splines in [[*(D1)*]{}]{}–[[*(D2)*]{}]{} may project extensively to the incident $M^1$-spaces. Rectangle-triangle surfaces {#sec:boxdelta} =========================== From now on, we consider rational $G^1$ polygonal surfaces ${{\ensuremath{\mathcal{M}}}}$ made up of rectangles and triangles. The main forthcoming results are dimension formulas for the spaces of $G^1$ splines of bounded degree on these surfaces. This is presented in §\[sec:dformula\]. Sections \[sec:asyz\], \[sec:sepverts\] refine §\[eq:edgespace\]–§\[sec:aroundv\] for the context of rectangles and triangles. Here is our degree convention for $G^1$ splines on a rational $G^1$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$ made up of rectangles and triangles. For an integer $k{\geqslant}0$, let $S^1_k({{\ensuremath{\mathcal{M}}}})$ denote the space of splines such that all restrictions to the triangles have degree ${\leqslant}k$, and all restrictions to the rectangles have bidegree ${\leqslant}(k,k)$. We say that a spline [*has degree $k$*]{} if it is in $S^1_k({{\ensuremath{\mathcal{M}}}})$ but not in $S^1_{k-1}({{\ensuremath{\mathcal{M}}}})$. In the context of formulas (\[eq:s1comp\]), (\[eq:mtt\]), let $M^1_k(\tau_1,\tau_2)$ denote the subspace of $M^1(\tau_1,\tau_2)$ of splines $$\label{eq:splines2} (h_0(u)+h_1(u)v_1,h_0(u)+h_2(u)v_2)$$ such that $\deg h_0(u){\leqslant}k$, and for $j\in\{1,2\}$ we have $\deg h_j(u){\leqslant}k$ if $\tau_j$ is an edge of a rectangle, or $\deg h_j(u){\leqslant}k-1$ if $\tau_j$ is an edge of a triangle. Partitioning the space of splines {#eq:spdim} --------------------------------- Here we estimate the dimension of each subspace in the partitioning [[*(D1)*]{}]{}–[[*(D5)*]{}]{} of the whole spline space, and prove a preliminary dimension formula. Following the partitioning in §\[sec:generate\], we have these spline spaces: 1. For large enough $k$, each vertex ${{\ensuremath{\mathcal{P}}}}$ contributes $$\label{eq:E1spline} 3+n({{\ensuremath{\mathcal{P}}}})+e_{+}({{\ensuremath{\mathcal{P}}}})- e_{\perp}({{\ensuremath{\mathcal{P}}}}) $$ independent splines by [[*(D1)*]{}]{}–[[*(D2)*]{}]{}, or Lemmas \[prop:Hg\], \[prop:Hgb\]. 
Here $n({{\ensuremath{\mathcal{P}}}})$ is the valency of ${{\ensuremath{\mathcal{P}}}}$; $e_{+}({{\ensuremath{\mathcal{P}}}})=1$ if ${{\ensuremath{\mathcal{P}}}}$ is a crossing vertex, and $e_{+}({{\ensuremath{\mathcal{P}}}})=0$ otherwise; $e_{\perp}({{\ensuremath{\mathcal{P}}}})$ counts the number of instances when an edge is joining at ${{\ensuremath{\mathcal{P}}}}$. Note that an edge may be joining ${{\ensuremath{\mathcal{P}}}}$ at both of its end-points, contributing 2 to $e_{\perp}({{\ensuremath{\mathcal{P}}}})$. 2. For large enough $k$, each interior edge ${{\ensuremath{\mathcal{E}}}}=\tau_1\sim \tau_2$ contributes $$\label{eq:E2spline} \dim M^1_k(\tau_1,\tau_2)-10+e_{\perp}({{\ensuremath{\mathcal{E}}}})$$ independent splines by [[*(D3)*]{}]{}. Here $e_{\perp}({{\ensuremath{\mathcal{E}}}})\in\{0,1,2\}$ counts the number of instances when ${{\ensuremath{\mathcal{E}}}}$ is joining a vertex. 3. By [[*(D4)*]{}]{}, a boundary edge on a rectangle contributes $2k-6$ independent splines for $k{\geqslant}3$, and a boundary edge on a triangle contributes $2k-7$ independent splines for $k{\geqslant}4$. 4. By [[*(D5)*]{}]{}, each rectangle contributes $(k-3)^2$ independent splines for $k{\geqslant}3$, and each triangle contributes $(k-4)(k-5)/2$ independent splines for $k{\geqslant}4$. The degree $k$ in [[*(E1)*]{}]{}–[[*(E2)*]{}]{} is large enough if all edge spaces $M^1_k(\tau_1,\tau_2)$ completely separate the end-vertices in the sense of Definition \[def:sseparate\]. In §\[sec:sepverts\] we determine an explicit lower bound for such $k$. Before that, §\[sec:asyz\] gives an explicit expression for $\dim M^1_k(\tau_1,\tau_2)$ for the large enough $k$. Let $N_\Box$, $N_\Delta$, $N_0$, $N_0^+$, $N_1$, $N_1^\partial$ denote the number of rectangles, triangles, all vertices, crossing vertices, all edges and boundary edges of ${{\ensuremath{\mathcal{M}}}}$, respectively. \[lm:dimsep\] If $k{\geqslant}4$ and all spaces $M^1_k(\tau_1,\tau_2)$ completely separate the vertices, then $$\begin{aligned} \dim S^1_k({{\ensuremath{\mathcal{M}}}})= &\; 3\,N_0+N_0^+ + \sum_{\tau_1\sim\tau_2} \dim M^1_k(\tau_1,\tau_2)-10\,N_1 \nonumber\\[1pt] & + (2k+3) \,N_1^\partial +\#\{\mbox{\rm boundary edges on rectangles}\} \\ & + \left( k^2-6k+13 \right) N_{\Box} + \frac{k^2-9k+26}{2} \, N_{\Delta}. \nonumber\end{aligned}$$ Summing up the dimension of the spaces in [[*(E1)*]{}]{}–[[*(E2)*]{}]{}, we have $$\begin{aligned} & \sum_{\mbox{\scriptsize vertices}\,{{\ensuremath{\mathcal{P}}}}} n({{\ensuremath{\mathcal{P}}}})= 4\, N_\Box + 3 \, N_\Delta, \qquad \sum_{\mbox{\scriptsize vertices}\,{{\ensuremath{\mathcal{P}}}}} e_+({{\ensuremath{\mathcal{P}}}})= N_0^+, \\ & \sum_{\mbox{\scriptsize vertices}\,{{\ensuremath{\mathcal{P}}}}} e_{\perp}({{\ensuremath{\mathcal{P}}}})= \, \sum_{\mbox{\scriptsize edges}\;\,{{\ensuremath{\mathcal{E}}}}} e_{\perp}({{\ensuremath{\mathcal{E}}}}).\end{aligned}$$ In particular, the $e_{\perp}$’s cancel out. Adding the spaces in [[*(E3)*]{}]{}–[[*(E4)*]{}]{} gives the stated dimension formula. Building up a linear basis for $S^1_k({{\ensuremath{\mathcal{M}}}})$ following the partitioning [[*(E1)*]{}]{}–[[*(E4)*]{}]{} is straightforward. Even if some spaces $M^1_k(\tau_1,\tau_2)$ do not separate the vertices completely, the dimension formula may still hold, as exemplified in §\[sec:exbasis\]. 
The basis structure may need to be adjusted merely by taking into account dependencies between the spaces ${\mathrm{im\ }}W_0\subset M(P_1)\oplus M(P_2)$, ${\mathrm{im\ }}W_1\subset M(Q_1)\oplus M(Q_2)$ of §\[sec:serverts\]. as implicitly suggested in the proof of Lemma \[lm:lbound\]. Dimension and the syzygy module {#sec:asyz} ------------------------------- Here we go back to the setting of §\[sec:vertexv\] with the additional assumption of rectangles and triangles, and compute $\dim M^1_k(\tau_1,\tau_2)$. For $j\in\{1,2\}$, let $r_j=1$ if ${\Omega}_j$ is a rectangle, and let $r_j=0$ if ${\Omega}_j$ is a triangle. The splines in $M^1_k(\tau_1,\tau_2)$ correspond to the syzygies $(A,B,C)\in{{\ensuremath{\mathcal{Z}}}}(a,b,c)$ such that $$\label{eq:syzdeg} \deg A{\leqslant}k-1+r_1,\qquad \deg B{\leqslant}k-1, \qquad \deg C{\leqslant}k-1+r_2.$$ Let ${{\ensuremath{\mathcal{Z}}}}_k(a,b,c)$ denote the linear subspace of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ of syzygies with these degree specifications. \[lm:dimmz\] $\dim M^1_k(\tau_1,\tau_2)=\dim {{\ensuremath{\mathcal{Z}}}}_k(a,b,c)+1$. The map $\psi$ in (\[eq:mzz\]) restricts to surjective $M^1_k(\tau_1,\tau_2)\to{{\ensuremath{\mathcal{Z}}}}_k(a,b,c)$. The kernel of this map is one-dimensional. If both ${\Omega}_1,{\Omega}_2$ are triangles, then ${{\ensuremath{\mathcal{Z}}}}_k(a,b,c)$ is the space of syzygies of degree ${\leqslant}k-1$. If a rectangle is involved, the ${{\ensuremath{\mathcal{Z}}}}_k$-grading of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ is twisted with respect to the degree grading. Nevertheless, Lemma \[lm:syzygy\] can be adjusted helpfully. The following definitions and lemma generalize to any grading of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$. The lemma can be proved as a straightforward exercise in homological algebra [@Wiki]. We say that a syzygy $Z\in {{\ensuremath{\mathcal{Z}}}}(a,b,c)$ has [*twisted degree*]{} $k$ if it is in ${{\ensuremath{\mathcal{Z}}}}_k(a,b,c)$ but not in ${{\ensuremath{\mathcal{Z}}}}_{k-1}(a,b,c)$. We denote the twisted degree by ${\mbox{tdeg} \;Z}$. The [*leading term*]{} of a syzygy $Z\in {{\ensuremath{\mathcal{Z}}}}(a,b,c)$ with ${\mbox{tdeg} \;Z}=k$ is the vector $ \big( \star u_1^{k-1+r_1}, \star \, u_1^{k-1}, \star \, u_1^{k-1+r_2} \big) $ of the expected leading terms of the three components of $Z$. A pair of generators of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ is called [*minimal*]{} if their leading terms are linearly independent. \[lm:twsyzygy\] 1. There are non-negative integers $d_1,d_2$ such that any pair of minimal generators of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$ is in ${{\ensuremath{\mathcal{Z}}}}_{d_1}(a,b,c)\times{{\ensuremath{\mathcal{Z}}}}_{d_2}(a,b,c)$. 2. Suppose that $d_1{\leqslant}d_2$. We have $$\dim {{\ensuremath{\mathcal{Z}}}}_k(a,b,c)=\left\{ \begin{array}{rl} 0, & \mbox{if } k<d_1; \\ k-d_1+1, & \mbox{if } d_1{\leqslant}k<d_2; \\ 2k-d_1-d_2+2, & \mbox{if } k{\geqslant}d_2. \end{array} \right.$$ 3. $d_1+d_2=\max( r_1+\deg a, \deg b, r_2+\deg c) + 2-r_1-r_2.$ Starting with any two generators, we can reduce the twisted degree of (at least) one of them until we have a minimal pair $Z_1,Z_2$. Let $d_1={\mbox{tdeg} \;Z_1}$, $d_2={\mbox{tdeg} \;Z_2}$. The twisted degree of any ${{\ensuremath{\mathbb{R}}}}[u_1]$ linear combination $h_1Z_1+h_2Z_2$ equals $\max(\deg h_1+{\mbox{tdeg} \;Z_1},\deg h_2+{\mbox{tdeg} \;Z_2})$. The first two claims follow. 
By Lemma \[lm:syzygy\] [[*(iii)*]{}]{}, we have $$\begin{aligned} \deg a {\leqslant}& \, d_1+d_2+r_2-2,\\ \deg b {\leqslant}& \, d_1+d_2+r_1+r_2-2,\\ \deg c {\leqslant}& \, d_1+d_2+r_1-2.\end{aligned}$$ Since the leading terms of $Z_1,Z_2$ are linearly independent, there is an equality for at least one degree. Claim [[*(iii)*]{}]{} follows. \[cor:m1dim\] Let $d_1,d_2$ denote the twisted degrees as in Lemma $\ref{lm:twsyzygy}$. Then $$\dim M_k^1(\tau_1,\tau_2) {\geqslant}2k-\max( r_1+\deg a, \deg b, r_2+\deg c) + r_1 + r_2+1,$$ The exact equality holds when $k{\geqslant}\max(d_1,d_2)$. Combine Lemmas \[lm:dimmz\] and \[lm:twsyzygy\]. In particular, part [[*(ii)*]{}]{} of Lemma \[lm:twsyzygy\] implies $$\dim Z_k(a,b,c) {\geqslant}2k-d_1-d_2+2.$$ Separation of vertices {#sec:sepverts} ---------------------- Here we combine the settings of §\[sec:serverts\], §\[sec:asyz\] and address the question of “large enough $k$" in [[*(E1)*]{}]{}–[[*(E2)*]{}]{}. We find sufficient degree bounds for existence of separating splines of Definition \[def:sepspline\], and for complete separation of vertices as in Definition \[def:sseparate\]. Let $d_1,d_2$ denote the twisted degree of the minimal generators of the syzygy module ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$. We assume $d_1{\leqslant}d_2$. Let $Z_1=(A_1,B_1,C_1)$, $Z_2=(A_2,B_2,C_2)$ denote a minimal pair of the syzygy generators, of degree $d_1,d_2$, respectively. Note that $d_1=0$ only if $B_1=0$ and both polygons ${\Omega}_1$, ${\Omega}_2$ are rectangles. Usually the space $M^1(\tau_1,\tau_2)$ has separating splines of degree $d_1$ or slightly larger. For example, syzygies $(A,B,C)$ with constant $B\neq 0$ give these splines. However, Example \[ex:lagrange\] shows that the construction in Definition \[def:sseparate\] can be optimal in very special cases. \[lm:sepse\] There are separating splines in $M^1(\tau_1,\tau_2)$ of degree . If $B_1\neq 0$, then $B_1Z_1$ gives a separating spline of degree $2d_1-1$ as demonstrated in Definition \[def:sseparate\]. If $B_1=0$ then $B_2(u_1)$ does not change the sign on $u_1\in[0,1]$ because of Lemma \[lm:syzygy\] [[*(iii)*]{}]{} and $c(u_1)/a(u_1)<0$. Then $Z_2$ gives a separating spline. \[ex:lagrange\] Recall [*Legendre orthogonal polynomials*]{} $P_n(x)$ [@Wiki]. They have the property that $\int_{-1}^1 P_n(x)\,x^kdx=0$ for any degree $n$ and all $0{\leqslant}k<n$. To switch to the integration interval $[0,1]$ as in (\[eq:edgeint1\]), we normalize $p_n(x)=P_n(2x-1)$. Consider the rational $G^1$ gluing data with $$\begin{aligned} a(u_1)= & \, p_n(u_1)^2+p_{n-1}(u_1)^2, & \hspace{-10pt} b(u_1)= -p_{n-1}(u_1)^2, \\ c(u_1)= & \, p_n(u_1)p_{n-1}(u_1)-p_n(u_1)^2-p_{n-1}(u_1)^2.\end{aligned}$$ Since $p_n(x),p_{n-1}(x)$ do not have common roots [@specfaar Theorem 5.4.2], we have $a(x)>0$, $c(x)<0$. The syzygy module is generated by $$\begin{aligned} \big( p_{n-1}(u_1),p_{n}(u_1),p_{n-1}(u_1) \big), \qquad \big( p_n(u_1)-p_{n-1}(u_1),-p_{n-1}(u_1),p_n(u_1) \big).\end{aligned}$$ There are no separating splines of degree $<2n-1$. \[lm:csepo\] The space $M_k^1(\tau_1,\tau_2)$ completely separates the end-vertices if it has an offset spline and $k{\geqslant}d_2+3$. We apply the construction with $\psi_0^{-1}$ in the proof of Lemma \[lm:separate\] to the splines $(1-u_1)^2Z_k$, $(1-u_1)^2u_1Z_k$, $u_1^2Z_k$, $u_1^2(1-u_1)Z_k$ with $k=1,2$. \[lm:offss\] The space $M_k^1(\tau_1,\tau_2)$ has an offset spline if . Let $L_0=u_1^2(1-u_1)^2$. 
Consider the splines $L_0B_1Z_1$ and $L_0Z_2$ in the same way as in the proof of Lemma \[lm:sepse\]. \[eq:csep\] The space $M_k^1(\tau_1,\tau_2)$ completely separates the end-vertices if . Combine Lemmas \[lm:csepo\] and \[lm:offss\]. The bound $d_2+3$ in Lemma \[lm:csepo\] can be improved to only if the edge $\tau_1\sim\tau_2$ is joining at both end-vertices. Then ${\mathrm{im\ }}W_0\oplus {\mathrm{im\ }}W_1$ might be covered by working with $(1-u_1)^2Z_k$, $u_1(1-u_1)Z_k$, $u_1^2Z_k$, $k\in\{1,2\}$, as in the example case in §\[sec:ef\] where $d_1=1$, $d_2=2$. The bound in Lemma \[lm:offss\] can be reduced to $\max(2d_1+2,d_2+3)$ if $\tau_1\sim\tau_2$ is joining at an end-vertex, say at $u_1=0$. Then use the factor in the proof, and adjust the two splines linearly by $L_0Z_1$ to annihilate the mixed derivatives at $u_0=0$. This is the case in §\[sec:ab\] and §\[sec:be\]. The bound can be reduced to $\max(2d_1+1,d_2+2)$ if $\tau_1\sim\tau_2$ is joining at both end-vertices and $d_1<d_2$ by using $L_0=u_1(1-u_1)$ and adjustment by $u_1L_0Z_1$, $(1-u_1)L_0Z_1$. This is the case in §\[sec:ef\]. Generally, the bound $d_2+4$ in Corollary \[eq:csep\] cannot be improved if $d_1=d_2$, as then $\dim M_k^1(\tau_1,\tau_2)=9$ for $k=d_2+3$. The bound $2d_2+3$ is sharply achieved by the same construction as in Example \[ex:lagrange\], with the Legendre polynomials replaced by [*Gegenbauer polynomials*]{} [@Wiki] $C^{(\xi)}_n(x)$ with $\xi=5/2$. We have for all degrees $n$ and $0{\leqslant}k<n$. The bounds in Lemmas \[lm:sepse\], \[lm:offss\], Corollary \[eq:csep\] can be softened using $\max(2d_1,d_2){\leqslant}d_1+d_2$ and part [[*(iii)*]{}]{} of Lemma \[lm:twsyzygy\]. The dimension formula {#sec:dformula} --------------------- Now we are ready to state the general dimension formula for splines spaces on a rational $G^1$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$ made up of rectangles and triangles. For an interior edge ${{\ensuremath{\mathcal{E}}}}$ of ${{\ensuremath{\mathcal{M}}}}$, let $d_1({{\ensuremath{\mathcal{E}}}})$, $d_2({{\ensuremath{\mathcal{E}}}})$ denote the twisted degree of minimal generators of ${{\ensuremath{\mathcal{Z}}}}(a,b,c)$, with $d_1({{\ensuremath{\mathcal{E}}}}){\leqslant}d_2({{\ensuremath{\mathcal{E}}}})$. If $(a,b,c)$ is the polynomial gluing data for ${{\ensuremath{\mathcal{E}}}}$ as in Definition \[def:g1splines\], let $$\label{eq:delt} \delta({{\ensuremath{\mathcal{E}}}})= \max( r_1+\deg a, \deg b, r_2+\deg c). $$ Here $r_1,r_2$ are as in §\[sec:asyz\]. Note that $\delta({{\ensuremath{\mathcal{E}}}})>0$ if a rectangle is involved in gluing along ${{\ensuremath{\mathcal{E}}}}$. Let $\delta({{\ensuremath{\mathcal{M}}}})$ denote the sum of all $\delta({{\ensuremath{\mathcal{E}}}})$ over the interior edges ${{\ensuremath{\mathcal{E}}}}$. \[th:dimform\] Let $k$ denote an integer such that $k{\geqslant}(2d_1({{\ensuremath{\mathcal{E}}}})+3,d_2({{\ensuremath{\mathcal{E}}}})+4)$ for all interior edges ${{\ensuremath{\mathcal{E}}}}$. Then $$\begin{aligned} \label{eq:dimform} \dim S^1_k({{\ensuremath{\mathcal{M}}}})= & \; 3\,N_0+N_0^+ -\delta({{\ensuremath{\mathcal{M}}}}) +(k-{\textstyle\frac{5}{2}}) \,N_1^\partial \nonumber \\[1pt] & + \left( k^2-2k-1 \right) N_{\Box} + \frac{k^2-3k-1}{2} \, N_{\Delta}. $$ By Corollary \[eq:csep\], the spaces $M^1(\tau_1,\tau_2)$ of all interior edges completely separate their end-vertices. 
We combine Lemma \[lm:dimsep\] and Corollary \[cor:m1dim\], observing that summing up all $r_1,r_2$ and boundary edges on rectangles counts all edges of rectangles. We obtain $$\begin{aligned} \label{eq:dimforme} \dim S^1_k({{\ensuremath{\mathcal{M}}}})= & \; 3\,N_0+N_0^+ +(2k-9) \,N_1 +2\,N_1^\partial-\delta({{\ensuremath{\mathcal{M}}}}) \\[1pt] & +\left( k^2-6k+17 \right) N_{\Box} + \frac{k^2-9k+26}{2} \, N_{\Delta}. $$ We eliminate the number of edges using $$\label{eq:polygee} 2\,N_1=4\, N_{\Box}+3\,N_{\Delta}+N_1^\partial $$ and get the claimed formula. Note that the numbers of triangles and boundary edges have the same parity. Suppose that $\delta({{\ensuremath{\mathcal{E}}}})=1$ for all interior edges ${{\ensuremath{\mathcal{E}}}}$ of ${{\ensuremath{\mathcal{M}}}}$, and $k{\geqslant}6$. Then $$\begin{aligned} \dim S^1_k({{\ensuremath{\mathcal{M}}}})= & \; 3\,N_0+N_0^+ +(k-2) \,N_1^\partial \nonumber +(k+1)\left( (k-3) \, N_{\Box} + \frac{k-4}{2} \, N_{\Delta} \right).\end{aligned}$$ We have $d_1({{\ensuremath{\mathcal{E}}}}){\leqslant}1$, $d_2({{\ensuremath{\mathcal{E}}}}){\leqslant}2$ on all interior edges ${{\ensuremath{\mathcal{E}}}}$, with the equalities whenever two triangles are glued. Hence the bound $k{\geqslant}6$. Then $\delta({{\ensuremath{\mathcal{M}}}})$ counts the interior edges. This count is eliminated using (\[eq:polygee\]). A basis of $S^1_k({{\ensuremath{\mathcal{M}}}})$ can be constructed following the partitioning [[*(E1)*]{}]{}–[[*(E4)*]{}]{}. The dimension formula may still hold for a few smaller $k$, as demonstrated in §\[sec:exbasis\]. We prove in Lemma \[lm:lbound\] that for $k{\geqslant}2$ the numerical value in (\[eq:dimform\]) is a lower bound for the dimension of $S^1_k({{\ensuremath{\mathcal{M}}}})$. The subsequent examples illustrate sharpness of the degree bounds. In the case of orientable surfaces, we can use algebraic topology to eliminate the number of vertices. If ${{\ensuremath{\mathcal{M}}}}$ has a boundary, let $\widetilde{{{\ensuremath{\mathcal{M}}}}}$ denote the topological surface constructed as follows. For each boundary component ${{\ensuremath{\mathcal{B}}}}$ of ${{\ensuremath{\mathcal{M}}}}$, let $n_{{\ensuremath{\mathcal{B}}}}$ denote the number of edges on ${{\ensuremath{\mathcal{B}}}}$. If $n_{{\ensuremath{\mathcal{B}}}}{\geqslant}3$, we adjoin to ${{\ensuremath{\mathcal{M}}}}$ a polygon with $n_{{\ensuremath{\mathcal{B}}}}$ sides and glue its edges consequently to the edges of ${{\ensuremath{\mathcal{B}}}}$ by (say) linear homeomorphisms. If $n_{{\ensuremath{\mathcal{B}}}}=1$ we adjoin a circular disk by gluing its boundary circle to the edge of ${{\ensuremath{\mathcal{B}}}}$ by a homeomorphism. If $n_{{\ensuremath{\mathcal{B}}}}=2$ we adjoin a circular disk, divide its boundary into two half-circles and glue them consequently to the two edges of ${{\ensuremath{\mathcal{B}}}}$. After doing this to all boundary components, we get a closed surface $\widetilde{{{\ensuremath{\mathcal{M}}}}}$ without boundary. If ${{\ensuremath{\mathcal{M}}}}$ is orientable, so is $\widetilde{{{\ensuremath{\mathcal{M}}}}}$. In that case, let $g({{\ensuremath{\mathcal{M}}}})$ denote the genus of $\widetilde{{{\ensuremath{\mathcal{M}}}}}$. \[th:gdimform\] Suppose that the underlying topological space ${{\ensuremath{\mathcal{M}}}}$ is orientable. Let $k$ denote an integer such that $k{\geqslant}(2d_1({{\ensuremath{\mathcal{E}}}})+3,d_2({{\ensuremath{\mathcal{E}}}})+4)$ for all interior edges ${{\ensuremath{\mathcal{E}}}}$. 
Then $$\begin{aligned} \dim S^1_k({{\ensuremath{\mathcal{M}}}})= & \; 6-6g\big({{\ensuremath{\mathcal{M}}}}\big)+N_0^+ -\delta({{\ensuremath{\mathcal{M}}}}) -3\,\#\{\mbox{\rm boundary components}\} \\[1pt] & +(k-1) \,N_1^\partial + \left( k^2-2k+2 \right) N_{\Box} + \frac{(k-1)(k-2)}{2} \, N_{\Delta}. \end{aligned}$$ The Euler characteristic [@Wiki] for the surface $\widetilde{{{\ensuremath{\mathcal{M}}}}}$ is $$\begin{aligned} 2-2g\big(\widetilde{{{\ensuremath{\mathcal{M}}}}}\big) = & N_0 -N_1+ N_\Box+N_\Delta+\#\{\mbox{\rm boundary components}\}.\end{aligned}$$ Combination with (\[eq:polygee\]) gives $$\begin{aligned} \label{eq:topolov} N_0 = & \, 2-2g\big(\widetilde{{{\ensuremath{\mathcal{M}}}}}\big)+N_\Box+{\textstyle \frac12}\,N_\Delta +{\textstyle \frac12}\,N_1^\partial-\#\{\mbox{\rm boundary components}\}.\end{aligned}$$ The claimed formula follows. For the rational $G^1$ polygonal surfaces made up of triangles only, a formula equivalent to (\[eq:dimforme\]) is formulated in [@raimundas Theorem 6.4.6]. The proof in [@raimundas] has two errors. Firstly, the restrictions of Theorem \[cond:comp2\] are not taken into account. The formula is thereby wrongly claimed for the surfaces with bad crossing vertices. Secondly, the degree bounds in [@raimundas Lemma 6.4.5] for strong and complete separation of vertices do not ensure existence of offset splines. There is a typo $t \leftrightarrow z$ twice on line 8 in the proof of [@raimundas Lemma 6.4.5], and the splines $g_3,h_3$, $\tilde{g}_3,\tilde{h}_3$ might thoroughly vanish at [*both*]{} end-vertices. Theorem 6.4.7 in [@raimundas] matches Lemma \[lm:lbound\] here, but misses the bound $k{\geqslant}2$. The numbers $d_1,d_2$ in [@raimundas §6.4] equal our $d_1({{\ensuremath{\mathcal{E}}}})-1$, $d_2({{\ensuremath{\mathcal{E}}}})-1$, respectively. The crossing vertices are called [*particular vertices*]{} in [@raimundas]. Polynomials on rectangles an triangles {#sec:polyfaces} -------------------------------------- To present our examples more efficiently, we recall the classical Bernstein-Bézier bases of polynomial functions on rectangles and triangles. Besides, we prove that the formula in (\[eq:dimform\]) is a lower bound $\dim S^1_k({{\ensuremath{\mathcal{M}}}})$ when $k{\geqslant}2$. For any three non-collinear points $X,Y,Z\in{{\ensuremath{\mathbb{R}}}}^2$, let $u_{XY}^Z$ denote the linear function that evaluates to $0$ at $X,Y$, and evaluates to $1$ at $Z$. For any polygon with the edges $XY$, $XZ$, the functions $u_{XY}^Z$, $u_{XZ}^Y$ form standard coordinates attached $XY$ or $XZ$ by Definition \[def:stcoor\]. The functions $u_{XY}^Z$, $u_{XZ}^Y$, $u_{YZ}^X$ are the [*barycentric coordinates*]{} [@Wiki] on the triangle . They are positive inside the triangle, and satisfy $$\label{eq:baryc} u_{XY}^Z+u_{XZ}^Y+u_{YZ}^X=1.$$ For a rectangle , we have $u_{XY}^Z+u_{ZW}^X=1$ and $u_{XW}^Y+u_{YZ}^X=1$. We refer to the function pairs $(u_{XY}^Z,u_{ZW}^X)$, $(u_{XW}^Y,u_{YZ}^X)$ as the [*tensor product coordinates*]{} of the rectangle. Let $(u,v,w)$ denote the barycentric coordinates of a triangle. A polynomial function of degree ${\leqslant}k$ can be expressed as a homogeneous polynomial in $u,v,w$ of degree $k$ by homogenizing $1=u+v+w$. 
It is customary in CAGD to express the polynomial functions of degree ${\leqslant}k$ in the [*Bernstein-Bézier basis*]{}: $$\label{eq:bbb} \frac{k!}{i!j!(k-i-j)!} \, u^i \, v^j \, w^{k-i-j}, \qquad 0{\leqslant}i+j {\leqslant}k.$$ The coefficients in a linear expression in this basis are called [*control coefficients*]{}. In particular, the control coefficients of the constant function 1 are all equal The [*interior control coefficients*]{} are to the terms with all $i,j,k-i-j$ positive. The “corner" terms (with $i{\leqslant}1, j{\leqslant}1$) of a Bernstein-Bézier expression $$c_0 \, w^k+c_1\,kuw^{k-1}+c_2\,kvw^{k-1}+c_3\,k(k-1)uvw^{k-2}+\ldots$$ determine the $J^{1,1}$-jet at the vertex $u=0$, $v=0$: $$c_0+k(c_1-c_0)u+k(c_2-c_0)v+k(k-1)(c_3-c_2-c_1+c_0)uv+\ldots$$ Let $(u,\tilde{u})$, $(v,\tilde{v})$ denote the tensor product coordinates of a rectangle. A polynomial in ${{\ensuremath{\mathbb{R}}}}[u,v]$ of degree $k$ in $u$ and of degree $\ell$ in $v$ has [*bidegree*]{} $(k,\ell)$. The polynomial functions of bidegree $(k,\ell)$ can be expressed in the [*tensor product Bernstein-Bézier basis*]{} $$\label{eq:tpbbb} {k\choose i} {\ell\choose j}\, u^i \,\tilde{u}^{k-i} \, v^j \, \tilde{v}^{\ell-j}, \qquad 0{\leqslant}i{\leqslant}k, \quad 0{\leqslant}j{\leqslant}\ell.$$ The [*interior control coefficients*]{} are to the terms with all $i,k-i,j,\ell-j$ positive. Similarly, the “corner" terms $$c_0 \, \tilde{u}^k\tilde{v}^\ell+c_1\,ku\tilde{u}^{k-1}\tilde{v}^\ell +c_2\,\ell v\tilde{u}^k\tilde{v}^{\ell-1}+c_3\,k\ell uv\tilde{u}^{k-1}\tilde{v}^{\ell-1}+\ldots$$ determine the $J^{1,1}$-jet at the vertex $u=0$, $v=0$: $$c_0+k(c_1-c_0)u+\ell(c_2-c_0)v+k\ell(c_3-c_2-c_1+c_0)uv+\ldots$$ \[lm:lbound\] If $k{\geqslant}2$, the dimension of $S^1_k({{\ensuremath{\mathcal{M}}}})$ is greater than or equal to the value on the right-hand side of $(\ref{eq:dimform})$. Suppose first that $k{\geqslant}4$. For each interior edge ${{\ensuremath{\mathcal{E}}}}=\tau_1\sim\tau_2$, the map $W_0\oplus W_1$ of Lemma \[lm:separate\] restricted to $M^1_k(\tau_1,\tau_2)$ might have smaller image than of the general dimension $10-e_{\perp}({{\ensuremath{\mathcal{E}}}})$. Let $m_{{\ensuremath{\mathcal{E}}}}{\geqslant}0$ denote this (possible) dimension deficit. The dimension of splines in [[*(E1)*]{}]{}, lifted from the direct sum of all $M(P)$ over the polynomial vertices $P$, is at least the sum of all expressions (\[eq:E1spline\]) minus $\sum m_{{\ensuremath{\mathcal{E}}}}$. The dimension of splines contributed by ${{\ensuremath{\mathcal{E}}}}$ in [[*(D3)*]{}]{}, [[*(E2)*]{}]{} equals (\[eq:E2spline\]) plus $m_{{\ensuremath{\mathcal{E}}}}$. Summing up the dimensions as in Lemma \[lm:dimsep\] with the adjustments by $\pm m_{{\ensuremath{\mathcal{E}}}}$ can only underestimate the dimension of the spline space. The adjustments $\pm m_{{\ensuremath{\mathcal{E}}}}$ cancel out, like the numbers $e_\perp({{\ensuremath{\mathcal{E}}}})$. After applying Corollary \[cor:m1dim\] we get the claim for $k{\geqslant}4$. If $k=3$, we have an overestimate by $N_\Delta$ in [[*(E4)*]{}]{}. For each triangle we have 3 expressions for the unique interior control coefficient in terms of the $J^{1,1}$-coefficients in $M(X)$, $M(Y)$ or $M(Z)$. The $\sum m_{{\ensuremath{\mathcal{E}}}}$ new relations give 3 equalities relating these 3 expressions, and only 2 of those equalities are independent. Hence the overestimate by $N_\Delta$ is cancelled by the dependencies among the $\sum m_{{\ensuremath{\mathcal{E}}}}$ new relations. 
If $k=2$, we have a similar cancelation of the overestimate by $N_\Box$, and it remains to cancel $3N_\Delta$. For a quadratic polynomial on a triangle , each of the control coefficients to $uv$, $uw$, $vw$ has 3 expressions in terms of the $J^{1,1}$-coefficients in $M(X)$, $M(Y)$ or $M(Z)$. The $\sum m_{{\ensuremath{\mathcal{E}}}}$ relations give 9 equalities between those expressions, only 6 of them are independent similarly. Hence the overestimate by $3N_\Delta$ is cancelled as well. Consider a triangulation of a polygonal region $\widetilde{{\Omega}}$ in ${{\ensuremath{\mathbb{R}}}}^2$, and a $G^1$ polygonal surface ${{\ensuremath{\mathcal{R}}}}$ defined by the parametric continuity on the whole $\widetilde{{\Omega}}$. (See Remark \[rm:parametric\].) We have thus $\delta({{\ensuremath{\mathcal{E}}}})=0$, $d_1({{\ensuremath{\mathcal{E}}}})=d_2({{\ensuremath{\mathcal{E}}}})=1$ for each interior edge ${{\ensuremath{\mathcal{E}}}}$. For $k{\geqslant}5$, Theorem \[th:gdimform\] gives $$\begin{aligned} \dim S^1_k({{\ensuremath{\mathcal{R}}}})= 3+N_0^++(k-1) \,N_1^\partial+\frac{(k-1)(k-2)}{2} \, N_\Delta.\end{aligned}$$ This result was proved in [@ms] with an explicit basis congruous to [[*(E1)*]{}]{}–[[*(E4)*]{}]{}. The formula holds for $k=4$ as well [@alf4-1]. Lemma \[lm:lbound\] for this case is proved in [@schumaker79]. The statement of Lemma \[lm:lbound\] does not hold if $k=1$ and crossing vertices are present, since $\dim S^1_1({{\ensuremath{\mathcal{R}}}})=3$. See also [@raimundas Example 6.26], [@bil]. \[ex:torus\] The torus example in [@raimundas Example 6.31] glues two triangles using parametric continuity. The triangles can be glued first to a quadrangle, then to a torus (by parallel translations or the well-known identification [@cltopology] of the opposite edges). The dimension formula is $\dim S^1_k=(k-1)(k-2)$ for $k{\geqslant}5$. But $\dim S^1_4=7$, showing that the bound $d_2+4$ in Theorem \[th:dimform\] can be sharp. \[ex:platonic\] For the octahedral example in [@VidunasAMS] (and Example \[eq:octahed\] here) the dimension formula $\dim S^1_k=4(k-1)(k-2)$ holds for $k{\geqslant}3$. Similarly, [@raimundas Example 6.27] shows that a tetrahedral construction with 4 triangles glued by a linear $\delta({{\ensuremath{\mathcal{E}}}})=1$ data into a topological sphere has $\dim S^1_k=2(k-1)(k-2)$ if . For a similar Platonic [@PetersPlat] cubical construction with 6 rectangles, the dimension formula is $\dim S^1_k=6(k-1)^2$ if . In these examples, all interior control coefficients can be chosen freely, and each choice leads to a unique spline. On the other hand, a Platonic construction of icosahedron with $20$ triangles and all $30$ edges glued linearly has the dimension formula $\dim S^1_k=10k^2-30k-4$ for $k{\geqslant}6$. The example of a pruned octahedron {#sec:poctah} ================================== This extensive example demonstrates the full work and technical details of building a workable $G^1$ polygonal surface and computing a basis of splines of bounded degree. This example can be broadly read without acquittance with the material between §\[sec:serverts\]–§\[sec:generate\] and §\[eq:spdim\]–§\[sec:dformula\]. 
$$\begin{picture}(170,155)(-10,-3) \thicklines \put(0,150){\line(1,-1){150}} \put(0,0){\line(1,0){150}} \put(0,150){\line(1,0){150}} \put(0,0){\line(0,1){150}} \put(150,0){\line(0,1){150}} \put(0,0){\line(1,2){50}} \put(0,0){\line(2,1){100}} \put(100,50){\line(1,2){50}} \put(50,100){\line(2,1){100}} \put(-11,-5){$A$} \put(152,146){$C$} \put(-11,147){$B$} \put(151,-5){$D$} \put(47,105){$E$} \put(105,48){$F$} \end{picture}$$ Figure \[fig:aocta\] depicts a [Schlegel diagram]{} [@Wiki] of a convex polyhedron in ${{\ensuremath{\mathbb{R}}}}^3$, with 6 triangular facets and one rectangle “at the back" of the picture. We define a closed $G^1$ polygonal surface ${{\ensuremath{\mathcal{H}}}}$ from the same six triangles , , , , , and the rectangle by keeping the same incidence relations. Topologically, ${{\ensuremath{\mathcal{H}}}}$ is a sphere. If we append an edge and split the rectangle into two triangles and , we get the octahedron (combinatorially) as in [@VidunasAMS] and Example \[eq:octahed\]. For an edge from Figure \[fig:aocta\], let $u_X^Y$ denote the linear function on the edge with the values $u_X^Y(X)=0$, $u_X^Y(Y)=1$. Following the notation of §\[sec:polyfaces\], we have $$u_A^B=u_{AE}^B\big|_{AB}=u_{AD}^B\big|_{AB}, $$ etc. We refer to the gluing data along the edge by $({{\ensuremath{\beta}}}_{XY}(u_X^Y),{{\ensuremath{\gamma}}}_{XY}(u_X^Y))$. With the Schlegel diagram in mind, denotes both interior edge of ${{\ensuremath{\mathcal{H}}}}$ and the two polygonal edges. Similarly, we have coinciding denotation for each vertex of ${{\ensuremath{\mathcal{H}}}}$ and the corresponding polygonal vertices. We adjust the notation in (\[eq:wv\])–(\[eq:wev\]), (\[eq:s1comp\]), (\[eq:w0\]) accordingly. The gluing data {#sec:gdata} --------------- With the topological and combinatorial structure decided, we first choose the structure of tangent sectors as in Figure \[fig:g1v\], as prescribed in §\[rm:gconstr\]. We select Hahn’s symmetric data (\[eq:hsym\]) of Example \[ex:hahnsym\]. Therefore $E,F,A,C$ are crossing vertices with these relations between directional derivatives at them: $$\begin{aligned} \label{eq:horder4} \partial_{EA}+\partial_{EC}=0, \qquad \partial_{EB}+\partial_{EF}=0, \nonumber \\ \partial_{FA}+\partial_{FC}=0, \qquad \partial_{FD}+\partial_{FE}=0, \nonumber \\ \partial_{AB}+\partial_{AF}=0, \qquad \partial_{AE}+\partial_{AD}=0,\\ \partial_{CB}+\partial_{CF}=0, \qquad \partial_{CE}+\partial_{CD}=0. \nonumber\end{aligned}$$ At the vertices of valency 3 we have $$\label{eq:horder3} \partial_{BA}+\partial_{BC}+\partial_{BE}=0, \qquad \partial_{DA}+\partial_{DC}+\partial_{DF}=0.$$ Next we choose the glueing data along the edges of ${{\ensuremath{\mathcal{H}}}}$. To glue the triangles and along $EF$, we interpolate the following relations between the derivatives: $$\label{eq:dinterp1} \partial_{EA}+\partial_{EC}=0, \qquad \partial_{EA}+\partial_{EC}=2\,\partial_{EF}.$$ The second expression here is actually $\partial_{FA}+\partial_{FC}=0$ rewritten following (\[eq:reders\]): $$\label{eq:dinterp2} \partial_{FA}=\partial_{EA}-\partial_{EF}, \qquad \partial_{FC}=\partial_{EC}-\partial_{EF}.$$ We choose the linear interpolation of the tangent relations in (\[eq:dinterp1\]): $$\label{eq:croscros} \partial_{EA}+\partial_{EC}=2u_E^F\,\partial_{EF}.$$ Thereby we set up ${{\ensuremath{\beta}}}_{EF}(u_E^F)=2u_E^F$, ${{\ensuremath{\gamma}}}_{EF}(u_E^F)=-1$, as in Example \[ex:joining\]. The edges $EA$, $EC$, $FA$, $FC$ connect crossing vertices just as $EF$. 
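To see the same interpolation recipe on an adjacent edge, consider $EA$: the relation at $E$ is $\partial_{EB}+\partial_{EF}=0$, while the relation $\partial_{AB}+\partial_{AF}=0$ at $A$ is rewritten, by the same rule as (\[eq:dinterp2\]), through $\partial_{AB}=\partial_{EB}-\partial_{EA}$, $\partial_{AF}=\partial_{EF}-\partial_{EA}$ into $\partial_{EB}+\partial_{EF}=2\,\partial_{EA}$; the linear interpolation of these two relations is the gluing data displayed next.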
We take the linear gluing data $$\begin{aligned} \partial_{EB}+\partial_{EF}=2u_E^A\,\partial_{EA}, \qquad \partial_{EB}+\partial_{EF}=2u_E^C\,\partial_{EC}, \\ \partial_{FD}+\partial_{FE}=2u_F^A\,\partial_{FA}, \qquad \partial_{FD}+\partial_{FE}=2u_F^C\,\partial_{FC}. \nonumber\end{aligned}$$ The new conditions (\[eq:ddv1\])–(\[eq:ddv2\]) are satisfied across , and , with the derivatives ${{\ensuremath{\beta}}}'=2$ and all constant ${{\ensuremath{\gamma}}}=-1$. As only triangles are glued, the balancing condition of Definition \[def:balance\] holds, just as in the linear Hahn gluing of rectangles in [@Peters2010]. Remarkably, the linear gluing data along $AB$, $AD$, $CB$, $CD$ looks the same: $$\begin{aligned} \label{eq:glue34} \partial_{AE}+\partial_{AD}=2u_A^B\,\partial_{AB}, \qquad \partial_{AF}+\partial_{AB}=2u_A^D\,\partial_{AD}, \nonumber \\ \partial_{CE}+\partial_{CD}=2u_C^B\,\partial_{CB}, \qquad \partial_{CF}+\partial_{CB}=2u_C^D\,\partial_{CD},\end{aligned}$$ because $\partial_{BA}+\partial_{BC}+\partial_{BE}=0$ is rewritten to $\partial_{AE}+\partial_{AD}=2\,\partial_{AB}$ by following (\[eq:reders\]) and (\[eq:rectuv\]): $$\partial_{BA}=-\partial_{AB}, \quad \partial_{BC}=\partial_{AD}, \quad \partial_{BE}=\partial_{AE}-\partial_{AB},$$ etc. The linear gluing data satisfies (\[eq:ddv1\])–(\[eq:ddv2\]), but the balancing condition does not apply across $A$ and $C$. For gluing the triangles and along the edge , we interpolate these relations between directional derivatives: $$\partial_{EA}+\partial_{EC}=0, \qquad \partial_{EA}+\partial_{EC}=3\,\partial_{EB}.$$ The latter relation is $\partial_{BA}+\partial_{BC}+\partial_{BE}=0$ rewritten using $$\partial_{BA}=\partial_{EA}-\partial_{EB}, \quad \partial_{BC}=\partial_{EC}-\partial_{EB}, \quad \partial_{BE}=-\partial_{EB}.$$ The balancing condition across , is not satisfied, though only triangles are involved. To satisfy (\[eq:ddv1\])–(\[eq:ddv2\]) we take the quadratic interpolation $$\label{eq:crossing3} \partial_{EA}+\partial_{EC}=\left( 2u_E^B+(u_E^B)^2\right)\,\partial_{EB}.$$ Similarly, for gluing along we choose $$\label{eq:crossing3a} \partial_{FA}+\partial_{FC}=\left( 2u_F^D+(u_F^D)^2\right)\,\partial_{FD}.$$ Alternative gluing data for the edges , is briefly discussed in §\[sec:abe\]. We constructed full $G^1$ gluing data on ${{\ensuremath{\mathcal{H}}}}$ without reference to coordinate systems, only using directional derivatives and the functions $u_X^Y$ on edges. If barycentric and tensor product coordinates are locally used, and polynomial functions are presented in the Bernstein-Bézier bases (\[eq:bbb\]), (\[eq:tpbbb\]), then the directional derivatives are expressed as $$\partial_{AB}=\frac{\partial}{\partial u_{AE}^B}-\frac{\partial}{\partial u_{BE}^A} =\frac{\partial}{\partial u_{AD}^B}-\frac{\partial}{\partial u_{BC}^A}, \qquad \mbox{etc.}$$ The splines around $EF$, $EA$, $EC$, $F\!A$, $FC$ {#sec:ef} ------------------------------------------------- We seek a spline basis for $S^1_k({{\ensuremath{\mathcal{H}}}})$ with $k=4$ or slightly larger. The degree convention is given in §\[sec:boxdelta\]. We start by constructing the $M^1$-spaces of §\[eq:edgespace\] for the 5 edges connecting the crossing vertices. The formulas are stated for the edge $EF$. To have them for the edges $EA$, $EC$, $F\!A$, $FC$, replace the labels $$\begin{aligned} (E,F,A,C)\mapsto (E,A,B,F), (E,C,B,F), (F,A,E,D) \mbox{ or } (F,C,E,D). \end{aligned}$$ Let $M^1(EF)$ denote the corresponding space $M^1(\tau_1,\tau_2)$ in (\[eq:s1comp\]).
It consists of the polynomial pairs (\[eq:splines2\]) with $u=u_{EA}^F$ or $u=u_{EC}^F$, $v_1=u_{EF}^{A}$, $v_2=u_{EF}^{C}$, and such that $(h_1(u),h'_0(u),h_2(u))$ is a syzygy of $(1,-2u,1)$. The ${{\ensuremath{\mathbb{R}}}}[u]$-module of syzygies is freely generated by $(1,0,-1)$, $(2u,1,0)$. The space of syzygies of degree ${\leqslant}k$ has dimension $2k+1$. It corresponds to the splines in $M^1(EF)$ of degree ${\leqslant}k+1$. The dimension of this spline space is $2k+2$, as we include the constant splines. Therefore $\dim M^1_k(EF)=2k$. We keep the notation $M(E)$, $M(F)$ of (\[eq:wv\]) for the spaces of $J^{1,1}$-jets of polynomial functions on , and let $M'(E)$, $M'(F)$ denote the similar spaces for polynomials on . Since $E$, $F$ are crossing vertices, the spline jets in $M'(E)$ are determined by $M(E)$ by Lemma \[prop:Hg\], and the jets in $M'(F)$ are determined by $M(F)$. Consider the $W_0$ maps in (\[eq:w0\]), denoting them $$W_{\!EF}:M^1(EF)\to M(E)\oplus M'(E),\quad W_{\!F\!E}:M^1(EF)\to M(F)\oplus M'(F).$$ We use the variables $u,v$ for the jet spaces, with $v$ identified with $u_{EF}^{A}$ or $u_{EF}^{C}$. The explicit action of $W_{\!EF}$ on the spline defined by $(h_0(u),h_1(u),h_2(u))$ in (\[eq:splines2\]) is then $$\begin{aligned} \label{eq:wef} \big( & h_0(0)+h'_0(0)\,u+h_1(0)\,v+h'_1(0)\,uv, \\ & h_0(0)+h'_0(0)\,u-h_1(0)\,v+(-h'_1(0)+2h'_0(0))\,uv\big). \nonumber\end{aligned}$$ Following the derivative transformation (\[eq:dinterp2\]), $W_{\!F\!E}$ sends the same spline to $$\begin{aligned} \label{eq:wef2} \big( & h_0(1)-h'_0(1)\,u+(h_1(1)-h'_0(1))\,v+(-h'_1(1)+h''_0(1))\,uv, \\ & h_0(1)-h'_0(1)\,u+(h'_0(1)-h_1(1))\,v+(h'_1(1)-h''_0(1)-2h'_0(1))\,uv\big). \quad \nonumber\end{aligned}$$ The dual variable transformation for the endpoint symmetry is more cumbersome than the jet relations: we should transform $u\mapsto 1-u-v$ and then reduce mod $v^2$. The map $W_{\!EF}\oplus W_{\!F\!E}$ as in Lemma \[lm:separate\] has an image of dimension $8$. Since , there is a chance that the splines in $M_4^1(EF)$ map isomorphically to this image or to . This turns out to be the case. Here are the splines of degree 4 that have exactly one non-zero coefficient in $M(E)\oplus M(F)$. They are presented in terms of the polynomial triples $(h_0(u),h_1(u),h_2(u))$ in (\[eq:splines2\]), with the common factor brought forward: $$\begin{aligned} \label{eq:us} U_1=& \,(1-u)\cdot \left( (1-u)(1+2u), -6u^2, -6u^2 \right), & U_5=& \, u^2 \cdot \big( 3-2u,6(1-u), 6(1-u) \big), \qquad \nonumber \\ U_2=& \, u \, (1-u)\cdot \big( 1-u,-2u, 2(1-2u) \big), & U_6=& \, u^2 \cdot \left( 1-u, 1-2u, 3-4u \right),\nonumber \\ U_3=& \, (1-u)^2\,(1+2u) \cdot \left( 0,1, -1 \right), & U_7=& \, u^2\,(3-2u) \cdot \left( 0, 1, -1 \right), \\ U_4=& \, u \, (1-u)^2 \cdot \left( 0,1, -1 \right), & U_8=& \, u^2\,(1-u) \cdot \left( 0, 1, -1 \right). \nonumber \end{aligned}$$ The splines $U_1,U_5$ evaluate to $1$ at $E$ or $F$ (respectively), and all their relevant derivatives $\partial/\partial u$, $\partial/\partial v$, $\partial^2/\partial u\partial v$ at both $E$, $F$ are zero. Similarly, each of the other splines evaluates exactly one of those derivatives at $E$ or $F$ (of the restriction to ) to $1$. The polynomials $h_0,h_1,h_2$ have degree ${\leqslant}3$, but $h_1,h_2$ are multiplied by $u_{EF}^A$ or $u_{EF}^C$, giving degree $4$. Note that the splines $U_2,U_6$ do not reflect the symmetry between the triangles and .
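For concreteness, take $U_2$: there $h_0=u\,(1-u)^2$, $h_1=-2u^2(1-u)$, $h_2=2u\,(1-u)(1-2u)$, so that $h_0(0)=0$, $h'_0(0)=1$ and $h_1(0)=h'_1(0)=0$; formula (\[eq:wef\]) then gives the pair of $J^{1,1}$-jets $\big(u,\; u+2uv\big)$ at $E$. The analogous computation with (\[eq:wef2\]) gives the same pair for $U_6$ at $F$.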
By the construction, they have a derivative $\partial/\partial u$ along $EF$ equal to 1, and their $uv$ terms (representing a mixed derivative $\partial^2/\partial u\partial v$) in $M(E)$ or $M(F)$ are zero. But their $uv$ terms in $M'(E)$ or $M'(F)$ are non-zero, reflecting the specific relation of $J^{1,1}$-jets around crossing vertices. The splines $U_2+U_4$ and $U_6+U_8$ have symmetric specializations to and , but their $uv$ term in $M(E)$ or $M(F)$ is non-zero as well. $$\begin{aligned} \begin{picture}(100,24)(-4,-3) \put(2,0){\line(1,0){96}} \put(2,0){\vector(1,1){16}} \put(2,0){\vector(1,-1){16}} \put(98,0){\vector(-1,1){16}} \put(98,0){\vector(-1,-1){16}} \put(100,-3){$F$} \put(21,13){$C$} \put(72,13){$C$} \put(-8,-3){$E$} \put(20,-19){$A$} \put(72,-19){$A$} \put(143,22){$U_1:$} \put(282,22){$U_2:$} \put(2,-45){$U_3:$} \put(143,-45){$U_4:$} \put(282,-45){$U_5:$} \put(2,-113){$U_6:$} \put(143,-113){$U_7:$} \put(282,-113){$U_8:$} \end{picture} \hspace{54pt} & \; \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c} & 1 && 1 && 0 && 0 \\ \bf 1 && \bf 1 && \bf \frac12 && \bf 0 && \bf 0 \\ & 1 && 1 && 0 && 0 \end{array} & \hspace{32pt} \begin{array}{c@{\;}c@{\;}c@{\,}c@{\,}c@{\;}c@{\;}c@{\;}c@{\;}c} & 0 && \frac{5}{12} && 0 && 0 \\ \bf 0 && \bf \frac14 && \bf \frac16 && \bf 0 && \bf 0 \\ & 0 && \frac14 && 0 && 0 \end{array} \\[28pt] \begin{array}{c@{}c@{}c@{}c@{\,}c@{\;}c@{\;}c@{\;}c@{\;}c} & -\frac14 && -\frac14 && 0 && 0 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & \frac14 && \frac14 && 0 && 0 \end{array} \hspace{54pt} & \, \begin{array}{c@{\;}c@{\;}c@{}c@{}c@{\;}c@{\;}c@{\;}c@{\;}c} & 0 && \!-\frac{1}{12} && 0 && 0 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & 0 && \frac1{12} && 0 && 0 \end{array} & \hspace{32pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c} & 0 && 0 && 1 && 1 \\ \bf 0 && \bf 0 && \bf \frac12 && \bf 1 && \bf 1 \\ & 0 && 0 && 1 && 1 \end{array} \; \\[28pt] \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\,}c@{\,}c@{\;}c@{\;}c} & 0 && 0 && \frac5{12} && 0 \\ \bf 0 && \bf 0 && \bf \frac16 && \bf \frac14 && \bf 0 \\ & 0 && 0 && \frac14 && 0 \end{array} \hspace{54pt} & \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{}c@{}c@{}c@{\,}c} & 0 && 0 && -\frac14 && -\frac14 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & 0 && 0 && \frac14 && \frac14 \end{array} & \hspace{32pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{}c@{}c@{\;}c@{\;}c} & 0 && 0 && \!-\frac{1}{12} && 0 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & 0 && 0 && \frac{1}{12} && 0 \end{array} \end{aligned}$$ Figure \[fig:splinef\] represents the 8 splines in $M^1(EF)$ in the Bernstein-Bézier basis. The middle row (in the bold font) of each array displays the control coefficients of the restriction to the common edge. The restriction to is defined by the two bottom rows, while the restriction is defined by the two upper rows. The transition to the Bernstein-Bézier basis involves homogenization with (\[eq:baryc\]) and ignoring the terms divisible by $(u_{EF}^A)^2$ or $(u_{EF}^C)^2$. The splines around $AB$, $AD$, $CB$, $CD$ {#sec:ab} ----------------------------------------- The gluing data (\[eq:glue34\]) on , , , leads to the same syzygy module ${{\ensuremath{\mathcal{Z}}}}(1,-2u,1)$ as in §\[sec:ef\], generated by the same syzygies $(1,0,-1)$, $(2u,1,0)$. However, these 4 edges join a triangle with a rectangle. The splines of degree ${\leqslant}k$ correspond to the syzygies $(A,B,C)$ with $\deg A{\leqslant}k-1$, $\deg B{\leqslant}k-1$, $\deg C{\leqslant}k$. 
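The asymmetry in these degree bounds comes from the two sides of the edge: $A=h_1$ multiplies $v_1$ in the restriction to the triangle, which has total degree ${\leqslant}k$, whereas $C=h_2$ multiplies $v_2$ on the rectangle side, where degree $k$ in $u$ is still admissible; the bound on $B=h'_0$ comes from $\deg h_0{\leqslant}k$.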
For example, the syzygy $(0,u^3,2u^4)$ gives a spline of degree $4$. Besides, the vertices $B$, $D$ are not crossing vertices, so there are 5 degrees of freedom around them by Lemma \[lm:edgew\]. Like in the previous section, we give explicit formulas for one edge . For the other edges, replace the labels $$\begin{aligned} (A,B,E,D)\mapsto (A,D,F,B), \ (C,B,E,D) \mbox{ or } (C,D,F,B).\end{aligned}$$ Similarly to §\[sec:ef\], let $M^1(AB)$ denote the corresponding space in (\[eq:s1comp\]). The splines in $M^1(AB)$ correspond to the polynomial pairs (\[eq:splines2\]) with $u=u_{AE}^B$ or $u=u_{AD}^B$, $v_1=u_{AB}^{E}$, $v_2=u_{AB}^{D}$, such that $(h_1(u),h'_0(u),h_2(u))\in {{\ensuremath{\mathcal{Z}}}}(1,-2u,1)$. The spaces $M^1(AB)$ and $M^1(EF)$ coincide in terms of the polynomial triples $(h_0(u),h_1(u),h_2(u))$, but the degree grading is different. $$\begin{aligned} \\[-12pt] \begin{picture}(108,25)(-7,-3) \put(4,0){\line(1,0){92}} \put(4,0){\vector(0,1){22}} \put(4,0){\vector(1,-1){16}} \put(96,0){\vector(0,1){22}} \put(96,0){\vector(-1,-1){16}} \put(98,-3){$B$} \put(8,17){$D$} \put(85,17){$C$} \put(-7,-3){$A$} \put(22,-19){$E$} \put(70,-19){$E$} \put(149,23){$\widetilde{U}_0:$} \put(298,23){$\widetilde{U}_1:$} \put(0,-48){$\widetilde{U}_2:$} \put(143,-48){$\widetilde{U}_3:$} \put(292,-48){$\widetilde{U}_4:$} \put(-12,-118){$\widetilde{U}_5:$} \put(87,-118){$\widetilde{U}_6:$} \put(198,-118){$\widetilde{U}_7:$} \put(312,-118){$\widetilde{U}_8:$} \end{picture} \hspace{49pt} & \hspace{6pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{}c@{}c@{\;}c@{\;}c} 0 && 0 && -\frac{5}{24} && \frac1{16} && 0 \\[2pt] \bf 0 && \bf 0 && \bf \!-\frac{1}{12} && \bf 0 && \bf 0 \\ & 0 && 0 && 0 && 0 \end{array} & \hspace{20pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{\,}c@{\;}c@{\;}c@{\;}c} 1 && 1 && -1 && 0 && 0\\ \bf 1 && \bf 1 && \bf 0 && \bf 0 && \bf 0 \\ & 1 && 1 && 0 && 0 \end{array} \hspace{6pt} \\[28pt] \begin{array}{c@{\;}c@{\;}c@{\,}c@{}c@{\,}c@{\;}c@{\;}c@{\;}c} 0 && \frac{3}{8} && -\frac14 && 0 && 0 \\[2pt] \bf 0 && \bf \frac14 && \bf 0 && \bf 0 && \bf 0 \\ & 0 && \frac14 && 0 && 0 \end{array} \hspace{58pt} & \begin{array}{c@{\,}c@{}c@{\,}c@{}c@{\,}c@{\;}c@{\;}c@{\;}c} \!-\frac14 && -\frac14 && -\frac18 && 0 && 0 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & \frac14 && \frac14 && 0 && 0 \end{array} & \hspace{20pt} \begin{array}{c@{\;}c@{}c@{}c@{}c@{\,}c@{\;}c@{\;}c@{\;}c} 0 && \!-\frac{1}{16} && \!\!-\frac{1}{24} && 0 && 0 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & 0 && \frac1{12} && 0 && 0 \end{array} \\[29pt] \hspace{-4pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c} 0 && 0 && 2 && 1 && 1 \\ \bf 0 && \bf 0 && \bf 1 && \bf 1 && \bf 1 \\ & 0 && 0 && 1 && 1 \end{array} \hspace{80pt} & \hspace{-59pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\,}c@{}c@{\;}c@{\;}c@{}c} 0 && 0 && \frac{37}{24} && 0 && \!-\frac14 \\[2pt] \bf 0 && \bf 0 && \bf \frac23 && \bf \frac14 && \bf 0 \\ & 0 && 0 && \frac14 && 0 \end{array} \hspace{23pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{\,}c@{}c@{\,}c@{}c} 0 && 0 && \!-\frac18 && \!-\frac14 && \!-\frac14 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & 0 && 0 && \frac14 && \frac14 \end{array} \hspace{-28pt} & \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{}c@{}c@{\;}c@{\;}c} 0 && 0 && -\frac{1}{4} && 0 && 0 \\[2pt] \bf 0 && \bf 0 && \bf \!-\frac{1}{12}\! 
&& \bf 0 && \bf 0 \\ & 0 && 0 && \frac{1}{12} && 0 \end{array} \hspace{-10pt}\end{aligned}$$ We keep the notation $M(A)$, $M(B)$ for the spaces of $J^{1,1}$-jets at the triangle corners, and let $M'(A)$, $M'(B)$ denote the spaces of $J^{1,1}$-jets at the rectangle corners. Following (\[eq:w0\]), the $W_0$-map $W_{\!AB}:M^1(AB)\to M(A)\oplus M'(A)$ acts as in (\[eq:wef\]), but $W_{\!BA}:M^1(AB)\to M(B)\oplus M'(B)$ acts by $$\begin{aligned} \big( & h_0(1)-h'_0(1)\,u+(h_1(1)-h'_0(1))\,v+(-h'_1(1)+h''_0(1))\,uv, \quad \nonumber \\ & h_0(1)-h'_0(1)\,u+(2h'_0(1)-h_1(1))\,v-h'_2(1)\,uv\big). \end{aligned}$$ The dimension of ${\mathrm{im\ }}W_{\!BA}$ equals 5, with $h'_2(1)$ as an extra degree of freedom. Note that the coefficients to the terms $-h'_0(1)\,u$, $(h_1(1)-h'_0(1))\,v$, $(2h'_0(1)-h_1(1))\,v$ sum up to zero, reflecting (\[eq:horder3\]). The splines in $M_k^1(AB)$ are generated by the polynomial multiples of degree ${\leqslant}k-1$ of the syzygies $(1,0,-1)$, $(0,1,2u)$. Together with the constant splines, $\dim M_k^1(AB)=2k+1$. In particular, $\dim M_4^1(AB)=9$, which is exactly the combined dimension of ${\mathrm{im\ }}W_{AB}$ and ${\mathrm{im\ }}W_{BA}$. The expressions in (\[eq:us\]) define splines in $M_4^1(AB)$ just as in $M_4^1(EF)$. Additionally, we have the spline $$\begin{aligned} \label{eq:f9} \widetilde{U}_0 = u^2(1-u) \cdot \left( -{\textstyle\frac12\,}(1-u),1,-3+4u \right)\end{aligned}$$ in terms of (\[eq:splines2\]). As a spline in $M^1(EF)$ it has degree $5$; it is annihilated by $W_{\!EF}$, $W_{\!F\!E}$, but $W_{\!B\!A\,}\widetilde{U}_0=(0,uv)$. The splines in (\[eq:us\]) have to be adjusted by $\widetilde{U}_0$ to have $f'_2(1)=0$. Here is the “diagonal" spline basis for $M_4^1(AB)$: $$\begin{aligned} \label{eq:sbasis2} &\widetilde{U}_0, \quad \widetilde{U}_1=U_1+6\widetilde{U}_0, \quad \widetilde{U}_2=U_2+2\widetilde{U}_0, \quad \widetilde{U}_3=U_3, \quad \widetilde{U}_4=U_4, \\ & \widetilde{U}_5=U_5-6\widetilde{U}_0, \quad \widetilde{U}_6=U_6-U_8-6\widetilde{U}_0, \quad \widetilde{U}_7=U_7,\quad \widetilde{U}_8=U_8+\widetilde{U}_0. \qquad \nonumber\end{aligned}$$ Figure \[fig:splinab\] presents these splines in $M^1(AB)$ as arrays of relevant Bernstein-Bézier coefficients.
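As an illustration of the displayed map $W_{\!B\!A}$: for $\widetilde{U}_0$ we have $h_0(1)=h'_0(1)=h_1(1)=0$ and $-h'_1(1)+h''_0(1)=1-1=0$, so all entries vanish except the last one; since $h_2=u^2(1-u)(4u-3)$ gives $h'_2(1)=-1$, we get $W_{\!B\!A\,}\widetilde{U}_0=(0,uv)$ as stated. For $U_1$ one has $h_2=-6u^2(1-u)$ and $h'_2(1)=6$, so the adjusted spline $\widetilde{U}_1=U_1+6\widetilde{U}_0$ has zero $uv$-coefficient in $M'(B)$.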
$$\begin{aligned} \\[-10pt] \begin{picture}(113,20)(-7,-3) \put(2,0){\line(1,0){94}} \put(2,0){\vector(1,1){16}} \put(2,0){\vector(1,-1){16}} \put(96,0){\vector(-1,1){16}} \put(96,0){\vector(-1,-1){16}} \put(95,-9){$B$} \put(21,13){$C$} \put(70,13){$C$} \put(-7,-3){$E$} \put(20,-19){$A$} \put(70,-19){$A$} \put(130,21){$V_1:$} \put(273,21){$V_2:$} \end{picture} & \hspace{27pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{}c@{\,}c@{\;}c@{\;}c@{\;}c@{\;}c} & 1 && 1 && \frac15 && \!-\frac25 && 0 && 0 \\ \bf 1 && \bf 1 && \bf \frac35 && \bf \frac15 && \bf 0 && \bf 0 && \bf 0 \\ & 1 && 1 && \frac35 && \frac15 && 0 && 0 \end{array} & \hspace{14pt} \begin{array}{c@{\;}c@{\;}c@{\,}c@{}c@{}c@{}c@{}c@{}c@{\;}c@{\;}c@{\;}c@{\;}c} & 0 && \frac{7}{30} && \frac1{20} && \!-\!\frac2{15} && 0 && 0 \\ \bf 0 && \bf \frac16 && \bf \frac2{15} && \bf \frac1{20} && \bf 0 && \bf 0 && \bf 0 \\ & 0 && \frac16 && \frac1{6} && \frac1{12} && 0 && 0 \end{array} \\[28pt] \begin{picture}(113,20)(-7,-3) \put(2,0){\line(1,0){94}} \put(2,0){\vector(1,1){16}} \put(2,0){\vector(1,-1){16}} \put(96,0){\vector(-1,1){16}} \put(96,0){\vector(-1,-1){16}} \put(95,-9){$D$} \put(21,13){$C$} \put(70,13){$C$} \put(-7,-2){$F$} \put(20,-19){$A$} \put(70,-19){$A$} \put(130,21){$V_5:$} \put(273,21){$V_6:$} \put(36,-46){$V_8:$} \put(215,-46){$V_9:$} \end{picture} & \hspace{29pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c} & 0 && 0 && \frac45 && \frac75 && 1 && 1 \\ \bf 0 && \bf 0 && \bf \frac25 && \bf \frac45 && \bf 1 && \bf 1 && \bf 1 \\ & 0 && 0 && \frac25 && \frac45 && 1 && 1 \end{array} & \hspace{14pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\,}c@{}c@{}c@{\,}c@{\,}c@{\;}c@{}c@{\,}c} & 0 && 0 && \frac{11}{12} && \,\frac{6}{5} && 0 && \!-\frac16 \\ \bf 0 && \bf 0 && \bf \frac13 && \bf \frac{11}{20} && \bf \frac{7}{15} && \bf \frac16 && \bf 0 \\ & 0 && 0 && \frac1{12} && \,\frac{1}{6} &&\frac16 && 0 \end{array} \\[28pt] \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{}c@{}c@{}c@{}c@{\,}c@{\,}c@{\;}c@{\;}c} & 0 && 0 && \!\!-\frac{1}{10} && \!\!-\frac{2}{15} && 0 && 0 \\ \bf 0 && \bf 0 && \bf \!-\frac{1}{30}\! && \bf \!\!-\frac{1}{20}\! && \bf \!\!-\frac{1}{30} && \bf 0 && \bf 0 \\ & 0 && 0 && 0 && \frac{1}{60} && \frac{1}{30} && 0 \end{array} \hspace{-86pt} & \hspace{115pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{}c@{}c@{}c@{}c@{}c@{\,}c@{\;}c@{\;}c} & 0 && 0 && \!\!-\frac{1}{12} && \!\!-\frac{1}{10} && \frac{1}{30} && 0 \\ \bf 0 && \bf 0 && \bf \!-\frac{1}{30} && \bf \!\!-\frac{1}{20} && \bf \!\!-\frac{1}{30} && \bf 0 && \bf 0 \\ & 0 && 0 && \!\!-\frac{1}{60} && \!\!-\frac{1}{60} && 0 && 0 \end{array} \hspace{-197pt}\end{aligned}$$ The splines around $EB$ and $FD$ {#sec:be} -------------------------------- The gluing data (\[eq:crossing3\]) on the edges , leads to the syzygy module . The spline spaces $M^1(EB)$, $M^1(FD)$ are defined as in (\[eq:mzz\]). The vertex evaluation maps $W_{EB}\oplus W_{BE}$ and $W_{FD}\oplus W_{DF}$ are defined as in (\[eq:wef\]), (\[eq:wef2\]) in terms of (\[eq:splines2\]). The dimension of their images equals $9$. The syzygy module is generated by $(1,0,-1)$, $(2u+u^2,1,0)$. The dimension of splines of degree ${\leqslant}4$ splines is 7, not enough to cover the 9 degrees of freedom. 
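Indeed, a spline of degree ${\leqslant}4$ corresponds to a syzygy $a(u)\,(1,0,-1)+b(u)\,(2u+u^2,1,0)$ of degree ${\leqslant}3$, that is, with $\deg a{\leqslant}3$ and $\deg b{\leqslant}1$; together with the constant splines this gives $4+2+1=7$ dimensions.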
In particular, there are these relations for the $J^{1,1}$-jets of $M_4^1(EB)$ or $M_4^1(FD)$: $$\begin{aligned} \label{eq:quarel} 2h_0(0)+h_0'(0) & = \, 2h_0(1)-h_0'(1), \\ \label{eq:quarel2} 2h_0'(0)-6h_0'(1) & = \, (h_0''(1)-h_1'(1))+(h_0''(1)-h_2'(1)).\end{aligned}$$ The first relation is a restriction on the edge control coefficients. The dimension of degree ${\leqslant}5$ splines is 9, but the spline $$\label{eq:trivzero} (h_0(u),h_1(u),h_2(u)) = u^2(1-u)^2\cdot (0,1,-1)$$ thoroughly vanishes at the end-vertices. There is still a restriction on the jets of $M_5^1(EB)$ or $M_5^1(FD)$, which is actually $6\times(\ref{eq:quarel})$ minus $(\ref{eq:quarel2})$. Splines of degree 6 are needed to completely separate the vertices. Among the 9 splines of degree 6 evaluating the 9 degrees of freedom on the triangles , “diagonally" like in (\[eq:us\]), three splines can be of actual degree 4 and have the same Bernstein-Bézier representations as $U_3$, $U_4$, $U_7$ in Figure \[fig:splinef\]. Let us call these splines $V_3$, $V_4$, $V_7$, respectively. As an example, here is the representation of $V_3$ as an array of degree 6 Bernstein-Bézier coefficients: $$\begin{array}{c@{}c@{\,}c@{}c@{\,}c@{}c@{}c@{}c@{}c@{\;}c@{\;}c@{\;}c@{\;}c} & -\frac16 && -\frac16 && \!-\frac7{60} && \!-\frac1{20} && 0 && 0 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & \frac16 && \frac16 && \frac7{60} && \frac1{20} && 0 && 0 \end{array}$$ The middle coefficients $\pm 7/60$, $\pm 1/20$ can be nullified after combination with (\[eq:trivzero\]) and other thoroughly vanishing splines. Further, 6 splines evaluating the other 6 degrees of freedom individually to $1$ are presented in Figure \[fig:splinfd\], as similar arrays of Bernstein-Bézier coefficients of degree 6. They can be modified by the same two polynomial multiples of $(0,1,-1)$. The presented splines have the property that the restriction to or has degree $4$.
Similarly, the spline [[*(b)*]{}]{} is build up from $U_2+U_4$, $U_3+U_4$, $-\widetilde{U}_2-\widetilde{U}_4$, $\widetilde{U}_3+\widetilde{U}_4$, and the spline [[*(d)*]{}]{} is build up from copies of $U_4$, $\widetilde{U}_4$. In spline [[*(d)*]{}]{}, we can modify the $0$ entry next to the two $\frac1{24}$ entries to $\frac1{36}$, so to lower the degree of the specialization to the rectangle. $$\begin{aligned} \begin{picture}(64,24)(-30,-4) \put(-32,0){\vector(1,0){64}} \put(-32,0){\vector(-1,0){0}} \put(0,-32){\vector(0,1){64}} \put(0,-32){\vector(0,-1){0}} \put(25,4){$F$} \put(3,26){$E$} \put(-32,4){$B$} \put(3,-32){$D$} \put(99,26){{{\it (a)}}} \put(255,26){{{\it (b)}}} \put(25,-56){{{\it (c)}}} \put(174,-56){{{\it (d)}}} \end{picture} && \hspace{-32pt} \begin{array}{ccccc} && \vdots \\[1pt] & 1 & \bf 1 & 1 & \\[1pt] \cdots & \bf 1 & \bf 1 & \bf 1 & \cdots \\[1pt] & 1 & \bf 1 & 1 & \\[-2pt] && \vdots \\ \end{array} & & \hspace{-32pt} \begin{array}{ccccc} && \vdots \\[1pt] & \frac13 & \bf \frac14 & \frac13 & \\[2pt] \cdots & \bf 0 & \bf 0 & \bf 0 & \cdots \\[2pt] & \!\!\!-\frac5{16}\!\! & \bf \!\!-\frac14\!\! & \!\!-\frac13\!\! & \\[0pt] && \vdots \\ \end{array} \\[2pt] & \hspace{-12pt} \begin{array}{ccccc} && \vdots \\ & \!\!-\frac13\!\! & \bf 0 & \frac13 & \\[2pt] \cdots\! & \bf \!\!-\frac14\!\! & \bf 0 & \bf \frac14 & \cdots \\[2pt] & \!\!\!-\frac5{16}\!\! & \bf 0 & \frac13 & \\ && \vdots \\ \end{array} & & \hspace{-18pt} \begin{array}{ccccc} && \vdots \\ & \!\!\!-\frac1{12}\!\! & \bf 0 & \frac1{12} & \\[1pt] \cdots & \bf 0 & \bf 0 & \bf 0 & \cdots \\[1pt] & \frac1{16} & \bf 0& \!\!\!-\frac1{12}\!\! & \\[2pt] && \vdots \\ \end{array}\end{aligned}$$ $$\begin{aligned} \\[-6pt] \begin{picture}(158,50)(-30,-3) \put(0,0){\line(1,0){100}} \put(0,0){\line(1,1){50}} \put(0,0){\line(0,-1){50}} \put(0,-50){\line(1,0){50}} \put(50,-50){\line(0,1){100}} \put(50,-50){\line(1,1){50}} \put(50,50){\line(1,-1){50}} \put(27,-11){$A/C$} \put(101,-4){$F$} \put(58,46){$E$} \put(-8,2){$B$} \put(56,-52){$D$} \put(195,47){{{\it (a)}}} \put(-24,-92){{{\it (b)}}} \put(192,-92){{{\it (d)}}} \end{picture} & \hspace{64pt} \begin{array}{ccccccccc} &&&& \bf 0 \\ &&& 0 & \bf 0 & 0 \\[1pt] && 0 & 0 & \bf \frac12 & 0 & 0 \\[1pt] & 0 & 0 & 1 & \bf 1 & 1 & 0 & 0 \\[1pt] \bf 0 & \bf 0& \bf 0 & \bf 1 & \bf 1 & \bf 1 & \bf \frac12 & \bf 0 & \bf 0 \\[1pt] 0 & 0 & \!\!-1\!\! & 1 & \bf 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & \!\!-1\!\! & \bf 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \bf 0 & 0 \\ 0 & 0 & 0 &0 & \bf 0 \\ \end{array} \\[20pt] \begin{array}{ccccccccc} &&&& \bf 0 \\ &&& 0 & \bf 0 & 0 \\[1pt] && 0 & 0 & \bf \frac16 & 0 & 0 \\[2pt] & 0 & 0 & \frac13 & \bf \frac14 & \frac13 & 0 & 0 \\[1pt] \bf 0 & \bf 0& \bf 0 & \bf 0 & \bf 0 & \bf 0 & \bf 0 & \bf 0 & \bf 0 \\[1pt] 0 & 0 & \!\!-\frac16\!\! & \!\!\!-\frac5{16}\!\! & \bf \!\!-\frac14\!\! & \!\!-\frac13\!\! & 0 & 0 \\[2pt] 0 & 0 & 0 & \frac{7}{24} & \bf 0 & 0 & 0 \\[1pt] 0 & 0 & 0 & 0 & \bf 0 & 0 \\ 0 & 0 & 0 &0 & \bf 0 \\ \end{array} & \hspace{60pt} \begin{array}{ccccccccc} &&&& \bf 0 \\ &&& 0 & \bf 0 & 0 \\ && 0& 0 & \bf 0 & 0 & 0 \\[1pt] & 0 & 0 & \!\!\!-\frac1{12}\!\! & \bf 0 & \frac1{12} & 0 & 0 \\[1pt] \bf 0 & \bf 0 & \bf 0& \bf 0 & \bf 0 & \bf 0 & \bf 0 & \bf 0 & \bf 0 \\[1pt] 0 & 0 & \!\frac{1}{24}\! & \frac1{16} & \bf 0 & \!\!-\frac1{12}\!\!\! & 0 & 0 \\[2pt] 0 & 0 & 0 & \frac1{24} & \bf 0 & 0 & 0 \\[1pt] 0 & 0 & 0 & 0 & \bf 0 & 0 \\ 0 & 0 & 0 &0 & \bf 0 \\ \end{array} \end{aligned}$$ The degrees of freedom around $E$, $F$ are similar. 
Their lifts to splines on ${{\ensuremath{\mathcal{H}}}}$ are presented in Figure \[fig:splinf\]. The lifted splines have degree 4 or 6; corresponding arrays of Bernstein-Bézier coefficients are depicted. The splines are build from copies of the $M^1$-splines in Figures \[fig:splinef\] and \[fig:splinfd\], with the degree adjusted and the interior coefficients around edges symmetrized using $M^1$-splines like $u^3(1-u)^2\cdot(0,1,-1)$. In particular, the spline [[*(d)*]{}]{} has more zero control coefficients around the vertices $A$, $C$. The degrees of freedom around $B$, $D$ are presented in Figure \[fig:freedb\], each in the form of 3 blended $2\times 2$ arrays of corner control coefficients of degree 6 polynomial functions on the incident polygons. The 3 standard mixed derivatives are totally free. Lifts to splines on ${{\ensuremath{\mathcal{H}}}}$ of degree 4 or 6 are presented in Figure \[fig:splinb\]. An independent spline corresponding to Figure \[fig:freedb\][[*(f)*]{}]{} is not depicted; one can take a mirror image of Figure \[fig:splinb\][[*(d)*]{}]{} for it. The variation of the Bernstein-Bézier coefficients of these splines to negative values is remarkable. The splines in Figure \[fig:splinb\] are build from copies of the $M^1$-splines in Figures \[fig:splinab\] and \[fig:splinfd\]. The adjusting spline $10u^3(1-u)^2\cdot(0,1,-1)$ along a rectangle edge looks as follows in the Bernstein-Bézier form: $$\begin{array}{c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{\;}c@{{\hspace{6pt}}}c@{\;}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c} 0 && 0 && 0 && \! -\frac12 \! && \! -\frac23 \! && 0 && 0 \\ \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 \\ & 0 && 0 && 0 && 1 && 0 && 0 \end{array}$$ $$\begin{aligned} \\[-7pt] \begin{picture}(86,60)(-4,-40) \put(2,-36){\line(1,1){72}} \put(2,-36){\line(1,0){72}} \put(2,-36){\line(0,1){72}} \put(2,36){\line(1,-1){72}} \put(2,36){\line(1,0){72}} \put(74,-36){\line(0,1){72}} \put(45,-3){$E$} \put(77,-37){$F$} \put(77,31){$C$} \put(-9,31){$B$} \put(-8,-39){$A$} \put(142,25){{{\it (a)}}} \put(283,25){{{\it (b)}}} \end{picture} \hspace{63pt} \begin{array}{c@{{\hspace{7pt}}}c@{\;}c@{\,}c@{\,}c@{\;} c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c} \bf 0 && 0 && 0 && 0 && \bf 0 \\ & \bf 0 && 0 && 0 && \bf 0 \\ 0 && \bf 0 && \frac13 && \bf \frac16 && 0 \\ & 0 && \bf 0 && \bf \frac14 && 0 \\ 0 && -\frac13 && \bf 0 && \frac13 && 0 \\ & 0 && \bf \!-\frac14 && \bf 0 && 0 \\ 0 && \bf \!-\frac16 && \!-\frac13 && \bf 0 && 0 \\ & \bf 0 && 0 && 0 && \bf 0 \\ \bf 0 && 0 && 0 && 0 && \bf 0 \\ \end{array} \hspace{-78pt} & \hspace{96pt} \begin{array}{c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{}c@{\,}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c} \bf 0 && 0 && 0 && 0 && \bf 0 \\ & \bf 0 && 0 && 0 && \bf 0 \\ 0 && \bf 0 && -\frac1{12} && \bf 0 && 0 \\ & 0 && \bf 0 && \bf 0 && 0 \\ 0 && \frac1{12}\!\! && \bf 0 && \frac1{12}\!\! 
&& 0 \\ & 0 && \bf 0 && \bf 0 && 0 \\ 0 && \bf 0 && \!-\frac1{12} && \bf 0 && 0 \\ & \bf 0 && 0 && 0 && \bf 0 \\ \bf 0 && 0 && 0 && 0 && \bf 0 \\ \end{array} \\[-74pt] \begin{picture}(86,60)(-4,0) \put(2,-36){\line(1,1){72}} \put(2,-36){\line(1,0){72}} \put(2,-36){\line(0,1){72}} \put(2,36){\line(1,-1){72}} \put(2,36){\line(1,0){72}} \put(74,-36){\line(0,1){72}} \put(45,-3){$F$} \put(77,-37){$E$} \put(76,33){$C$} \put(-9,30){$D$} \put(-9,-37){$A$} \put(-49,-46){{{\it (c)}}} \put(175,-46){{{\it (d)}}} \end{picture} \hspace{68pt} \\[46pt] \hspace{6pt} \begin{array}{c@{{\hspace{7pt}}}c@{\,}c@{\,}c@{\,}c@{\,}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c} \bf 0 && 0 && 0 && 0 && 0 && 0 && \bf 0 \\ & \bf 0 && 0 && 0 && 0 && 0 && \bf 0 \\ 0 && \bf 0 && -\frac1{10} && 0 && \frac3{10} && \bf \frac15 && 0 \\ & 0 && \bf \frac15 && \frac25 && \frac7{10} && \bf \frac12 && 0 \\ 0 && -\frac1{10} && \bf \frac35 && 1 && \bf \frac45 && \frac3{10} && 0 \\ & 0 && \frac25 && \bf 1 && \bf 1 && \frac7{10} && 0 \\ 0 && 0 && 1 && \bf 1 && 1 && 0 && 0 \\ & 0 && \frac7{10} && \bf 1 && \bf 1 && \frac7{10} && 0 \\ 0 && \frac3{10} && \bf \frac45 && 1 && \bf \frac45 && \frac3{10} && 0 \\ & 0 && \bf \frac12 && \frac7{10} && \frac7{10} && \bf \frac12 && 0 \\ 0 && \bf \frac15 && \frac3{10} && 0 && \frac3{10} && \bf \frac15 && 0 \\ & \bf 0 && 0 && 0 && 0 && 0 && \bf 0 \\ \bf 0 && 0 && 0 && 0 && 0 && 0 && \bf 0 \end{array} \hspace{-6pt} & \hspace{28pt} \begin{array}{c@{{\hspace{7pt}}}c@{\,}c@{\,}c@{}c@{\,}c@{}c@{}c@{}c@{}c@{}c@{\,}c@{{\hspace{7pt}}}c} \bf 0 && 0 && 0 && 0 && 0 && 0 && \bf 0 \\ & \bf 0 && 0 && 0 && 0 && 0 && \bf 0 \\ 0 && \bf 0 && \!-\frac1{40}\! && 0 && 0 && \!\bf 0\; && 0 \\ & 0 && \bf \frac1{20} && \!\frac{13}{120}\! && 0 && \,\bf 0 && 0 \\ 0 && \!-\frac1{40} && \bf \frac2{15} && \frac15 && \bf\;0 && 0 && 0 \\ & 0 && \!\frac{13}{120} && \bf \frac16 && \bf \,0 && 0 && 0 \\ 0 && 0 && \frac15 && \bf 0 && \!-\frac15 && 0 && 0 \\ & 0 && 0 && \bf 0\; && \bf \!\!-\frac16\! && -\frac15 && 0 \\ 0 && 0 && \bf \,0 && \!-\frac15 && \bf \!-\frac15 && -\frac1{10} && 0 \\ & 0 && \bf 0 && 0 && -\frac1{5} && \bf \!\!-\frac3{20}\! && 0 \\ 0 && \bf \,0 && 0 && 0 && -\frac1{10} && \bf \!\!-\frac1{15} && 0 \\ & \bf 0 && 0 && 0 && 0 && 0 && \bf 0 \\ \bf 0 && 0 && 0 && 0 && 0 && 0 && \bf 0 \end{array} \hspace{-2pt}\end{aligned}$$ $$\begin{aligned} \begin{picture}(44,38)(-40,-4) \put(0,0){\vector(-1,0){40}} \put(0,0){\vector(1,2){18}} \put(0,0){\vector(1,-2){18}} \put(4,29){$A$} \put(-40,4){$E/F$} \put(3,-36){$C$} \put(61,24){{{\it (a)}}} \put(167,24){{{\it (b)}}} \put(279,24){{{\it (c)}}} \put(-5,-63){{{\it (d)}}} \put(108,-63){{{\it (e)}}} \put(217,-63){{{\it (f)}}} \end{picture} && \hspace{-38pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\,}c} &&&&& \!\iddots \\ && 1 && \bf 1 \\[1pt] \cdots & \bf 1 && \bf 1 && 1 \\[1pt] && 1 && \bf 1 \\[-3pt] &&&&& \ddots \end{array} & & \hspace{-32pt} \begin{array}{c@{\;}c@{}c@{\,}c@{}c@{}c} &&&&& \!\iddots \\ && \frac16 && \bf \frac16 \\[1pt] \cdots\; & \bf 0 && \bf 0 && \! 0 \\[1pt] && -\frac16 && \bf -\frac16 \\[-2pt] &&&&& \ddots \end{array} & & \hspace{-28pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\,}c@{\,}c} &&&&& \!\iddots \\ && 0 && \bf 0 \\[1pt] \cdots\; & \bf 0 && \bf 0 && \! \frac1{36} \\[1pt] && 0 && \bf 0 \\[-3pt] &&&&& \ddots \end{array} \\[10pt] & \hspace{-12pt} \begin{array}{c@{\;}c@{\,}c@{\,}c@{}c@{}c} &&&&& \!\iddots \\ && \frac1{12} && \bf -\frac1{12} \\[1pt] \cdots\; & \bf \frac16 && \bf 0 && \! 
-\frac16 \\[1pt] && \frac1{12} && \bf -\frac1{12} \\[-2pt] &&&&& \ddots \end{array} & & \hspace{-24pt} \begin{array}{c@{\;}c@{\,}c@{\,}c@{\,}c@{\,}c} &&&&& \!\iddots \\ && \frac1{30} && \bf 0 \\[1pt] \cdots\; & \bf 0 && \bf 0 && \! 0 \\[1pt] && 0 && \bf 0 \\[-4pt] &&&&& \ddots \end{array} & & \hspace{-30pt} \begin{array}{c@{\;}c@{\,}c@{\,}c@{\,}c@{\,}c} &&&&& \!\iddots \\ && 0 && \bf 0 \\[1pt] \cdots\; & \bf 0 && \bf 0 && \! 0 \\[1pt] && \frac1{30} && \bf 0 \\[-4pt] &&&&& \ddots \\ \end{array}\end{aligned}$$ $$\begin{aligned} \begin{picture}(156,20)(0,-32) \put(0,0){\line(1,0){40}} \put(40,0){\line(1,2){20}} \put(40,0){\line(1,-2){20}} \put(0,0){\line(3,2){60}} \put(0,0){\line(3,-2){60}} \put(60,-40){\line(1,2){20}} \put(60,40){\line(1,-2){20}} \put(44,-2){$B$} \put(63,-43){$C$} \put(64,36){$A$} \put(-8,3){$E$} \put(82,-3){$D$} \put(153,30){{{\it (a)}}} \end{picture} & \hspace{-32pt} \begin{array}{c@{{\hspace{6pt}}}c@{}c@{{\hspace{6pt}}}c@{}c@{{\hspace{6pt}}}c@{\,} c@{{\hspace{6pt}}}c@{\;} c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c} &&&&&&&&&&&&&&&&& \bf 0 \\ &&&&&&&&&&&&&& 0 && \bf 0 && 0 \\[-6pt] &&&&&&&&&&& 0 && 0 && \bf \frac25 && 0 && \ddots \\ &&&&&&&& 0 && 0 && \frac25 && \bf \frac45 && \frac23 && 0 \\[-6pt] &&&&& 0 && 0 && 0 && \frac45 && \bf 1 && \frac65 && 0 && \ddots \\ &&0 && 0 && \frac35 && \frac{11}{10} && 1 && \bf 1 && \frac{19}{15} && 0 && 0 \\ \bf0&\bf 0 &&\bf\frac25 && \bf\frac45 && \bf 1 && \bf 1 && \bf 1 && 1 && 0 && 0 && 0 \\ && 0 && 0 && \frac35 && \frac{11}{10} && 1 && \bf 1 && \frac{19}{15} && 0 && 0 \\[-6pt] &&&&& 0 && 0 && 0 && \frac45 && \bf 1 && \frac65 && 0 && \!\iddots \\ &&&&&&&& 0 && 0 && \frac25 && \bf \frac45 && \frac23 && 0 \\[-6pt] &&&&&&&&&&& 0 && 0 && \bf \frac25 && 0 && \!\iddots \\ &&&&&&&&&&&&&& 0 && \bf 0 && 0 \\ &&&&&&&&&&&&&&&&& \bf 0 \end{array} \\[22pt] \begin{picture}(156,20)(0,-32) \put(0,0){\line(1,0){40}} \put(40,0){\line(1,2){20}} \put(40,0){\line(1,-2){20}} \put(0,0){\line(3,2){60}} \put(0,0){\line(3,-2){60}} \put(60,-40){\line(1,2){20}} \put(60,40){\line(1,-2){20}} \put(44,-3){$D$} \put(63,-43){$C$} \put(64,36){$A$} \put(-8,2){$F$} \put(82,-3){$B$} \put(153,-14){{{\it (b)}}} \put(-10,-116){{{\it (c)}}} \put(181,-190){{{\it (d)}}} \put(-10,-287){{{\it (e)}}} \end{picture} \\[-46pt] & \hspace{-42pt} \begin{array}{c@{{\hspace{6pt}}}c@{}c@{{\hspace{6pt}}}c@{}c@{{\hspace{6pt}}}c@{\,} c@{{\hspace{6pt}}}c@{\;} c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{}c@{}c@{\,}c@{}c@{}c@{}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c} &&&&&&&&&&&&&&&&& \bf 0 \\ &&&&&&&&&&&&&& 0 && \bf 0 && 0 \\[-6pt] &&&&&&&&&&& 0 && 0 && \bf -\frac2{15} && 0 && \ddots \\ &&&&&&&& 0 && 0 && -\frac1{24} && \bf -\frac9{40} && -\frac{17}{60} && 0 \\[-6pt] &&&&& 0 && 0 && 0 && -\frac1{60} && \bf -\frac15 && -\frac7{15} && 0 && \ddots \\ &&0 && 0 && \frac12 && \frac{41}{60} && \frac1{12} && \bf -\frac1{12} && -\frac25 && 0 && 0 \\ \bf0&\bf 0 &&\bf\frac13 && \bf\frac{11}{20} && \bf \frac{7}{15} && \bf \frac16 && \bf 0 && -\frac16 && 0 && 0 && 0 \\ &&0 && 0 && \frac12 && \frac{41}{60} && \frac1{12} && \bf -\frac1{12} && -\frac25 && 0 && 0 \\[-6pt] &&&&& 0 && 0 && 0 && -\frac1{60} && \bf -\frac15 && -\frac7{15} && 0 && \!\iddots \\ &&&&&&&& 0 && 0 && -\frac1{24} && \bf -\frac9{40} && -\frac{17}{60} && 0 \\[-6pt] &&&&&&&&&&& 0 && 0 && \bf -\frac2{15} && 0 && \!\iddots \\ &&&&&&&&&&&&&& 0 && \bf 0 && 0 \\ &&&&&&&&&&&&&&&&& 
\bf 0 \end{array} \\[-50pt] \begin{array}{c@{{\hspace{7pt}}}c@{}c@{\,}c@{\,}c@{\;}c@{\,}c@{\,}c@{\;}c@{\!}c@{\!}c@{\,}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c} &&&&&&&&&&& \bf 0 \\ &&&&&&&& 0 && \bf 0 && 0 \\ &&&&& 0 && 0 && \bf \frac23 && 0 && 0 \\ && 0 && 0 && \frac14 && \bf \frac14 && \frac{37}{24} && 0 && 0 \\ \bf 0 & \bf 0 && \bf 0 && \bf 0 && \bf 0 && 0 && 0 && 0 && 0 \\ && 0 && 0 && \!\!-\frac14 && \bf \!-\frac14 && \!-\frac{37}{24} && 0 && 0 \\ &&&&& 0 && 0 && \bf \!-\frac23 && 0 && 0 \\ &&&&&&&& 0 && \bf 0 && 0 \\ &&&&&&&&&&& \bf 0 \\ \end{array} \; \\[-50pt] & \hspace{-54pt} \begin{array}{c@{{\hspace{6pt}}}c@{}c@{}c@{}c@{}c@{\!}c@{\!}c@{}c@{\;}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{\,}c@{}c@{}c@{}c@{}c@{\;}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c} &&&&&&&&&&&&&&&&& \bf 0 \\ &&&&&&&&&&&&&& 0 && \bf 0 && 0 \\[-6pt] &&&&&&&&&&& 0 && 0 && \bf \!-\frac1{30} && 0 && \ddots \\ &&&&&&&& 0 && 0 && -\frac1{15} && \bf \!-\frac1{20} && -\frac{1}{30} && 0 \\[-6pt] &&&&& 0 && 0 && 0 && 0 && \bf \!\!-\frac1{30} && -\frac3{40} && 0 && \ddots \\ &&0 && 0 && 0 && \!-\frac{1}{20} && \frac1{30} && \bf 0 && -\frac1{15} && 0 && 0 \\ \bf0&\bf 0 &&\bf -\frac1{30} && \bf-\frac{1}{20} && \bf \!-\frac{1}{30} && \bf 0 && \bf 0 && 0 && 0 && 0 && 0 \\ &&0 && 0 && \!-\frac1{10} && \!-\frac{1}{15} && 0 && \bf 0 && 0 && 0 && 0 \\[-6pt] &&&&& 0 && 0 && 0 && 0 && \bf 0 && 0 && 0 && \!\iddots \\ &&&&&&&& 0 && 0 && 0 && \bf 0 && 0 && 0 \\[-6pt] &&&&&&&&&&& 0 && 0 && \bf 0 && 0 && \!\iddots \\ &&&&&&&&&&&&&& 0 && \bf 0 && 0 \\ &&&&&&&&&&&&&&&&& \bf 0 \end{array} \\[-53pt] \begin{array}{c@{{\hspace{7pt}}}c@{}c@{\,}c@{\,}c@{\;}c@{\,}c@{\,}c@{\;}c@{\!}c@{\!}c@{\,}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c@{{\hspace{6pt}}}c} &&&&&&&&&&& \bf 0 \\ &&&&&&&& \,0\! && \bf 0 && 0 \\ &&&&& 0 && 0 && \bf \!-\frac1{12} && 0 && 0 \\ && 0 && 0 && 0 && \bf 0 && \!-\frac{5}{24} && 0 && 0 \\ \bf 0\! & \bf 0 && \bf 0 && \bf 0 && \bf 0 && \frac1{16} && 0 && 0 && 0 \\ && 0 && 0 && 0 && \bf 0 && \!-\frac{5}{24} && 0 && 0 \\ &&&&& 0 && 0 && \bf \!-\frac1{12} && 0 && 0 \\ &&&&&&&& \,0\! && \bf 0 && 0 \\ &&&&&&&&&&& \bf 0 \\ \end{array} \quad\end{aligned}$$ The spline basis {#sec:exbasis} ---------------- Now we can count the dimension of $S^1_k({{\ensuremath{\mathcal{H}}}})$ and describe bases of the spline spaces. We have 4 degrees of freedom around each crossing vertex as in Figure \[fig:freeda\], and 6 degrees of freedom around $B$ and $D$ as in Figure \[fig:freedb\]. The total of $4\cdot 4+2\times 6=28$ degrees of freedom is realized by splines of degree ${\leqslant}6$. Up to the symmetries of the polygonal surface, these splines are depicted in Figures \[fig:splina\], \[fig:splinf\], \[fig:splinb\]. These are the splines characterized in [[*(E1)*]{}]{}. The dimension of $M_k^1$ spaces in §\[sec:ef\], §\[sec:ab\], §\[sec:be\] equals $2k,2k+1,2k-1$, respectively. The kernels of $W_0\oplus W_1$-maps of Lemma \[lm:separate\] are subspaces of splines thoroughly vanishing at the vertices. They have the dimension $2k-8,2k-8,2k-10$, respectively. These splines lift to $S^1_k({{\ensuremath{\mathcal{H}}}})$ by assigning zero specializations in all other $M^1$-spaces and on the non-incident polygons, giving us $22(k-4)-4$ independent splines in $S^1_k({{\ensuremath{\mathcal{H}}}})$ for $k{\geqslant}6$. These are the splines characterized in [[*(E2)*]{}]{}. 
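Explicitly, the five edges of §\[sec:ef\] contribute $5\,(2k-8)$ such splines, the four edges of §\[sec:ab\] contribute $4\,(2k-8)$, and the two edges of §\[sec:be\] contribute $2\,(2k-10)$, so the total is $9(2k-8)+2(2k-10)=22k-92=22(k-4)-4$.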
In particular, for each edge and any $\ell{\geqslant}2$ there is a spline of degree $\ell+3$ that evaluates to $(h_0,h_1,h_2)=u^\ell(1-u)^2\cdot (0,1,-1)$ in the $M^1$-space of that edge, in terms of (\[eq:splines2\]). For $k{\geqslant}5$, this gives $11(k-4)$ independent splines supported on just one edge. The splines corresponding to similar multiples of the syzygies $(2u,1,0)$, $(2u+u^2,1,0)$ can be linearly combined with the offset splines $U_1$, $\widetilde{U}_1$ or $V_1$ (and constant splines) to produce similarly vanishing splines supported only on one edge. Alternatively, one can linearly combine “adjacent" ($\ell\to\ell+1$) syzygy multiples to produce a spline thoroughly vanishing at both vertices. On the edges of §\[sec:ef\] and §\[sec:ab\], polynomial multiples of the syzygy $(2u,1,0)$ give the splines $$\label{eq:esplina} (h_0,h_1,h_2)=u^\ell(1-u)^{m-1}\cdot(1-u,2\ell-2(\ell+m)u,0)$$ with $\ell{\geqslant}2$, $m{\geqslant}3$. Similarly, on and we have the splines $$(h_0,h_1,h_2)=u^\ell(1-u)^{m-1}\cdot(1-u,(2+u)(\ell-\ell u-mu),0).$$ For the edges of §\[sec:ef\], we can adjust (\[eq:esplina\]) to $$(h_0,h_1,h_2)=u^\ell(1-u)^{m-1}\cdot(1-u,2\ell-2\ell u-mu,-mu)$$ and allow $m{\geqslant}2$. Finally, we have $(k-3)^2+6\cdot (k-4)(k-5)/2$ independent splines that have only one non-zero control point, namely an interior control point of a restriction to the rectangle or to some triangle. These splines are characterized in [[*(E4)*]{}]{}. There are no splines characterized in [[*(E3)*]{}]{} as ${{\ensuremath{\mathcal{H}}}}$ has no boundary. In total, we have $$\begin{aligned} \label{eq:gnedim} \dim S^1_k({{\ensuremath{\mathcal{H}}}})= & \, 28+22(k-4)-4+(k-3)^2+3(k-4)(k-5) \nonumber \\ = & \, (2k-3)^2+k-4\end{aligned}$$ for $k{\geqslant}6$. The dimension formulas in §\[sec:dformula\] give the same answer. This formula holds for $k=4$ and $k=5$ as well, giving the dimensions $25$ and $50$, respectively. This can be foreseen in the context of the proof of Lemma \[lm:lbound\]. The additional $\sum m_{{\ensuremath{\mathcal{E}}}}$ relations (4 or 2, respectively) between the $J^{1,1}$-jets at the vertices $B$, $E$, $F$, $D$ are surely unrelated because the edges and are not connected to each other. 
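As a consistency check of (\[eq:gnedim\]), at $k=6$ the first expression evaluates to $28+44-4+9+6=83$, in agreement with $(2\cdot 6-3)^2+6-4=83$; in general both sides expand to $4k^2-11k+5$.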
$$\begin{aligned} \\[-9pt] \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{\!}c@{}c@{\;}c@{\;}c@{\;}c@{\;}c} & 1 && 1 && -\frac2{13} && 0 && 0 \\ \bf 1 && \bf 1 && \bf \frac4{13} && \bf 0 && \bf 0 && \bf 0 \\ & 1 && 1 && -\frac2{13} && 0 && 0 \end{array} \hspace{-4pt} & \hspace{37pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{\!\!}c@{}c@{\;}c@{\;}c@{\;}c@{\;}c} & 0 && \frac14 && -\frac7{195} && 0 && 0 \\ \bf 0 && \bf \frac15 && \bf \frac{14}{195} && \bf 0 && \bf 0 && \bf 0 \\ & 0 && \frac14 && -\frac7{195} && 0 && 0 \end{array} & \hspace{9pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\,}c@{}c@{\,}c@{\;}c@{\;}c@{\;}c@{\;}c} & 0 && 0 && \frac{15}{13} && 1 && 1 \\ \bf 0 && \bf 0 && \bf \frac9{13} && \bf 1 && \bf 1 && \bf 1 \\ & 0 && 0 && \frac{15}{13} && 1 && 1 \end{array} \, \\[30pt] \begin{picture}(0,20)(-10,-3) \put(-10,90){$\widetilde{V}_1:$} \put(130,90){$\widetilde{V}_2:$} \put(280,90){$\widetilde{V}_5:$} \put(-10,23){$\widetilde{V}_6:$} \put(130,23){$\widetilde{V}_8:$} \put(280,23){$\widetilde{V}_9:$} \end{picture} \begin{array}{c@{\;}c@{\;}c@{\;}c@{}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\!}c@{\!}c} & 0 && 0 && 1 && 0 && \!\!-\frac15 \\ \bf 0 && \bf 0 && \bf \frac{27}{40} && \bf \frac{5}{8} && \bf \frac15 && \bf 0 \\ & 0 && 0 && \frac{5}{4} &&\frac15 && 0 \end{array} & \hspace{34pt} \begin{array}{c@{\;}c@{\;}c@{\;}c@{\!}c@{\!}c@{}c@{}c@{\,}c@{\;}c@{\;}c} & 0 && 0 && \!\!-\frac{2}{13} && 0 && 0 \\ \bf 0 && \bf 0 && \bf -\frac{9}{130} \! && \bf \!\!-\frac{1}{20} && \bf 0 && \bf 0 \\ & 0 && 0 && \!\!-\frac{1}{13} && \frac{1}{20} && 0 \end{array} & \hspace{18pt}\begin{array}{c@{\;}c@{\;}c@{\;}c@{\!}c@{\!}c@{}c@{}c@{\,}c@{\;}c@{\;}c} & 0 && 0 && \!\!-\frac{1}{13} && \frac{1}{20} && 0 \\ \bf 0 && \bf 0 && \bf -\frac{9}{130} \! && \bf \!\!-\frac{1}{20} && \bf 0 && \bf 0 \\ & 0 && 0 && \!\!-\frac{2}{13} && 0 && 0 \end{array} \hspace{-8pt}\end{aligned}$$ Alternative splines around $EB$ and $FD$ {#sec:abe} ---------------------------------------- In §\[sec:gdata\] we chose the quadratic gluing data (\[eq:crossing3\])–(\[eq:crossing3a\]) to interpolate between the relations (\[eq:horder4\]) and (\[eq:horder3\]) of gluing around $E,F$ and $B,D$ in such a way that Theorem \[cond:comp2\] holds at the vertices $E$, $F$. An alternative is to replace (\[eq:crossing3\])–(\[eq:crossing3a\]) by the fractional-linear gluing data $$\label{eq:crossing3b} \partial_{EA}+\partial_{EC}=\frac{6u_E^B}{3-u_E^B}\,\partial_{EB}, \qquad \partial_{FA}+\partial_{FC}=\frac{6u_F^D}{3-u_F^D}\,\partial_{FD}.$$ This leads to the syzygy module ${{\ensuremath{\mathcal{Z}}}}(3-u,6u,3-u)$, with the generators $(1,0,-1)$, . With this modification, the dimension of $M_k^1(EB)$, $M_k^1(FD)$ becomes $2k$ for $k{\leqslant}1$. This gives complete separation of vertices in degree 5 already. The splines in Figure \[fig:splinfd\] should be replaced by the splines in Figure \[fig:splinfda\]. The splines $V_3=U_3$, $V_4=U_4$, $V_7=U_7$ of degree 4 can be kept as $\widetilde{V}_3$, $\widetilde{V}_4$, $\widetilde{V}_7$. The spline (\[eq:trivzero\]) is in $M_5^1(EB)$ and $M_5^1(FD)$ as well. The degrees of freedom around $E,F$ and in Figure \[fig:freedb\] are not changed. Modification of the degree 6 splines in Figures \[fig:splinf\], \[fig:splinb\] is left as an exercise. Dimension formula (\[eq:gnedim\]) is adjusted to $(2k-3)^2+k-2$. Conclusions and perspectives {#sec:practical} ============================ In §\[sec:gsplines\] we gave a general definition of $G^1$ [*polygonal surfaces*]{} and $G^1$ functions on them. 
The $G^1$ gluing data is allowed to be specified in terms of transition maps, jet bundles, tangent bundles or transversal vector fields. The new conditions in Theorem \[cond:comp2\] generalizing the balancing condition in [@Peters2010] are incorporated in the definition of $G^1$ polygonal surfaces. We particularly analyze [*rational*]{} $G^1$ polygonal surfaces and $G^1$ polynomial splines on them. The relation of these splines to syzygies and the interdependence of the local $M^1$-splines defined on edges are explored in §\[sec:freedom\]. A structured set of generators for the space $S^1({{\ensuremath{\mathcal{M}}}})$ of splines is presented in §\[sec:generate\]. Section \[sec:boxdelta\] defines the generators more explicitly in the context of $G^1$ surfaces made of rectangles and triangles, and gives the dimension formula for the spaces of splines of bounded degree on these surfaces. The example of §\[sec:poctah\] derives spline generators explicitly, and illustrates the technical details handsomely. The splines are constructed not by solving equations of $G^1$ continuity constraints, but by matching the degrees of freedom at the vertices and local $M^1$-splines along the edges. The practical value of a constructed $G^1$ spline space ${{\ensuremath{\mathcal{S}}}}$ is measured by smoothness and fairness [@Peters2003] of realizations in ${{\ensuremath{\mathbb{R}}}}^3$ by the functions in ${{\ensuremath{\mathcal{S}}}}$. Lemmas \[prop:Hg\] and \[prop:Hgb\] imply that for any vertex of a rational $G^1$ polygonal surface ${{\ensuremath{\mathcal{M}}}}$ there is a realization of ${{\ensuremath{\mathcal{M}}}}$ by splines with a non-zero Jacobian (thus locally smooth) at that vertex. Sections \[sec:serverts\] and \[sec:generate\] imply that the vertices can be [*completely separated*]{} by the spline spaces. Part [[*(iii)*]{}]{} of Lemma \[lm:syzygy\] implies that any edge and its neighborhood on both polygons have a locally smooth realization in ${{\ensuremath{\mathbb{R}}}}^2$ by splines. Globally, the realizations can have self-intersections (unavoidable for some non-orientable surfaces [@cltopology]) that are difficult to control analytically. Locally smooth realization of the whole surface ${{\ensuremath{\mathcal{M}}}}$ could be derived from approximation properties of spline spaces, in the context of approximating a corresponding differential surface realized in ${{\ensuremath{\mathbb{R}}}}^3$. The conclusion is that spaces of polynomial splines on rational $G^1$ surfaces are adequate for smooth realization in ${{\ensuremath{\mathbb{R}}}}^3$. If a corresponding differential surface can be topologically realized in ${{\ensuremath{\mathbb{R}}}}^3$ without a self-intersection, a globally smooth realization should be possible with splines. Curvature fairness is an issue for low degree splines [@Peters2003], [@Greiner94]. Realized surfaces tend to be relatively flat along the edges and especially around the vertices. In the context of rational $G^1$ surfaces, the restrictions to edges are often of lower degree than the splines themselves. For example, this is the case when triangles are glued and $\deg {{\ensuremath{\beta}}}>\deg {{\ensuremath{\gamma}}}$. Gluing data with constant ${{\ensuremath{\gamma}}}(u_1)$ is easy to define and interpolate, but seemingly it generates splines of low quality.
In particular, syzygy generators like $(1,0,-1)$ should be avoided because they do not influence edge parametrization and postpone richer spline spaces to larger degree given fixed $\delta({{\ensuremath{\mathcal{E}}}})$ in (\[eq:delt\]). The syzygy generators of the same degree could be more desirable. Common roots of $a(u_1)$, $c(u_1)$ in (\[eq:edgeabc\]) close to the interval $[0,1]$ likely mean more unwanted variation of the splines. $$\begin{array}{c@{\;}c@{{\hspace{6pt}}}c@{}c@{\,}c@{{\hspace{6pt}}}c@{}c@{\,}c@{{\hspace{6pt}}}c@{}c@{\,}c@{{\hspace{6pt}}}c@{\;}c} &&& \bf \!\ddots && 0 && 0 && \bf \;\,\iddots\!\!\! \\ && 0 && \bf 0 && -1 && \bf 0 && 0 \\ & 0 && 1 && \bf 0 && \bf 0 && 1 && 0 \\ \bf \cdots && \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf 0 && \bf \cdots \\ & 0 && -1 && \bf 0 && \bf 0 && -1 && 0 \\ && 0 && \bf 0 && 1 && \bf 0 && 0 \\[-6pt] &&& \bf \!\iddots && 0 && 0 && \bf \;\,\ddots\!\!\! \\ \end{array} \begin{picture}(60,20)(-10,-3) \put(-158,53){{{\it (a)}}} \put(32,53){{{\it (b)}}} \end{picture} \begin{array}{c@{{\hspace{7pt}}}c@{\,}c@{\;}c@{{\hspace{7pt}}}c@{\,}c@{\;}c@{{\hspace{7pt}}}c@{\,}c@{\,}c@{\;}c@{{\hspace{7pt}}}c@{{\hspace{7pt}}}c} \bf 0 && 0 && 0 && 0 && 0 && 0 && \bf 0 \\[-1pt] & \bf 0 && 0 && 0 && 0 && 0 && \bf 0 \\[-1pt] 0 && \bf 0 && 0 && r && -q && \bf 0&& 0 \\[-1pt] & 0 && \bf 0 &&0 && p && \bf 0 && 0 \\[-1pt] 0 && 0 && \bf 0 && 0 && \bf 0 && q && 0 \\[-1pt] & 0 && 0 && \bf 0 && \bf 0 && -p && 0 \\[-1pt] 0 && r && 0 && \bf 0 && 0 && s && 0 \\[-1pt] & 0 && p && \bf 0 && \bf 0 && 0 && 0 \\[-1pt] 0 && -q && \bf 0 && 0 && \bf 0 && 0 && 0 \\[-1pt] & 0 && \bf 0 && -p && 0 && \bf 0 && 0 \\[-1pt] 0 && \bf 0 && q && s && 0 && \bf 0 && 0 \\[-1pt] & \bf 0 && 0 && 0 && 0 && 0 && \bf 0 \\[-1pt] \bf 0 && 0 && 0 && 0 && 0 && 0 && \bf 0 \end{array}$$ A particular defect of $G^1$ surfaces is a “waving" behavior around vertices of high valency as in [@Peters2010 Figure 5]. These saddle neighborhoods of high order are easily generated by multiple degrees of freedom in part [[*(iii)*]{}]{} of [[*(D1)*]{}]{}. For example, at a vertex of even valency with all surrounding ${{\ensuremath{\gamma}}}_k(u)=-1$ we have a spline whose only non-zero control points are the mixed derivatives alternating between $1$ and $-1$. Figure \[fig:waving\][[*(a)*]{}]{} depicts such a spline around a vertex of valency 6. To allow only simple saddle or convex neighborhoods of $C^2$ continuity at the vertices, one may introduce a special class of polygonal surfaces with mixed geometric continuity, by requiring $G^1$ continuity along the edges and additionally imposing $G^2$ continuity around the vertices. The conditions of §\[sec:g1vertex\] and §\[sec:crossing\] would then be strengthened to compatibility conditions of the $G^2$ gluing data, and Definition \[def:g1splines\] would be appended by $G^2$ continuity restrictions on the splines at the end-points of vertices. The spaces $M(P)$ in (\[eq:wv\]) will have to be extended to 6-dimensional $J^2$-jets, while the spaces $M(\tau)$ in (\[eq:we\]) would be appended by two utmost $O(v^2)$ terms. The new $G^2$ restrictions on splines will be linear constraints on the image $J^2$-jets of adjusted $W_0$-maps in (\[eq:w0\]). The dimension of $M^1$-spaces will not change generically, as new restrictions on the crossing derivatives will be balanced by new $O(v^2)$ degrees of freedom. Following Definition \[def:sepspline\], an enhanced kind of vertex separability would have to be introduced. 
The spline space partitioning [[*(D1)*]{}]{}–[[*(D5)*]{}]{} would then have to be adjusted straightforwardly. As one would expect, fairness issues can be addressed by higher degree splines. But it is harder to control the shape with substantially more degrees of freedom. Shape optimization by energy minimization [@Greiner94] is usually applied to whole surfaces in ${{\ensuremath{\mathbb{R}}}}^3$. But this optimization could be applied to filter spline spaces to manageable dimensions. For example, the splines in [[*(D1)*]{}]{} could be optimally adjusted by contiguous splines in [[*(D3)*]{}]{}, [[*(D5)*]{}]{} while preserving the vanishing properties at non-adjacent edges. For example, optimizing the spline in Figure \[fig:splinf\][[*(c)*]{}]{} by minimizing the sum of four $L_2$-norms of the standard gradient $(\partial/\partial u,\partial/\partial v)$ on the four triangles leads to the linear adjustment of that spline by Figure \[fig:waving\][[*(b)*]{}]{} with $$(p,q,r,s)=\frac{1}{4630} \textstyle \left( 324+\frac14, 514+\frac14, 4865+\frac18, 3044+\frac{11}{24} \right).$$ Subsequently, short sequences of nearest splines in [[*(D3)*]{}]{} could be optimally combined and adjusted by splines in [[*(D5)*]{}]{}, etc. A subspace generated by the optimized splines should provide high-quality realizations and manageable control. If shape optimization in ${{\ensuremath{\mathbb{R}}}}^3$ is still needed, it should then be more numerically stable. It is not necessary to build whole $G^1$ polygonal surfaces at once. A practical approach is to consider [*macro-patches*]{} of polygons abstractly glued along a portion of edges with $G^1$ continuity. The macro-patches fit Definition \[def:g1complex\] as $G^1$ polygonal surfaces with boundary, but the splines on them are supposed to thoroughly vanish along the boundary. The whole $G^1$ surface ${{\ensuremath{\mathcal{M}}}}$ is built from overlaying macro-patches, so that each interior vertex of ${{\ensuremath{\mathcal{M}}}}$ is an interior vertex of some macro-patch. A spline space on ${{\ensuremath{\mathcal{M}}}}$ is defined by aggregating the spline spaces on the macro-patches. For high enough degree, the aggregated spline space should concur with Definition \[def:g1splines\]. This approach allows one to generalize the partitioning [[*(D1)*]{}]{}–[[*(D5)*]{}]{}. For example, splines of sharp lower degree may be supported on a macro-patch rather than on a set of polygons with a single interior vertex or an interior edge. Or positive splines may need a larger support. Or an optimizing filtering could be performed more effectively across macro-patches. Here we consider the construction in [@flex] as a rational $G^1$ polygonal surface ${{\ensuremath{\mathcal{Q}}}}$. As in [@flex], consider a topological mesh $\widehat{{{\ensuremath{\mathcal{Q}}}}}$ of quadrangles. Each quadrangle is represented by the rectangle $[0,1]^2$, which is subdivided by the middle horizontal and vertical lines into 4 smaller rectangles. The quadrangles are supposed to be parametrized by $C^1$ splines on the subdivided rectangle $[0,1]^2$, and the quadrangle parametrizations should meet with a specific $G^1$ continuity. Correspondingly, the polygonal surface ${{\ensuremath{\mathcal{Q}}}}$ has 3 kinds of vertices: 1. The vertices of the original mesh $\widehat{{{\ensuremath{\mathcal{Q}}}}}$, each of some valency ${\geqslant}3$. The interior vertices of valency 4 are [*regular*]{}. The other interior vertices are [*irregular*]{}. 2.
Crossing vertices represented by the midpoints of the edges of $\widehat{{{\ensuremath{\mathcal{Q}}}}}$. 3. Crossing vertices represented by the center of the subdivided rectangle $[0,1]^2$ for each quadrangle of $\widehat{{{\ensuremath{\mathcal{Q}}}}}$. At all vertices, the symmetric vertex gluing data as in (\[eq:hsym\]) is assumed. There are edges connecting vertices of the kinds [[*(ii)*]{}]{} and [[*(iii)*]{}]{}, representing the subdivision of $[0,1]^2$. The standard $C^1$ gluing data is assumed on them (see Remark \[rm:parametric\]). The other edges are half-edges of the original mesh $\widehat{{{\ensuremath{\mathcal{Q}}}}}$. They connect vertices of the kinds [[*(i)*]{}]{} and [[*(ii)*]{}]{}, and the quadratic gluing data is assumed on them: $${{\ensuremath{\beta}}}(u)=q\,u^2, \qquad {{\ensuremath{\gamma}}}(u)=-1, \qquad \mbox{with } q=2\cos\frac{2\pi}n.$$ Here we use a standard coordinate $u$ with $u=0$ being a vertex of type [[*(ii)*]{}]{}, and $u=1$ being a vertex of type [[*(i)*]{}]{} of valency $n$. This gluing data satisfies Theorem \[cond:comp2\]. The main assertion in [@flex] is that quartic splines on this surface provide sufficiently many degrees of freedom for high quality surfaces. The quadrangles are considered as macro-patches in [@flex]. Sets of rectangles centered at the vertices of types [[*(i)*]{}]{}, [[*(ii)*]{}]{} can be considered as macro-patches as well. Following our partitioning [[*(E1)*]{}]{}–[[*(E4)*]{}]{} of splines, we have the following quartic splines: - For each quadrangle, we have a center vertex of type [[*(iii)*]{}]{} and corresponding $4$ splines around its center vertex as in [[*(E1)*]{}]{}, $8$ splines along the four $C^1$ edges as in [[*(E2)*]{}]{}, and $4$ splines on the smaller rectangles as in [[*(E4)*]{}]{}. These splines thoroughly vanish on the boundary of $[0,1]^2$. - Around each regular vertex of $\widehat{{{\ensuremath{\mathcal{Q}}}}}$, we have the standard $C^1$ continuity and similar $4+8$ vertex and edge splines. - For a half-edge of $\widehat{{{\ensuremath{\mathcal{Q}}}}}$ connecting an irregular vertex ${{\ensuremath{\mathcal{P}}}}$, the corresponding $M^1_4$-space has dimension $9$. However, the 9 degrees of freedom as in (\[eq:mmmm\]) are not met because there is a spline as in (\[eq:trivzero\]). The vertices are therefore not completely separated. There is a relation between corner control coefficients like (\[eq:quarel2\]): $$\label{eq:qrel} 12q\,h_0(1)-12q\,h'_0(1)=12q\,h_0(0)+4q\,h'_0(0)-h'_1(1)-h'_2(1).$$ If the half-edge leads from ${{\ensuremath{\mathcal{P}}}}$ to a regular vertex of $\widehat{{{\ensuremath{\mathcal{Q}}}}}$, this additional restriction is localized to a macro-patch centered at ${{\ensuremath{\mathcal{P}}}}$ and at the adjacent vertices of type [[*(ii)*]{}]{}. If ${{\ensuremath{\mathcal{P}}}}$ is connected to other irregular vertices of $\widehat{{{\ensuremath{\mathcal{Q}}}}}$, some generating splines may need to be globally defined. A dimension formula can be derived using the results of §\[sec:dformula\]. 
If the mesh $\widehat{{{\ensuremath{\mathcal{Q}}}}}$ defines an orientable surface, we use Theorem \[th:gdimform\] and apply formulas (\[eq:polygee\]), (\[eq:topolov\]) to the topology of $\widehat{{{\ensuremath{\mathcal{Q}}}}}$ to get this dimension expression for $k{\geqslant}5$: $$\begin{aligned} \dim S^1_k({{\ensuremath{\mathcal{Q}}}})= & \; 16-16g\big(\widehat{{{\ensuremath{\mathcal{Q}}}}}\big) + \left( k-1 \right)^2 \#\{\mbox{\rm quadrangles}\} -5\,\#\{\mbox{\rm irregular vertices}\} \nonumber \\[1pt] & +(2k-3) \, \#\{\mbox{\rm boundary edges of } \widehat{{{\ensuremath{\mathcal{Q}}}}}\} -8\,\#\{\mbox{\rm boundary components}\} .\end{aligned}$$ If $k=4$ and there are dependencies between the relations (\[eq:qrel\]), this formula is an underestimate. This is possible only if there are cycles of edges connecting only irregular vertices. [^1]: Department of Mathematical Informatics, University of Tokyo, Bunkyo-ku 113-8656, Japan. E-mail: [[email protected]]{}
--- abstract: 'Recently, Babson and Steingrímsson have introduced generalised permutation patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation. We consider pattern avoidance for such patterns, and give a complete solution for the number of permutations avoiding any single pattern of length three with exactly one adjacent pair of letters. We also give some results for the number of permutations avoiding two different patterns. Relations are exhibited to several well studied combinatorial structures, such as set partitions, Dyck paths, Motzkin paths, and involutions. Furthermore, a new class of set partitions, called monotone partitions, is defined and shown to be in one-to-one correspondence with non-overlapping partitions.' address: | Matematik\ Chalmers tekniska högskola och Göteborgs universitet\ S-412 96 Göteborg, Sweden author: - Anders Claesson bibliography: - 'gpa.bib' nocite: - '[@SlPl95]' - '[@St97]' title: Generalised pattern avoidance --- Introduction ============ In the last decade a wealth of articles has been written on the subject of pattern avoidance, also known as the study of “restricted permutations” and “permutations with forbidden subsequences”. Classically, a pattern is a permutation $\sigma\in{\mathcal{S}}_k$, and a permutation $\pi\in{\mathcal{S}}_n$ avoids $\sigma$ if there is no subsequence in $\pi$ whose letters are in the same relative order as the letters of $\sigma$. For example, $\pi\in{\mathcal{S}}_n$ avoids $132$ if there is no $1\leq i<j<k\leq n$ such that $\pi(i)<\pi(k)<\pi(j)$. In [@Kn73v1] Knuth established that for all $\sigma\in{\mathcal{S}}_3$, the number of permutations in ${\mathcal{S}}_n$ avoiding $\sigma$ equals the $n$th Catalan number, $C_n = \frac{1}{1+n}\binom{2n}{n}$. One may also consider permutations that are required to avoid several patterns. In [@SiSc85] Simion and Schmidt gave a complete solution for permutations avoiding any set of patterns of length three. Even patterns of length greater than three have been considered. For instance, West showed in [@We95] that permutations avoiding both $3142$ and $2413$ are enumerated by the Schröder numbers, $S_n = \sum_{i=0}^n\binom {2n-i} i C_{n-i}$. In [@BaSt00] Babson and Steingrímsson introduced generalised permutation patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation. The motivation for Babson and Steingrímsson in introducing these patterns was the study of Mahonian statistics, and they showed that essentially all Mahonian permutation statistics in the literature can be written as linear combinations of such patterns. An example of a generalised pattern is $(\axcb)$. An $(\axcb)$-subword of a permutation $\pi = a_1 a_2 \cdots a_n$ is a subword $a_i a_j a_{j+1}$, $(i<j)$, such that $a_i<a_{j+1}<a_j$. More generally, a pattern $p$ is a word over the alphabet $a<b<c<d\cdots$ where two adjacent letters may or may not be separated by a dash. The absence of a dash between two adjacent letters in a pattern $p$ indicates that the corresponding letters in a $p$-subword of a permutation must be adjacent. Also, the ordering of the letters in the $p$-subword must match the ordering of the letters in the pattern. This definition, as well as any other definition in the introduction, will be stated rigorously in Section \[prel\]. All classical patterns are generalised patterns where each pair of adjacent letters is separated by a dash. 
For example, the generalised pattern equivalent to $132$ is $(\axcxb)$. We extend the notion of pattern avoidance by defining that a permutation avoids a (generalised) pattern $p$ if it does not contain any $p$-subwords. We show that this is a fruitful extension, by establishing connections to other well known combinatorial structures, not previously shown to be related to pattern avoidance. The main results are given below. $$\vspace{.3ex} \begin{array}{l|l|l} P & |{\mathcal{S}}_n(P)| & Description\\ \hline \axbc & B_n & \text{ Partitions of } [n] \\ \axcb & B_n & \text{ Partitions of } [n] \\ \bxac & C_n & \text{ Dyck paths of length } 2n \\ \axbc,\,\abxc & B^*_n & \text{ Non-overlapping partitions of } [n] \\ \axbc,\,\axcb & I_n & \text{ Involutions in } {\mathcal{S}}_n \\ \axbc,\,\acxb & M_n & \text{ Motzkin paths of length } n \\ \end{array}$$ Here ${\mathcal{S}}_n(P) = \{ \pi\in{\mathcal{S}}_n : \pi \text{ avoids } p \text{ for all } p\in P \}$, and $[n]=\{1,2,\ldots,n\}$. When proving that $|{\mathcal{S}}_n(\axbc,\,\abxc)| = B^*_n$ (the $n$th Bessel number), we first prove that there is a one-to-one correspondence between $\{\axbc,\abxc\}$-avoiding permutations and *monotone partitions*. A partition is monotone if its non-singleton blocks can be written in increasing order of their least element and increasing order of their greatest element, simultaneously. This new class of partitions is then shown to be in one-to-one correspondence with non-overlapping partitions. Preliminaries {#prel} ============= By an *alphabet* $X$ we mean a non-empty set. An element of $X$ is called a *letter*. A *word* over $X$ is a finite sequence of letters from $X$. We consider also the *empty word*, that is, the word with no letters; it is denoted by ${{\epsilon}}$. Let $x=x_1x_2\cdots x_n$ be a word over $X$. We call $|x|:=n$ the *length* of $x$. A *subword* of $x$ is a word $v=x_{i_1}x_{i_2}\cdots x_{i_k}$, where $1 \leq i_1<i_2<\cdots<i_k \leq n$. A *segment* of $x$ is a word $v=x_{i}x_{i+1}\cdots x_{i+k}$. If $X$ and $Y$ are two linearly ordered alphabets, then two words $x=x_1x_2 \cdots x_n$ and $y = y_1 y_2 \cdots y_n$ over $X$ and $Y$, respectively, are said to be *order equivalent* if $x_i<x_j$ precisely when $y_i < y_j$. Let $X = A \cup \{\dd\}$ where $A$ is a linearly ordered alphabet. For each word $x$ let $\bar x$ be the word obtained from $x$ by deleting all dashes in $x$. A word $p$ over $X$ is called a *pattern* if it contains no two consecutive dashes and $\bar{p}$ has no repeated letters. By slight abuse of terminology we refer to the *length of a pattern* $p$ as the length of $\bar{p}$. Two patterns $p$ and $q$ of equal length are said to be *dash equivalent* if the $i$th letter in $p$ is a dash precisely when the $i$th letter in $q$ is a dash. If $p$ and $q$ are dash and order equivalent, then $p$ and $q$ are *equivalent*. In what follows a pattern will usually be taken to be over the alphabet $\{a,b,c,d,\ldots\}\cup\{\dd\}$ where $\{a,b,c,d,\ldots\}$ is ordered so that $a<b<c<d<\cdots$. Let $[n]:=\{1,2,\ldots,n\}$ (so $[0]=\emptyset$). A *permutation* of $[n]$ is bijection from $[n]$ to $[n]$. Let ${\mathcal{S}}_n$ be the set of permutations of $[n]$. We shall usually think of a permutation $\pi$ as the word $\pi(1)\pi(2)\cdots\pi(n)$ over the alphabet $[n]$. In particular, ${\mathcal{S}}_0 = \{{{\epsilon}}\}$, since there is only one bijection from $\emptyset$ to $\emptyset$, the empty map. 
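Anticipating the formal definition of $p$-subwords given in the next paragraph, the following Python sketch (ours, not part of the paper; the function names and the string encoding of patterns are arbitrary choices) counts occurrences of a dashed pattern in a permutation, using the order-equivalence test just described. On the permutation $491273865$ discussed below it finds three $(\axbc)$-subwords.

```python
def order_equivalent(x, y):
    """x and y (each with distinct letters) are order equivalent when
    x[i] < x[j] holds precisely for those i, j with y[i] < y[j]."""
    return len(x) == len(y) and all(
        (x[i] < x[j]) == (y[i] < y[j])
        for i in range(len(x)) for j in range(len(x)))

def occurrences(perm, pattern):
    """Count p-subwords of perm for a pattern such as 'a-bc':
    letters not separated by a dash must occupy adjacent positions of
    perm, and the chosen letters must be order equivalent to the
    pattern letters (ordered a < b < c < ...)."""
    blocks = pattern.split('-')              # e.g. ['a', 'bc']
    letters = ''.join(blocks)                # e.g. 'abc'
    total = 0

    def place(block_idx, start, chosen):
        nonlocal total
        if block_idx == len(blocks):
            total += order_equivalent([perm[i] for i in chosen], letters)
            return
        size = len(blocks[block_idx])
        for s in range(start, len(perm) - size + 1):
            place(block_idx + 1, s + size, chosen + list(range(s, s + size)))

    place(0, 0, [])
    return total

print(occurrences([4, 9, 1, 2, 7, 3, 8, 6, 5], 'a-bc'))   # 3  (subwords 127, 138, 238)
print(occurrences([4, 9, 1, 2, 7, 3, 8, 6, 5], 'a-c-b'))  # the classical pattern 132
```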
We say that a subword $\sigma$ of $\pi$ is a $p$-*subword* if by replacing (possibly empty) segments of $\pi$ with dashes we can obtain a pattern $q$ equivalent to $p$ such that $\bar q = \sigma$. However, all patterns that we will consider will have a dash at the beginning and one at the end. For convenience, we therefore leave them out. For example, $(\axbc)$ is a pattern, and the permutation $491273865$ contains three $(\axbc)$-subwords, namely $127$, $138$, and $238$. A permutation is said to be *$p$-avoiding* if it does not contain any $p$-subwords. Define ${\mathcal{S}}_n(p)$ to be the set of $p$-avoiding permutations in ${\mathcal{S}}_n$ and, more generally, ${\mathcal{S}}_n(A) = \bigcap_{p\in A} {\mathcal{S}}_n(p)$. We may think of a pattern $p$ as a permutation statistic, that is, define $p\,\pi$ as the number of $p$-subwords in $\pi$, thus regarding $p$ as a function from ${\mathcal{S}}_n$ to ${{\mathbb N}}$. For example, $(\axbc)\,491273865 = 3$. In particular, $\pi$ is $p$-avoiding if and only if $p\,\pi = 0$. We say that two permutation statistics $\mathrm{stat}$ and $\mathrm{stat}'$ are *equidistributed* over $A\subseteq{\mathcal{S}}_n$, if $$\sum_{\pi\in A}x^{\mathrm{stat}\, \pi} = \sum_{\pi\in A}x^{\mathrm{stat}'\, \pi}.$$ In particular, this definition applies to patterns. Let $\pi=a_1 a_2 \cdots a_n \in {\mathcal{S}}_n$. An $i$ such that $a_i>a_{i+1}$ is called a *descent* in $\pi$. We denote by $\operatorname{des}\pi$ the number of descents in $\pi$. Observe that $\operatorname{des}$ can be defined as the pattern $(ba)$, that is, $\operatorname{des}\pi = (ba)\pi$. A *left-to-right minimum* of $\pi$ is an element $a_i$ such that $a_i < a_j$ for every $j<i$. The number of left-to-right minima is a permutation statistic. Analogously we also define *left-to-right maximum*, *right-to-left minimum*, and *right-to-left maximum*. In this paper we will relate permutations avoiding a given set of patterns to other better known combinatorial structures. Here follows a brief description of these structures. Set partitions {#set-partitions .unnumbered} -------------- A *partition* of a set $S$ is a family, $\pi = \{A_1 , A_2, \ldots, A_k \}$, of pairwise disjoint non-empty subsets of $S$ such that $S=\cup_i A_i$. We call $A_i$ a *block* of $\pi$. The total number of partitions of $[n]$ is called a *Bell number* and is denoted $B_n$. For reference, the first few Bell numbers are $$1,1,2,5,15,52,203,877,4140,21147,115975,678570,4213597.$$ Let $S(n,k)$ be the number of partitions of $[n]$ into $k$ blocks; these numbers are called the *Stirling numbers of the second kind*. Non-overlapping partitions {#non-overlapping-partitions .unnumbered} -------------------------- Two blocks $A$ and $B$ of a partition $\pi$ *overlap* if $$\min A < \min B < \max A < \max B.$$ A partition is *non-overlapping* if no pairs of blocks overlap. Thus $$\pi = \{\{ 1,2,5,13\},\{3,8\},\{4,6,7\},\{9\},\{10,11,12\}\}$$ is non-overlapping. 
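A direct check of the overlap condition (a minimal sketch of ours; the helper names are arbitrary) confirms that the partition $\pi$ above is non-overlapping.

```python
def overlap(A, B):
    """Blocks A and B overlap when min A < min B < max A < max B."""
    return min(A) < min(B) < max(A) < max(B)

def is_non_overlapping(partition):
    blocks = [set(b) for b in partition]
    return not any(overlap(A, B) or overlap(B, A)
                   for i, A in enumerate(blocks) for B in blocks[i + 1:])

pi = [{1, 2, 5, 13}, {3, 8}, {4, 6, 7}, {9}, {10, 11, 12}]
print(is_non_overlapping(pi))   # True
```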
A pictorial representation of $\pi$ draws each block as a chain of arcs joining its elements over the points $1,\ldots,13$. Let $B^*_n$ be the number of non-overlapping partitions of $[n]$; this number is called the $n$th *Bessel number* [@FlSc90 p. 423]. The first few Bessel numbers are $$1,1,2,5,14,43,143,509,1922,7651,31965,139685,636712.$$ We denote by $S^*(n,k)$ the number of non-overlapping partitions of $[n]$ into $k$ blocks. Involutions {#involutions .unnumbered} ----------- An *involution* is a permutation which is its own inverse. We denote by $I_n$ the number of involutions in ${\mathcal{S}}_n$. The sequence $\{I_n\}_0^{\infty}$ starts with $$1,1,2,4,10,26,76,232,764,2620,9496,35696,140152.$$ Dyck paths {#dyck-paths .unnumbered} ---------- A *Dyck path* of length $2n$ is a lattice path from $(0,0)$ to $(2n,0)$ with steps $(1,1)$ and $(1,-1)$ that never goes below the $x$-axis. Letting $u$ and $d$ represent the steps $(1,1)$ and $(1,-1)$ respectively, we code such a path with a word over $\{u,d\}$. For example, the path ![image](dyckpath.eps){width="25ex"} is coded by $uuduuddd$. The $n$th *Catalan number* $C_n=\frac 1 {n+1} \binom {2n} n$ counts the number of Dyck paths of length $2n$. The sequence of Catalan numbers starts with $$1,1,2,5,14,42,132,429,1430,4862,16796,58786,208012.$$ Motzkin paths {#motzkin-paths .unnumbered} ------------- A *Motzkin path* of length $n$ is a lattice path from $(0,0)$ to $(n,0)$ with steps $(1,0)$, $(1,1)$, and $(1,-1)$ that never goes below the $x$-axis. Letting $\ell$, $u$, and $d$ represent the steps $(1,0)$, $(1,1)$, and $(1,-1)$ respectively, we code such a path with a word over $\{\ell, u,d\}$. For example, the path ![image](motzkinpath.eps){width="25ex"} is coded by $u\ell \ell ud\ell d\ell $. The $n$th *Motzkin number* $M_n$ is the number of Motzkin paths of length $n$. The first few Motzkin numbers are $$1,1,2,4,9,21,51,127,323,835,2188,5798,15511.$$ Three classes of patterns ========================= Let $\pi=a_1 a_2 \cdots a_n \in {\mathcal{S}}_n$. Define the *reverse* of $\pi$ as $\pi^r:=a_n \cdots a_2 a_1$, and define the *complement* of $\pi$ by $\pi^c(i) = n+1-\pi(i)$, where $i\in[n]$. With respect to being equidistributed, the twelve pattern statistics of length three with one dash fall into the following three classes. - (a-bc), (ab-c), (c-ba), (cb-a). - (a-cb), (ba-c), (c-ab), (bc-a). - (b-ac), (ac-b), (b-ca), (ca-b). The bijections $\pi\mapsto\pi^r$, $\pi\mapsto\pi^c$, and $\pi\mapsto(\pi^r)^c$ give the equidistribution part of the result. Calculations show that these three distributions differ pairwise on ${\mathcal{S}}_4$. Permutations avoiding a pattern of class one or two =================================================== \[bell1\] Partitions of $[n]$ are in one-to-one correspondence with $(\axbc)$-avoiding permutations in ${\mathcal{S}}_n$. Hence $|{\mathcal{S}}_n(\axbc)|=B_n$. 
Recall that the Bell numbers satisfy $B_0=1$, and $$B_{n+1}= \sum_{k=0}^n \binom n k B_k.$$ We show that $|{\mathcal{S}}_n(\axbc)|$ satisfy the same recursion. Clearly, ${\mathcal{S}}_0(\axbc) = \{{{\epsilon}}\}$. For $n > 0$, let $M=\{2,3,\ldots,n+1\}$, and let $S$ be a $k$ element subset of $M$. For each $(\axbc)$-avoiding permutation $\sigma$ of $S$ we construct a unique $(\axbc)$-avoiding permutation $\pi$ of $[n+1]$. Let $\tau$ be the word obtained by writing the elements of $M\setminus S$ in decreasing order. Define $\pi := \sigma 1 \tau$. Conversely, if $\pi = \sigma 1 \tau$ is a given $(\axbc)$-avoiding permutation of $[n+1]$, where $|\sigma|=k$, then the letters of $\tau$ are in decreasing order, and $\sigma$ is an $(\axbc)$-avoiding permutation of the $k$ element set $\{2,3,\ldots,n+1\}\setminus \{i : i \text{ is a letter in }\tau\}$. Given a partition $\pi$ of $[n]$, we introduce a standard representation of $\pi$ by requiring that: - Each block is written with its least element first, and the rest of the elements of that block are written in decreasing order. - The blocks are written in decreasing order of their least element, and with dashes separating the blocks. Define ${\widehat\pi}$ to be the permutation we obtain from $\pi$ by writing it in standard form and erasing the dashes. We now argue that ${\widehat\pi}:=a_1a_2\cdots a_n$ avoids $(\axbc)$. If $a_i<a_{i+1}$, then $a_i$ and $a_{i+1}$ are the first and the second element of some block. By the construction of ${\widehat\pi}$, $a_i$ is a left-to-right minimum, hence there is no $j\in[i-1]$ such that $a_j<a_i$. Conversely, $\pi$ can be recovered uniquely from ${\widehat\pi}$ by inserting a dash in ${\widehat\pi}$ preceding each left-to-right minimum, apart from the first letter in ${\widehat\pi}$. Thus $\pi\mapsto{\widehat\pi}$ gives the desired bijection. As an illustration of the map defined in the above proof, let $$\pi=\{\{1,3,5\},\{2,6,9\},\{4,7\},\{8\}\}.$$ Its standard form is $8 \dd 47 \dd 296 \dd 153$. Thus ${\widehat\pi}= 847296153$. Let $L(\pi)$ be the number of left-to-right minima of $\pi$. Then $$\sum_{\pi\in{\mathcal{S}}_n(\axbc)} \!\!\!\!\!\!\!\! x^{L(\pi)} = \sum_{k\geq 0} S(n,k)x^k.$$ This result follows readily from the second proof of Proposition \[bell1\]. We here give a different proof, which is based on the fact that the Stirling numbers of the second kind satisfy $$S(n,k) = S(n-1,k-1) + kS(n-1,k).$$ Let $T(n,k)$ be the number of permutations in ${\mathcal{S}}_n(\axbc)$ with $k$ left-to-right minima. We show that the $T(n,k)$ satisfy the same recursion as the $S(n,k)$. Let $\pi$ be an $(\axbc)$-avoiding permutation of $[n-1]$. To insert $n$ in $\pi$, preserving $(\axbc)$-avoidance, we can put $n$ in front of $\pi$ or we can insert $n$ immediately after each left-to-right minimum. Putting $n$ in front of $\pi$ creates a new left-to-right minimum, while inserting $n$ immediately after a left-to-right minimum does not. \[bell2\] Partitions of $[n]$ are in one-to-one correspondence with $(\axcb)$-avoiding permutations in ${\mathcal{S}}_n$. Hence $|{\mathcal{S}}_n(\axcb)|=B_n$. Let $\pi$ be a partition of $[n]$. We introduce a standard representation of $\pi$ by requiring that: - The elements of a block are written in increasing order. - The blocks are written in decreasing order of their least element, and with dashes separating the blocks. Notice that this standard representation is different from the one given in the second proof of Proposition \[bell1\]. 
Define ${\widehat\pi}$ to be the permutation we obtain from $\pi$ by writing it in standard form and erasing the dashes. It easy to see that ${\widehat\pi}$ avoids $(\axcb)$. Conversely, $\pi$ can be recovered uniquely from ${\widehat\pi}$ by inserting a dash in between each descent in ${\widehat\pi}$. As an illustration of the map defined in the above proof, let $$\pi=\{\{1,3,5\},\{2,6,9\},\{4,7\},\{8\}\}.$$ Its standard form is $8 \dd 47 \dd 269 \dd 135$. Thus ${\widehat\pi}= 847269135$. $$\sum_{\pi\in{\mathcal{S}}_n(\axcb)} \!\!\!\!\!\!\!\! x^{1+\operatorname{des}\pi } = \sum_{k\geq 0} S(n,k)x^k.$$ From the proof of Proposition \[bell2\] we see that $\pi$ has $k+1$ blocks precisely when ${\widehat\pi}$ has $k$ descents. \[involutions\] Involutions in ${\mathcal{S}}_n$ are in one-to-one correspondence with permutations in ${\mathcal{S}}_n$ that avoid $(\axbc)$ and $(\axcb)$. Hence $$|{\mathcal{S}}_n(\axbc, \axcb)|=I_n.$$ We give a combinatorial proof using a bijection that is essentially identical to the one given in the second proof of Proposition \[bell1\]. Let $\pi\in{\mathcal{S}}_n$ be an involution. Recall that $\pi$ is an involution if and only if each cycle of $\pi$ is of length one or two. We now introduce a standard form for writing $\pi$ in cycle notation by requiring that: - Each cycle is written with its least element first. - The cycles are written in decreasing order of their least element. Define ${\widehat\pi}$ to be the permutation obtained from $\pi$ by writing it in standard form and erasing the parentheses separating the cycles. Observe that ${\widehat\pi}$ avoids $(\axbc)$: Assume that $a_i<a_{i+1}$, that is $(a_i\,a_{i+1})$ is a cycle in $\pi$, then $a_i$ is a left-to-right minimum in $\pi$. This is guaranteed by the construction of ${\widehat\pi}$. Thus there is no $j<i$ such that $a_j<a_i$. The permutation ${\widehat\pi}$ also avoids $(\axcb)$: Assume that $a_i>a_{i+1}$, then $a_{i+1}$ must be the smallest element of some cycle. Then $a_i$ is a left-to-right minimum in $\pi$. Conversely, if ${\widehat\pi}:=a_1\ldots a_n$ is an $\{\axbc, \axcb\}$-avoiding permutation then the involution $\pi$ is given by: $(a_i\,a_{i+1})$ is a cycle in $\pi$ if and only if $a_i<a_{i+1}$. The involution $\pi=826543719$ written in standard form is $$(9)(7)(4\,5)(3\,6)(2)(1\,8),$$ and hence ${\widehat\pi}= 974536218$. $${\mathcal{S}}_n(\axbc, \axcb) = {\mathcal{S}}_n(\axbc, acb) = {\mathcal{S}}_n(abc, \axcb) = {\mathcal{S}}_n(abc, acb).$$ Kitaev [@Ki00] observed that the dashes in the patterns $(\axbc)$ and $(\axcb)$ are immaterial for the proof of Proposition \[involutions\]. The result may, however, also be proved directly. For an example of such a proof see the proof of Lemma \[lemma2\]. \[des\_fix\] The number of permutations in ${\mathcal{S}}_{n+k}(\axbc,\axcb)$ with $n-1$ descents equals the number of involutions in ${\mathcal{S}}_{n+k}$ with $n-k$ fixed points. Under the bijection $\pi\mapsto{\widehat\pi}$ in the proof of Proposition \[involutions\], a cycle of length two in $\pi$ corresponds to an occurrence of $(\ab)$ in ${\widehat\pi}$. Hence, if $\pi$ has $n-2k$ fixed points, then ${\widehat\pi}$ has $n-k-1$ descents. Substituting $n+k$ for $n$ we get the desired result. To take the analysis of descents in $\{\axbc,\axcb\}$-avoiding permutations further, we introduce the polynomial $$A_n(x) = \!\!\! \!\!\! \sum_{\pi\in{\mathcal{S}}_n(\axbc,\axcb)} \!\!\!\!\!\! \!\!\!\!\!\! \! 
x^{1+\operatorname{des}\,\pi},$$ and call it the $n$th *Eulerian polynomial for $\{\axbc,\axcb\}$-avoiding permutations*. Direct enumeration shows that the sequence $\{A_n(x)\}$ starts with $$\begin{array}{rclclclclclclclcl} A_0(x) &=& 1 \\ A_1(x) &=& &\!\!+\!\!& x\\ A_2(x) &=& & & x &\!\!+\!\!& x^2\\ A_3(x) &=& & & & & 3x^2 &\!\!+\!\!& x^3\\ A_4(x) &=& & & & & 3x^2 &\!\!+\!\!& 6x^3 &\!\!+\!\!& x^4\\ A_5(x) &=& & & & & & & 15x^3 &\!\!+\!\!& 10x^4 &\!\!+\!\!& x^5\\ A_6(x) &=& & & & & & & 15x^3 &\!\!+\!\!& 45x^4 &\!\!+\!\!& 15x^5 &\!\!+\!\!& x^6\\ A_7(x) &=& & & & & & & & & 105x^4 &\!\!+\!\!& 105x^5 &\!\!+\!\!& 21x^6 &\!\!+\!\!& x^7.\\ \end{array}$$ We will relate these polynomials to the so called Bessel polynomials. The $n$th *Bessel polynomial* $y_n(x)$ is defined by $$\label{bessel_expl} y_n(x) = \sum_{k=0}^n \binom {n+k} k \binom n k \frac {k!} {2^k} x^k.$$ The first six of the Bessel polynomials are $$\begin{array}{rclclclclclclcl} y_0(x) &=& 1 \\ y_1(x) &=& 1 &\!\!+\!\!& x\\ y_2(x) &=& 1 &\!\!+\!\!& 3x &\!\!+\!\!& 3x^2\\ y_3(x) &=& 1 &\!\!+\!\!& 6x &\!\!+\!\!& 15x^2 &\!\!+\!\!& 15x^3\\ y_4(x) &=& 1 &\!\!+\!\!& 10x &\!\!+\!\!& 45x^2 &\!\!+\!\!& 105x^3 &\!\!+\!\!& 105x^4\\ y_5(x) &=& 1 &\!\!+\!\!& 15x &\!\!+\!\!& 105x^2 &\!\!+\!\!& 420x^3 &\!\!+\!\!& 945x^4 &\!\!+\!\!& 945x^5.\\ \end{array}$$ These polynomials satisfy the second order differential equation $$x^2\frac {d^2y} {dx^2} + 2(x+1) \frac {dy} {dx} = n(n+1)y.$$ Moreover, the Bessel polynomials satisfy the recurrence relation $$\label{bessel_rec} y_{n+1}(x) = (2n+1)xy_n(x) + y_{n-1}(x).$$ \[euler\] Let $y_n(x)$ be the $n$th Bessel polynomial, and let $A_n(x)$ be the $n$th Eulerian polynomial for $\{\axbc,\axcb\}$-avoiding permutations. Then - $\sum_n y_n(x) (xt)^n$ generates $\{A_n(t)\}$, that is $$\sum_{n\geq 0} A_n(t) x^n = \sum_{n\geq 0} y_n(x) (xt)^n.$$ - $A_0(x) = 1$, $A_1(x) = x$, and for $n\geq2$, we have $$A_{n+2}(x) = x(1+x+2x\frac d {dx}) A_n(x).$$ - $A_n(x)$ is explicitly given by $$A_n(x) = \sum_{k=0}^n \binom n k \binom {n-k} k \frac {k!} {2^k} x^{n-k}.$$ Let $I_n^k$ denote the number of involutions in ${\mathcal{S}}_n$ with $k$ fixed points. Then Porism \[des\_fix\] is equivalently stated as $$\label{eulerian_poly} A_n(x) = \sum_{k\geq 0} I_n^{2k-n} x^k.$$ In [@DuFa91] Dulucq and Favreau showed that the Bessel polynomials are given by $$\label{bessel_poly} y_n(x) = \sum_{k\geq 0} I_{n+k}^{n-k} x^k.$$ To prove (i), multiply Equation (\[bessel\_poly\]) by $(xt)^n$ and sum over $n$. $$\begin{aligned} \sum_{n\geq 0} y_n(x)(xt)^n &\,=\, \sum_{n\geq 0} \sum_{k\geq 0} I_{n+k}^{n-k} t^n x^{n+k} \\ &\,=\, \sum_{k\geq 0} \sum_{n\geq 0} I_{k}^{2n-k} t^n x^k && \text{By substituting $n-k$ for $k$.}\\ &\,=\, \sum_{k\geq 0} A_k(t) x^k && \text{By Equation~(\ref{eulerian_poly}).} \end{aligned}$$ We now multiply Equation (\[bessel\_rec\]) by $(xt)^n$ and sum over $n$. Tedious but straightforward calculations then yield (ii) from (i). Finally, we obtain (iii) from Equation (\[bessel\_expl\]) by identifying coefficients in (i). Let $\pi$ be an arbitrary partition whose non-singleton blocks $\{A_1,\ldots,A_k\}$ are ordered so that for all $i\in[k-1]$, $\min A_i>\min A_{i+1}$. If $\max A_i>\max A_{i+1}$ for all $i\in [k-1]$, then we call $\pi$ a *monotone partition*. The set of monotone partitions of $[n]$ is denoted by ${\mathcal{M}}_n$. 
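In Python terms (again a small sketch of ours, with arbitrary names), monotonicity amounts to the non-singleton blocks having the same relative order by least and by greatest element. Note that the non-overlapping partition used as an example in the previous section fails this test, so the two notions are genuinely different.

```python
def is_monotone(partition):
    """Non-singleton blocks, sorted by their least element, must also be
    sorted by their greatest element."""
    non_singletons = sorted((b for b in partition if len(b) > 1), key=min)
    maxima = [max(b) for b in non_singletons]
    return maxima == sorted(maxima)

print(is_monotone([{1, 2, 5, 13}, {3, 8}, {4, 6, 7}, {9}, {10, 11, 12}]))  # False
print(is_monotone([{1, 3}, {2, 4}, {5}]))                                  # True
```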
The partition $$\pi \,=\, \{\{1,2,5,7\},\{3,8\},\{4,6,12\},\{9\},\{10,11,13\}\}$$ is monotone. \[mono\] Monotone partitions of $[n]$ are in one-to-one correspondence with permutations in ${\mathcal{S}}_n$ that avoid $(\axbc)$ and $(\abxc)$. Hence $$|{\mathcal{S}}_n(\axbc,\abxc)| = |{\mathcal{M}}_n|.$$ Given $\pi$ in ${\mathcal{M}}_n$, let $A_1\dd A_2\dd\cdots\dd A_k$ be the result of writing $\pi$ in the standard form given in the second proof of Proposition \[bell1\], and let ${\widehat\pi}=A_1 A_2\cdots A_k$. By the construction of ${\widehat\pi}$ the first letter in each $A_i$ is a left-to-right minimum. Furthermore, since $\pi$ is monotone the second letter in each non-singleton $A_i$ is a right-to-left maximum. Therefore, if $xy$ is an $(ab)$-subword of ${\widehat\pi}$, then $x$ is a left-to-right minimum and $y$ is a right-to-left maximum. Thus ${\widehat\pi}$ avoids both $(\axbc)$ and $(\abxc)$. Conversely, given ${\widehat\pi}$ in ${\mathcal{S}}_n(\axbc,\abxc)$, let $A_1\dd A_2\dd\cdots\dd A_k$ be the result of inserting a dash in ${\widehat\pi}$ preceding each left-to-right minimum, apart from the first letter in ${\widehat\pi}$. Since ${\widehat\pi}$ is $(\abxc)$-avoiding, the second letter in each non-singleton $A_i$ is a right-to-left maximum. The second letter in $A_i$ is the maximal element of $A_i$ when $A_i$ is viewed as a set. Thus $\pi = \{A_1,A_2,\ldots,A_k \}$ is monotone. What is left for us to show is that there is a one-to-one correspondence between monotone partitions and non-overlapping partitions. The proof we give is strongly influenced by the work of Flajolet [@FlSc90]. 
\[nop-mono\] Monotone partitions of $[n]$ are in one-to-one correspondence with non-overlapping partitions of $[n]$. Hence $|{\mathcal{M}}_n| = B_n^*$. If $k$ is the minimal element of a non-singleton block, then call $k$ the *first* element of that block. Similarly, if $k$ is the maximal element of a non-singleton block, then call $k$ the *last* element of that block. An element of a non-singleton block that is not a first or last element is called an *intermediate* element. Let us introduce an ordering of the blocks of a partition. A block $A$ is smaller than a block $B$ if $\min A < \min B$. We define a map $\Phi$ that to each non-overlapping partition $\pi$ of $[n]$ gives a unique monotone partition $\Phi(\pi)$ of $[n]$. Let the integer $k$ range from $1$ to $n$. - (a) If $k$ is the first element of a block of $\pi$, then open a new block in $\Phi(\pi)$ by letting $k$ be its first element. (A block is *open* at step $k$ if it has received its first element but not yet its last.) - (b) If $k$ is the last element of a block of $\pi$, then close the smallest open block of $\Phi(\pi)$ by letting $k$ be its last element. - (c) If $k$ is an intermediate element of some block $B$ of $\pi$, and $B$ is the $i$th largest open block of $\pi$, then let $k$ belong to the $i$th largest open block of $\Phi(\pi)$. - (d) If $\{k\}$ is a singleton block of $\pi$, then let $\{k\}$ be a singleton block of $\Phi(\pi)$. Observe that $\Phi(\pi)$ is monotone. Indeed, it is only in (b) that we close a block of $\Phi(\pi)$, and we always close the smallest open block of $\Phi(\pi)$. Conversely, we give a map $\Psi$ that to each monotone partition $\pi$ of $[n]$ gives a unique non-overlapping partition $\Psi(\pi)$ of $[n]$. Define $\Psi$ the same way as $\Phi$ is defined, except for case (b), where instead of closing the smallest open block we close the largest open block. It is easy to see that $\Phi$ and $\Psi$ are each other's inverses and hence they are bijections. \[nop\] The NOPs (non-overlapping partitions) of $[n]$ are in one-to-one correspondence with permutations in ${\mathcal{S}}_n$ that avoid $(\axbc)$ and $(\abxc)$. Hence $$|{\mathcal{S}}_n(\axbc, \abxc)|=B^*_n.$$ Follows immediately from Proposition \[mono\] together with Proposition \[nop-mono\]. 
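The maps $\Phi$ and $\Psi$ are easy to implement; the sketch below (ours, with arbitrary names and a list-of-sets encoding of partitions) follows the four cases above, the only difference between the two maps being which open block is closed.

```python
def transfer(partition, close="smallest"):
    """Phi (close='smallest') maps non-overlapping partitions to monotone
    ones; Psi (close='largest') is the inverse map."""
    n = max(max(b) for b in partition)
    first = {min(b): b for b in partition if len(b) > 1}   # case (a)
    last = {max(b): b for b in partition if len(b) > 1}    # case (b)
    singletons = {min(b) for b in partition if len(b) == 1}
    result, open_out, open_in = [], [], []
    for k in range(1, n + 1):
        if k in singletons:                     # case (d): copy the singleton
            result.append({k})
        elif k in first:                        # case (a): open a new block
            open_in.append(first[k])
            open_out.append({k})
        elif k in last:                         # case (b): close an open block
            pick = min if close == "smallest" else max
            block = pick(open_out, key=min)
            block.add(k)
            open_out.remove(block)
            result.append(block)
            open_in.remove(last[k])
        else:                                   # case (c): intermediate element
            B = next(b for b in open_in if k in b)
            i = sorted(open_in, key=min, reverse=True).index(B)
            sorted(open_out, key=min, reverse=True)[i].add(k)
    return result

phi = lambda p: transfer(p, "smallest")
psi = lambda p: transfer(p, "largest")

pi = [{1, 2, 5, 13}, {3, 8}, {4, 6, 7}, {9}, {10, 11, 12}]
print(phi(pi))       # blocks {1,2,5,7}, {3,8}, {4,6,12}, {9}, {10,11,13} -- monotone
print(psi(phi(pi)))  # the same blocks as pi, i.e. phi is undone by psi
```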
By the proof of Proposition \[nop-mono\], the non-overlapping partition $$\begin{aligned} \pi &\,=\, \{\{1,2,5,13\},\{3,8\},\{4,6,7\},\{9\},\{10,11,12\}\}\\ \intertext{corresponds to the monotone partition} \Phi(\pi) &\,=\, \{\{1,2,5,7\},\{3,8\},\{4,6,12\},\{9\},\{10,11,13\}\},\\ \intertext{that according to the proof of Proposition~\ref{mono} corresponds to the $\{\axbc,\abxc\}$-avoiding permutation} \widehat{\Phi(\pi)} &\,=\, 1{\hspace{-0.2ex}}0\;1{\hspace{-0.2ex}}3\;1{\hspace{-0.2ex}}1\;9\;4\;1{\hspace{-0.2ex}}2\;6\;3\;8\;1\;7\;5\;2. \end{aligned}$$ Let $L(\pi)$ be the number of left-to-right minima of $\pi$. Then $$\sum_{\pi\in{\mathcal{S}}_n(\axbc,\abxc)} x^{L(\pi)} = \sum_{k\geq 0} S^*(n,k)x^k.$$ Under the bijection $\pi\mapsto{\widehat\pi}$ in the proof of Proposition \[mono\], the number of blocks in $\pi$ determines the number of left-to-right minima of ${\widehat\pi}$, and vice versa. 
The number of blocks is not changed by the bijection $\Phi_1\circ\Psi_2$ in the proof of Proposition \[nop-mono\]. Permutations avoiding a pattern of class three ============================================== In [@Kn73v1] Knuth observed that there is a one-to-one correspondence between $(\bxaxc)$-avoiding permutations and Dyck paths. For completeness and future reference we give this result as a lemma, and prove it using one of the least known bijections. First we need a definition. For each word $x=x_1x_2\cdots x_n$ without repeated letters, we define the *projection* of $x$ onto $S_n$, which we denote $\operatorname{\mathrm{proj}}(x)$, by $$\operatorname{\mathrm{proj}}(x) = a_1 a_2 \cdots a_n \,,\;\text{ where }\; a_i = |\{j\in [n] : x_i\geq x_j \}|.$$ Equivalently, $\operatorname{\mathrm{proj}}(x)$ is the permutation in ${\mathcal{S}}_n$ which is order equivalent to $x$. For example, $\operatorname{\mathrm{proj}}(265) = 132$. \[lemma1\] $|{\mathcal{S}}_n(\bxaxc)|=C_n.$ Let $\pi=a_1 a_2 \cdots a_n$ be a permutation of $[n]$ such that $a_k=1$. Then $\pi$ is $(\bxaxc)$-avoiding if and only if $\pi=\sigma 1 \tau$, where $\sigma:=a_1\cdots a_{k-1}$ is a $(\bxaxc)$-avoiding permutation of $\{n,n-1,\ldots,n-k+1\}$, and $\tau:=a_{k+1}\cdots a_n$ is a $(\bxaxc)$-avoiding permutation of $\{2,3,\ldots,k\}$. We define recursively a mapping $\Phi$ from ${\mathcal{S}}_n(\bxaxc)$ onto the set of Dyck paths of length $2n$. If $\pi$ is the empty word, then so is the Dyck path determined by $\pi$, that is, $\Phi({{\epsilon}}) = {{\epsilon}}$. If $\pi \neq {{\epsilon}}$, then we can use the factorisation $\pi=\sigma 1 \tau$ from above, and define $\Phi(\pi) = u\,(\Phi\circ\operatorname{\mathrm{proj}})(\sigma)\,d\,(\Phi\circ\operatorname{\mathrm{proj}})(\tau)$. It is easy to see that $\Phi$ may be inverted, and hence is a bijection. \[lemma2\] A permutation avoids $(\bxac)$ if and only if it avoids $(\bxaxc)$. The sufficiency part of the proposition is trivial. The necessity part is not difficult either. Assume that $\pi$ contains a $(\bxaxc)$-subword. Then there exist $$A,B,C, n_1,n_2,\ldots,n_r \in [n], \text{ where } A < B < C,$$ such that $B\hspace{-1pt}AC$ is a subword of $\pi$, and $A n_1\cdots n_r C$ is a segment of $\pi$. If $n_1>B$, then $B\hspace{-1pt}A n_1$ form a $(\bxac)$-subword in $\pi$. Assume that $n_1<B$. Indeed, to avoid forming a $(\bxac)$-subword we will have to assume that $n_i<B$ for all $i\in[r]$, but then $B n_r C$ is a $(\bxac)$-subword. Accordingly we conclude that there exists at least one $(\bxac)$-subword in $\pi$. \[catalan\] Dyck paths of length $2n$ are in one-to-one correspondence with $(\bxac)$-avoiding permutations in ${\mathcal{S}}_n$. Hence $$|{\mathcal{S}}_n(\bxac)|=\frac 1 {n+1} \binom {2n} n.$$ Follows immediately from Lemma \[lemma1\] and Lemma \[lemma2\]. Let $L(\pi)$ be the number of left-to-right minima of $\pi$. Then $$\sum_{\pi\in{\mathcal{S}}_n(\bxac)} \!\!\!\!\!\!\!\! x^{L(\pi)} = \sum_{k\geq 0} \frac k {2n-k} \binom{2n-k} n x^k.$$ A *return step* in a Dyck path $\delta$ is a $d$ such that $\delta = \alpha u \beta d \gamma$, for some Dyck paths $\alpha$, $\beta$, and $\gamma$. A useful observation is that every non-empty Dyck path $\delta$ can be uniquely decomposed as $\delta = u \alpha d \beta$, where $\alpha$ and $\beta$ are Dyck paths. This is the so-called *first return decomposition* of $\delta$. Let $R(\delta)$ denote the number of return steps in $\delta$. 
In [@De99] Deutsch showed that the distribution of $R$ over all Dyck paths of length $2n$ is the distribution we claim that $L$ has over ${\mathcal{S}}_n(\bxac)$. Let $\gamma$ be a Dyck path of length $2n$, and let $\gamma = u \alpha d \beta$ be its first return decomposition. Then $R(\gamma) = 1 + R(\beta)$. Let $\pi\in{\mathcal{S}}_n(\bxac)$, and let $\pi=\sigma 1 \tau$ be the decomposition given in the proof of Lemma \[lemma1\]. Then $L(\pi) = 1 + L(\sigma)$. The result now follows by induction. In addition, it is easy to deduce that left-to-right minima, left-to-right maxima, right-to-left minima, and right-to-left maxima all share the same distribution over ${\mathcal{S}}_n(\bxac)$. Motzkin paths of length $n$ are in one-to-one correspondence with permutations in ${\mathcal{S}}_n$ that avoid $(\axbc)$ and $(\acxb)$. Hence $$|{\mathcal{S}}_n(\axbc, \acxb)|=M_n.$$ We mimic the proof of Lemma \[lemma1\]. Let $\pi\in{\mathcal{S}}_n(\axbc, \acxb)$. Since $\pi$ avoids $(\acxb)$ it also avoids $(\axcxb)$ by Lemma \[lemma2\] via $\pi\mapsto (\pi^c)^r$. Thus we may write $\pi = \sigma n \tau$, where $\pi(k)=n$, $\sigma$ is an $\{\axbc, \acxb\}$-avoiding permutation of $\{n-1,n-2,\ldots,n-k+1\}$, and $\tau$ is an $\{\axbc, \acxb\}$-avoiding permutation of $[n-k]$. If $\sigma\neq{{\epsilon}}$ then $\sigma=\sigma'r$ where $r=n-k+1$, or else an $(\axbc)$-subword would be formed with $n$ as the ’$c$’ in $(\axbc)$. Define a map $\Phi$ from ${\mathcal{S}}_n(\axbc, \acxb)$ to the set of Motzkin paths by $\Phi({{\epsilon}}) = {{\epsilon}}$ and $$\Phi(\pi) = \begin{cases} \ell\,(\Phi\circ\operatorname{\mathrm{proj}})(\sigma) & \text{ if } \pi = n\sigma, \\ u\,(\Phi\circ\operatorname{\mathrm{proj}})(\sigma)\,d\,\Phi(\tau) & \text{ if } \pi = \sigma r n \tau \text{ and } r=n-k+1.\\ \end{cases}$$ It is routine to find the inverse of $\Phi$. Let us find the Motzkin path associated to the $\{\axbc,\acxb\}$-avoiding permutation $76453281$. $$\begin{aligned} \Phi(76453{\bf 28}1) &=& u \Phi({\bf 5}4231) d \Phi({\bf 1}) \\ &=& u\ell \Phi({\bf 4}231) d\ell \\ &=& u\ell\ell \Phi({\bf 23}1) d\ell \\ &=& u\ell\ell u d \Phi({\bf 1}) d\ell \\ &=& u\ell\ell u d\ell d\ell \\ \end{aligned}$$ Acknowledgement {#acknowledgement .unnumbered} =============== I am greatly indebted to my advisor Einar Steingrímsson, who put his trust in me and gave me the opportunity to study mathematics on a postgraduate level. This work has benefited from his knowledge, enthusiasm and generosity.
--- abstract: 'We explain how recent developments in the fields of Realisability models for linear logic [@seiller-goig] – or *geometry of interaction* – and implicit computational complexity [@seiller-conl; @seiller-lsp] can lead to a new approach to implicit computational complexity. This semantics-based approach should apply uniformly to various computational paradigms, and enable the use of new mathematical methods and tools to attack problems in computational complexity. This paper provides the background, motivations and perspectives of this *complexity-through-Realisability* theory to be developed, and illustrates it with recent results [@seiller-goic].' address: 'Proofs, Programs, Systems. CNRS – Paris Diderot University.' author: - Thomas Seiller bibliography: - 'thomas.bib' title: | Towards a\ *Complexity-through-Realisability* Theory --- Introduction ============ Complexity theory lies at the intersection between mathematics and computer science, and studies the amount of resources needed to run a specific program (complexity of an algorithm) or solve a particular problem (complexity of a problem). We will explain how it is possible to build on recent work in *Realisability models for linear logic* – a mathematical model of programs and their execution – to provide new characterisations of existing complexity classes. It is hoped that these characterisations will enable new mathematical techniques, tools and invariants from the fields of operator algebras and dynamical systems, providing researchers with new tools and methods to attack long-standing open problems in complexity theory. The *complexity-through-Realisability* theory we propose to develop will provide a unified framework for studying many computational paradigms and their associated computational complexity theory grounded on well-studied mathematical concepts. This should provide a good candidate for a theory of complexity for computational paradigms currently lacking an established theory (e.g. concurrent processes), as well as contribute to establishing a unified and well-grounded account of complexity for higher-order functionals. Even though it has been an established discipline for more than 50 years [@hartmanisstearns], many questions in complexity theory, even basic ones, remain open. During the last twenty years, researchers have developed new approaches based on logic: they offer solid, machine-independent foundations and provide new tools and methods. Amongst these approaches, the fields of Descriptive Complexity (DC) and Implicit Computational Complexity (ICC) have led to a number of new characterisations of complexity classes. These works laid the groundwork for both theoretical results [@immerman] and applications such as typing systems for complexity constrained programs and type inference algorithms for statically determining complexity bounds. The *complexity-through-Realisability* theory we propose to develop is related to those established logic-based approaches. As such, it inherits their strengths: it is machine-independent, provides tools and methods from logic and gives grounds for the above-mentioned applications. Furthermore, it builds on state-of-the-art theoretical results on Realisability models for linear logic [@seiller-goig] using well-studied mathematical concepts from operator algebras and dynamical systems. As a consequence, it opens the way to using, against complexity theory’s open problems, the many techniques, tools and invariants that were developed in these disciplines. 
We illustrate the approach by explaining how first results were recently obtained by capturing a large family of complexity classes corresponding to various notions of automata. Indeed, we provided [@seiller-goic] Realisability models in which types of binary predicates correspond to the classes of languages accepted by one-way (resp. two-way) deterministic (resp. non-deterministic, resp. probabilistic) multi-head automata. This large family of languages contains in particular the classes (regular languages), (stochastic languages), (logarithmic space), (non-deterministic logarithmic space), (complementaries of languages in ), and (probabilistic logarithmic space). Finally, we discuss the possible extensions and further characterisations that could be obtained by the same methods. We in particular discuss the relationship between the classification of complexity classes and the problem of classifying a certain kind of algebra, namely *graphing algebras*. Complexity Theory ----------------- Complexity theory is concerned with the study of how many resources are needed to perform a specific computation or to solve a given problem. The study of *complexity classes* – sets of problems which need a comparable amount of resources to be solved – lies at the intersection of mathematics and computer science. Although a very active and established field for more than fifty years [@cobham; @cookfoundations; @hartmanisstearnsfoundations; @savitch], a number of basic problems remain open, for instance the famous “millennium problem” of whether equals or the less publicized but equally important question of whether equals . In recent years, several results have greatly modified the landscape of complexity theory by showing that proofs of separation (i.e. inequality) of complexity classes are hard to come by, pointing out the need to develop new theoretical methods. The most celebrated result in this direction [@naturalproofs] defines a notion of *natural proof* comprising all previously developed proof methods and shows that no “natural proof” can succeed in proving separation. Mathematicians have then tried to give characterisations of complexity classes that differ from the original machine-bound definitions, hoping to enable methods from radically different areas of mathematics. Efforts in this direction led to the development of Descriptive Complexity (DC), a field which studies the types of logics whose individual sentences characterise exactly particular complexity classes. Early developments were the 1974 Fagin-Jones-Selman results [@fagin74; @jonesselman] characterising the classes and . Many such characterisations have then been given [@comptongradel; @dawargradel; @gradelgurevich] and the method led Immerman to a proof of the celebrated Immerman-Szelepcsényi theorem [@immerman; @szelepcsenyi] stating that the two complexity classes and are equal (though Szelepcsényi’s proof does not use logic-based methods). Implicit Computational Complexity (ICC) develops a related approach whose aim is to study algorithmic complexity only in terms of restrictions of languages and computational principles. It has been established since Bellantoni and Cook’s landmark paper [@bellantonicook], and following work by Leivant and Marion [@leivantmarion1; @leivantmarion2]. Amongst the different approaches to ICC, several results were obtained by considering syntactic restrictions of *linear logic* [@ll], a refinement of intuitionistic logic which accounts for the notion of resources.
Linear logic introduces a modality $\oc$ marking the “possibility of duplicating” a formula $A$: the formula $A$ shall be used exactly once, while the formula $\oc A$ can be used any number of times. Modifying the rules governing this modality then yields variants of linear logic having computational interest: this is how constrained linear logic systems, for instance [@BLL] and [@danosjoinet], are obtained. However, only a limited number of complexity classes were characterised in this way, and the method seems to be limited by its syntactic aspect: while it is easy to modify existing rules, it is much harder to find new, alternative, rules from scratch. The approach we propose to follow in this paper does not suffer from these limitations, allowing for subtle distinctions unavailable to the syntactic techniques of ICC. Realisability Models for Linear Logic ------------------------------------- Concurrently with these developments in computational complexity, and motivated by disjoint questions and interests, Girard initiated the Geometry of Interaction (GoI) program [@towards]. This research program aims at obtaining particular kinds of *Realisability* models (called GoI models) for linear logic. Realisability was first introduced [@kleene] as a way of making the Brouwer-Heyting-Kolmogorov interpretation of constructivism and intuitionistic mathematics precise; the techniques were then extended to classical logic, for instance by Krivine [@krivine], and to linear logic. The GoI program quickly arose as a natural and well-suited tool for the study of computational complexity. Using the first GoI model [@goi1], Abadi, Gonthier and Lévy [@AbadiGonthierLevy92b] showed the optimality of Lamping’s reduction in lambda-calculus [@Lamping90]. It was also applied in implicit computational complexity [@baillotpedicini], and was the main inspiration behind dal Lago’s context semantics [@Lago]. More recently, the geometry of interaction program has inspired new techniques in implicit computational complexity. These new methods were initiated by Girard [@normativity] and have undergone rapid development. They led to a series of results in the form of new characterisations of the classes [@normativity; @seiller-conl], [@seiller-lsp; @aplas14] and [@lics-ptime]. Unfortunately, although the construction of Realisability models and the characterisations of classes are founded on similar techniques, they are two distinct, unrelated, constructions. The approach we propose to develop here will in particular bridge this gap and provide similar characterisations which will moreover allow the use of both logical and Realisability-specific methods. A *Complexity-through-Realisability* Theory =========================================== Technical Background and Motivations ------------------------------------ About ten years ago, Girard showed [@feedback] that the so-called “feedback equation”, which represents the execution of programs in GoI models, always has a solution when restricted to the unit ball of a von Neumann algebra. Moreover, previous and subsequent work showed that the obtained GoI model interprets, depending on the choice of the von Neumann algebra, either full linear logic [@goi3] or the constrained system ELL which characterises elementary time computable functions [@seiller-phd]. This naturally leads to the informal conjecture that there should be a correspondence between von Neumann algebras and complexity constraints.
This deep and promising idea turned out to be slightly inexact and seemingly difficult to exploit. Indeed, the author showed [@seiller-phd; @seiller-masas] that the expressivity of the logic interpreted in a GoI model depends not only on the enveloping von Neumann algebra $\vn{N}$ but also on a maximal abelian sub-algebra (masa) $\vn{A}$ of $\vn{N}$, hinting at a refined conjecture stating that complexity constraints correspond to such couples $(\vn{A},\vn{N})$. This approach is however difficult to exploit and to adapt to model other constrained logical systems, for two reasons. The first reason is that the theory of maximal abelian sub-algebras in von Neumann algebras is an involved subject matter still containing large numbers of basic but difficult open problems [@FiniteVNAandMasas]. The second is that even though some results were obtained, no intuitions were gained about what makes the correspondence between couples $(\vn{A},\vn{N})$ and complexity constraints work. Some very recent work of the author provides the foundational grounds for a new, tractable way of exploring the latter refined conjecture. This series of work [@seiller-goim; @seiller-goia; @seiller-goig; @seiller-goie] describes a systematic construction of realisability models for linear logic which unifies and extends all GoI models introduced by Girard. The construction is built upon a generalization of graphs, named *graphings* [@adams; @levitt_graphings; @gaboriaucost], which can be understood either as *geometric realisations* of graphs on a measure space $(X,\mathcal{B},\mu)$, as measurable families of graphs, or as generalized measured dynamical systems. It is parametrized by two monoids describing the model of computation and a map describing the realisability structure: - a monoid $\Omega$ used to associate weights to edges of the graphs; - a map $m:\Omega\rightarrow\bar{\mathbf{R}}_{\geqslant 0}$ defining *orthogonality* – accounting for linear negation; - a monoid $\microcosm{m}$ – the *microcosm* – of measurable maps from $(X,\mathcal{B},\mu)$ to itself. An *$\Omega$-weighted graphing in $\microcosm{m}$* is then defined as a directed graph $F$ whose edges are weighted by elements in $\Omega$, whose vertices are measurable subsets of the measurable space $(X,\mathcal{B})$, and whose edges are *realised* by elements of $\microcosm{m}$, i.e. for each edge $e$ there exists an element $\phi_{e}$ in $\microcosm{m}$ such that $\phi_{e}(s(e))=t(e)$, where $s,t$ denote the source and target maps. Based on this notion, and an orthogonality relation defined from the map $m$, we obtained a systematic method for constructing realisability models for linear logic. \[mainthm\] For all choices of $\Omega$, $\microcosm{m}$ and $m:\Omega\rightarrow\bar{\mathbf{R}}_{\geqslant 0}$, the set of $\Omega$-weighted graphings in $\microcosm{m}$ defines a model of Multiplicative-Additive Linear Logic (MALL). Let us notice that a microcosm $\microcosm{m}$ which contains only measure-preserving maps generates a measurable equivalence relation which, by the Feldman-Moore construction [@FeldmanMoore1], induces a couple $(\vn{A},\vn{N})$ where $\vn{A}$ is a maximal abelian subalgebra of the von Neumann algebra $\vn{N}$. The previous theorem thus greatly generalizes Girard’s general solution to the feedback equation [@feedback], and in several ways.
First, we define several models (deterministic, probabilistic, non-deterministic), some of which are unavailable to Girard’s approach: for instance the non-deterministic model violates Girard’s norm condition, and more generally Girard’s techniques only apply when $\Omega$ is chosen as a subset of the complex plane unit disc. It shows in fact much more, as it exhibits an infinite family of structures of realisability models (parametrized by the map $m:\Omega\rightarrow\realposN\cup\{\infty\}$) on any model obtained from a microcosm. This drastically extends Girard’s approach, for which only two such structures had been defined until now: an “orthogonality-as-nilpotency” structure in the algebra $\B{\hil{H}}$ [@goi1] and another one defined using the Fuglede-Kadison determinant [@FKdet] in the type II$_{1}$ hyperfinite factor [@goi5]. Methodology ----------- We can now explain the proposed methodology for defining a *Complexity-through-realisability* theory. The notion of $\Omega$-weighted $\microcosm{m}$-graphings for given monoid $\Omega$ and microcosm $\microcosm{m}$ yields a very general yet tractable mathematical notion of algorithm, as the microcosm $\microcosm{m}$ can naturally be understood as a set of computational principles [@seiller-lcc14]. It therefore provides an interesting middle-ground between usual computational models, for instance automata, and mathematical techniques from operator algebras and dynamical systems. It comprises Danos’ interpretation of pure lambda-calculus (a Turing-complete model of computation) in terms of operators [@danos-phd], but it is not restricted to sequential algorithms as it will be shown to provide an interpretation of probabilistic and quantum programs. It also provides characterisations of usual complexity classes as types of predicates over binary words $\ListType\Rightarrow \cond{Bool}$, which will lead to a partial proof of the above conjecture by showing a correspondence between families of microcosms and complexity constraints. Work in this direction will establish these definitions of algorithms and complexity constraints as a uniform, homogeneous, machine-independent approach to complexity theory. The methods developed in this setting, either adapted from DC/ICC or specific to the realisability techniques employed, will apply to probabilistic/quantum complexity classes as much as to sequential classes. In particular, it will offer a framework where comparisons between non-classical and classical classes can be performed. It will also expand to computational paradigms where no established theory of complexity exists, providing a strong and coherent proposal for one. It will extend the approach of ICC and DC as it will go beyond the syntactical restrictions they suffer from. In particular, it will provide a new method for defining logical systems corresponding to complexity classes: the realisability model construction gives a systematic way to define a logic corresponding to the underlying computational model. It will also extend the GoI model approach to complexity by reconciling the logical and complexity aspects, allowing the use of both logical and realisability-specific methods. Lastly, the approach we propose to develop does not naturally fall into the usual pitfalls encountered when trying to obtain separation results. Therefore, it provides a framework which will potentially offer separation methods, e.g. using invariants of the well-established mathematical notions it is founded upon.
Interaction Graphs Models of Linear Logic ========================================= Graphings --------- Let $(X,\mathcal{B},\lambda)$ be a measure space. We denote by $\mathcal{M}(X)$ the set of non-singular transformations[^1] $X\rightarrow X$. A *microcosm* of the measure space $X$ is a subset $\microcosm{m}$ of $\mathcal{M}(X)$ which is closed under composition and contains the identity. In the following, we will consider a notion of graphing depending on a *weight-monoid* $\Omega$, i.e. a monoid $(\Omega,\cdot,1)$ which contains the possible weights of the edges. Let $\microcosm{m}$ be a microcosm of a measure space $(X,\mathcal{B},\lambda)$ and $V^{F}$ a measurable subset of $X$. An *$\Omega$-weighted graphing in $\microcosm{m}$* of carrier $V^{F}$ is a countable family $F=\{(\omega_{e}^{F},\phi_{e}^{F}: S_{e}^{F}\rightarrow T_{e}^{F})\}_{e\in E^{F}}$, where, for all $e\in E^{F}$ (the set of *edges*): - $\omega_{e}^{F}$ is an element of $\Omega$, the *weight* of the edge $e$; - $S_{e}^{F}\subset V^{F}$ is a measurable set, the *source* of the edge $e$; - $T_{e}^{F}=\phi_{e}^{F}(S_{e}^{F})\subset V^{F}$ is a measurable set, the *target* of the edge $e$; - $\phi_{e}^{F}$ is the restriction of an element of $\microcosm{m}$ to $S_{e}^{F}$, the *realisation* of the edge $e$. It was shown in earlier work [@seiller-goia] how one can construct models of where proofs are interpreted as graphs. This construction relied on a single property, called the *trefoil property*, which relates two simple notions: - the *execution* $F\plug G$ of two graphs, a graph defined as a set of paths; - the *measurement* $\meas{F,G}$, a real number computed from a set of cycles. These constructions can be extended to the more general framework where proofs are interpreted as graphings. Indeed, the notions of paths and cycles in a graphing are quite natural, and from two graphings $F,G$ in a microcosm $\microcosm{m}$ one can define their execution $F\plug G$ which is again a graphing in $\microcosm{m}$[^2]. A more involved argument then shows that the trefoil property holds for a family of measurements $\meas{\cdot,\cdot}$, where $m:\Omega\rightarrow\realposN\cup\{\infty\}$ is any measurable map. These results are obtained as a generalization of constructions considered in the author’s PhD thesis[^3] [@seiller-phd]. Let $\Omega$ be a monoid and $\microcosm{m}$ a microcosm. The set of $\Omega$-weighted graphings in $\microcosm{m}$ yields a model, denoted by ${\mathbb{M}[\Omega,\mathfrak{m}]}$, of multiplicative-additive linear logic. In most of the models, one can define some exponential connectives. In particular, all models considered later on have the necessary structure to define an exponential modality. Let us notice however that the notion of exponential modality we are considering here is extremely weak, as most models won’t validate the functorial promotion rule. The only rule that is assured to be satisfied by the exponential connectives we will consider is the contraction rule, i.e. for any type $\cond{A}$, one has $\oc\cond{A}\multimap \oc\cond{A}\otimes\oc\cond{A}$. These very weak exponential connectives will turn out to be of great interest: we obtain in this way models of linear logic where the exponentials are weaker than what is obtained through syntactic considerations in systems like , , etc., and characterise low complexity classes. Models of Computation --------------------- Before explaining how one can characterise complexity classes in this way, we need to state refinements of the previous theorem.
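Before turning to those refinements, the following minimal sketch (Python, all names invented) records the data carried by an $\Omega$-weighted graphing in a microcosm, with the measure space replaced by a finite set of points so that measurability is trivial; it only illustrates the bookkeeping of the definition above, not its measure-theoretic content.

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List

Point = int  # stand-in for a point of the measure space X (finite toy model)

@dataclass(frozen=True)
class Edge:
    weight: str               # element of the weight monoid Omega (here just a label)
    source: FrozenSet[Point]  # the source S_e of the edge, a subset of the carrier
    realiser: str             # name of the microcosm element realising the edge

@dataclass
class Graphing:
    carrier: FrozenSet[Point]  # the carrier V^F
    edges: List[Edge]

def target(edge: Edge, microcosm: Dict[str, Callable[[Point], Point]]) -> FrozenSet[Point]:
    """T_e = phi_e(S_e): the image of the source under the realising map."""
    phi = microcosm[edge.realiser]
    return frozenset(phi(x) for x in edge.source)

def is_graphing_in(g: Graphing, microcosm: Dict[str, Callable[[Point], Point]]) -> bool:
    """Check the bookkeeping conditions of the definition: every edge is realised by a
    microcosm element, and its source and target are subsets of the carrier."""
    return all(
        e.realiser in microcosm
        and e.source <= g.carrier
        and target(e, microcosm) <= g.carrier
        for e in g.edges
    )

# A toy stand-in for a microcosm on X = {0,...,5} (the real definition also
# requires closure under composition, which we do not enforce here).
toy_microcosm = {"id": lambda x: x, "shift": lambda x: (x + 1) % 6}
g = Graphing(carrier=frozenset(range(6)),
             edges=[Edge(weight="1", source=frozenset({0, 1, 2}), realiser="shift")])
assert is_graphing_in(g, toy_microcosm)
```

The discrete setting is chosen only so that sources and targets can be manipulated as plain sets; in the actual models they are measurable subsets and the realisers are non-singular transformations.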
We first define the notion of *deterministic graphing*. An $\Omega$-weighted graphing $G$ is *deterministic* when: - for all $e\in E^{G}$, $\omega^{G}_{e}\in\{0,1\}$; - the following set is of null measure: $\{x\in\measured{X}~|~\exists e\neq e'\in E^{G}, x\in S_{e}^{G}\cap S^{G}_{e'}\}$ *Non-deterministic graphings* are defined as those graphings satisfying the first condition. We then prove that the notions of deterministic and non-deterministic graphings are closed under composition, i.e. if $F,G$ are deterministic graphings, then their execution $F\plug G$ is again a deterministic graphing. This shows that the sets of deterministic and non-deterministic graphings define submodels of ${\mathbb{M}[\Omega,\mathfrak{m}]}$. Let $\Omega$ be a monoid and $\microcosm{m}$ a microcosm. The set of $\Omega$-weighted *deterministic* graphings in $\microcosm{m}$ yields a model, denoted by $\dmodel{\Omega}{m}$, of multiplicative-additive linear logic. The set of $\Omega$-weighted *non-deterministic* graphings in $\microcosm{m}$ yields a model, denoted by $\nmodel{\Omega}{m}$, of multiplicative-additive linear logic. One can also consider several other classes of graphings. We explain here the simplest non-classical model one could consider, namely that of *probabilistic graphings*. In order for this notion to be of real interest, one should suppose that the unit interval $[0,1]$ endowed with multiplication is a submonoid of $\Omega$. An $\Omega$-weighted graphing $G$ is *probabilistic* when: - for all $e\in E^{G}$, $\omega^{G}_{e}\in[0,1]$; - the following set is of null measure: $\{x\in\measured{X}~|~\sum_{e\in E^{G},~x\in S^{G}_{e}} \omega^{G}_{e}> 1\}$ It turns out that this notion of graphing also behaves well under composition, i.e. there exists a *probabilistic* submodel of ${\mathbb{M}[\Omega,\mathfrak{m}]}$, namely the model of *probabilistic graphings*. Let $\Omega$ be a monoid and $\microcosm{m}$ a microcosm. The set of $\Omega$-weighted *probabilistic* graphings in $\microcosm{m}$ yields a model, denoted by $\pmodel{\Omega}{m}$, of multiplicative-additive linear logic. These models are all submodels of the single model ${\mathbb{M}[\Omega,\mathfrak{m}]}$. Moreover, other inclusions of models can be obtained by modifying the other parameters, namely the weight monoid $\Omega$ and the microcosm $\microcosm{m}$. For instance, given two microcosms $\microcosm{m}\subset\microcosm{n}$, it is clear that a graphing in $\microcosm{m}$ is in particular a graphing in $\microcosm{n}$. This inclusion actually extends to an embedding of the model ${\mathbb{M}[\Omega,\mathfrak{m}]}$ into ${\mathbb{M}[\Omega,\mathfrak{n}]}$ which preserves most logical operations[^4]. Moreover, given two microcosms $\microcosm{m}$ and $\microcosm{n}$, one can define the *smallest common extension* $\microcosm{m+n}$ as the compositional closure of the set $\microcosm{m}\cup\microcosm{n}$. The model ${\mathbb{M}[\Omega,\mathfrak{m+n}]}$ then contains both models ${\mathbb{M}[\Omega,\mathfrak{m}]}$ and ${\mathbb{M}[\Omega,\mathfrak{n}]}$ through the embedding just mentioned. In the same way, an inclusion of monoids $\Omega\subset\Gamma$ yields an embedding of the model ${\mathbb{M}[\Omega,\mathfrak{m}]}$ into ${\mathbb{M}[\Gamma,\mathfrak{m}]}$. For instance, the model ${\mathbb{M}[\{1\},\mathfrak{m}]}$ is a submodel of ${\mathbb{M}[\Omega,\mathfrak{m}]}$ for any monoid $\Omega$.
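To make the deterministic and probabilistic conditions stated at the beginning of this subsection concrete, here is a hedged sketch in the same finite toy setting as above (all names invented); with finitely many points, "of null measure" degenerates to "empty", so the two conditions become elementary set checks.

```python
from typing import FrozenSet, List, Tuple

Point = int
# An edge is summarised here by (weight, source_set); weights are numbers for simplicity.
ToyEdge = Tuple[float, FrozenSet[Point]]

def is_deterministic(edges: List[ToyEdge]) -> bool:
    """Deterministic: every weight is 0 or 1, and no point lies in the sources of two
    distinct edges (the 'null measure' overlap condition, trivialised to emptiness)."""
    if not all(w in (0.0, 1.0) for w, _ in edges):
        return False
    for i, (_, s) in enumerate(edges):
        for _, t in edges[i + 1:]:
            if s & t:
                return False
    return True

def is_probabilistic(edges: List[ToyEdge]) -> bool:
    """Probabilistic: weights lie in [0,1] and, at every point, the weights of the
    edges whose source contains that point sum to at most 1."""
    if not all(0.0 <= w <= 1.0 for w, _ in edges):
        return False
    points = set()
    for _, s in edges:
        points |= s
    return all(sum(w for w, s in edges if x in s) <= 1.0 for x in points)

# A 'coin flip' on the source {0}: two edges of weight 1/2, probabilistic but not deterministic.
coin = [(0.5, frozenset({0})), (0.5, frozenset({0}))]
assert is_probabilistic(coin) and not is_deterministic(coin)
```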
One can also define, given weight monoids $\Omega$, $\Theta$ and $\Xi$ with monomorphisms $\Omega\rightarrow\Theta$ and $\Omega\rightarrow\Xi$, the model ${\mathbb{M}[\Theta+_{\Omega}\Xi,\mathfrak{m}]}$ where $\Theta+_{\Omega}\Xi$ denotes the amalgamated sum of the monoids. illustrates some of these inclusions of models. characterisations of sub-logarithmic classes {#recentresults} ============================================ We now expose some recent results obtained by applying the methodology described above [@seiller-goic]. We describe in this way a number of new characterisations of sublogarithmic complexity classes. Before going into details about these characterisations, let us define a number of complexity classes – all of them definable by classes of automata. For each integer $i$, we define: - the class (resp. ) as the set of languages accepted by deterministic two-way (resp. one-way) multihead automata with at most $i$ heads; - the class (resp. ) as the set of languages accepted by two-way (resp. one-way) multihead automata with at most $i$ heads; - the class (resp. ) as the set of languages whose complementary language is accepted by two-way (resp. one-way) multihead automata with at most $i$ heads; - the class (resp. ) as the set of languages accepted by two-way (resp. one-way) probabilistic multihead automata with at most $i$ heads; We also denote by (resp. ) the class of predicates over binary words that are recognized by a Turing machine using logarithmic space (resp. polynomial time), by (resp. ) its non-deterministic analogue, by (resp. ) the set of languages whose complementary language lies in (resp. ). We also denote by the class of predicates over binary words that are recognized by a probabilistic Turing machine with unbounded error using logarithmic space. We don’t recall the usual definitions of these variants of multihead automata, which can be easily found in the literature. We only recall the classical results: $$\begin{array}{ccc} \cup_{i\in\naturalN}\text{\cctwdfa{i}}=\text{\Logspace}&\hspace{1cm}&\cup_{i\in\naturalN}\text{\cctwnfa{i}}=\text{\NLogspace}\\ \cup_{i\in\naturalN}\text{\cctwconfa{i}}=\text{\coNLogspace}&\hspace{1cm}&\cup_{i\in\naturalN}\text{\cctwpfa{i}}=\text{\PLogspace} \end{array}$$ In the following, we will denote by (resp. , resp. ) the set $\cup_{i\in\naturalN}\text{\cctwdfa{i}}$ (resp. $\cup_{i\in\naturalN}\text{\cctwnfa{i}}$, resp. $\cup_{i\in\naturalN}\text{\cctwpfa{i}}$). Deterministic Computation: From Regular Languages to Logarithmic Space ---------------------------------------------------------------------- In the models of linear logic we described, one can easily define the type $\ListType$ of words over an arbitrary finite alphabet $\Sigma$. The definition of the representation of these binary words comes from the encoding of binary lists in lambda-calculus and is explained thoroughly in previous works [@seiller-phd; @seiller-conl; @seiller-lsp]. We won’t give the formal definition of what is a representation of a word $\word{w}$ here, but let us sketch the main ideas. Given a word, say the binary word $\word{w}=00101$, we introduce a symbol $\star$ that can be understood as a left-hand end-of-tape marker and consider the list of symbols $\star 00101$. Then, the graphing that will represent $\word{w}$ is obtained as a realisation of the directed graph whose set of vertices is $\{\star,0,1\}\times\{\In,\Out\}$ and whose edges link the symbols of the list together, i.e. the graph pictured in . 
(Figure omitted: the directed graph representing the list $\star 00101$; each symbol of the list contributes an input vertex and an output vertex, plain edges link the output of each symbol to the input of the next, and a final edge returns from the output of the last symbol to the input of $\star$.) We are now interested in the elements of the type $\oc\ListTypeBin$. For each word $\word{w}$, there exists an element $\oc L_{\word{w}}$ in the type $\oc\ListTypeBin$ which represents it. We say that a graphing – or *program* – $P$ of type $\oc\ListTypeBin\multimap \cond{Bool}$ *accepts* the word $\word{w}$ when the execution $P\plug \oc L_{\word{w}}$ is equal to the distinguished element $\tt true\rm\in\cond{Bool}$. The *language* accepted by such a program $P$ is then defined as $[P]=\{\word{w}~|~P\plug \oc L_{\word{w}}=\tt true\rm\}$. Let $\Omega$ be a monoid, $\microcosm{m}$ a microcosm and $\mathcal{L}$ a set of languages. We say the model $\genmodel[]{m}{\Omega}{\textnormal{det}}$ *characterises the set $\mathcal{L}$* if the set $\{[P]~|~P\in \oc\ListTypeBin\multimap\cond{Bool}\}$ is equal to $\mathcal{L}$. We now consider the measure space $\integerN\times[0,1]^{\naturalN}$ endowed with the product of the counting measure on $\integerN$ and the Lebesgue measure on the Hilbert cube $[0,1]^{\naturalN}$. To define microcosms, we use the constructor $+$: if $\microcosm{m}$ and $\microcosm{n}$ are two microcosms, $\microcosm{m+n}$ is the smallest microcosm containing both $\microcosm{m}$ and $\microcosm{n}$. We can now define the following microcosms: - $\microcosm{m}_{1}$ is the monoid of translations $\tau_{k}:(n,x)\mapsto (n+k,x)$; - $\microcosm{m}_{i+1}$ is the monoid $\microcosm{m}_{i}+\microcosm{s}_{i+1}$ where $\microcosm{s}_{i+1}$ is the monoid generated by the single map: $$s_{i+1}:(n,(x_{1},x_{2},\dots))\mapsto(n,(x_{i+1},x_{2},\dots,x_{i},x_{1},x_{i+2},\dots))$$ - $\microcosm{m}_{\infty}=\cup_{i\in\naturalN}\microcosm{m}_{i}$. The intuition is that a microcosm $\microcosm{m}$ represents the set of computational principles available to write programs in the model. The operation $+$ thus extends the set of principles at disposal, increasing expressivity. As a consequence, the set of languages characterised by the type $\oc\ListTypeBin\multimap\cond{Bool}$ becomes larger and larger as we consider extensions of the microcosms. As an example, the microcosm $\microcosm{m}_{1}$ corresponds to allowing oneself to compute with automata. Expanding this microcosm by adding a map $s_{2}$ yields $\microcosm{m}_{2}=\microcosm{m}_{1}+\microcosm{s}_{2}$ and corresponds to the addition of a new computational principle: using a second head. \[th1\] The model $\dmodel{\{1\}}{m_{\textnormal{i}}}$ $(i\in\naturalN\cup\{\infty\})$ characterises the class .
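The microcosms $\microcosm{m}_{i}$ appearing in this statement are generated by very simple measurable maps. As a concrete reading of their definitions, the following sketch (Python, invented names, with the Hilbert cube truncated to finitely many coordinates) implements the translations $\tau_{k}$ and the coordinate swap $s_{j}$; it illustrates the generating maps themselves, not the graphing models built over them.

```python
from typing import Tuple

# A point of Z x [0,1]^N, with the Hilbert cube truncated to finitely many coordinates.
Point = Tuple[int, Tuple[float, ...]]

def tau(k: int):
    """The translation tau_k generating the microcosm m_1: (n, x) -> (n + k, x)."""
    return lambda p: (p[0] + k, p[1])

def s(j: int):
    """The swap s_j (j >= 2) added when passing from m_{j-1} to m_j: it exchanges the
    first and the j-th coordinate of the Hilbert cube and leaves the rest untouched,
    intuitively making a j-th 'head' available."""
    def swap(p: Point) -> Point:
        n, x = p
        y = list(x)
        y[0], y[j - 1] = y[j - 1], y[0]
        return (n, tuple(y))
    return swap

p = (3, (0.1, 0.2, 0.3, 0.4))
assert tau(2)(p) == (5, (0.1, 0.2, 0.3, 0.4))
assert s(2)(p) == (3, (0.2, 0.1, 0.3, 0.4))
```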
In particular, the model $\dmodel{\{1\}}{m_{\textnormal{1}}}$ characterises the class of Regular languages and the model $\dmodel{\{1\}}{m_{\infty}}$ characterises the class . Examples -------- #### Examples of words We here work through two examples to illustrate how the representation of computation by graphings works. First, we give the representation as graphings of the lists $\star 0$ (), $\star 11$ () and $\star 01$ (). In the illustrations, the vertices, e.g. ’0i’, ’0o’, represent disjoint segments of unit length, e.g. $[0,1]$, $[1,2]$. As mentioned in the caption, the plain edges are realised as translations. Notice that those edges are cut into pieces (two pieces each for the list $\star 0$, and three pieces each for the others). This is due to the fact that these graphings represent exponentials of the word representation: the exponential splits the unit segment into as many pieces as the length of the list; intuitively each piece of the unit segment corresponds to the “address” of a bit. The edges then make the ordering explicit: each symbol comes with pointers to its preceding and succeeding symbols. E.g. in the case of the first ’1’ in the list $\star 11$, there is an edge from ’1i’ to ’$\star$o’ representing the fact that the preceding symbol was ’$\star$’, and there is an edge from ’1o’ to ’1i’ representing the fact that the next symbol is a ’1’; notice moreover that these edges move between the corresponding addresses. #### A first machine As a first example, let us represent an automaton that accepts a word $\word{w}$ if and only if $\word{w}$ contains at least one symbol ’1’. This example was chosen because of the simplicity of the representation. Indeed, one can represent it by a graphing with a single state and which uses a single head (hence it is a graphing in the microcosm $\microcosm{m}_{1}$). Its transition relation is then defined as follows: if the symbol read is a ’1’ then stop; if it is a ’0’, move the head to read the next symbol; if it is a ’$\star$’ then reject. The representation as a graphing is shown in . Notice that the symbol ’$\star$’ plays the role of both left and right end-markers. We then represent the computation of this graphing with the representations of the words $\star 0$ (), $\star 11$ () and $\star 01$ () in . The computation is illustrated by showing the two graphings (the machine and the integer), one on each side of the set of vertices. Notice that in those figures the graphing representing the machine has been replaced by one of its *refinements* to reflect the splitting of the unit intervals appearing in the representation of the words. The result of the computation is the set of alternating paths from the set of vertices $\{\text{accept},\text{reject}\}$ to itself: in each case there is at most one, which is stressed by drawing the edges it is composed of in boldface. The machine accepts the input if and only if there is a single path from “accept” to “accept”. One can see in the figure that this machine accepts the words $\star 11$ and $\star 01$ but not the word $\star 0$. Notice how the computation can be understood as a game between two players. We illustrate this on the computation on the word $\star 01$. First the machine, through its edge from “accept” to ’$\star$o’, asks the input “What’s your first symbol?”. The integer, through its edge from ’$\star$o’ to ’0i’, answers “It’s a ’0’.”. Then the machine asks “What’s your next symbol?” (the edge from ’0i’ to ’0o’), to which the integer replies “It’s a ’1’.” (the edge from ’0o’ to ’1i’).
At this point, the machine accepts (the edge from ’1i’ to “accept”). #### A second machine We chose as a second example a more complex machine. The language it accepts is actually a regular language (the words that contain at least one symbol ’1’ and one symbol ’0’), i.e. it can be computed by a single-head automaton. However, we chose to compute this language by first looking for a symbol ’1’ in the word and, if this part is successful, activating a second head looking for a symbol ’0’, i.e. we compute it with a graphing in the microcosm $\mathfrak{m}_{2}$ even though, as a regular language, it can be computed by graphings in $\microcosm{m}_{1}$. This automaton has two states corresponding to the two parts in the algorithmic procedure: a state “looking for a ’1’” and a state “looking for a ’0’”; we write these states “L1” and “L0” respectively. The graphing representing this automaton is represented in , where there are two sets of vertices one above the other: the row below shows the vertices in state L1 while the row on top shows the vertices in state L0. Notice the two dashed lines which are realised using a permutation and correspond to the change of principal head[^5]. We represented the computation of this machine with the same inputs $\star 0$, $\star 11$, and $\star 01$ in . Notice that to deal with states, the inputs are duplicated. Once again, acceptance corresponds to the existence of a path from the “accept” vertex to itself (without changing states). The first computation does not go far: the first head moves along the word and then stops as it did not encounter a symbol ’1’ (this path is shown by thick edges). The second computation is more interesting. First, it follows the computation of the previous machine on the same input: it asks for the first symbol, acknowledges that it is a ’1’, then activates the second head and changes state. At this point the path continues as the second head moves along the input: this seems to contradict the picture since our path seems to arrive on the middle splitting of the vertex ’$\star$o’ which would not allow us to continue through the input edge whose source is the left-hand splitting of this same vertex. However, this overlooks the fact that we changed the active head. Indeed, what happens in fact is that the permutation moves the splitting along a new direction: while the same transition realised as a simple translation would forbid us to continue the computation, realising it by the permutation $s_{2}$ allows the computation to continue. We illustrate how the permutation actually interacts with the splitting, in the case where the interval is split into two pieces. The representation of vertices as unit intervals is no longer sound as we are actually using the first two copies of $[0,1]$ in the Hilbert cube $[0,1]^{\naturalN}$ to represent this computation, hence working with unit squares. In this case, however, the second head moves along the input without encountering a symbol ’0’ and then stops. This path is shown by thick edges in the figure. The last computation accepts the input. As in the previous case, the first head moves along the input, encounters a symbol ’1’, changes its state and activates the second head. As in the previous case, the computation can continue at this point, and the second head encounters a ’0’ as the first symbol of the input. Then the machine accepts. This path is shown in the figure by thick edges.
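Forgetting the graphing presentation, the two machines of this subsection are ordinary (multihead) automata on $\star$-delimited words and can be simulated directly. The following sketch (Python, invented names) mirrors the informal transition relations described above and reproduces the three runs just discussed; it is only an illustration of the behaviour of the machines, not of the alternating-path semantics.

```python
def contains_one(w: str) -> bool:
    """Machine 1: one head, one state. Starting after the marker, the head moves right;
    accept on reading '1', keep moving on '0', and reject when it comes back to the
    marker (the star plays both end-marker roles)."""
    for c in w:
        if c == "1":
            return True
    return False  # the head returned to the marker without seeing a '1'

def contains_one_then_zero(w: str) -> bool:
    """Machine 2: two states L1/L0 and two heads. In state L1 the first head looks for
    a '1'; on success the machine switches to state L0 and a second head scans the
    word (from the marker) looking for a '0'."""
    state, h1, h2 = "L1", 0, 0
    while True:
        if state == "L1":
            if h1 == len(w):
                return False          # back at the marker: no '1' was found
            if w[h1] == "1":
                state = "L0"          # activate the second head
            h1 += 1
        else:  # state == "L0"
            if h2 == len(w):
                return False          # back at the marker: no '0' was found
            if w[h2] == "0":
                return True
            h2 += 1

# The runs worked out above, on star-0, star-11 and star-01 (markers omitted).
assert [contains_one(w) for w in ("0", "11", "01")] == [False, True, True]
assert [contains_one_then_zero(w) for w in ("0", "11", "01")] == [False, False, True]
```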
Non-deterministic and Probabilistic Computation ----------------------------------------------- All the preceding results have non-deterministic and probabilistic analogues; we consider in this section the model of non-deterministic graphings. To obtain the same types of results in that case, two issues should be dealt with. First, one needs to consider programs of a different type since the result of a non-deterministic computation yields a set of booleans and not a single boolean. Thus, programs will be considered as elements of a more general type than in the deterministic case. We consider here elements of the type $\oc\ListTypeBin\multimap\NBool$, where $\NBool$ is a specific type definable in the models, in some sense a non-deterministic version of the booleans. The second issue concerns acceptance. While it is natural in the deterministic case to ask whether the computation yielded the element $\ttrm{true}\in\Bool$, this is no longer the case. Should one define acceptance as producing at least one element $\ttrm{true}$ or as producing no element $\ttrm{false}$? Both conditions should be considered. In order to obtain a quite general notion of acceptance that can not only capture both notions explained above but also extend to other computational paradigms such as probabilistic computation, we use the structure of the realisability models we are working with to define a notion of *test*. Indeed, the models are constructed around an orthogonality relation $\poll$: a test will be an element (or more generally a family of elements) $\testfont{T}$ of the model and a program $P$ accepts the word $\word{w}$ if the execution $P\plug \oc L_{\word{w}}$ is orthogonal to $\testfont{T}$. Given $P$ an element of $\oc\ListTypeBin\multimap\NBool$ and a test $\testfont{T}$, one can define the language $[P]_{\testfont{T}}$ as the set of all words $\word{w}$ that are accepted by $P$ w.r.t. the test $\testfont{T}$: $$[P]_{\testfont{T}}=\{\word{w}~|~P\plug \oc L_{\word{w}}\poll \testfont{T}\}$$ Acceptance for the specific case of the deterministic model can be defined in this way by considering the right notion of test. Let $\Omega$ be a monoid, $\microcosm{m}$ a microcosm, $\testfont{T}$ a test and $\mathcal{L}$ a set of languages. We say the model $\genmodel[]{m}{\Omega}{\ast}$ *characterises the set $\mathcal{L}$ w.r.t. the test $\testfont{T}$* if the set $\{[P]_{\testfont{T}}~|~P\in \oc\ListTypeBin\multimap\NBool\}$ is equal to $\mathcal{L}$. In particular, one can show the existence of two tests $\testnl$ and $\testconl$ that correspond to the two notions of acceptance mentioned above and which allow for the characterisation of usual non-deterministic classes. Those are stated in the following theorem. \[ndth1\] The model $\nmodel{\{1\}}{m_{\textnormal{i}}}$ $(i\in\naturalN\cup\{\infty\})$ characterises the class w.r.t. the test $\testnl$ and characterises the class w.r.t. the test $\testconl$. In the case of probabilistic graphings, one can show the existence of a test $\testprob$ which allows for the characterisation of probabilistic computation with unbounded error. This leads to the following theorem. \[pth1\] The model $\pmodel{[0,1]}{m_{\textnormal{i}}}$ $(i\in\naturalN\cup\{\infty\})$ characterises the class w.r.t. the test $\testprob$.
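To make the test-based notion of acceptance concrete, here is a deliberately crude sketch (Python, invented names) in which the outcome of a non-deterministic execution $P\plug\oc L_{\word{w}}$ is flattened to a set of booleans and a test is just a predicate on such sets; the two tests $\testnl$ and $\testconl$ then read as "some run answers true" and "no run answers false". It only illustrates the definition of $[P]_{\testfont{T}}$, not the orthogonality machinery of the models.

```python
from typing import Callable, Iterable, Set

Outcome = Set[bool]                 # the set of booleans produced by the runs of P on w
Program = Callable[[str], Outcome]  # crude stand-in for the execution P :: !L_w
Test = Callable[[Outcome], bool]    # crude stand-in for orthogonality to a test T

# The two notions of acceptance discussed above, read as tests.
test_nl: Test = lambda out: True in out         # accept iff at least one run answers true
test_conl: Test = lambda out: False not in out  # accept iff no run answers false

def language(program: Program, test: Test, words: Iterable[str]) -> Set[str]:
    """[P]_T, restricted to a finite set of candidate words."""
    return {w for w in words if test(program(w))}

# A toy non-deterministic 'program': guess a position of the word and check it holds a '1'.
guess_a_one: Program = lambda w: {c == "1" for c in w}

words = ["", "0", "1", "01", "000"]
print(language(guess_a_one, test_nl, words))    # words containing at least one '1'
print(language(guess_a_one, test_conl, words))  # words whose every position holds a '1' (vacuously, the empty word)
```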
As corollaries of these results, we obtain new characterisations of: - the class of Regular languages (the model $\nmodel{\{1\}}{m_{\textnormal{1}}}$); - the classes and (the model $\nmodel{\{1\}}{m_{\infty}}$); - the class of Stochastic languages (the model $\pmodel{[0,1]}{m_{\textnormal{1}}}$); - the class (the model $\pmodel{[0,1]}{m_{\infty}}$). Variations ---------- All of these results are based upon a correspondence between elements of the type $\oc\ListTypeBin\multimap \NBool$ and some kinds of two-way multihead automata, either deterministic, non-deterministic or probabilistic. Several other results can be obtained by modifying the notion of automata considered. We discuss here three possible modifications. The first modification is actually a restriction, that is: can we represent computation by *one-way automata*? One can already answer this question positively, as the two-way capability of the automata does not really find its source in the programs $P$ in $\oc\ListTypeBin\multimap \NBool$ but in the representation of words. One can define an alternative representation of words over an alphabet $\Sigma$ and a corresponding type $\ListTypeOW$. We then obtain one-way analogues of , , and . The model $\dmodel{[0,1]}{m_{\textnormal{i}}}$ $(i\in\naturalN\cup\{\infty\})$ characterises the class . The model $\nmodel{[0,1]}{m_{\textnormal{i}}}$ $(i\in\naturalN\cup\{\infty\})$ characterises the class w.r.t. the test $\testnl$ and characterises the class w.r.t. the test $\testconl$. The model $\pmodel{[0,1]}{m_{\textnormal{i}}}$ $(i\in\naturalN\cup\{\infty\})$ characterises the class w.r.t. the test $\testprob$. The second modification is the extension of automata with a *pushdown stack*. Work in this direction has recently led to a characterisation of in a more syntactical setting [@lics-ptime]. Even though the syntactical approach just mentioned could very well be transported to our setting (it was shown [@seiller-goig] that elements of the *resolution algebra* can be represented as graphings), this would lead to a characterisation based on a microcosm containing non-measure-preserving maps. Even though non-measure-preserving maps are allowed in the general setting of graphings [@seiller-goig], the use of measure-preserving microcosms is more interesting in view of the possible use of mathematical invariants such as $\ell^{2}$-Betti numbers discussed in the next section. One can find such a measure-preserving microcosm $\mathfrak{p}$ which leads to a characterisation of in both the model of deterministic graphings and the model of non-deterministic graphings, because non-determinism does not add any expressivity to the model of deterministic two-way multihead automata with a pushdown stack. The last modification is the consideration of *quantum graphings*, i.e. graphings computing with complex numbers. This is still work in progress, but we believe that one can define variants of quantum graphings corresponding to measure-once or measure-many quantum automata, leading to several other characterisations. Complexity Constraints and Graphing Algebras ============================================ We also believe that important contributions can be made by using well-established mathematical techniques, tools and invariants from operator algebras and dynamical systems to address open problems in complexity. We now show how our approach relates to well-known mathematical notions and how this enables new techniques through the computation of invariants such as $\ell^{2}$-Betti numbers.
Compilations ------------ The results exposed in the previous section lead us to conjecture a correspondence between complexity classes and microcosms, a conjecture that we are now able to state precisely. It may appear that the microcosms correspond to complexity *constraints*: for instance $\microcosm{m}_{\infty}$ captures the constraint of “logarithmic space computation” uniformly, i.e. the same microcosm corresponds to the same constraint for several computational paradigms (sequential, probabilistic). This correspondence, if it holds, can only be partial since, as we already explained, an extension of the microcosm $\microcosm{m}_{\infty}$ which characterises the class of polynomial time predicates does not characterise in the non-deterministic model. As a consequence, we know that not all microcosms characterise a complexity constraint uniformly. We will not discuss this complex question of whether complexity *constraints* can be described uniformly by a microcosm. The mere question of deciding *what is* a complexity constraint seems as difficult to answer as the question of deciding what is the right notion of algorithm. We therefore consider the already difficult question of a possible (partial) correspondence between microcosms and hierarchies of complexity classes. To formalise this, let us fix once and for all a monoid $\Omega$, a type of graphings – e.g. probabilistic – and a test $\testfont{T}$. In the following, we will denote by $\mathtt{C}[\microcosm{m}]$ the set of languages characterised by the type $\ListType\Rightarrow\NBool$ in the chosen model $\anymodel{\Omega}{m}$ w.r.t. the test $\testfont{T}$. We now consider the following natural notion of *compilation*. A measurable map $m:\measured{X}\rightarrow\measured{X}$ is *compilable* in a set of measurable maps $N$ when there exists a finite partition $X_{1},\dots,X_{k}$ of $\measured{X}$ and elements $n_{1},\dots, n_{k}\in N$ such that $m\restr{X_{i}}{=_{\textnormal{a.e.}}}(n_{i})\restr{X_{i}}$. We denote by $m\prec_{\textnormal{c}} N$ the fact that $m$ is compilable in $N$. This naturally extends to a preorder on sets of measurable maps that we abusively denote by $\prec_{\textnormal{c}}$ and define as: $M\prec_{\textnormal{c}} N$ if and only if $m\prec_{\textnormal{c}} N$ for all $m\in M$. We define the equivalence relation on microcosms: $$\microcosm{m}\sim_{\textnormal{c}}\microcosm{n} \Leftrightarrow (\microcosm{m}\prec_{\textnormal{c}}\microcosm{n}\wedge \microcosm{n}\prec_{\textnormal{c}}\microcosm{m})$$ One can then easily show the following result, stating that – everything else being equal – two equivalent microcosms give rise to the same complexity classes. \[thm\] If $\microcosm{m}\prec_{\textnormal{c}}\microcosm{n}$, then $\mathtt{C}[\microcosm{m}]\subset\mathtt{C}[\microcosm{n}]$. Given a graphing $G$ in $\microcosm{m}$, we show how one can define a graphing $\bar{G}$ in $\microcosm{n}$ such that for all representations of words $W\in\ListTypeBin$, $G\plug W\poll \testfont{T}$ if and only if $\bar{G}\plug W\poll \testfont{T}$. The new graphing $\bar{G}$ is defined by inductively replacing edges in $G$ by finite families of edges. The reason why the resulting graphing $\bar{G}$ satisfies the wanted property is that paths between $G$ and $W$ are compilable in the set of paths between $\bar{G}$ and $W$.
Consequently, cycles between $\bar{G}\plug W$ (the paths) and $\testfont{T}$ used to define the orthogonality turn out to be refinements of cycles between $G\plug W$ and $\testfont{T}$, which implies that $G\plug W\poll \testfont{T}$ if and only if $\bar{G}\plug W\poll \testfont{T}$. We conjecture that the converse of Theorem \[thm\] holds, i.e. if the microcosms $\microcosm{m}$ and $\microcosm{n}$ are not equivalent, then the classes $\mathtt{C}[\microcosm{m}]$ and $\mathtt{C}[\microcosm{n}]$ are distinct. \[conjecture\] If $\microcosm{m}\not\sim_{\textnormal{c}}\microcosm{n}$, then $\mathtt{C}[\microcosm{m}]\neq\mathtt{C}[\microcosm{n}]$. This would provide a partial equivalence between the problem of classifying complexity classes and that of classifying microcosms. As we will see, the first results we obtained are coherent with this conjecture. We explain below how the notion of equivalence $\sim_{\textnormal{c}}$ relates (in specific cases) to well-studied notions in ergodic theory and von Neumann algebras. This fact could allow, in case the conjecture is proven, for the use of mathematical invariants as a proof method for obtaining separation results. Measurable Equivalence Relations -------------------------------- Let us now explain how the microcosms, used to characterise complexity classes in the work described above, can (in some cases) generate a *measured equivalence relation*. We first recall the basics about this notion. The notion of graphing considered in our work is in fact a generalisation of the notion considered in ergodic theory and operator algebras. The usual definition of graphings does not comprise weights. Moreover, they are usually built from measure-preserving maps only. We will refer to these weightless graphings as *simple graphings*. A *weightless graphing* $\phi$ is a $\{1\}$-weighted graphing in the macrocosm $\microcosm{M}$ over $\measured{X}$. A *simple graphing* $\phi$ is a $\{1\}$-weighted graphing in the microcosm $\microcosm{mp}$ of measure-preserving Borel isomorphisms over $\measured{X}$. This definition of simple graphings is equivalent to the usual definition [@gaboriaucost] as a family of partial measure-preserving Borel isomorphisms $\phi_{f}:S^{\phi}_{f}\rightarrow T^{\phi}_{f}$ where $S^{\phi}_{f}, T^{\phi}_{f}$ are Borel subsets of $\measured{X}$. Given a simple graphing, one can define an equivalence relation $\mathcal{R}(\phi)$ by considering the transitive closure of the relation $$\{(x,\phi_{f}(x))~|~ f\in E^{\phi}, x\in S^{\phi}_{f}\}$$ This relation can be described directly in terms of paths. We define for this the set of $\phi$-words as the set: $${\textnormal{words}(\phi)}=\{f_{1}^{\epsilon_{1}}f_{2}^{\epsilon_{2}}\dots f_{k}^{\epsilon_{k}}~|~ k\geqslant 0, \forall 1\leqslant i\leqslant k, f_{i}\in E^{\phi}, \epsilon_{i}\in\{-1,1\}\}$$ Each $\phi$-word $\pi=f_{1}^{\epsilon_{1}}f_{2}^{\epsilon_{2}}\dots f_{k}^{\epsilon_{k}}$ defines a partial measure-preserving Borel isomorphism $\phi_{\pi}$ defined as the composite $\phi_{f_{k}}^{\epsilon_{k}}\circ \phi_{f_{k-1}}^{\epsilon_{k-1}}\circ \dots\circ \phi_{f_{1}}^{\epsilon_{1}}$ from its domain $S^{\phi}_{\pi}$ to its codomain $T^{\phi}_{\pi}$. Let $\phi$ be a simple graphing. We define the equivalence relation $\mathcal{R}(\phi)$ as $$\mathcal{R}(\phi)=\{(x,y)~|~\exists \pi\in{\textnormal{words}(\phi)}, x\in S^{\phi}_{\pi}, y=\phi_{\pi}(x)\}$$ One can then show that this is a *measurable equivalence relation*, or Borel equivalence relation, i.e. an equivalence relation on $X$ such that: 1.
\[Boreleq1\] $\mathcal{R}(\phi)$ is a Borel subset of $\measured{X}\times\measured{X}$; 2. \[Boreleq2\] the set of equivalence classes of $\mathcal{R}(\phi)$ is countable; 3. \[Boreleq3\] every partial isomorphism $f:A\rightarrow B$ whose graph $\{(a,f(a))~|~a\in A\}$ is a subset of $\mathcal{R}(\phi)$ is measure-preserving. A microcosm $\microcosm{m}$ is a weightless graphing. Thus every microcosm $\microcosm{m}$ included in the microcosm $\microcosm{mp}$ of measure-preserving Borel automorphisms gives rise to a Borel equivalence relation $\mathcal{R}(\microcosm{m})$. \[eqboreleq\] Let $\microcosm{m}$ and $\microcosm{n}$ be equivalent microcosms. Then $\mathcal{R}(\microcosm{m})=\mathcal{R}(\microcosm{n})$. Borel equivalence relations were extensively studied in several domains of mathematics, as such equivalence relations are induced by measurable group actions. Indeed, if $\alpha$ is a measure-preserving action of a (discrete countable) group $G$ on an (atom-free) standard Borel space $\measured{X}$, then one can define the Borel equivalence relation: $$\mathcal{R}(\alpha)=\{ (x,y)~|~ \exists g\in G, y=\alpha(g)(x) \}$$ Conversely, it was shown by Feldman and Moore [@FeldmanMoore1] that, given a measurable equivalence relation $\mathcal{R}$ on a standard Borel space $\measured{X}$, there exists a group $G$ of measure-preserving Borel automorphisms of $\measured{X}$ such that[^6] $\mathcal{R}=\mathcal{R}(G)$. Trying to classify group actions, mathematicians have developed fine invariants to classify Borel equivalence relations. In particular, Gaboriau extended the notion of $\ell^{2}$-Betti numbers[^7] [@gaboriaul2] to this setting, and also introduced the notion of *cost* [@gaboriaucost] which can be understood as an approximation of the first $\ell^{2}$-Betti number. We here recall the definition of the latter. The cost of a graphing $\phi$ is defined as the (possibly infinite) real number: $$\mathcal{C}(\phi)=\sum_{f\in E^{\phi}} \lambda(S^{\phi}_{f})$$ The cost of a measurable equivalence relation $\mathcal{R}$ is then the infimum of the cost of simple graphings generating $\mathcal{R}$: $$\mathcal{C}(\mathcal{R})=\inf \{\mathcal{C}(\phi)~|~\phi\text{ simple graphing s.t. }\mathcal{R}=\mathcal{R}(\phi)\}$$ The cost of a group $\Gamma$ is the infimum of the cost of the Borel equivalence relations $\mathcal{R}(\alpha)$ defined by free actions of $\Gamma$ on a standard probability space: $$\mathcal{C}(\Gamma)=\inf \{\mathcal{C}(\mathcal{R}(\alpha))~|~\alpha\text{ free action of $\Gamma$ onto $\measured{X}$ with $\mu(X)=1$}\}$$ Let us refer to Gaboriau’s work for a complete overview [@gaboriaucost], and state the result that will be of use here. A result due to Levitt [@levitt_graphings] allows one to compute the cost of finite groups, which are shown to be of fixed cost – i.e. all free actions of finite groups have the same cost. Gaboriau also computed the (fixed) cost of infinite amenable groups. The following proposition states these two results simultaneously. If $\Gamma$ is an amenable group, then $\Gamma$ has fixed cost $1-\frac{1}{\card{\Gamma}}$, where by convention $\frac{1}{\infty}=0$. Measurable Preorders, Homotopy Equivalence ------------------------------------------ First, let us have a look at the measurable equivalence relations $\mathcal{R}(\microcosm{m}_{i})$ for $\microcosm{m}_{i}$ the microcosms defined in the previous section.
Notice that $\mathcal{R}(\microcosm{m}_{i})=\integerN^{2}\times\mathcal{R}(\microcosm{s}_{i})$ where $\mathcal{R}(\microcosm{s}_{i})$ is the Borel equivalence relation on $[0,1]^{\naturalN}$ generated by the natural group action of $\mathfrak{G}_{i}$ – the group of permutations over $i$ elements – on the Hilbert cube $[0,1]^{\naturalN}$ (permutations act on the first $i$ copies of $[0,1]$). This action being free, the Hilbert cube being a standard probability space, and the group $\mathfrak{G}_{i}$ being of finite order, we have that $\mathcal{C}(\mathcal{R}(\microcosm{s}_{i}))=\mathcal{C}(\mathfrak{G}_{i})=1-\frac{1}{i!}$. This shows the following separation theorem for the microcosms $\microcosm{m}_{i}$. For all $1\leqslant i< j\leqslant \infty$, $\microcosm{m}_{i}\not\sim_{c}\microcosm{m}_{j}$. These results are coherent with the conjecture above. Indeed, a number of well-known separation results such as $\neq$ [@monien] hold for the various notions of automata considered in the previous section. This result illustrates how one could use invariants to show that two complexity classes are distinct, under the hypothesis that the conjecture is true. This uses the fact that invariants such as $\ell^{2}$-Betti numbers can be used to show that two microcosms are not equivalent. This raises two natural questions that we answer now. #### Borel equivalence relations are not enough First, the use of Borel equivalence relations is too restrictive for our purpose. Indeed, although the microcosms $\microcosm{m}_{i}$ were simple graphings, we do not want to restrict the framework to those: when one wants to represent computational principles, one sometimes wants some non-invertible principles to be available. As a consequence, we are interested in microcosms that are not subsets of the microcosm $\microcosm{mp}$, and which are not groups. In other words, we are not interested in group actions on a space, but rather in *monoid actions*. The problem of classifying monoid actions is however much more complex, and much less studied. In order to have a finer analysis of the equivalence, we want to distinguish between monoids of measurable maps and groups of measurable maps. Given a weightless graphing $\phi$, we consider the set of *positive $\phi$-words* as the following subset of $\phi$-words. $${\textnormal{words}^{+}(\phi)}=\{f_{1}f_{2}\dots f_{k}~|~ k\geqslant 1, \forall 1\leqslant i\leqslant k, f_{i}\in E^{\phi}\}$$ Given a weightless graphing, we define the preorder ${\mathcal{P}(\phi)}$ as $${\mathcal{P}(\phi)}=\{(x,y)~|~\exists \pi\in{\textnormal{words}^{+}(\phi)}, x\in S^{\phi}_{\pi}, y=\phi_{\pi}(x)\}$$ We obtain the following refinement of . If two microcosms $\microcosm{m}$ and $\microcosm{n}$ are equivalent, the preorders ${\mathcal{P}(m)}$ and ${\mathcal{P}(n)}$ are equal. Can one define invariants such as $\ell^{2}$-Betti numbers in this more general case? The notion of cost can be obviously defined for measurable preorders, but is this still an interesting invariant? In the case of $\ell^{2}$-Betti numbers, this question is quite complex as it involves the definition of a von Neumann algebra generated by a measurable preorder. Although some work defining von Neumann algebras from left-cancellable monoids could lead to a partial answer, this is a difficult open question. #### Justifying the use of homotopy invariants. The second issue is that the invariants mentioned above, namely $\ell^{2}$-Betti numbers, are *homotopy invariants*, i.e.
they are used to show that two measurable equivalence relations are *not homotopy equivalent*, which is stronger than showing that they are not equal. So, are those invariants too complex? Couldn’t we use easier invariants? The answer to this question lies in a more involved study of the equivalence of microcosms. The notion of compilation we discussed above is not the finest equivalence one could consider. We chose to work with this simplified notion since the most general setting would require a detailed discussion of the definitions of the exponential connectives in the model. However, one should actually consider a more involved notion, namely that of *compilation up to Borel automorphisms*. This notion is not complicated to define. Let $\Theta$ be a set of Borel automorphisms of $\measured{X}$. A measurable map $m$ is compilable in a set $N$ of measurable maps *up to $\Theta$* if and only if there exists $\theta\in\Theta$, a finite partition $X_{1},\dots,X_{k}$ of $\measured{X}$ and elements $n_{1},\dots, n_{k}\in N$ such that $\theta\circ m\restr{X_{i}}{=_{\textnormal{a.e.}}}(n_{i})\restr{X_{i}}$. Then the corresponding equivalence of two microcosms $\microcosm{m}$ and $\microcosm{n}$ up to a set of Borel automorphisms does not induce the equality of the measurable equivalence relations $\mathcal{R}(\microcosm{m})$ and $\mathcal{R}(\microcosm{n})$, but only the fact that they are homotopy equivalent. In this case, one can understand the interest of invariants such as $\ell^{2}$-Betti numbers, or the cost, which can be understood as an approximation of the first $\ell^{2}$-Betti number. Perspectives ============ We believe the above conjecture provides a very good working hypothesis, even if no proof of it is to be found, as the approach we propose offers a homogeneous treatment of computational complexity. The *complexity-through-realisability* theory we sketched is founded on alternative definitions of the notions of algorithms and complexity classes. The approach was illustrated by first results showing how a large family of (predicate) complexity classes can be characterised by these techniques. Those are a first step towards a demonstration that these definitions capture and generalise standard ones, offering a unified homogeneous framework for the study of complexity classes. Future work in this direction should therefore aim at fulfilling two main objectives. The first objective is to establish that this new approach to complexity captures, generalises and extends the techniques developed by previous logic-based approaches to computational complexity. The second objective is to establish that invariants and tools available for the mathematical theories underlying our approach can be used to address open problems in complexity, as already explained in the previous section. A Uniform Approach to Computational Complexity ---------------------------------------------- We propose the following three goals to deal with the first objective: show that the *complexity-through-realisability* techniques (i) are coherent with classical theory, (ii) generalise and improve state-of-the-art techniques, and finally (iii) provide the first homogeneous theory of complexity for several computational paradigms. #### Automata, Turing machines, etc. The results presented in this paper are a first step toward showing that our techniques are coherent with classical complexity theory.
However, all our results lean on previously known characterisations of complexity classes (predicates) by means of different kinds of automata. As explained above, extensions of these notions of automata can be considered to obtain further results. However, it is important to realise that our approach can also deal with other models of computation. An adaptation of work by Asperti and Roversi [@aspertiroversi] and Baillot [@baillot] should allow for encoding Turing machines in some of the realisability models we consider. This should lead to characterisations of several other complexity classes (predicates), such as the exponential hierarchy. In particular, this should allow for a characterisation of , a class that – as explained above – is not characterised naturally by pushdown automata. Moreover, previous characterisations of and by means, respectively, of *branching programs* (Barrington’s theorem [@barrington]) and *bottleneck Turing machines* [@bottleneck] should lead to characterisations of those classes. It would also be interesting to understand if the definition of algorithms as Abstract State Machines (ASMs) proposed by Gurevich [@gurevichasm] corresponds to a specific case of our definition of algorithms as graphings. Although no previous work attempted to relate ASMs and GoI, an ASM is intuitively a kind of automaton on first-order structures, and such objects can be seen as graphings [@seiller-goig]. This expected result would show that the notion of graphing provides an adequate mathematical definition of the notion of computation. #### Predicates, functions, etc. All results presented above are characterisations of predicate complexity classes. It is natural to wonder if the approach is limited to those or if it applies similarly to the study of function complexity classes. Since one is considering models of linear logic, the approach does naturally extend to function complexity classes. A natural lead for studying function classes is to understand the types $\ListType\Rightarrow \ListType$ – functions from (binary) natural numbers to (binary) natural numbers – in the models considered. A first step was obtained by the definition of a model of Elementary Linear Logic () [@seiller-goie], a system which is known to characterise elementary time functions as the set of algorithms/proofs of type $\oc\ListType\Rightarrow \ListType$. Although the mentioned model is not shown complete for , it provides a first step in this direction as the type of functions in this model is naturally sound for elementary time functions. We believe that *complexity-through-realisability* techniques extend ICC techniques in that usual complexity classes of functions can be characterised as types of functions $\oc\ListType\Rightarrow \ListType$ in different models. A first natural question is that of the functions computed by elements of this type in the models described in the previous section. Furthermore, we expect characterisations and/or definitions of complexity classes of higher-order functionals. Indeed, the models considered contain types of higher-order functionals, e.g. $\cond{(\ListType\Rightarrow \ListType)\Rightarrow(\ListType\Rightarrow \ListType)}$ for type 2 functionals. These models therefore naturally characterise subclasses of higher-order functionals. As no established theory of complexity for higher-order functionals exists at this time, this line of research is of particular interest. #### Deterministic, Probabilistic, etc. 
As exposed in the previous section, the techniques apply in a homogeneous way to deterministic, non-deterministic and probabilistic automata. Moreover, a generalisation towards quantum computation seems natural, as already discussed. The framework offered by graphings is indeed particularly fit for both the quantum and the probabilistic computational paradigms, as it is related to operator algebra techniques, hence to both measure theory and linear algebra. In the particular case of probabilistic and quantum computation, we believe that it will allow for characterisations of other standard probabilistic and quantum complexity classes such as , , or . Furthermore, the generality of the approach and the flexibility of the notion of graphing let us hope for applications to other computational paradigms. Among those, we can cite concurrent computation and cellular automata. On one hand, this would provide a viable and well-grounded foundation for a theory of computational complexity for concurrent computation, something currently lacking. On the other hand, applying the techniques to cellular automata will raise the question of the relationship between “classical” complexity theory and the notion of *communication complexity*. Using Mathematical Invariants ----------------------------- As we already discussed in the previous section, the proposed framework has close ties with some well-studied notions from other fields of mathematics. Of course, the possibility of using invariants from these fields in order to obtain new separation results would be a major step for complexity theory. However, let us notice that even though nothing forbids such results, the proposed method does not provide a miraculous solution to long-standing open problems. Indeed, in order to obtain separation results using the techniques mentioned in the previous sections, one would have to - first prove , which is not the simplest thing to do; - then characterise the two classes to be separated by models $\anymodel{\Omega}{m}$ and $\anymodel{\Omega}{n}$ which differ only by the chosen microcosm (either $\microcosm{m}$ or $\microcosm{n}$); - then compute invariants showing that these microcosms are not homotopy equivalent (if the microcosms are groups of measure-preserving Borel isomorphisms), or – in the general case – define and then compute homotopy invariants separating the two microcosms. As an illustration, let us consider what could be done if someone were to find a proof of today. Based on the results exposed above, one could only hope for already known separation results, namely the strictness of the inclusions $\subsetneq$, $\subsetneq$, $\subsetneq$ and $\subsetneq$. Those, as explained above, would be obtained by computing the cost of the associated Borel equivalence relations. If one were to consider the characterisation of obtained syntactically [@lics-ptime] as well, one could hope for separating the classes and . However, the microcosm used to characterise is neither a group nor consists of measure-preserving maps, two distinct but major handicaps for defining and computing invariants. As explained, future work will show that one can actually consider a microcosm of measure-preserving maps to characterise . But once again, this microcosm would not be a group, hence one would need to extend the theory of invariants for Borel equivalence relations in order to hope for a separation result. Last, but not least, these invariants should be expressive enough! 
For instance, the cost of the relation induced by the microcosm $\microcosm{m}_{\infty}$ is equal to $1$, as is the cost of every amenable infinite group. Amenable extensions of this microcosm can therefore not be shown to be strictly greater than $\microcosm{m}_{\infty}$ unless some other, finer, invariants are considered and successfully computed! However, the proposed proof method does not naturally appear as a *natural proof* in the sense of Razborov and Rudich [@naturalproofs]. It is the author’s opinion that this simple fact on its own makes the approach worth studying further. References {#references .unnumbered} ========== [^1]: A non-singular transformation $f:X\rightarrow X$ is a measurable map which preserves the sets of null measure, i.e. $\lambda(f(A))=0$ if and only if $\lambda(A)=0$. [^2]: As a consequence, a microcosm characterises a “closed world” for the execution of programs. [^3]: In the cited work, the results were stated in the particular case of the microcosm of measure-preserving maps on the real line. [^4]: It preserves all connectives except for negation. [^5]: The machines represented by graphings have a single active head at a given time. The permutations $s_{i}$ ($i\leqslant 2$) swap the $i$-th head with the first one, making the $i$-th active. Reusing the permutation $s_{i}$ then activates the former active head and “deactivates” the $i$-th head. [^6]: In that specific case, we write $G$ for the action defined by the inclusion $G\subset \textnormal{Aut}(\measured{X})$. [^7]: A definition which is coherent with the $\ell^{2}$-Betti numbers defined by Atiyah [@atiyahl2betti], the later generalization to groups by Cheeger and Gromov [@gromov-l2] reformulated by Lück [@luckl2invariants], and the generalization to von Neumann algebras by Connes and Shlyakhtenko [@connesl2].
--- abstract: | M. Beiglböck, V. Bergelson, and A. Fish proved that if $G$ is a countable amenable group and $A$ and $B$ are subsets of $G$ with positive Banach density, then the product set $AB$ is piecewise syndetic. This means that there is a finite subset $E$ of $G$ such that $EAB$ is thick, that is, $EAB$ contains translates of any finite subset of $G$. When $G=\mathbb{Z}$, this was first proven by R. Jin. We prove a quantitative version of the aforementioned result by providing a lower bound on the density (with respect to a Følner sequence) of the set of witnesses to the thickness of $% EAB$. When $G=\mathbb{Z}^d$, this result was first proven by the current set of authors using completely different techniques. address: - 'Dipartimento di Matematica, Universita’ di Pisa, Largo Bruno Pontecorvo 5, Pisa 56127, Italy' - 'Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Science and Engineering Offices M/C 249, 851 S. Morgan St., Chicago, IL, 60607-7045' - 'Department of Mathematics, College of Charleston, Charleston, SC, 29424' - 'School of Mathematical Sciences, University of Northern Colorado, Campus Box 122, 510 20th Street, Greeley, CO 80639' - 'Fakultät für Mathematik, Universität Wien, Oskar-Morgenstern-Platz 1, Room 02.126, 1090 Wien, Austria.' - 'Department of Mathematics, Louisiana State University, 228 Lockett Hall, Baton Rouge, LA 70803' author: - 'Mauro Di Nasso, Isaac Goldbring, Renling Jin, Steven Leth, Martino Lupini, Karl Mahlburg' bibliography: - 'bibliography.bib' title: High density piecewise syndeticity of product sets in amenable groups --- [^1] Introduction ============ In the paper [@jin], R. Jin proved that if $A$ and $B$ are subsets of $% \mathbb{Z}$ with positive Banach density, then $A+B$ is *piecewise syndetic*. This means that there is $m\in \mathbb{N}$ such that $A+B+[-m,m]$ is *thick*, i.e. it contains arbitrarily large intervals. Jin’s result has since been extended in two different ways. First, using ergodic theory, M. Beiglböck, V. Bergelson, and A. Fish [@beiglbock_sumset_2010] established Jin’s result for arbitrary countable amenable groups (with suitable notions of Banach density and piecewise syndeticity); in [di\_nasso\_nonstandard\_2014]{}, M. Di Nasso and M. Lupini gave a simpler proof of the amenable group version of Jin’s theorem using nonstandard analysis which works for arbitrary (not necessarily countable) amenable groups and also gives a bound on the size of the finite set needed to establish that the product set is thick. Second, in [@di_nasso_high_2015] the current set of authors established a quantitative version of Jin’s theorem by proving that there is $m\in \mathbb{N}$ such that the set of witnesses to the thickness of $A+B+[-m,m]$ has upper density at least as large as the upper density of $A$. (We actually prove this result for subsets of $\mathbb{Z}^{d}$ for any $d$.) The goal of this article is to prove the quantitative version of the result of Beiglböck, Bergelson, and Fish. In the following, we assume that $G\ $is a countable amenable group[^2]: for every finite subset $E$ of $G$ and every $\varepsilon >0$, there exists a finite subset $L$ of $G$ such that, for every $x\in E$, we have $\left\vert xL\bigtriangleup L\right\vert \leq \varepsilon \left\vert L\right\vert $. 
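To make this condition concrete, here is a standard worked example; it is added for the reader's convenience and is not part of the original text. Take $G=\mathbb{Z}$, written additively, fix a finite set $E\subseteq \mathbb{Z}$ and $\varepsilon >0$, and let $L=\{1,\dots ,M\}$. For every $x\in E$ with $|x|\leq M$ we have $$\left\vert (x+L)\bigtriangleup L\right\vert =2|x|\leq 2\max_{y\in E}|y|,$$ so choosing $M$ with $M\geq \max_{y\in E}|y|$ and $M\geq \frac{2}{\varepsilon }\max_{y\in E}|y|$ gives $\left\vert (x+L)\bigtriangleup L\right\vert \leq \varepsilon \left\vert L\right\vert $ for all $x\in E$. The same computation shows that the intervals $S_{n}=[-n,n]$ form a Følner sequence for $\mathbb{Z}$ in the sense recalled next.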
We recall that $G$ is amenable if and only if it has a (left)* Følner sequence*, which is a sequence $\mathcal{S}=(S_{n})$ of finite subsets of $G$ such that, for every $x\in G$, we have $\frac{\left\vert xS_{n}\bigtriangleup S_{n}\right\vert }{% |S_{n}|}\rightarrow 0$ as $n\rightarrow +\infty $. If $\mathcal{S}$ is a Følner sequence and $A\subseteq G$, we define the corresponding upper $\mathcal{S}$*-density* of $A$ to be$$\overline{d}_{\mathcal{S}}\left( A\right) :=\limsup_{n\rightarrow +\infty }% \frac{\left\vert A\cap S_{n}\right\vert }{\left\vert S_{n}\right\vert }.$$ For example, note that if $G=\mathbb{Z}^{d}$ and $\mathcal{S}% =(S_{n})$ where $S_{n}=[-n,n]^{d}$, then $\overline{d}_{\mathcal{S}}$ is the usual notion of upper density for subsets of $\mathbb{Z}^{d}$. Following [@beiglbock_sumset_2010], we define the (left) *Banach density of $A$*, $\mathrm{BD}\left( A\right) $, to be the supremum of $% \overline{d}_{\mathcal{S}}\left( A\right) $ where $\mathcal{S}$ ranges over all Følner sequences of $G$. One can verify (see [beiglbock\_sumset\_2010]{}) that this notion of Banach density agrees with the usual notion of Banach density when $G=\mathbb{Z}^{d}$. Recall that a subset $A$ of $G$ is *thick* if for every finite subset $% L $ of $G$ there exists a right translate $Lx$ of $L$ contained in $A$. A subset $A$ of $G$ is *piecewise syndetic* if $FA$ is thick for some finite subset $F$ of $G$. Suppose that $G$ is a countable discrete group, $\mathcal{S}$ is a Følner sequence for $G$, $A$ is a subset of $G$, and $\alpha >0$. We say that $A$ is - *upper $\mathcal{S}$-thick of level $\alpha $* if for every finite subset $L$ of $G$, the set $\left\{ x\in G:Lx\subseteq A\right\} $ has upper $\mathcal{S}$-density at least $\alpha $; - *upper $\mathcal{S}$-syndetic of level $\alpha $* if there exists a finite subset $F$ of $G$ such that $FA$ is $\mathcal{S}$-thick of level $% \alpha $. The following is the first main result of this paper: \[main\] Suppose that $G$ is a countable amenable group and $\mathcal{S}$ is a Følner sequence for $G$. If $A$ and $B$ are subsets of $G$ such that $% \overline{d}_{\mathcal{S}}(A)=\alpha >0$ and $\mathrm{BD}(B)>0$, then $BA$ is upper $\mathcal{S}$-syndetic of level $\alpha $. When $G=\mathbb{Z}^d$ and $\mathcal{S}=(S_n)$ where $S_n=[-n,n]^d$, we recover [@di_nasso_high_2015 Theorem 14]. We also recover [@di_nasso_high_2015 Theorem 18], which states (in the current terminology) that if $% A,B\subseteq \mathbb{Z}^d$ are such that $\underline{d}(A)=\alpha>0$ and $% \mathrm{BD}(B)>0$, then $A+B$ is $\mathcal{S^{\prime }}$-syndetic of level $% \alpha$ for any subsequence $\mathcal{S}^{\prime }$ of the aforementioned $% \mathcal{S}$. If $\mathcal{S}$ is a Følner sequence for $G$, then the notions of *lower $\mathcal{S}$-density $\underline{d}_\mathcal{S}$*, *lower $\mathcal{S}$-thick*, and *lower $\mathcal{S}$-syndetic* can be defined in the obvious ways. In [@di_nasso_high_2015 Theorem 19], the current set of authors proved that if $A,B\subseteq \z^d$ are such that $\underline{d}_{\mathcal{S}}(A)=\alpha>0$ and $\BD(B)>0$ (where $\mathcal{S}$ is the usual Følner sequence for $\z^d$ as above), then for any $\epsilon>0$, $A+B$ is lower $\mathcal{S}$-syndetic of level $\alpha-\epsilon$. (An example is also given to show that, under the previous hypotheses, $A+B$ need not be lower $\mathcal{S}$-syndetic of level $\alpha$.) In a previous version of this paper, we asked whether or not the amenable group analog of [@di_nasso_high_2015 Theorem 19] was true. 
Using ergodic-theoretic methods, Michael Björklund [@bjorklund] settled this question in the affirmative. Shortly after, we realized that our techniques readily established the same result and we include our proof here. In their proof of the amenable group version of Jin’s theorem, the authors of [@di_nasso_nonstandard_2014] give a bound on the size of a finite set needed to witness piecewise syndeticity: if $G$ is a countable amenable group and $A$ and $B$ are subsets of $G$ of Banach densities $\alpha $ and $% \beta $ respectively, then there is a finite subset $E$ of $G$ with $% \left\vert E\right\vert \leq \frac{1}{\alpha \beta }$ such that $EAB$ is thick. In Section \[Section:bound\], we improve upon this theorem in two ways: we slightly improve the bound on $\left\vert E\right\vert $ from $% \frac{1}{\alpha \beta }$ to $\frac{1}{\alpha \beta }-\frac{1}{\alpha }+1$ and we show that, if $\mathcal{S}$ is any Følner sequence such that $% \overline{d}_{\mathcal{S}}\left( A\right) >0$, then $EAB$ is $\mathcal{S}$-thick of level $s$ for some $s>0$. Notions from nonstandard analysis {#notions-from-nonstandard-analysis .unnumbered} --------------------------------- We use nonstandard analysis to prove our main results. An introduction to nonstandard analysis with an eye towards applications to combinatorics can be found in [@jin_introduction_2008]. Here, we just fix notation. If $r,s$ are finite hyperreal numbers, we write $r\lesssim s$ to mean $% \mathrm{st}(r)\leq \mathrm{st}(s)$ and we write $s\approx r$ to mean $% \mathrm{st}(r)=\mathrm{st}(s)$. If $X$ is a hyperfinite subset of ${}^\ast G$, we denote by $\mu _{X}$ the corresponding Loeb measure. If, moreover, $Y\subseteq {}^\ast G$ is internal, we abuse notation and write $\mu_X(Y)$ for $\mu_X(X\cap Y)$. If $\mathcal{S}=\left( S_{n}\right) $ is a Følner sequence for $G$ and $% \nu $ is an infinite hypernatural number, then we denote by $S_{\nu }$ the value at $\nu $ of the nonstandard extension of $\mathcal{S}$. It follows readily from the definition that $\overline{d}_{\mathcal{S}}\left( A\right) $ is the maximum of $\mu _{S_{\nu }}\left( {}^{\ast }A\right) $ as $\nu $ ranges over all infinite hypernatural numbers. It is also not difficult to verify that a countable discrete group $G$ is amenable if and only if it admits a *Følner approximation*, which is a hyperfinite subset $X$ of ${}^{\ast }G$ such that $\left\vert gX\bigtriangleup X\right\vert /\left\vert X\right\vert $ is infinitesimal for every $g\in G$. (One direction of this equivalence is immediate: if $(S_{n})$ is a Følner sequence for $G$, then $S_{\nu }$ is a Følner approximation for $G$ whenever $\nu $ is an infinite hypernatural number.) Acknowledgements ---------------- This work was initiated during a week-long meeting at the American Institute for Mathematics on August 4-8, 2014 as part of the SQuaRE (Structured Quartet Research Ensemble) project Nonstandard Methods in Number Theory. The authors would like to thank the Institute for the opportunity and for the Institute’s hospitality during their stay. The authors would also like to thank the anonymous referee of a previous version of this paper for helpful remarks that allowed us to sharpen our results. High density piecewise syndeticity\[Section:high\_density\] =========================================================== The following fact is central to our arguments and its proof is contained in the proof of [@di_nasso_sumset_2014 Lemma 4.6]. 
\[goodfolner\] Suppose that $G$ is a countable amenable group and $B\subseteq G$ is such that $\BD(B)\geq \beta$. Then there is a Følner approximation $Y$ of $G$ such that $$\frac{\left\vert {}^{\ast }B\cap Y\right\vert }{\left\vert Y\right\vert }% \gtrsim \beta$$and, for any standard $\varepsilon >0$, there exists a finite subset $F$ of $G$ such that $$\frac{\left\vert {}^{\ast }(FB)\cap Y\right\vert }{\left\vert Y\right\vert }% >1-\varepsilon \text{.}$$ For the sake of brevity, we call such a $Y$ as in the above fact a Følner approximation for $G$ that is *good for $B$*. \[technicallemma\] Suppose that $G$ is a countable amenable group, $\mathcal{% S}=\left( S_{n}\right) $ a Følner sequence for $G$, $Y$ a Følner approximation for $G$, and $A\subseteq G$. 1. If $\overline{d}_{\mathcal{S}}\left( A \right)\geq \alpha$, then there is $\nu>\n$ such that $$\frac{\left\vert {}^{\ast }A\cap S_{\nu }\right\vert }{\left\vert S_{\nu }\right\vert }\gtrsim \alpha \text{\quad and\quad }\frac{1}{\left \vert S_\nu\right \vert}\sum_{x\in S_\nu}\frac{% \left\vert x({}^*A\cap S_\nu)^{-1}\cap Y\right\vert }{\left\vert Y\right\vert }\gtrsim \alpha\text{.} \quad(\dagger)$$ 2. If $\underline{d}_{\mathcal{S}}\left( A\right)> \alpha$, then there is $\nu_0>\n$ such that $(\dagger)$ holds for all $\nu\geq \nu_0$. For (1), first apply transfer to the statement “for every finite subset $E$ of $G$ and every natural number $k$, there exists $n\geq k$ such that $$\frac{1}{\left\vert S_{n}\right\vert }\left\vert A\cap S_{n}\right\vert >\alpha -2^{-k}\text{\quad and\quad }\frac{1}{\left\vert S_{n}\right\vert }\sum_{x\in E}\left\vert x^{-1}S_{n}\bigtriangleup S_{n}\right\vert <2^{-k}\text{.\textquotedblright }$$Fix $K>\n$ and let $\nu$ be the result of applying the transferred statement to $Y$ and $K$. Set $C={}^{\ast }A\cap S_{\nu }$ and let $\chi _{C}$ denote the characteristic function of $C$. We have$$\begin{aligned} \frac{1}{\left\vert S_{\nu }\right\vert }\sum_{x\in S_{\nu }}\frac{% \left\vert xC^{-1}\cap Y\right\vert }{\left\vert Y\right\vert } &=&\frac{1}{% \left\vert S_{\nu }\right\vert }\sum_{x\in S_{\nu }}\frac{1}{\left\vert Y\right\vert }\sum_{y\in Y}\chi _{C}(y^{-1}x) \\ &=&\frac{1}{\left\vert Y\right\vert }\sum_{y\in Y}\frac{\left\vert C\cap y^{-1}S_{\nu }\right\vert }{\left\vert S_{\nu }\right\vert } \\ &\geq &\frac{\left\vert C\right\vert }{\left\vert S_{\nu }\right\vert }% -\sum_{y\in Y}\frac{\left\vert y^{-1}S_{\nu }\bigtriangleup S_{\nu }\right\vert }{\left\vert S_{\nu }\right\vert } \\ &\thickapprox &\alpha \text{.}\end{aligned}$$For (2), apply transfer to the statement “for every finite subset $E$ of $G$ and every natural number $k$, there exists $n_0\geq k$ such that, for all $n\geq n_0$, $$\frac{1}{\left\vert S_{n}\right\vert }\left\vert A\cap S_{n}\right\vert >\alpha -2^{-n_0}\text{\quad and\quad }\frac{1}{\left\vert S_{n}\right\vert }\sum_{x\in E}\left\vert x^{-1}S_{n}\bigtriangleup S_{n}\right\vert <2^{-n_0}\text{.\textquotedblright }$$Once again, fix $K>\n$ and let $\nu_0$ be the result of applying the transferred statement to $Y$ and $K$. As above, this $\nu_0$ is as desired. \[Theorem:EBA\]Suppose that $G$ is a countable amenable group, $\mathcal{% S}=\left( S_{n}\right) $ a Følner sequence for $G$, and $A,B\subseteq G$. If $\overline{d}_{\mathcal{S}}\left( A\right) \geq \alpha $ and $\mathrm{BD}% (B)>0$, then $BA$ is upper $\mathcal{S}$-syndetic of level $\alpha $. Let $Y$ be a Følner approximation for $G$ that is good for $B$. 
Let $\nu$ be as in part (1) of Lemma \[technicallemma\] applied to $Y$ and $A$. Once again, set $C:={}^*A\cap S_\nu$. Consider the $\mu _{S_{\nu }}$-measurable function$$f\left( x\right) =\mathrm{st}\left( \frac{\left\vert xC^{-1}\cap Y\right\vert }{\left\vert Y\right\vert }\right) .$$Observe that $$\int_{S_{\nu }}fd\mu _{S_{\nu }}=\mathrm{st}\left( \frac{1}{|S_{\nu }|}% \sum_{x\in S_{\nu }}\frac{|xC^{-1}\cap Y|}{|Y|}\right) \geq \alpha ,$$whence there is some standard $r>0$ such that $\mu _{S_{\nu }}\left( \left\{ x\in S_{\nu }:f\left( x\right) \geq 2r\right\} \right) \geq \alpha \text{.}$ Setting $\Gamma =\left\{ x\in S_{\nu }:\frac{|xC^{-1}\cap Y|}{|Y|}\geq r\right\}$, we have that $\mu_{S_\nu}(\Gamma)\geq \alpha$. Take a finite subset $F$ of $G$ such that$$\frac{\left\vert {}^{\ast }(FB)\cap Y\right\vert }{\left\vert Y\right\vert }% >1-\frac{r}{2}.$$Fix $g\in G$. Since $Y$ is a Følner approximation for $G$, we have that$$\frac{\left\vert {}^{\ast }\left( gFB\right) \cap Y\right\vert }{\left\vert Y\right\vert }=\frac{\left\vert {}^{\ast }\left( FB\right) \cap g^{-1}Y\right\vert }{\left\vert Y\right\vert }\approx \frac{\left\vert {}^{\ast }\left( FB\right) \cap Y\right\vert }{\left\vert Y\right\vert },$$whence $\frac{\left\vert {}^{\ast }\left( gFB\right) \cap Y\right\vert }{\left\vert Y\right\vert }>1-r.$ Thus, for any $x\in \Gamma$, we have that $xC^{-1}\cap {}^*(gFB)\not=\emptyset$. In particular, if $L$ is a finite subset of $G$, then $\Gamma \subseteq \;^{\ast }\left( \bigcap_{g\in L}gFBA\right) \text{.}$ Therefore$$\overline{d}_{\mathcal{S}}\left( \bigcap_{g\in L}gFBA\right) \geq \mu_{S_\nu}\left({}^*( \bigcap_{g\in L}gFBA)\right)\geq \mu_{S_\nu}(\Gamma)\geq \alpha \text{.} %\overline{d}_{\mathcal{S}}\left( \bigcap_{g\in L}gFB\right) \geq \mathrm{st}% %\frac{\left\vert {}^{\ast }\bigcap_{g\in L}gFB\right\vert }{\left\vert %Y\right\vert }\geq \mathrm{st}\frac{\left\vert \Gamma \right\vert }{% %\left\vert Y\right\vert }\geq \alpha \text{.}$$ It follows that $BA$ is upper $\mathcal{S}$-syndetic of level $\alpha$. As mentioned in the introduction, Theorem 14 and Theorem 18 of [di\_nasso\_high\_2015]{} are immediate consequences of Theorem \[Theorem:EBA\], after observing that the sequence of sets $\left[ -n,n\right] ^{d}$ as well as any of its subsequences is a Følner sequence for $\mathbb{Z}^{d}$. Example 15 of [@di_nasso_high_2015] shows that the conclusion in Theorem \[Theorem:EBA\] is optimal, even when $G$ is the additive group of integers and $\mathcal{S}$ is the Følner sequence of intervals $\left[ 1,n% \right] $. \[Theorem:EBA-lower\]Suppose that $G$ is a countable amenable group, $% \mathcal{S}=\left( S_{n}\right) $ a Følner sequence for $G$, and $% A,B\subseteq G$. If $\underline{d}_{\mathcal{S}}\left( A\right) \geq \alpha $ and $\mathrm{BD}(B)>0$, then $BA$ is lower $\mathcal{S}$-thick of level $% \alpha -\varepsilon $ for every $\varepsilon >0$. Without loss of generality, we may suppose that $\underline{d}_{\mathcal{S}}(A)>\alpha$. Fix a Følner approximation $Y$ for $G$ that is good for $B$ and $\nu _{0}>\n$ as in part (2) of Lemma \[technicallemma\] applied to $Y$ and $A$. Fix $\nu \geq \nu _{0}$ and standard $\varepsilon >0$ with $\varepsilon <\alpha $. 
Set $$\Gamma _{\nu }=\left\{ x\in S_{\nu }:\frac{\left\vert xC^{-1}\cap Y\right\vert }{\left\vert Y\right\vert }\geq \varepsilon \right\}$$and observe that $\frac{\left\vert \Gamma _{\nu }\right\vert }{\left\vert S_{\nu }\right\vert }% >\alpha -\varepsilon \text{.}$ Fix a finite subset $F$ of $G$ such that$$\frac{\left\vert {}^{\ast }(FB)\cap Y\right\vert }{\left\vert Y\right\vert }% >1-\frac{\varepsilon}{2} \text{.}$$Fix $g\in G$. Since $Y$ is a Følner approximation of $G$, arguing as in the proof of the previous theorem, we conclude that$$\frac{\left\vert {}^{\ast }\left( gFB\right) \cap Y\right\vert }{\left\vert Y\right\vert }>1-\varepsilon \text{.}$$Fix $L\subseteq G$ finite. Once again, it follows that $\Gamma _{\nu }\subset \;^{\ast }\left( \bigcap_{g\in L}gFBA\right)$ whence $$\frac{\left\vert {}^{\ast }\left( \bigcap_{g\in L}gFBA\right) \cap S_{\nu }\right\vert }{\left\vert S_{\nu }\right\vert }\geq \frac{\left\vert \Gamma _{\nu }\right\vert }{\left\vert S_{\nu }\right\vert }> \alpha -\varepsilon \text{.}$$Since the previous inequality held for every $\nu \geq \nu _{0}$, by transfer we can conclude that$$\underline{d}_{\mathcal{S}}\left( \bigcap_{g\in L}gFBA\right) \geq \alpha -\varepsilon \text{.}$$ It follows that $BA$ is lower $\mathcal{S}$-syndetic of level $\alpha-\epsilon$. As mentioned in the introduction, Theorem \[Theorem:EBA-lower\] is a generalization of [@di_nasso_high_2015 Theorem 19] and was first proven by M. Björklund using ergodic-theoretic methods. A bound on the number of translates\[Section:bound\] ==================================================== The following theorem is a refinement of [di\_nasso\_nonstandard\_2014]{}. In particular, we improve the bound on the number of translates, and also obtain an estimate on the $\mathcal{S}$-density of translates that witness the thickness of $EBA$. Suppose that $G$ is a countable amenable group, $\mathcal{S}=\left( S_{n}\right) $ a Følner sequence for $G$, and $A,B\subseteq G$. If $d_{% \mathcal{S}}\left( A\right) \geq \alpha $ and $\mathrm{BD}\left( B\right) \geq \beta $, then there exists $s>0$ and a finite subset $E\subseteq G$ such that $\left\vert E\right\vert \leq \frac{1}{\alpha \beta }-\frac{1}{% \alpha }+1$ and $EBA$ is $\mathcal{S}$-thick of level $s$. Reasoning as in the proof of Theorem \[Theorem:EBA\] one can prove that there exist $\nu \in {}^{\ast }\mathbb{N}\backslash \mathbb{N}$, a Følner approximation $Y$ of $G$, a standard $r>0$, and an infinitesimal $% \eta \in {}^{\ast }\mathbb{R}_{+}$ such that, if $C={}^{\ast }A\cap S_{\nu }$, $D={}^{\ast }B\cap Y$, and$$\Gamma :=\left\{ x\in S_{\nu }:\frac{1}{\left\vert Y\right\vert }\left\vert xC^{-1}\cap D\right\vert \geq \alpha \beta -\eta \right\} \text{,}$$then $\left\vert \Gamma \right\vert \geq r\left\vert S_{\nu }\right\vert $. Fix a family $\left( p_{g}\right) _{g\in G}$ of strictly positive standard real numbers such that $\sum_{g\in G}p_{g}\leq \frac{1}{2}$. We now define a sequence of subsets $(H_{n})$ of $G$ and a sequence $(s_{n})$ from $G$. 
Define$$H_{0}:=\left\{ g\in G:\frac{1}{\left\vert \Gamma \right\vert }\left\vert \left\{ x\in \Gamma :gx\notin {}^{\ast }\left( BA\right) \right\} \right\vert >p_{g}\right\} \text{.}$$If $H_{n}$ has been defined and is nonempty, let $s_{n}$ be any element of $% H_{n}$ and set$$H_{n+1}:=\left\{ g\in G:\frac{1}{\left\vert \Gamma \right\vert }\left\vert \left\{ x\in \Gamma :gx\notin {}^{\ast }\left( \left\{ s_{0},\ldots ,s_{n}\right\} BA\right) \right\} \right\vert >p_{g}\right\} \text{.}$$If $H_{n}=\varnothing $ then we set $H_{n+1}=\varnothing $. We claim $H_{n}=\emptyset $ for $n>\left\lfloor \frac{1}{\alpha \beta }-% \frac{1}{\alpha }\right\rfloor $. Towards this end, suppose $H_{n}\neq \varnothing $. For $0\leq k\leq n$, take $\gamma _{k}\in \Gamma $ such that$$s_{k}\gamma _{k}\notin {}^{\ast }\left( \left\{ s_{0},\ldots ,s_{k-1}\right\} BA\right) .$$Observe that the sets $s_{0}D$, $s_{1}(\gamma _{1}C^{-1}\cap D)$, ..., $% s_{n}(\gamma _{n}C^{-1}\cap D)$ are pairwise disjoint. In fact, if $$s_{i}D\cap s_{j}\gamma _{j}C^{-1}\neq \varnothing$$for $0\leq i<j\leq n$, then$$s_{j}\gamma _{j}\in s_{i}DC\subseteq {}^{\ast }\left( \left\{ s_{0},\ldots ,s_{j-1}\right\} BA\right) \text{,}$$contradicting the choice of $\gamma _{j}$. Therefore we have that$$\begin{aligned} 1 &\gtrsim &\frac{1}{\left\vert Y\right\vert }\left\vert s_{0}D\cup s_{1}\left( (\gamma _{1}C^{-1}\cap D\right) \cup \cdots \cup s_{n}\left( (\gamma _{n}C^{-1}\cap D\right) \right\vert \\ &\geq &\frac{1}{\left\vert Y\right\vert }\left( \left\vert D\right\vert +\sum_{i=1}^{n}\left\vert \gamma _{i}C^{-1}\cap D\right\vert \right) \\ &\geq &\beta +\alpha \beta n\text{.}\end{aligned}$$It follows that $n\leq \left\lfloor \frac{1}{\alpha \beta }-\frac{1}{\alpha }% \right\rfloor $. Take the least $n$ such that $H_{n}=\emptyset $. Note that $n\leq \left\lfloor \frac{1}{\alpha \beta }-\frac{1}{\alpha }\right\rfloor +1$. If $% n=0$ then $BA$ is already $\mathcal{S}$-thick of level $r$, and there is nothing to prove. Let us assume that $n\geq 1$, and set $E=\left\{ s_{0},\ldots ,s_{n-1}\right\} $. It follows that, for every $g\in G$, we have that$$\frac{1}{\left\vert \Gamma \right\vert }\left\vert \left\{ x\in \Gamma :gx\in {}^{\ast }\left( EBA\right) \right\} \right\vert \geq 1-p_{g}\text{.}$$Suppose that $L$ is a finite subset of $G$. Then$$\frac{1}{\left\vert \Gamma \right\vert }\left\vert \left\{ x\in \Gamma :Lx\subseteq {}^{\ast }\left( EBA\right) \right\} \right\vert \geq 1-\sum_{g\in L}p_{g}\geq 1-\sum_{g\in G}p_{g}\geq \frac{1}{2}\text{.}$$Therefore$$\frac{1}{\left\vert S_{\nu }\right\vert }\left\vert \left\{ x\in S_{\nu }:Lx\subseteq {}^{\ast }\left( EBA\right) \right\} \right\vert \geq \frac{r}{% 2}\text{.}$$This shows that $EBA$ is $\mathcal{S}$-thick of level $\frac{r}{2}$. With a similar argument and using Markov’s inequality [tao\_introduction\_2011]{} one can also prove the following result. We omit the details. Suppose that $G$ is a countable amenable group, $\mathcal{S}=\left( S_{n}\right) $ a Følner sequence for $G$, and $A,B\subseteq G$. If $d_{% \mathcal{S}}\left( A\right) \geq \alpha $ and $\mathrm{BD}\left( B\right) \geq \beta $, then for every $\gamma \in (0,\alpha \beta ]$ there exists $% E\subseteq G$ such that $\left\vert E\right\vert \leq \frac{1-\beta }{\gamma }+1$ and $EBA$ is $\mathcal{S}$-syndetic of level $\frac{\alpha \beta -\gamma }{1-\gamma }$. [^1]: The authors were supported in part by the American Institute of Mathematics through its SQuaREs program. I. 
Goldbring was partially supported by NSF CAREER grant DMS-1349399. M. Lupini was supported by the York University Susan Mann Dissertation Scholarship and by the ERC Starting grant no. 259527 of Goulnara Arzhantseva. K. Mahlburg was supported by NSF Grant DMS-1201435. [^2]: For those not familiar with amenable groups, let us mention in passing that the class of (countable) amenable groups is quite robust, e.g. contains all finite and abelian groups and is closed under subgroups, quotients, extensions, and direct limits. It follows, for example, that every countable virtually solvable group is amenable.
--- abstract: | A recent type of B-spline functions, namely trigonometric cubic B-splines, is adapted to the collocation method for the numerical solution of the Kuramoto-Sivashinsky equation. Having only first and second order derivatives of the trigonometric cubic B-splines at the nodes forces us to convert the Kuramoto-Sivashinsky equation to a coupled system of equations by reducing the order of the higher order terms. The Crank-Nicolson method is applied for the time integration of the space-discretized system resulting from the trigonometric cubic B-spline approach. Some initial boundary value problems are solved to show the validity of the proposed method. **Keywords:** Kuramoto-Sivashinsky Equation; Trigonometric cubic B-spline; collocation. author: - | Ozlem Ersoy Hepson\ Eskişehir Osmangazi University, Faculty of Science and Art,\ Department of Mathematics-Computer, Eskişehir, Turkey title: '**Generation of the Trigonometric Cubic B-Spline Collocation Solutions for the Kuramoto-Sivashinsky (KS) Equation**' --- Introduction ============ The original form of the Kuramoto-Sivashinsky equation was constructed to describe pattern formations and their dissipation in reaction-diffusion systems [@kuramoto1]. In that study, the reductive perturbation method was implemented for deriving a scale-invariant part from the original macroscopic motion equations. It was also shown that the Ginzburg-Landau equation can govern the dynamics near an instability point in many cases. The origin of persistent wave propagation in a reaction-diffusion medium was explored by the same equation [@kuramoto2]. It was also used to explain the chaotic behavior in a distributed chemical reaction due to the unstable growth of a spatial inhomogeneity taking place in an oscillating medium [@kuramoto3]. Small thermal diffusive instabilities in laminar flame fronts can also be represented by the same equation [@sivas1; @sivas2]. Nonlinear analysis of flame front stability, assuming stoichiometric composition of the combustible mixture, was also studied with a constant-density model of a premixed flame [@sivas3]. The one-dimensional form $$u_{t}+uu_{x}+\alpha u_{xx}+\vartheta u_{xxxx}=0 \label{1}$$of the equation appeared in the study [@sivas4]. Hyman and Nicolaenko characterized the transition to chaos of the solutions by numerical simulations [@hyman1]. The Weiss-Tabor-Carnevale technique was applied to the generalised Kuramoto-Sivashinsky equation to extract some particular analytical solutions [@exa1]. In the related literature, methods including the simplest equation method, homotopy analysis, and the $\tanh$ and extended $\tanh$ techniques were used to determine solitary wave or multiple soliton solutions to the Kuramoto-Sivashinsky equation [@exa2; @exa3; @exa4; @exa5]. Besides the analytical solutions, many numerical techniques including Chebyshev spectral collocation [@numer1], finite difference and collocation [@numer2], quintic B-spline [@numer3], the radial basis meshless method of lines [@numer4], and the exponential cubic B-spline method [@numer5] have been applied to derive numerical solutions to the Kuramoto-Sivashinsky equation. Different from other B-spline techniques based on classical polynomial cubic, quartic and quintic B-splines [@alp1; @alp2; @alp3] or exponential cubic B-splines [@alp4], the trigonometric cubic B-spline functions have appeared only recently. In this study, we construct a collocation method based on trigonometric cubic B-spline functions for some initial boundary value problems for the Kuramoto-Sivashinsky equation. 
After reducing the order of the term with the fourth order derivative to two, we discretize the resultant system by using the Crank-Nicolson method in time. Performing the linearization of the nonlinear term leads us to discretize the system by trigonometric cubic B-spline functions. As a result of adapting the initial and boundary conditions, the iteration algorithm will be ready to run. To solve the initial value problem (\[1\]) numerically, we first replace it by a system which is first order in the time derivative$$\begin{array}{r} u_{t}+uu_{x}+\alpha v+\vartheta v_{xx}=0 \\ v-u_{xx}=0\end{array} \label{2}$$To complete the usual classical mathematical statement of the problem, the initial and boundary conditions are chosen to be$$u(x,0)=u_{0} \label{3}$$and$$\begin{array}{c} u(x_{0},t)=g_{0},\text{ }u(x_{N},t)=g_{1}, \\ u_{x}(x_{0},t)=0,\text{ }u_{x}(x_{N},t)=0, \\ u_{xx}(x_{0},t)=0,\text{ }u_{xx}(x_{N},t)=0.\end{array} \label{4}$$ Cubic Trigonometric B-spline Collocation Method =============================================== Consider a uniform partition of the problem domain $[a=x_{0},b=x_{N}]$ at the knots $x_{i},$ $i=0,...,N$ with mesh spacing $h=(b-a)/N.$ On this partition together with additional knots $x_{-2},x_{-1},x_{N+1},x_{N+2}$ outside the problem domain, $T_{i}(x)$ can be defined as $$T_{i}(x)=\frac{1}{\gamma }\left \{ \begin{tabular}{ll} $W^{3}(x_{i-2}),$ & $x\in \left[ x_{i-2},x_{i-1}\right] $ \\ $W(x_{i-2})(W(x_{i-2})Y(x_{i})+Y(x_{i+1})W(x_{i-1}))+Y(x_{i+2})W^{2}(x_{i-1}),$ & $x\in \left[ x_{i-1},x_{i}\right] $ \\ $W(x_{i-2})Y^{2}(x_{i+1})+Y(x_{i+2})(W(x_{i-1})Y(x_{i+1})+Y(x_{i+2})W(x_{i})),$ & $x\in \left[ x_{i},x_{i+1}\right] $ \\ $Y^{3}(x_{i+2}),$ & $x\in \left[ x_{i+1},x_{i+2}\right] $ \\ $0,$ & $\text{otherwise}$\end{tabular}\right. \label{5}$$ where $W(x_{i})=\sin (\frac{x-x_{i}}{2}),$ $Y(x_{i})=\sin (\frac{x_{i}-x}{2}),$ $\gamma =\sin (\frac{h}{2})\sin (h)\sin (\frac{3h}{2}).$ The twice continuously differentiable piecewise trigonometric B-spline function set $\{T_{i}(x)\}_{i=-1}^{N+1}$ forms a basis for the functions defined on the same interval [@base1; @base2]. The $T_{i}(x)$ are twice continuously differentiable piecewise trigonometric cubic B-splines on the interval $[a,b]$. The iterative formula $$T_{i}^{k}(x)=\frac{\sin (\frac{x-x_{i}}{2})}{\sin (\frac{x_{i+k-1}-x_{i}}{2})}T_{i}^{k-1}(x)+\frac{\sin (\frac{x_{i+k}-x}{2})}{\sin (\frac{x_{i+k}-x_{i+1}}{2})}T_{i+1}^{k-1}(x),\text{ }k=2,3,4,... \label{6}$$ gives the cubic trigonometric B-spline functions starting with the CTB-splines of order $1:$ $$T_{i}^{1}(x)=\left \{ \begin{tabular}{c} $1,\ x\in \lbrack x_{i},x_{i+1})$ \\ $0,$ \ \ otherwise.\end{tabular}\right.$$ The graph of the trigonometric cubic B-splines over the interval $[0,1]$ is depicted in Fig. 1. (Fig. 1: Trigonometric cubic B-splines over the interval $[0,1]$; figure omitted.) The nonzero functional and derivative values of trigonometric cubic B-spline functions at the knots are given in Table 1. 
$$\begin{tabular}{l} Table 1: Values of $T_{i}(x)$ and its principle two \\ derivatives at the knot points \\ \begin{tabular}{|l|l|l|l|} \hline & $T_{i}(x_{k})$ & $T_{i}^{\prime }(x_{k})$ & $T_{i}^{\prime \prime }(x_{k})$ \\ \hline $x_{i-2}$ & $0$ & $0$ & $0$ \\ \hline $x_{i-1}$ & $\sin ^{2}(\frac{h}{2})\csc \left( h\right) \csc (\frac{3h}{2})$ & $\frac{3}{4}\csc (\frac{3h}{2})$ & $\dfrac{3(1+3\cos (h))\csc ^{2}(\frac{h% }{2})}{16\left[ 2\cos (\frac{h}{2})+\cos (\frac{3h}{2})\right] }$ \\ \hline $x_{i}$ & $\dfrac{2}{1+2\cos (h)}$ & $0$ & $\dfrac{-3\cot ^{2}(\frac{3h}{2})% }{2+4\cos (h)}$ \\ \hline $x_{i+1}$ & $\sin ^{2}(\frac{h}{2})\csc \left( h\right) \csc (\frac{3h}{2})$ & $-\frac{3}{4}\csc (\frac{3h}{2})$ & $\dfrac{3(1+3\cos (h))\csc ^{2}(\frac{h% }{2})}{16\left[ 2\cos (\frac{h}{2})+\cos (\frac{3h}{2})\right] }$ \\ \hline $x_{i+2}$ & $0$ & $0$ & $0$ \\ \hline \end{tabular}% \end{tabular}%$$ An approximate solution $U$ and $V$ to the unknown $u$ and $v$ is written in terms of the expansion of the CTB as $$U(x,t)=\sum_{i=-1}^{N+1}\delta _{i}T_{i}(x),\text{ }V(x,t)=\sum_{i=-1}^{N+1}% \phi _{i}T_{i}(x). \label{7}$$ where $\delta _{i}$ and $\phi _{i}$ are time dependent parameters to be determined from the collocation points $x_{i},i=0,...,N$ and the boundary and initial conditions. The nodal values $U$ and its first and second derivatives at the knots can be found from the (\[7\]) as $$\begin{tabular}{l} $U_{i}=\alpha _{1}\delta _{i-1}+\alpha _{2}\delta _{i}+\alpha _{1}\delta _{i+1}$ \\ $U_{i}^{\prime }=\beta _{1}\delta _{i-1}-\beta _{1}\delta _{i+1}$ \\ $U_{i}^{\prime \prime }=\gamma _{1}\delta _{i-1}+\gamma _{2}\delta _{i}+\gamma _{1}\delta _{i+1}$% \end{tabular}% \begin{tabular}{l} $V_{i}=\alpha _{1}\phi _{i-1}+\alpha _{2}\phi _{i}+\alpha _{1}\phi _{i+1}$ \\ $V_{i}^{\prime }=\beta _{1}\phi _{i-1}-\beta _{1}\phi _{i+1}$ \\ $V_{i}^{\prime \prime }=\gamma _{1}\phi _{i-1}+\gamma _{2}\phi _{i}+\gamma _{1}\phi _{i+1}$% \end{tabular} \label{8}$$$$\begin{array}{lll} \alpha _{1}=\sin ^{2}(\frac{h}{2})\csc (h)\csc (\frac{3h}{2}) & \alpha _{2}=% \dfrac{2}{1+2\cos (h)} & \beta _{1}=-\frac{3}{4}\csc (\frac{3h}{2}) \\ \gamma _{1}=\dfrac{3((1+3\cos (h))\csc ^{2}(\frac{h}{2}))}{16(2\cos (\frac{h% }{2})+\cos (\frac{3h}{2}))} & \gamma _{2}=-\dfrac{3\cot ^{2}(\frac{h}{2})}{% 2+4\cos (h)} & \end{array} \label{9}$$ When KS equation is space-splitted as (\[2\]), The system includes the second-order derivatives so that smooth approximation can constructed with the combination of the trigonometric cubic B-splines. The time integration of the space-splitted system (\[2\]) is performed by the Crank-Nicolson method as$$\begin{array}{r} \dfrac{U^{n+1}-U^{n}}{\Delta t}+\dfrac{(UU_{x})^{n+1}+(UU_{x})^{n}}{2}% +\alpha \dfrac{V^{n+1}+V^{n}}{2}+\vartheta \dfrac{V_{xx}^{n+1}+V_{xx}^{n}}{2}% =0 \\ \\ \dfrac{V^{n+1}+V^{n}}{2}-\dfrac{U_{xx}^{n+1}+U_{xx}^{n}}{2}=0% \end{array} \label{10}$$where $U^{n+1}=U(x,(n+1)\Delta t)$ represent the solution at the $(n+1)$th time level. 
Here $t^{n+1}=t^{n}+\Delta t$, $\Delta t$ is the time step, superscripts denote $n$ th time level, $t^{n}=n\Delta t.$ One linearize terms $(UU_{x})^{n+1}$and $(UUx)^{n}$ in (\[10\]) as [rubin]{}$$\begin{array}{l} (UUx)^{n+1}=U^{n+1}U_{x}^{n}+U^{n}U_{x}^{n+1}-U^{n}U_{x}^{n} \\ (UUx)^{n}=U^{n}U_{x}^{n}% \end{array}%$$to obtain the time-integrated linearized the KS Equation:$$\begin{array}{r} \dfrac{2}{\Delta t}U^{n+1}-\dfrac{2}{\Delta t}% U^{n}+U^{n+1}U_{x}^{n}+U^{n}U_{x}^{n+1}+\alpha \left( V^{n+1}+V^{n}\right) +\vartheta (V_{xx}^{n+1}+V_{xx}^{n})=0 \\ \\ \dfrac{V^{n+1}+V^{n}}{2}-\dfrac{U_{xx}^{n+1}+U_{xx}^{n}}{2}=0% \end{array} \label{11}$$To proceed with space integration of the (\[11\]), an approximation of $% U^{n}$ and $V^{n}$ in terms of the unknown element parameters and trigonometric cubic B-splines separately can be written as (\[7\]). Substitute Eqs (\[8\]) into (\[11\]) and collocate the resulting the equation at the knots $x_{i},$ $i=0,...,N$ yields a linear algebraic system of equations: $$\begin{tabular}{l} \begin{tabular}{l} $\left[ \left( \frac{2}{\Delta t}+K_{2}\right) \alpha _{1}+K_{1}\beta _{1}% \right] \delta _{m-1}^{n+1}+\left( \alpha \alpha _{1}+\vartheta \gamma _{1}\right) \phi _{m-1}^{n+1}+\left[ \left( \frac{2}{\Delta t}+K_{2}\right) \alpha _{2}\right] \delta _{m}^{n+1}+\left( \alpha \alpha _{2}+\vartheta \gamma _{2}\right) \phi _{m}^{n+1}$ \\ $+\left[ \left( \frac{2}{\Delta t}+K_{2}\right) \alpha _{1}-K_{1}\beta _{1}% \right] \delta _{m+1}^{n+1}+\left( \alpha \alpha _{1}+\vartheta \gamma _{1}\right) \phi _{m+1}^{n+1}$ \\ $=\frac{2}{\Delta t}\alpha _{1}\delta _{m-1}^{n}-\left( \alpha \alpha _{1}+\vartheta \gamma _{1}\right) \phi _{m-1}^{n}+\frac{2}{\Delta t}\alpha _{2}\delta _{m}^{n}-\left( \alpha \alpha _{2}+\vartheta \gamma _{2}\right) \phi _{m}^{n}+\frac{2}{\Delta t}\alpha _{1}\delta _{m+1}^{n}-\left( \alpha \alpha _{1}+\upsilon \vartheta _{1}\right) \phi _{m+1}^{n}$% \end{tabular} \\ \begin{tabular}{l} $-\gamma _{1}\delta _{m-1}^{n+1}+\alpha _{1}\phi _{m-1}^{n+1}-\gamma _{2}\delta _{m}^{n+1}+\alpha _{2}\phi _{m}^{n+1}-\gamma _{1}\delta _{m+1}^{n+1}+\alpha _{1}\phi _{m+1}^{n+1}$ \\ $=\gamma _{1}\delta _{m-1}^{n}-\alpha _{1}\phi _{m-1}^{n}+\gamma _{2}\delta _{m}^{n}-\alpha _{2}\phi _{m}^{n}+\gamma _{1}\delta _{m+1}^{n}-\alpha _{1}\phi _{m+1}^{n},$ $\ \ \ \ \ m=0...N,$ $n=0,1...,$% \end{tabular}% \end{tabular} \label{12}$$ where$$\begin{array}{l} K_{1}=\alpha _{1}\delta _{i-1}+\alpha _{2}\delta _{i}+\alpha _{1}\delta _{i+1} \\ K_{2}=\beta _{1}\delta _{i-1}-\beta _{1}\delta _{i+1}.% \end{array}%$$The system (\[12\]) can be converted the following matrices system;$$\mathbf{Ax}^{n+1}=\mathbf{Bx}^{n} \label{13}$$where$$\mathbf{A=}% \begin{bmatrix} \nu _{m1} & \nu _{m2} & \nu _{m3} & \nu _{m4} & \nu _{m5} & \nu _{m2} & & & & \\ -\gamma _{1} & \alpha _{1} & -\gamma _{2} & \alpha _{2} & -\gamma _{1} & \alpha _{1} & & & & \\ & & \nu _{m1} & \nu _{m2} & \nu _{m3} & \nu _{m4} & \nu _{m5} & \nu _{m2} & & \\ & & -\gamma _{1} & \alpha _{1} & -\gamma _{2} & \alpha _{2} & -\gamma _{1} & \alpha _{1} & & \\ & & & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \\ & & & & \nu _{m1} & \nu _{m2} & \nu _{m3} & \nu _{m4} & \nu _{m5} & \nu _{m2} \\ & & & & -\gamma _{1} & \alpha _{1} & -\gamma _{2} & \alpha _{2} & -\gamma _{1} & \alpha _{1}% \end{bmatrix}%$$ $$\mathbf{B=}% \begin{bmatrix} \nu _{m6} & \nu _{m7} & \nu _{m8} & \nu _{m9} & \nu _{m6} & \nu _{m7} & & & & \\ \gamma _{1} & -\alpha _{1} & \gamma _{2} & -\alpha _{2} & \gamma _{1} & -\alpha _{1} & & & & \\ & & \nu _{m6} & \nu 
_{m7} & \nu _{m8} & \nu _{m9} & \nu _{m6} & \nu _{m7} & & \\ & & \gamma _{1} & -\alpha _{1} & \gamma _{2} & -\alpha _{2} & \gamma _{1} & -\alpha _{1} & & \\ & & & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \\ & & & & \nu _{m6} & \nu _{m7} & \nu _{m8} & \nu _{m9} & \nu _{m6} & \nu _{m7} \\ & & & & \gamma _{1} & -\alpha _{1} & \gamma _{2} & -\alpha _{2} & \gamma _{1} & -\alpha _{1}% \end{bmatrix}%$$ and$$\begin{array}{lll} \nu _{m1}=\left( \frac{2}{\Delta t}+K_{2}\right) \alpha _{1}+K_{1}\beta _{1} & \nu _{m4}=\left( \alpha \alpha _{2}+\vartheta \gamma _{2}\right) & \nu _{m7}=-\left( \alpha \alpha _{1}+\vartheta \gamma _{1}\right) \\ \nu _{m2}=\left( \alpha \alpha _{1}+\vartheta \gamma _{1}\right) & \nu _{m5}=\left( \frac{2}{\Delta t}+K_{2}\right) \alpha _{1}-K_{1}\beta _{1} & \nu _{m8}=\frac{2}{\Delta t}\alpha _{2} \\ \nu _{m3}=\left( \frac{2}{\Delta t}+K_{2}\right) \alpha _{2} & \nu _{m6}=% \frac{2}{\Delta t}\alpha _{1} & \nu _{m9}=-\left( \alpha \alpha _{2}+\vartheta \gamma _{2}\right)% \end{array}%$$ The system (\[13\]) consist of $2N+2$ linear equation in $2N+6$ unknown parameters $$\mathbf{x}^{n+1}=(\delta _{-1}^{n+1},\phi _{-1}^{n+1},\delta _{0}^{n+1},\phi _{0}^{n+1},\ldots ,\delta _{n+1}^{n+1},\phi _{n+1}^{n+1},).$$ To obtain a unique solution, an additional four constraints are needed. These are obtained from the imposition of the Robin boundary conditions so that $U_{xx}(a,t)=0,$ $V(a,t)=0$ and $U_{xx}(b,t)=0,$ $V(b,t)=0$ gives the following equations:$$\begin{array}{l} \gamma _{1}\delta _{-1}+\gamma _{2}\delta _{0}+\gamma _{1}\delta _{1}=0 \\ \alpha _{1}\phi _{-1}+\alpha _{2}\phi _{0}+\alpha _{1}\phi _{1}=0 \\ \gamma _{1}\delta _{N-1}+\gamma _{2}\delta _{N}+\gamma _{1}\delta _{N+1}=0 \\ \alpha _{1}\phi _{N-1}+\alpha _{2}\phi _{N}+\alpha _{1}\phi _{N+1}=0% \end{array}%$$ Elimination of the parameters $\delta _{-1},\phi _{-1},\delta _{N+1},\phi _{N+1},$ from the Eq.(\[12\]), using the above equations gives a solvable system of $2N+2$ linear equations including $2N+2$ unknown parameters. After finding the unknown  parameters via the application of a variant of Thomas algorithm, approximate solutions at the knots can be obtained by placing successive three parameters in the Eq.(\[8\]). Initial parameters $\delta _{i}^{0},\phi _{i}^{0},$ $i=-1,\ldots ,N+1$ are needed to start the iteration procedure (\[13\]). Thus the following requirements help to determine initial parameters: $$\begin{array}{l} U_{xx}(a,0)=0=\gamma _{1}\delta _{-1}^{0}+\gamma _{2}\delta _{0}^{0}+\gamma _{1}\delta _{1}^{0}, \\ U_{xx}(x_{i},0)=\gamma _{1}\delta _{i-1}^{0}+\gamma _{2}\delta _{i}^{0}+\gamma _{1}\delta _{i+1}^{0}=u_{xx}(x_{i},0),i=1,...,N-1 \\ U_{xx}(b,0)=0=\gamma _{1}\delta _{N-1}^{0}+\gamma _{2}\delta _{N}^{0}+\gamma _{1}\delta _{N+1}^{0}, \\ V(a,0)=0=\alpha _{1}\phi _{-1}^{0}+\alpha _{2}\phi _{0}^{0}+\alpha _{1}\phi _{1}^{0} \\ V(x_{i},0)=\alpha _{1}\phi _{i-1}^{0}+\alpha _{2}\phi _{i}^{0}+\alpha _{1}\phi _{i+1}^{0}=v(x_{i},0),i=1,...,N-1 \\ V(a,0)=\alpha _{1}\phi _{N-1}^{0}+\alpha _{2}\phi _{N}^{0}+\alpha _{1}\phi _{N+1}^{0}% \end{array}%$$ Numerical tests =============== To see versatility of the present method, three numerical examples are studied in this section. 
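Before presenting the test problems, we include a small computational sketch; it is not part of the original paper, the function and variable names (`ctb_coefficients`, `nodal_values`, `h`, `delta`) are ours, and the assembly and solution of the full system (\[13\]), for instance with a Thomas-type banded solver, is left out. The sketch only evaluates the coefficients of Eq. (\[9\]) and recovers the nodal values of Eq. (\[8\]).

```python
import numpy as np

def ctb_coefficients(h):
    """Trigonometric cubic B-spline values at the knots, following Eq. (9)."""
    alpha1 = np.sin(h / 2) ** 2 / (np.sin(h) * np.sin(3 * h / 2))
    alpha2 = 2.0 / (1.0 + 2.0 * np.cos(h))
    beta1 = -0.75 / np.sin(3 * h / 2)
    gamma1 = 3.0 * (1.0 + 3.0 * np.cos(h)) / (
        16.0 * np.sin(h / 2) ** 2 * (2.0 * np.cos(h / 2) + np.cos(3 * h / 2)))
    gamma2 = -3.0 / (np.tan(h / 2) ** 2 * (2.0 + 4.0 * np.cos(h)))
    return alpha1, alpha2, beta1, gamma1, gamma2

def nodal_values(delta, h):
    """Recover U, U_x, U_xx at the knots x_0,...,x_N from the parameters
    delta_{-1},...,delta_{N+1} (an array of length N+3), following Eq. (8)."""
    a1, a2, b1, g1, g2 = ctb_coefficients(h)
    U = a1 * delta[:-2] + a2 * delta[1:-1] + a1 * delta[2:]
    Ux = b1 * delta[:-2] - b1 * delta[2:]
    Uxx = g1 * delta[:-2] + g2 * delta[1:-1] + g1 * delta[2:]
    return U, Ux, Uxx

# Sanity check: as h -> 0 the scaled coefficients tend to the classical
# polynomial cubic B-spline values (1/6, 2/3, -1/2, 1, -2).
h = 1.0e-3
a1, a2, b1, g1, g2 = ctb_coefficients(h)
print(np.round([a1, a2, b1 * h, g1 * h ** 2, g2 * h ** 2], 4))
```

The printed limits coincide with the classical polynomial cubic B-spline values, which gives a quick way to validate a transcription of Eq. (\[9\]).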
The efficiency and accuracy of the solutions will be measured by the global relative error$$\text{GRE}=\frac{\dsum \limits_{j=1}^{N}\left \vert U_{j}^{n}-u_{j}^{n}\right \vert }{\dsum \limits_{j=1}^{N}\left \vert u_{j}^{n}\right \vert } \label{GRE}$$where $U$ denotes the numerical solution and $u$ denotes the analytical solution. **(a)** The numerical solution of the KS equation (\[1\]) is obtained for $\alpha =1$ and $\vartheta =1$ with the exact solution given by$$u(x,t)=b+\frac{15}{19}d\left[ e\tanh \left( k\left( x-bt-x_{0}\right) \right) +f\tanh ^{3}\left( k\left( x-bt-x_{0}\right) \right) \right]$$The initial condition is taken from the exact solution together with the boundary conditions given by (\[4\]). This example is studied in [@Xu; @numer3; @Lai]. The above solution models shock wave propagation with speed $b$ and initial position $x_{0}$. We have considered the domain $[x_{0},x_{N}]=[-30,30]$ with time step $\Delta t=0.01$ and the number of partitions as $150$. In order to compare the solutions with [@numer3] and [@Lai], we have taken $b=5,$ $k=\frac{1}{2}\sqrt{\frac{11}{19}},$ $x_{0}=-12,$ $d=\sqrt{\frac{11}{19}},$ $e=-9,$ $f=11.$ Table 2 gives a comparison between the global relative errors found by our method, by the quintic B-spline collocation method [@numer3], and by the lattice Boltzmann method [@Lai]. The numerical results at different times are plotted for $\Delta t=0.005$ and $N=400$ in Fig. 2, and Fig. 3 shows the projection of the solution on the $xt$-plane. The solution obtained by the trigonometric cubic B-spline collocation method is very close to the exact solution, as the global relative errors in Table 2 indicate.$$\begin{array}{l} \text{Table 2: Comparison of global relative error for Example (a) at different time }t\text{, }N=150 \\ \begin{tabular}{|c|c|c|c|} \hline Time($t$) & Present Method & \cite{numer3} & \cite{Lai} \\ \hline $1$ & \multicolumn{1}{|c|}{$2.98416\times 10^{-5}$} & $3.81725\times 10^{-4}$ & $6.7923\times 10^{-4}$ \\ \hline $2$ & \multicolumn{1}{|c|}{$7.00758\times 10^{-5}$} & $5.51142\times 10^{-4}$ & $1.1503\times 10^{-3}$ \\ \hline $3$ & \multicolumn{1}{|c|}{$9.51142\times 10^{-5}$} & $7.03980\times 10^{-4}$ & $1.5941\times 10^{-3}$ \\ \hline $4$ & \multicolumn{1}{|c|}{$1.79237\times 10^{-4}$} & $8.63662\times 10^{-4}$ & $2.0075\times 10^{-3}$ \\ \hline \end{tabular}\end{array}$$ (Figure 2: Solutions of the KS equation. Figure 3: Projected solutions on the $xt$-plane; figures omitted.) **(b)** This example represents chaotic behavior with the initial condition$$u(x,0)=\cos (\frac{x}{2})\sin (\frac{x}{2})$$ and the boundary conditions$$u_{xx}(0,t)=0,\text{ }u_{xx}(4\pi ,t)=0.$$The computational domain $[x_{0},x_{N}]=[0,4\pi ]$ is used with $N=512,$ $\Delta t=0.001,$ $\alpha =1.$ It is shown that the KS equation is highly 
sensitive to the choice of the parameter $\vartheta$. In Figs. 4-7, we can observe the solution patterns exhibiting complete chaotic behavior on the $xt$-plane. The figures illustrate that, for smaller values of $\vartheta$, chaotic behavior starts to evolve earlier and more complex instabilities are seen. (Figure 4: Solutions on the $xt$-plane for $\vartheta =0.05$. Figure 5: Solutions on the $xt$-plane for $\vartheta =0.02$. Figure 6: Solutions on the $xt$-plane for $\vartheta =0.01$. Figure 7: Solutions on the $xt$-plane for $\vartheta =0.002$; figures omitted.) **(c)** The KS equation (\[1\]) is solved for $\alpha =1$ and $\vartheta =1$. This example represents the simplest nonlinear partial differential equation showing chaotic behavior when the spatial domain is finite, with the Gaussian initial condition$$u(x,0)=-\exp (-x^{2})$$and the boundary conditions$$u(x_{0},t)=0,\text{ }u(x_{N},t)=0.$$The computational domain is $[x_{0},x_{N}]=[-30,30]$ with $N=120,$ $\Delta t=0.001.$ In Figs. 8 and 9, we can observe the convergent numerical results of our trigonometric cubic B-spline method with complete chaotic behavior at $t=5$ and $t=20$, respectively. 
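For readers who wish to reproduce the chaotic experiments, the grids and initial data of cases (b) and (c) described above can be set up as in the sketch below; this is illustrative only, the names are ours, and the time loop, which would apply the Crank-Nicolson iteration (\[13\]), is omitted.

```python
import numpy as np

def make_case(case):
    """Grid, initial data and time step for the chaotic test cases (b) and (c)."""
    if case == "b":
        a, b, N, dt = 0.0, 4.0 * np.pi, 512, 0.001   # u(x,0) = cos(x/2) sin(x/2)
        u0 = lambda x: np.cos(x / 2) * np.sin(x / 2)
    elif case == "c":
        a, b, N, dt = -30.0, 30.0, 120, 0.001        # Gaussian pulse u(x,0) = -exp(-x^2)
        u0 = lambda x: -np.exp(-x ** 2)
    else:
        raise ValueError("case must be 'b' or 'c'")
    x = np.linspace(a, b, N + 1)
    return x, u0(x), dt

x, u, dt = make_case("b")
print(x.shape, float(u.min()), dt)
```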
It is observed that the result shows the same characteristics as in [@numer3]. (Figure 8: The chaotic solution of the KSE at $t=5$. Figure 9: The chaotic solution of the KSE at $t=20$; figures omitted.) [99]{} Kuramoto, Y., & Tsuzuki, T. (1975). On the formation of dissipative structures in reaction-diffusion systems: Reductive perturbation approach. Progress of Theoretical Physics, 54(3), 687-699. Kuramoto, Y., & Tsuzuki, T. (1976). Persistent propagation of concentration waves in dissipative media far from thermal equilibrium. Progress of Theoretical Physics, 55(2), 356-369. Kuramoto, Y. (1978). Diffusion-induced chaos in reaction systems. Progress of Theoretical Physics Supplement, 64, 346-367. Michelson, D. M., & Sivashinsky, G. I. (1977). Nonlinear analysis of hydrodynamic instability in laminar flames-II. Numerical experiments. Acta Astronautica, 4(11-12), 1207-1221. Sivashinsky, G. I. (1977). Nonlinear analysis of hydrodynamic instability in laminar flames-I. Derivation of basic equations. Acta Astronautica, 4(11-12), 1177-1206. Sivashinsky, G. I. (1980). On flame propagation under conditions of stoichiometry. SIAM Journal on Applied Mathematics, 39(1), 67-82. Sivashinsky, G. I., & Michelson, D. M. (1980). On irregular wavy flow of a liquid film down a vertical plane. Progress of Theoretical Physics, 63(6), 2112-2114. Hyman, J. M., & Nicolaenko, B. (1986). The Kuramoto-Sivashinsky equation: a bridge between PDE’s and dynamical systems. Physica D: Nonlinear Phenomena, 18(1), 113-126. Kudryashov, N. A. (1990). Exact solutions of the generalized Kuramoto-Sivashinsky equation. Physics Letters A, 147(5-6), 287-291. Kudryashov, N. A. (2005). Simplest equation method to look for exact solutions of nonlinear differential equations. Chaos, Solitons & Fractals, 24(5), 1217-1231. Abbasbandy, S. (2008). Solitary wave solutions to the Kuramoto-Sivashinsky equation by means of the homotopy analysis method. Nonlinear Dynamics, 52(1-2), 35-40. Chen, H., & Zhang, H. (2004). New multiple soliton solutions to the general Burgers-Fisher equation and the Kuramoto-Sivashinsky equation. Chaos, Solitons & Fractals, 19(1), 71-76. Wazwaz, A. M. (2006). New solitary wave solutions to the Kuramoto-Sivashinsky and the Kawahara equations. Applied Mathematics and Computation, 182(2), 1642-1650. Khater, A. H., & Temsah, R. S. (2008). Numerical solutions of the generalized Kuramoto-Sivashinsky equation by Chebyshev spectral collocation methods. Computers & Mathematics with Applications, 56(6), 1465-1472. Lakestani, M., & Dehghan, M. (2012). Numerical solutions of the generalized Kuramoto-Sivashinsky equation using B-spline functions. Applied Mathematical Modelling, 36(2), 605-617. Mittal, R. C., & Arora, G. (2010). Quintic B-spline collocation method for numerical solution of the Kuramoto-Sivashinsky equation. 
Communications in Nonlinear Science and Numerical Simulation, 15(10), 2798-2808.

Haq, S., Bibi, N., Tirmizi, S. I. A., & Usman, M. (2010). Meshless method of lines for the numerical solution of generalized Kuramoto-Sivashinsky equation. Applied Mathematics and Computation, 217(6), 2404-2413.

Ersoy, O., & Dag, I. (2016). The exponential cubic B-spline collocation method for the Kuramoto-Sivashinsky equation. Filomat, 30(3), 853-861.

Korkmaz, A., & Dag, I. (2013). Cubic B-spline differential quadrature methods and stability for Burgers' equation. Engineering Computations, 30(3), 320-344.

Korkmaz, A., & Dag, I. (2013). Numerical simulations of boundary-forced RLW equation with cubic B-spline-based differential quadrature methods. Arabian Journal for Science and Engineering, 38(5), 1151-1160.

Korkmaz, A., & Dag, I. (2016). Quartic and quintic B-spline methods for advection-diffusion equation. Applied Mathematics and Computation, 274, 208-219.

Korkmaz, A., & Akmaz, H. K. (2015). Numerical simulations for transport of conservative pollutants. Selcuk Journal of Applied Mathematics, 16(1).

Rubin, S. G., & Graves, R. A. (1975). Cubic spline approximation for problems in fluid mechanics. NASA TR R-436, Washington, DC.

Lyche, T., & Winther, R. (1979). A stable recurrence relation for trigonometric B-splines. Journal of Approximation Theory, 25(3), 266-279.

Walz, G. (1997). Identities for trigonometric B-splines with an application to curve design. BIT Numerical Mathematics, 37(1), 189-201.

Xu, Y., & Shu, C. W. (2006). Local discontinuous Galerkin methods for the Kuramoto-Sivashinsky equations and the Ito-type coupled KdV equations. Computer Methods in Applied Mechanics and Engineering, 195, 3430-3447.

Lai, H., & Ma, C. (2009). Lattice Boltzmann method for the generalized Kuramoto-Sivashinsky equation. Physica A, 388, 1405-1412.
---
abstract: 'For $\phi$ a metric on the anticanonical bundle, $-K_X$, of a Fano manifold $X$ we consider the volume of $X$ $$\int_X e^{-\phi}.$$ We prove that the logarithm of the volume is concave along bounded geodesics in the space of positively curved metrics on $-K_X$ and that the concavity is strict unless the geodesic comes from the flow of a holomorphic vector field on $X$. As a consequence we get a simplified proof of the Bando-Mabuchi uniqueness theorem for Kähler-Einstein metrics. We also prove a generalization of this theorem to ''twisted'' Kähler-Einstein metrics and treat some classes of manifolds that satisfy weaker hypotheses than being Fano.'
address: |
  B Berndtsson: Department of Mathematics\
  Chalmers University of Technology\
  and Department of Mathematics\
  University of Göteborg\
  S-412 96 GÖTEBORG\
  SWEDEN
author:
- Bo Berndtsson
title: 'A Brunn-Minkowski type inequality for Fano manifolds and the Bando-Mabuchi uniqueness theorem.'
---

Introduction
============

Let $X$ be an $n$-dimensional projective manifold with seminegative canonical bundle and let $\Omega$ be a domain in the complex plane. We consider curves $t\rightarrow \phi_t$, with $t$ in $\Omega$, of metrics on $-K_X$ that have plurisubharmonic variation so that $i\ddbar_{t, X}\phi\geq 0$ (see section 2 for notational conventions). Then $\phi$ solves the homogeneous Monge-Ampère equation if $$(i\ddbar_{t,X}\phi)^{n+1}=0. \eqno(1.1)$$ By a fundamental theorem of Chen, [@Chen], we can for any given $\phi_0$ defined on the boundary of $\Omega$, smooth with nonnegative curvature on $X$ for $t$ fixed on $\partial\Omega$, find a solution of (1.1) with $\phi_0$ as boundary values. This solution does in general not need to be smooth (see [@1Donaldson]), but Chen's theorem asserts that we can find a solution that has all mixed complex derivatives bounded, i.e., $\ddbar_{t, X}\phi$ is bounded on $X\times \Omega$. The solution equals the supremum (or maximum) of all subsolutions, i.e., all metrics with semipositive curvature that are dominated by $\phi_0$ on the boundary. Chen's proof is based on some of the methods from Yau's proof of the Calabi conjecture, so it is not so easy, but it is worth pointing out that the existence of a generalized solution that is only bounded is much easier; see section 2. On the other hand, if we assume that $\phi$ is smooth and $i\ddbar_X\phi>0$ on $X$ for any $t$ fixed, then $$(i\ddbar\phi)^{n+1}=n c(\phi)(i\ddbar\phi)^n\wedge i dt\wedge d\bar t$$ with $$c(\phi)=\frac{\partial^2\phi}{\partial t\partial\bar t}-|\dbar\frac{\partial\phi}{\partial t}|^2_{i\ddbar_X\phi},$$ where the norm in the last term is the norm with respect to the Kähler metric $i\ddbar_X\phi$. Thus equation (1.1) is then equivalent to $c(\phi)=0$. The case when $\Omega=\{t;\ 0<\Re t<1\}$ is a strip is of particular interest. If the boundary data are also independent of $\Im t$, the solution to (1.1) has a similar invariance property. A famous observation of Semmes, [@Semmes], and Donaldson, [@Donaldson], is that the equation $c(\phi)=0$ then is the equation for a geodesic in the space of Kähler potentials. Chen's theorem then [*almost*]{} implies that any two points in the space of Kähler potentials can be joined by a geodesic, the proviso being that we might not be able to keep smoothness or strict positivity along all of the curve. This problem causes some difficulties in applications, one of which we will address in this paper. The next theorem is a direct consequence of the results in [@2Berndtsson].
Assume that $-K_X\geq 0$ and let $\phi_t$ be a curve of metrics on $-K_X$ such that $$i\ddbar_{t,X}\phi\geq 0$$ in the sense of currents. Then $$\F(t):=-\log\int_X e^{-\phi_t}$$ is subharmonic in $\Omega$. In particular, if $\phi_t$ does not depend on the imaginary part of $t$, $\F$ is convex. Here we interpret the integral over $X$ in the following way. For any choice of local coordinates $z^j$ in some covering of $X$ by coordinate neighbourhoods $U_j$, the metric $\phi_t$ is represented by a local function $\phi_t^j$. The volume form $$c_n e^{-\phi^j_t} dz^j\wedge d\bar z^j,$$ where $c_n=i^{n^2}$ is a unimodular constant chosen to make the form positive, is independent of the choice of local coordinates. We denote this volume form by $e^{-\phi_t}$, see section 2. The results in [@2Berndtsson] deal with more general line bundles $L$ over $X$, and the trivial vector bundle $E$ over $\Omega$ with fiber $H^0(X, K_X+L)$ with the $L^2$-metric $$\|u\|^2_t=\int_X |u|^2 e^{-\phi_t},$$ see section 2. The main result is then a formula for the curvature of $E$ with the $L^2$-metric. In this paper we study the simplest special case, $L=-K_X$. Then $K_X+L$ is trivial, so $E$ is a line bundle, and Theorem 1.1 says that this line bundle has nonnegative curvature. Theorem 1.1 is formally analogous to the Brunn-Minkowski inequality for the volumes of convex sets, and even more to its functional version, Prekopa's theorem, [@Prekopa]. Prekopa's theorem states that if $\phi$ is a convex function on $\R^{n+1}$, then $$f(t):=-\log\int_{\R^n}e^{-\phi_t}$$ is convex. The complex counterpart of this is that we consider a complex manifold $X$ with a family of volume forms $\mu_t$. In local coordinates $z^j$ the volume form can be written as above, $c_n e^{-\phi^j_t} dz^j\wedge d\bar z^j$, and if $\mu_t$ is globally well defined the $\phi^j_t$ are then the local representatives of a metric, $\phi_t$, on $-K_X$. Convexity in Prekopa's theorem then corresponds to positive, or at least semipositive, curvature of $\phi_t$, so $X$ must be Fano, or its canonical bundle must at least have seminegative curvature (in some sense: $-K_X$ pseudoeffective would be the minimal requirement). The assumption in Prekopa's theorem that the weight is convex with respect to $x$ and $t$ jointly then corresponds to the assumptions in Theorem 1.1. If $K$ is a compact convex set in $\R^{n+1}$ we can take $\phi$ to be equal to 0 in $K$ and $+\infty$ outside of $K$. Prekopa's theorem then implies the Brunn-Minkowski theorem, saying that the logarithm of the volume of the $n$-dimensional slices $K_t$ of a convex set is concave in $t$; concretely $$|K_{(t+s)/2}|^2\geq |K_t|\,|K_s|. \eqno(1.2)$$ The Brunn-Minkowski theorem has an important addendum which describes the case of equality: If equality holds in (1.2) then all the slices $K_t$ and $K_s$ are translates of each other, $$K_t=K_s + (t-s)\mathbf v$$ where $\mathbf v$ is some vector in $\R^n$. A little bit artificially we can formulate this as saying that we move from one slice to another via the flow of a constant vector field. It follows from (1.2) and the natural homogeneity properties of Lebesgue measure that $|K_t|^{1/n}$ is also concave. This ('additive version') is perhaps the most common formulation of the Brunn-Minkowski inequalities, but the logarithmic (or multiplicative) version above works better for weighted volumes and in the complex setting.
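As an aside, and only as the simplest possible sanity check of these conventions (this computation is not in the paper), one can evaluate the volume functional on $X=\mathbb P^1$, where $-K_X=\mathcal O(2)$ carries the Fubini-Study metric $\phi_{FS}=2\log(1+|z|^2)$ in the standard chart:
$$\int_X e^{-\phi_{FS}}=\int_{\C}\frac{i\,dz\wedge d\bar z}{(1+|z|^2)^2}=\int_{\C}\frac{2\,dx\,dy}{(1+|z|^2)^2}=2\pi,$$
so $\F=-\log 2\pi$. For the curve $\phi_t=\phi_{FS}+a\,\Re t$ one has $\partial^2\phi/\partial t\partial\bar t=0$ and $\dbar_X(\partial\phi/\partial t)=0$, hence $c(\phi)=0$, so this is a (trivial) bounded geodesic, and
$$\F(t)=-\log\int_X e^{-\phi_t}=a\,\Re t-\log 2\pi$$
is affine. This is the degenerate case of Theorem 1.2 below: $\ddbar\phi_t$ does not move at all, and the associated vector field is $V=0$.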
For the additive version conditions for equality are more liberal; then $K_t$ may change not only by translation but also by dilation (see [@Gardner]), but equality in the multiplicative case excludes dilation. A natural question is then if one can draw a similar conclusion in the complex setting described above. In [@2Berndtsson] we proved that this is indeed so if $\phi$ is known to be smooth and strictly plurisubharmonic on $X$ for $t$ fixed. The main result of this paper is the extension of this to less regular situations. We keep the same assumptions as in Theorem 1.1. Assume that $H^{0,1}(X)=0$, and that the curve of metrics $\phi_t$ is independent of the imaginary part of $t$. Assume moreover that the metrics $\phi_t$ are uniformly bounded in the sense that for some smooth metric on $-K_X$, $\psi$, $$|\phi_t-\psi|\leq C.$$ Then, if the function $\F$ in Theorem 1.1 is affine in a neighbourhood of 0 in $\Omega$, there is a (possibly time dependent) holomorphic vector field $V$ on $X$ with flow $F_t$ such that $$F_t^*(\ddbar\phi_t) =\ddbar\phi_0.$$ The same conclusion can also be drawn without the assumption that $\phi_t$ be independent of the imaginary part of $t$, and then assuming that $\F$ be harmonic instead of affine, but the proof then seems to require more regularity assumptions. For simplicity we therefore treat only the case when $\phi_t$ is independent of $t$, which anyway seems to be the most useful in applications. This theorem is useful in view of the discussion above on the possible lack of regularity of geodesics.As we shall see in section 2 the existence of a generalized geodesic satisfying the boundedness assumption in Theorem 1.2 is almost trivial. One motivation for the theorem is to give a new proof of the Bando-Mabuchi uniqueness theorem for Kähler-Einstein metrics on Fano manifolds. Recall that a metric $\omega_\psi=i\ddbar\psi$, with $\psi$ a metric on $-K_X$ solves the Kähler-Einstein equation if $$\text{Ric}(\omega_\psi)=\omega_\psi$$ or equivalently if for some positive $a$ $$e^{-\psi}=a(i\ddbar\psi)^n,$$ where we use the convention above to interpret $e^{-\psi}$ as a volume form. By a celebrated theorem of Bando and Mabuchi any two Kähler-Einstein metrics $i\ddbar\phi_0$ and $i\ddbar\phi_1$ are related via the time-one flow of a holomorphic vector field. In section 4 we shall give a proof of this fact by joining $\phi_0$ and $\phi_1$ by a geodesic and applying Theorem 1.2. It should be noted that a similar proof of the Bando-Mabuchi theorem has already been given by Berman, [@Berman]. The difference between his proof and ours is that he uses the weaker version of Theorem 1.2 from [@2Berndtsson]. He then needs to prove that the geodesic joining two Kähler-Einstein metrics is in fact smooth, which we do not need, and we also avoid the use of Chen’s theorem since we only need the existence of a bounded geodesic. A minimal assumption in Theorem 1.2 would be that $e^{-\phi_t}$ be integrable, instead of bounded. I do not know if the theorem holds in this generality, but in section 6 we will consider an intermediate situation where $\phi_t=\chi_t +\psi$, with $\chi_t$ bounded and $\psi$ such that $e^{-\psi}$ is integrable, so that the singularities don’t change with $t$. Under various positivity assumptions we are then able to proof a version of Theorem 1.2. Apart from making the problem technically simpler, this extra assumption that $\phi_t=\chi_t +\psi$ also introduces an additional structure, which seems interesting in itself. 
In section 6 we use it to give a generalization of the Bando-Mabuchi theorem to certain ’twisted’ Kähler-Einstein equations, $$\text{Ric}(\omega)=\omega +\theta$$ considered in [@Szekelyhidi],[@2Berman] and [@2Donaldson]. Here $\theta$ is a fixed positive $(1,1)$-current, that may e g be the current of integration on a klt divisor. The solutions to these equations are then not necessarily smooth and it seems to be hard to prove uniqueness using the original methods of Bando and Mabuchi. Another paper that is very much related to this one is [@Berman; @et; @al], by Berman -Boucksom-Guedj-Zeriahi. There is introduced a variational approach to Monge-Ampere equations and Kähler-Einstein equations in a nonsmooth setting and a uniqueness theorem a la Bando-Mabuchi is proved, using continuous geodesics as we do here, but in a somewhat less general situation. I would like to thank all of these authors for helpful discussions, and Robert Berman in particular for proposing the generalized Bando-Mabuchi theorem in section 6. Preliminaries ============= Notation -------- Let $L$ be a line bundle over a complex manifold $X$, and let $U_j$ be a covering of the manifold by open sets over which $L$ is locally trivial. A section of $L$ is then represented by a collection of complex valued functions $s_j$ on $U_j$ that are related by the transition functions of the bundle, $s_j=g_{j k} s_k$. A metric on $L$ is given by a collection of realvalued functions $\phi^j$ on $U_j$, related so that $$|s_j|^2 e^{-\phi^j}=:|s|^2 e^{-\phi}=:|s|^2_\phi$$ is globally well defined. We will write $\phi$ for the collection $\phi^j$, and refer to $\phi$ as the metric on $L$, although it might be more appropriate to call $e^{-\phi}$ the metric. (Some authors call $\phi$ the ’weight’ of the metric.) A metric $\phi$ on $L$ induces an $L^2$-metric on the adjoint bundle $K_X+L$. A section $u$ of $K_X+L$ can be written locally as $$u= dz\otimes s$$ where $dz=dz_1\wedge ...dz_n$ for some choice of local coordinates and $s$ is a section of $L$. We let $$|u|^2 e^{-\phi}:= c_n dz\wedge d\bar z |s|_\phi^2;$$ it is a volume form on $X$. The $L^2$-norm of $u$ is $$\|u\|^2:=\int_X |u|^2 e^{-\phi}.$$ Note that the $L^2$ norm depends only on the metric $\phi$ on $L$ and does not involve any choice of metric on the manifold $X$. In this paper we will be mainly interested in the case when $L=-K_X$ is the anticanonical bundle. Then the adjoint bundle $K_x+L$ is trivial and is canonically isomorphic to $X\times \C$ if we have chosen an isomorphism between $L$ and $-K_X$. This bundle then has a canonical trivialising section, $u$ identically equal to 1. With the notation above $$\|1\|^2 =\int_X |1|^2 e^{-\phi}=\int_X e^{-\phi}.$$ This means explicitly that we interpret the volume form $e^{-\phi}$ as $$dz^j\wedge d\bar z^j e^{-\phi_j}$$ where $e^{-\phi^j}= |(dz^j)^{-1}|_\phi^2$ is the local representative of the metric for the frame determined by the local coordinates. Notice that this is consistent with the conventions indicated in the introduction. Bounded geodesics ----------------- We now consider curves $t\rightarrow \phi_t$ of metrics on the line bundle $L$. Here $t$ is a complex parameter but we shall (almost) only look at curves that do not depend on the imaginary part of $t$. We say that $\phi_t$ is a subgeodesic if $\phi_t$ is upper semicontinuous and $i\ddbar_{t, X}\phi_t\geq 0$, so that local representatives are plurisubharmonic with respect to $t$ and $X$ jointly. 
We say that $\phi_t$ is bounded if $$|\phi_t-\psi|\leq C$$ for some constant $C$ and some (hence any) smooth metric on $L$. For bounded geodesics the complex Monge-Ampère operator is well defined and we say that $\phi_t$ is a (generalized) geodesic if $$(i\ddbar_{t, X} \phi_t)^{n+1}=0.$$ Let $\phi_0$ and $\phi_1$ be two bounded metrics on $L$ over $X$ satisfying $i\ddbar\phi_{0,1}\geq 0$. We claim that there is a bounded geodesic $\phi_t$, defined for the real part of $t$ between 0 and 1, such that $$\lim_{t\rightarrow 0,1} \phi_t =\phi_{0,1}$$ uniformly on $X$. The curve $\phi_t$ is defined by $$\phi_t=\sup\{\psi_t\}, \eqno(2.1)$$ where the supremum is taken over all plurisubharmonic $\psi_t$ with $$\lim_{t\rightarrow 0,1} \psi_t \leq \phi_{0,1}.$$ To prove that $\phi_t$ defined in this way has the desired properties we first construct a barrier $$\chi_t =\max (\phi_0 -A\Re t,\ \phi_1 +A(\Re t -1)).$$ Clearly $\chi$ is plurisubharmonic and has the right boundary values if $A$ is sufficiently large (for instance $A\geq \sup_X|\phi_0-\phi_1|$, which is finite since the metrics are bounded, will do: at $\Re t=0$ the first term dominates and equals $\phi_0$, and at $\Re t=1$ the second term dominates and equals $\phi_1$). Therefore the supremum in (2.1) is the same if we restrict it to $\psi$ that are larger than $\chi$. For such $\psi$ the one-sided derivative at 0 is larger than $-A$ and the one-sided derivative at 1 is smaller than $A$. Since we may moreover assume that $\psi$ is independent of the imaginary part of $t$, $\psi$ is convex in $t$, so the derivative with respect to $t$ increases, and must therefore lie between $-A$ and $A$. Hence $\phi_t$ satisfies $$\phi_0 -A\Re t\leq \phi_t\leq \phi_0 +A\Re t$$ and a similar estimate at 1. Thus $\phi_t$ attains the right boundary values uniformly. In addition, the upper semicontinuous regularization $\phi_t^*$ of $\phi_t$ must satisfy the same estimate. Since $\phi_t^*$ is plurisubharmonic it belongs to the class of competitors for $\phi_t$ and must therefore coincide with $\phi_t$, so $\phi_t$ is plurisubharmonic. That finally $\phi_t$ solves the homogeneous Monge-Ampère equation follows from the fact that it is maximal with the given boundary values, see e.g. [@Guedj-Zeriahi]. Notice that as a byproduct of the proof we have seen that the geodesic joining two bounded metrics is uniformly Lipschitz in $t$. This fact will be very useful later on.

Approximation of metrics and subgeodesics
-----------------------------------------

In the proofs we will need to approximate our metrics, which are only bounded, and sometimes not even bounded, by smooth metrics. Since we do not want to lose too much of the positivity of curvature this causes some complications, and we collect here some results on approximation of metrics that we will use. An extensive treatment of these matters can be found in [@Demailly]. Here we will need only the simplest part of this theory and we also refer to [@Blocki-Kolodziej] for an elementary proof of the result we need. In general a singular metric $\phi$ with $i\ddbar\phi\geq 0$ cannot be approximated by a decreasing sequence of smooth metrics with nonnegative curvature. A basic fact is however (see [@Blocki-Kolodziej], Theorem 1.1) that this is possible if the line bundle in question is positive, so that it has some smooth metric of [*strictly*]{} positive curvature. This is all we need in the main case of a Fano manifold. The approximation result for positive bundles also holds for $\mathbb Q$-line bundles; just multiply by some sufficiently divisible integer, and even for $\R$-bundles. In this paper we will also be interested in line bundles that are only semipositive.
If $X$ is projective, as we assume, the basic fact above implies that we then can approximate any singular metric with nonnegative curvature with a decreasing sequence of smooth $\phi^\nu$s, satisfying $$i\ddbar\phi^\nu\geq -\epsilon_\nu\omega$$ where $\omega$ is some Kähler form and $\epsilon_\nu$ tends to zero. To see this we basically only need to apply the result above for the positive case to the $\R$-bundle $L+\epsilon F$ where $F$ is positive. If $\psi$ is a smooth metric with positive curvature on $F$, we approximate $\phi+\epsilon \psi$ by smooth metrics $\chi_\nu$ with positive curvature. Then $\phi^\nu=\chi_\nu-\epsilon \psi$ satisfies $$i\ddbar \phi^\nu\geq -\epsilon \omega$$ for $\omega=i\ddbar\psi$. Then let $\epsilon$ go to zero and choose a diagonal sequence. This sequence may not be decreasing, but an easy argument using Dini’s lemma shows that we may get a decreasing sequence this way. At one point we also wish to treat a bundle that is not even semipositive, but only effective. It then has a global holomorphic section, $s$, and the singular metric we are interested in is $\log|s|^2$, or some positive multiple of it. We then let $\psi$ be any smooth metric on the bundle and approximate by $$\phi^\nu:=\log( |s|^2 + \nu^{-1}e^{\psi}).$$ Explicit computation shows that $i\ddbar\phi^\nu\geq -C\omega$ where $C$ is some fixed constant. Moreover, outside any fixed neighbourhood of the zerodivisor of $s$, $$i\ddbar\phi^\nu\geq -\epsilon_\nu\omega$$ with $\epsilon_\nu$ tending to zero. This weak approximation will be enough for our purposes. The smooth case =============== In this section we let $L$ be a holomorphic line bundle over $X$ and $\Omega$ be a smoothly bounded open set in $\C$. We consider the trivial vector bundle $E$ over $\Omega$ with fiber $H^0(X, K_X+L)$. Let now $\phi_t$ be a smooth curve of metrics on $L$ of semipositive curvature. For any fixed $t$, $\phi_t$ induces an $L^2$-norm on $H^0(X, K_X+L)$ as described in the previous section $$\|u\|^2_t=\int_X |u|^2 e^{-\phi_t},$$ and as $t$ varies we get an hermitian metric on the vector bundle $E$. We now recall a formula for the curvature of $E$ with this metric from [@1Berndtsson],[@3Berndtsson]. Let for each $t$ in $\Omega$ $$\dt= e^{\phi_t}\partial e^{-\phi_t}=\partial-\partial \phi_t\wedge.$$ If $\alpha$ is an $(n,1)$-form on $X$ with values in $L$, and we write $\alpha=v\wedge \omega$, where $\omega$ is our fixed Kähler form on $X$, then (modulo a sign) $$\dt v=\dbar^*_{\phi_t}\alpha,$$ the adjoint of the $\dbar$-operator for the metric $\phi_t$. In particular this means that the operator $\dt$ is well defined on $L$-valued forms. This also means that for any $t$ we can solve the equation $$\dt v=\eta,$$ if $\eta$ is an $L$-valued $(n,0)$-form that is orthogonal to the space of holomorphic $L$-valued forms (see remark 2 below). Moreover by choosing $\alpha=v\wedge\omega$ orthogonal to the kernel of $\dbar^*_{\phi_t}$ we can assume that $\alpha$ is $\dbar$-closed, so that $\dbar v\wedge \omega=0$. Hence, with this choice, $\dbar v$ is a primitive form. If, as we assume from now, the cohomology $H^{n, 1}(X,L)=0$, the $\dbar$-operator is surjective on $\dbar$-closed forms, so the adjoint is injective, and $v$ is uniquely determined by $\eta$. The reason we can always solve this equation for $t$ and $\phi$ fixed is that the $\dbar$-operator from $L$-valued $(n,0)$-forms to $(n,1)$-forms on $X$ has closed range. 
This implies that the adjoint operator $\dbar^*_{\phi_t}$ also has closed range and that its range is equal to the orthogonal complement of the kernel of $\dbar$. Moreover, that $\dbar$ has closed range means precisely that for any $(n,1)$-form $\alpha$ in the range of $\dbar$ we can solve the equation $\dbar f=\alpha$ with an estimate $$\|f\|\leq C\|\alpha\|,$$ and it follows from functional analysis that we then can solve $\dt v=\eta$ with the bound $$\| v\|\leq C\|\eta\|$$ where $C$ is [*the same*]{} constant. In case all metrics $\phi_t$ are of equivalent size, so that $|\phi_t-\phi_{t_0}|\leq A$, it follows that we can solve $\dt v=\eta$ with an $L^2$-estimate independent of $t$. Let $u_t$ be a holomorphic section of the bundle $E$ and let $$\dof:=\frac{\partial\phi}{\partial t}.$$ For each $t$ we now solve $$\dt v_t=\pi_\perp(\dof\, u_t), \eqno(3.1)$$ where $\pi_\perp$ is the orthogonal projection on the orthogonal complement of the space of holomorphic forms, with respect to the $L^2$-norm $\|\cdot\|_t^2$. With this choice of $v_t$ we obtain the following formula for the curvature of $E$, see [@1Berndtsson], [@3Berndtsson]. In the formula, $p$ stands for the natural projection map from $X\times\Omega$ to $\Omega$ and $p_*(T)$ is the pushforward of a differential form or current. When $T$ is a smooth form this is the fiberwise integral of $T$. Let $\Theta$ be the curvature form on $E$ and let $u_t$ be a holomorphic section of $E$. For each $t$ in $\Omega$ let $v_t$ solve (3.1) and be such that $\dbar_X v_t\wedge \omega=0$. Put $$\hat u=u_t-dt\wedge v_t.$$ Then $$\langle\Theta u_t,u_t\rangle_t= p_*(c_n i\ddbar\phi \wedge \hat u\wedge\overline{\hat u}\, e^{-\phi_t}) +\int_X \|\dbar v_t\|^2 e^{-\phi_t}\,idt\wedge d\bar t. \eqno(3.2)$$ This is not quite the same formula as the one used in [@2Berndtsson], which can be seen as corresponding to a different choice of $v_t$. If the curvature acting on $u_t$ vanishes it follows that both terms in the right hand side of (3.2) vanish. In particular, $v_t$ must be a holomorphic form. To continue from there we first assume (like in [@2Berndtsson]) that $i\ddbar\phi_t>0$ on $X$. Taking $\dbar$ of formula (3.1) we get $$\dbar\dt v_t=\dbar\dof\wedge u_t.$$ Using $$\dbar\dt +\dt\dbar=\ddbar\phi_t$$ we get, if $v_t$ is holomorphic, that $$\ddbar\phi_t\wedge v_t=\dbar\dof\wedge u_t.$$ The complex gradient of the function $i\dof$ with respect to the Kähler metric $i\ddbar\phi_t$ is the $(1,0)$-vector field defined by $$V_t\rfloor i\ddbar\phi_t=i\dbar\dof.$$ Since $\ddbar\phi_t\wedge u_t=0$ for bidegree reasons we get $$\ddbar\phi_t\wedge v_t=\dbar\dof\wedge u=(V_t\rfloor\ddbar\phi_t)\wedge u= -\ddbar\phi_t\wedge(V_t\rfloor u).$$ If $i\ddbar\phi_t>0$ we find that $$-v_t=V_t\rfloor u.$$ If $v_t$ is holomorphic it follows that $V_t$ is a holomorphic vector field, outside of the zero divisor of $u_t$, and therefore everywhere, since the complex gradient is smooth under our hypotheses. If we assume that $X$ carries no nontrivial holomorphic vector fields, $V_t$ and hence $v_t$ must vanish, so $\dof$ is holomorphic, hence constant. Hence $$\ddbar\dof=0,$$ so $\ddbar\phi_t$ is independent of $t$. In general, if there are nontrivial holomorphic vector fields, we get that the Lie derivative of $\ddbar\phi_t$ equals $$L_{V_t}\ddbar\phi_t=\partial V_t\rfloor\ddbar\phi_t=\ddbar\dof=\frac{\partial}{\partial t}\ddbar\phi_t .$$ Together with an additional argument showing that $V_t$ must be holomorphic with respect to $t$ as well (see below) this gives that $\ddbar\phi_t$ moves with the flow of the holomorphic vector field, which is what we want to prove.
For this it is essential that the metrics $\phi_t$ be strictly positive on $X$ for $t$ fixed, but we shall now see that there is a way to get around this difficulty, at least in some special cases. The main case that we will consider is when the canonical bundle of $X$ is seminegative, so we can take $L=-K_X$. Then $K_X+L$ is the trivial bundle and we fix a nonvanishing trivializing section $u=1$. Then the constant section $t\rightarrow u_t=u$ is a trivializing section of the (line) bundle $E$. We write $$\F(t)=-\log \|u\|_t^2=-\log\int_X |u|^2 e^{-\phi_t}=-\log\int_X e^{-\phi_t}.$$ Still assuming that $\phi$ is smooth, but perhaps not strictly positive on $X$, we can apply the curvature formula in Theorem 3.1 with $u_t=u$ and get $$\|u_t\|^2_t i\ddbar_t\F=\langle\Theta u_t,u_t\rangle_t= p_*(c_n i\ddbar\phi \wedge \hat u\wedge\overline{\hat u} e^{-\phi_t}) +\int_X \|\dbar v_t\|^2 e^{-\phi_t}idt\wedge d\bar t.$$ If $\F$ is harmonic, the curvature vanishes and it follows that $v_t$ is holomorphic on $X$ for any $t$ fixed. Since $u$ never vanishes we can [*define*]{} a holomorphic vector field $V_t$ by $$-v_t=V_t\rfloor u.$$ Almost as before we get $$\dbar\dof\wedge u=\ddbar\phi_t\wedge v_t=-\ddbar\phi_t\wedge (V_t\rfloor u) =(V_t\rfloor\ddbar\phi_t)\wedge u,$$ which implies that $$V_t\rfloor i\ddbar\phi_t=i\dbar\dof$$ if $u$ never vanishes. This is the important point; we have been able to trade the nonvanishing of $i\ddbar\phi_t$ for the nonvanishing of $u$. This is where we use that the line bundle we are dealing with is $L=-K_X$. We also get the formula for the Lie derivative of $\ddbar\phi_t$ along $V_t$, $$L_{V_t}\ddbar\phi_t=\partial V_t\rfloor\ddbar\phi_t=\ddbar\dof=\frac{\partial}{\partial t}\ddbar\phi_t .$$ To be able to conclude from here we also need to prove that $V_t$ depends holomorphically on $t$. For this we will use the first term in the curvature formula, which also has to vanish. It follows that $$i\ddbar\phi \wedge \hat u\wedge\overline{\hat u}$$ has to vanish identically. Since this is a semidefinite form in $\hat u$ it follows that $$i\ddbar\phi\wedge \hat u=0.$$ Considering the part of this expression that contains $dt\wedge d\bar t$ we see that $$\mu:=\frac{\partial^2\phi}{\partial t\partial \bar t}-\partial_X\Big(\frac{\partial\phi}{\partial\bar t}\Big)(V_t)=0. \eqno(3.6)$$ If $\ddbar_X\phi_t>0$, $\mu$ is easily seen to be equal to the function $c(\phi)$ defined in the introduction, so the vanishing of $\mu$ is then equivalent to the homogeneous Monge-Ampère equation. In [@2Berndtsson] we showed that $\partial V_t/\partial \bar t=0$ by realizing this vector field as the complex gradient of the function $c(\phi)$, which has to vanish if the curvature is zero. Here, where we no longer assume strict positivity of $\phi_t$ along $X$, we have the same problems as before to define the complex gradient. Therefore we follow the same route as before, and start by studying $\partial v_t/\partial \bar t$ instead. Recall that $$\dt v_t=\dof\wedge u +h_t$$ where $h_t$ is holomorphic on $X$ for each $t$ fixed. As we have seen in the beginning of this section, $v_t$ is uniquely determined, and it is not hard to see that it depends smoothly on $t$ if $\phi$ is smooth. Differentiating with respect to $\bar t$ we obtain $$\dt\frac{\partial v_t}{\partial \bar t}=\left [\frac{\partial^2\phi}{\partial t\partial \bar t}-\partial_X(\frac{\partial\phi}{\partial\bar t})(V_t)\right ]\wedge u +\frac{\partial h_t}{\partial\bar t}.$$ Since the left hand side is automatically orthogonal to holomorphic forms, we get that $$\dt\frac{\partial v_t}{\partial \bar t}=\pi_\perp(\mu)=0,$$ since $\mu=0$ by (3.6).
Again, this means that $\partial v_t/\partial \bar t =0$ since $\partial v_t/\partial \bar t\wedge\omega$ is still $\dbar_X$-closed, and the cohomological assumption implies that $\dt$ is injective on closed forms. All in all, $v_t$ is holomorphic in $t$, so $V_t$ is holomorphic on $X\times\Omega$. We can now conclude the proof in the same way as in [@2Berndtsson]. Define a holomorphic vector field $\V$ on $X\times\Omega$ by $$\V:=V_t -\frac{\partial}{\partial t} .$$ Let $\eta$ be the form $\ddbar_X\phi_t$ on $\X$. Then formula 2.4 says that the Lie derivative $$L_\V\eta=0$$ on $X$. It follows that $\eta$ is invariant under the flow of $\V$ so $\ddbar\phi_t$ moves by the flow of a holomorphic family of automorphisms of $X$. The nonsmooth case =================== In the general case we can write our metric $\phi$ as the uniform limit of a sequence of smooth metrics, $\phi^\nu$, with $i\ddbar\phi^\nu\geq -\epsilon_\nu\omega$, where $\epsilon_\nu$ tends to zero, see section 2.3. Note also that in case we assume that $-K_X>0$ we can even approximate with metrics of strictly positive curvature. The presence of the negative term $-\epsilon_\nu \omega$ causes some minor notational problems in the estimates below. We will therefore carry out the proof under the assumptions that $i\ddbar\phi^\nu\geq 0$ and leave the necessary modifications to the reader. Let $\F_\nu$ be defined the same way as $\F$, but using the weights $\phi^\nu$ instead. Then $$i\ddbar\F_\nu$$ goes to zero weakly on $\Omega$. We get a sequence of $(n-1,0)$ forms $v^\nu_t$, solving $$\dt v^\nu_t=\pi_\perp(\dot{\phi_t^\nu} u)$$ for $\phi=\phi_\nu$. By Remark 1, we have an $L^2$-estimate for $v^\nu_t$ in terms of the $L^2$ norm of $\dofnu$, with the constant in the estimate independent of $t$ and $\nu$. Since $\dot{\phi_t^\nu}$ is uniformly bounded by section 2.2, it follows that we get a uniform bound for the $L^2$-norms of $v_t^\nu$ over all of $X\times\Omega$. Therefore we can select a subsequence of $v_t^\nu$ that converges weakly to a form $v$ in $L^2$. Since $i\ddbar\F_\nu$ tends to zero weakly, Theorem 2.1 shows that the $L^2$-norm of $\dbar_X v^\nu$ over $X\times K$ goes to zero for any compact $K$ in $\Omega$, so $\dbar_X v=0$. Moreover $$\dt_X v= \pi_\perp (\dot{\phi}u)$$ in the (weak ) sense that $$\int_{X\times\Omega}dt\wedge d\bar t\wedge v\wedge\overline{\dbar W} e^{-\phi}= \int_{X\times\Omega}dt\wedge d\bar t\wedge \pi_\perp(\dot{\phi}u)\wedge\overline{ W }e^{-\phi}$$ for any smooth form $W$ of the appropriate degree. As before this ends the argument if there are no nontrivial holomorphic vector fields on $X$. Then $v$ must be zero, so $\dof$ is holomorphic, hence constant. In the general case, we finish by showing that $v_t$ is holomorphic in $t$. The difficulty is that we don’t know any regularity of $v_t$ except that it lies in $L^2$, so we need to formulate holomorphicity weakly. We will use two elementary lemmas that we state without proof. The first one allows us get good convergence properties for geodesics, when the metrics only depend on the real part of $t$ and therfore are convex with respect to $t$. Let $f_\nu$ be a sequence of smooth convex functions on an interval in $\R$ that converge uniformly to the convex function $f$. Let $a$ be a point in the interval such that $f'(a)$ exists. Then $f_\nu'(a)$ converge to $f'(a)$. Since a convex function is differentiable almost everywhere it follows that $f'_\nu$ converges to $f'$ almost everywhere, with dominated convergence on any compact subinterval. 
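Since the first lemma above is stated without proof, here is the standard convexity argument, supplied only for the reader's convenience (it is not in the text). For $h>0$ small enough that $a\pm h$ lie in the interval, convexity gives
$$\frac{f_\nu(a)-f_\nu(a-h)}{h}\ \leq\ f_\nu'(a)\ \leq\ \frac{f_\nu(a+h)-f_\nu(a)}{h}.$$
Letting $\nu\to\infty$ and using the uniform convergence $f_\nu\to f$,
$$\frac{f(a)-f(a-h)}{h}\ \leq\ \liminf_\nu f_\nu'(a)\ \leq\ \limsup_\nu f_\nu'(a)\ \leq\ \frac{f(a+h)-f(a)}{h},$$
and letting $h\to 0$, both outer quotients tend to $f'(a)$ since $f$ is differentiable at $a$. Hence $f_\nu'(a)\to f'(a)$.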
Another technical problem that arises is that we are dealing with certain orthogonal projections on the manifold $X$, where the weight depends on $t$. The next lemma gives us control of how these projections change. Let $\alpha_t$ be forms on $X$ with coefficients depending on $t$ in $\Omega$. Assume that $\alpha_t$ is Lipschitz with respect to $t$ as a map from $\Omega$ to $L^2(X)$. Let $\pi^t$ be the orthogonal projection on $\dbar$-closed forms with respect to the metric $\phi_t$ and the fixed Kähler metric $\omega$. Then $\pi^t(\alpha_t)$ is also Lipschitz, with a Lipschitz constant depending only on that of $\alpha$ and the Lipschitz constant of $\phi_t$ with respect to $t$. Note that in our case, when $\phi$ is independent of the imaginary part of $t$, we have control of the Lipscitz constant with respect to $t$ of $\phi_t$ , and also by the first lemma uniform control of the Lipschitz constant of $\phi^\nu_t$, since the derivatives are increasing. It follows from the curvature formula that $$a_\nu:=\int_{X\times\Omega'}i\ddbar\phi^\nu\wedge\hat u\wedge\overline{\hat u}e^{-\phi^\nu}$$ goes to zero if $\Omega'$ is a relatively compact subdomain of $\Omega$. Shrinking $\Omega$ slightly we assume that this actually holds with $\Omega'=\Omega$. By the Cauchy inequality $$\int_{X\times\Omega}i\ddbar\phi^\nu\wedge\hat u\wedge\overline{W}e^{-\phi^\nu}\leq a_\nu\int_{X\times\Omega}i\ddbar\phi^\nu\wedge W\wedge\bar W e^{-\phi^\nu}$$ if $W$ is any $(n,0)$-form. Choose $W$ to contain no differential $dt$, so that it is an $(n,0)$-form on $X$ with coefficients depending on $t$. Then $$\int_{X\times\Omega}i\ddbar\phi^\nu\wedge W\wedge\bar W e^{-\phi^\nu}= \int_{X\times\Omega}i\ddbar_t\phi^\nu\wedge W\wedge\bar W e^{-\phi^\nu}$$ We now assume that $W$ has compact support. The one variable Hörmander inequality with respect to $t$ then shows that the last integral is dominated by \_[X]{}|\^[\^]{}\_t W|\^2e\^[-\^]{}. From now we assume that $W$ is Lipschitz with respect to $t$ as a map from $\Omega$ into $L^2(X)$. Then (4.1) is uniformly bounded, so $$\int_{X\times\Omega}idt\wedge d\bar t\wedge\mu^\nu\wedge\overline{W}e^{-\phi^\nu}$$ goes to zero, where $\mu^\nu$ is defined as in (3.6) with $\phi$ replaced by $\phi^\nu$. By Lemma 4.2 $$\int_{X\times\Omega}idt\wedge d\bar t\wedge\mu^\nu\wedge\overline{\pi_\perp W}e^{-\phi^\nu}$$ also goes to zero. Therefore $$\int_{X\times\Omega}idt\wedge d\bar t\wedge\pi_\perp(\mu^\nu)\wedge\overline{W}e^{-\phi^\nu}.$$ goes to zero. Now recall that $\pi_\perp(\mu^\nu)=\dt (\partial v_t^\nu/\partial\bar t)$ and integrate by parts. This gives that $$\int_{X\times\Omega}idt\wedge d\bar t\wedge \frac{\partial v_t^\nu}{\partial \bar t}\wedge\overline{\dbar_XW}e^{-\phi^\nu}$$ also vanishes as $\nu$ tends to infinity. Next we let $\alpha$ be a form of bidegree $(n,1)$ on $X\times\Omega$ that does not contain any differential $dt$. We assume it is Lipschitz with respect to $t$ and decompose it into one part, $\dbar_X W$, which is $\dbar_X$-exact and one which is orthogonal to $\dbar_X$-exact forms. This amounts of course to making this orthogonal decomposition for each $t$ separately, and by Lemma 4.2 each term in the decomposition is still Lipschitz in $t$, uniformly in $\nu$. Since $v^\nu_t\wedge\omega$ is $\dbar_X$-closed by construction, this holds also for $\partial v^\nu/\partial \bar t$. 
By our cohomological assumption, it is also $\dbar$-exact, and we get that $$\int_{X\times\Omega}idt\wedge d\bar t\wedge \frac{\partial v_t^\nu}{\partial \bar t}\wedge\overline\alpha e^{-\phi^\nu}= \int_{X\times\Omega}idt\wedge d\bar t\wedge \frac{\partial v_t^\nu}{\partial \bar t}\wedge\overline{\dbar_XW}e^{-\phi^\nu}.$$ Hence $$\int_{X\times\Omega}dt\wedge v_t\wedge\overline{\partial^{\phi^\nu}_t\alpha}e^{-\phi^\nu}$$ goes to zero. By Lemma 4.1 we may pass to the limit here and finally get that \_[X]{} dtv\_te\^[-]{} =0, under the sole assumption that $\alpha$ is of compact support, and Lipschitz in $t$. This is almost the distributional formulation of $\dbar_t v=0$, except that $\phi$ is not smooth. But, replacing $\alpha$ by $e^{\phi-\psi}\alpha$, where $\psi$ is another metric on $L$, we see that if (4.2) holds for some $\phi$, Lipschitz in $t$, it holds for any such metric. Therefore we can replace $\phi$ in (4.2) by some other smooth metric. It follows that $v_t$ is holomorphic in $t$ and therefore, since we already know it is holomorphic on $X$, holomorphic on $X\times\Omega$. This completes the proof. The Bando-Mabuchi theorem. =========================== For $\phi_0$ and $\phi_1$, two metrics on a line bundle $L$ over $X$, we consider their relative energy $$\E(\phi_0,\phi_1).$$ This is well defined if $\phi_j$ are bounded with $i\ddbar\phi_j\geq 0$. It has the fundamental properties that if $\phi_t$ is smooth in $t$ for $t$ in $\Omega$, then $$\frac{\partial}{\partial t}\E(\phi_t,\phi_1)=\int_X \dof (i\ddbar\phi_t)^n/\text{Vol}(L)$$ and $$i\ddbar_t\E(\phi_t,\phi_1)=p_*((i\ddbar_{X,t}\phi)^{n+1})/\text{Vol}(L)=idt\wedge d\bar t\int_X c(\phi_t)(i\ddbar_X \phi_t)^n/\text{Vol}(L),$$ where $p$ is the projection map from $X\times\Omega$ to $\Omega$. Here Vol($L$) is the normalizing factor $$\text{Vol}(L)=\int_X(i\ddbar_X \phi)^n,$$ chosen so that the derivative of $\E$ becomes 1 if $\phi_t=\phi+t$. If the family is only bounded, these formulas hold in the sense of distributions. In particular, if $\phi$ solves the homogenuous Monge-Ampère equation, so that $(i\ddbar_{X,t}\phi)^{n+1}=0$ or equivalently $c(\phi)=0$, then $\E(\phi_t,\phi_1)$ is harmonic in $t$. Hence this function is linear along geodesics. Let now $$\G(t)=\F(t)-\E(\phi_t,\psi)$$ where $\psi$ is arbitrary. Then $\phi_0$ solves the Kähler-Einstein equation if and only if $\G'(0)=0$ for any smooth curve $\phi_t$. If $\phi_0$ and $\phi_1$ are two Kähler-Einstein metrics we connect them by a geodesic $\phi_t$ (a continuous geodesic will be enough). Now $\phi_t$ depends only on the real part of $t$ so $\G$ is convex. We claim that since both end points are Kähler-Einstein metrics, 0 and 1 are stationary points for $\G$. This would be immediate if the geodesic were smooth, but it is not hard to see that it also holds if the geodesic is only bounded, with boundary behaviour as described in section 2.2. The function $\F$ is convex, hence has onesided derivatives at the endpoints, and using the convexity of $\phi$ with respect to $t$ one sees that they equal $$\int \dof e^{-\phi}/\int e^{-\phi}$$ (where $\dof$ now stands for the onesided derivatives). The function $\E(\phi_t,\psi)$ is linear so its distributional derivative $$\int_X \dof (i\ddbar\phi_t)^n/\text{Vol}(L)$$ is constant and simple convergence theorems for the Monge-Ampère operator show that it is equal to its values at the endpoints. Hence both end points are critical points for $\G$ and the convexity implies that $\G$ is constant so $\F$ is linear. 
By Theorem 1.2 $\ddbar\phi_t$ are related via a holomorphic family of automorphisms. In particular $\ddbar\phi_0$ and $\ddbar\phi_1$ are related via an automorphism which is homotopic to the identity, which is the content of the Bando-Mabuchi theorem. A generalized Bando-Mabuchi theorem =================================== A variant of Theorem 1.2 for unbounded metrics ---------------------------------------------- One might ask if Theorem 1.2 is valid under even more general assumptions. A minimal requirement is of course that $\F$ be finite, or in other words that $e^{-\phi_t}$ be integrable. For all we know Theorem 1.2 might be true in this generality, but here we will limit ourselves to the following situation: Let $t\rightarrow \tau_t$ be a curve of singular metrics on $L=-K_X$ that can be written $$\tau_t=\phi_t +\psi$$ where $\psi$ is a metric on an $\R$-line bundle $S$ and $\phi_t$ is a curve of metrics on $-(K_X+S)$ such that: \(i) $ \phi_t$ is bounded and only depends on $\Re t$. \(ii) $e^{-\psi}$ is integrable and $\psi$ does not depend on $t$ and \(iii) $i\ddbar_{t,X}(\tau_t)\geq 0$. Assume that $-K_X\geq 0$ and that $H^{0,1}(X)=0$. Let $\tau_t=\phi_t+\psi$ be a curve of metrics on $-K_X$ satisfying (i)-(iii). Assume that $$\F(t)=-\log\int_X e^{-\tau_t}$$ is affine. Then there is a holomorphic vector field $V$ on $X$ with flow $F_t$ such that $$F_t^*(\ddbar\tau_t)=\ddbar\tau_0.$$ The proof of this theorem is almost the same as the proof of Theorem 1.2. The main thing to be checked is that for $\tau=\tau^\nu$ a sequence of smooth metrics decreasing to $\tau$ we can still solve the equations $$\partial^{\tau_t} v_t=\pi_\perp (\dot{\tau_t} u)$$ with an $L^2$ -estimate independent of $t$ and $\nu$. Let $L$ be a holomorphic line bundle over $X$ with a metric $\xi$ satisfying $i\ddbar\xi\geq 0$. Let $\xi_0$ be a smooth metric on $L$ with $\xi\leq \xi_0$, and assume $$I:=\int_X e^{\xi_0-\xi}<\infty.$$ Then there is a constant $A$, only depending on $I$ and $\xi_0$ ( not on $\xi$!) such that if $f$ is a $\dbar$-exact $L$ valued $(n,1)$-form with $$\int |f|^2 e^{-\xi}\leq 1$$ there is a solution $u$ to $\dbar u=f$ with $$\int_X |u|^2 e^{-\xi}\leq A.$$ (The integrals are understood to be taken with respect to some arbitrary smooth volume form.) The assumptions imply that $$\int |f|^2 e^{-\xi_0}\leq 1.$$ Since $\dbar$ has closed range for $L^2$-norms defined by smooth metrics, we can solve $\dbar u=f$ with $$\int |u|^2 e^{-\xi_0}\leq C$$ for some constant depending only on $X$ and $\xi_0$. Choose a collection of coordinate balls $B_j$ such that $B_j/2$ cover $X$. In each $B_j$ solve $\dbar u_j=f$ with $$\int_{B_j} |u_j|^2 e^{-\xi}\leq C_1\int_{B_j} |f|^2 e^{-\xi}\leq C_1,$$ $C_1$ only depending on the size of the balls. Then $h_j:=u-u_j$ is holomorphic on $B_j$ and $$\int_{B_j}|h_j|^2 e^{-\xi_0}\leq C_2,$$ so $$\sup_{B_j/2}|h_j|^2 e^{-\xi_0}\leq C_3.$$ Hence $$\int_{B_j/2}|h_j|^2 e^{-\xi}\leq C_3 I$$ and therefore $$\int_{B_j/2}|u|^2 e^{-\xi}\leq C_4 I.$$ Summing up we get the lemma. By the discussion in section 2, the assumption that $-K_X\geq 0$ implies that we can write $\tau_t$ as a limit of a decreasing sequence of smooth metrics $\tau_t^\nu$ with $$i\ddbar\tau_t^\nu\geq -\epsilon_\nu \omega$$ where $\epsilon_\nu$ tends to zero. Applying the lemma to $\xi=\tau^\nu_t$ and $\xi_0$ some arbitrary smooth metric we see that we have uniform estimates for solutions of the $\dbar$-equation, independent of $\nu$ and $t$. 
By remark 2, section 3, the same holds for the adjoint operator, which means that we can construct $(n-1,0)$-forms $v_t^\nu$ just as in section 3, and the proof of Theorem 6.1 then continues as in section 3. Yet another version ------------------- We also briefly describe yet another situation where the same conclusion as in Theorem 6.1 can be drawn even though we do not assume that $-K_X\geq 0$. The assumptions are very particular, and it is not at all clear that they are optimal, but they are chosen to fit with the properties of desingularisations of certain singular varieties. We then assume instead that $-K_X$ can be decomposed $$-K_X = -(K_X +S) +S$$ where $S$ is the $\R$-line bundle corresponding to a klt -divisor $\Delta\geq 0$ and we assume $-(K_X+S)\geq 0$. We moreover assume that the underlying variety of $\Delta$ is a union of smooth hypersurfaces with simple normal crossings. We then look at curves $$\tau_t=\phi_t +\psi$$ where $i\ddbar_{t, X}\phi_t\geq 0$ and $\psi$ is a fixed metric on $S$ satisfying $i\ddbar\psi=[\Delta]$. We claim that the conclusion of Theorem 6.1 holds in this situation as well. The difference as compared to our previous case is that we do not assume that $\tau_t$ can be approximated by a decreasing sequence of metrics with almost positive curvature. For the proof we approximate $\phi_t$ by a decreasing sequence of smooth metrics $\phi^\nu$ satisfying $$i\ddbar\phi_t^\nu\geq -\epsilon_\nu \omega.$$ As for $\psi$ we approximate it following the scheme at the end of section 2 by a sequence satisfying $$i\ddbar\psi^\nu\geq -C\omega$$ and $$i\ddbar\psi^\nu\geq -\epsilon_\nu \omega$$ outside of any neighbourhood of $\Delta$. Then let $\tau^\nu_t=\phi^\nu_t+\psi^\nu$. Now consider the curvature formula (3.2) \^u\_t,u\_t\_t= p\_\*(c\_n i\^\_t u e\^[-\^\_t]{}) +\_X v\^\_t\^2 e\^[-\^\_t]{}idtd|t We want to see that the second term in the right hand side tends to zero given that the curvature $\Theta^\nu$ tends to zero, and the problem is that the first term on the right hand side has a negative part. However, $$p_*(c_n i\ddbar\tau^\nu_t \wedge \hat u\wedge\overline{\hat u} e^{-\tau^\nu_t})$$ can for any $t$ be estimated from below by -\_u\^2 -C\_U |v\^\_t|\^2 e\^[-\^]{} where $U$ is any small neighbourhood of $\Delta$ if we choose $\nu$ large. This means, first, that we still have at least a uniform upper estimate on $\dbar v^\nu_t$. This, in turn gives by the technical lemma below that the $L^2$-norm of $v^\nu_t$ over a small neighbourhood of $\Delta$ must be small if the neighbourhood is small. Shrinking the neighbourhood as $\nu$ grows we can then arrange things so that the negative part in the right hand side goes to zero. Therefore the $L^2$-norm of $\dbar v^\nu_t$ goes to zero after all, and after that the proof proceeds as before. We collect this in the next theorem. Assume that $-(K_X+S)\geq 0$ and that $H^{0,1}(X)=0$. Let $\tau_t=\phi_t+\psi$ be a curve of metrics on $-K_X$ where \(i) $\phi_t$ are metrics on $-(K_X+S)$ with $i\ddbar\phi_t\geq 0$, and \(ii) $\psi$ is a metric on $S$ with $i\ddbar\psi=[\Delta]$, where $\Delta$ is a klt divisor with simple normal crossings. Assume that $$\F(t)=-\log\int_X e^{-\tau_t}$$ is affine. Then there is a holomorphic vector field $V$ on $X$ with flow $F_t$ such that $$F_t^*(\ddbar\tau_t)=\ddbar\tau_0.$$ We end this section with the technical lemma used above. 
The term $$\int_U |v^\nu_t|^2 e^{-\tau^\nu}$$ in (6.2) can be made arbitrarily small if $U$ is a sufficiently small neighbourhood of $\Delta$ Covering $\Delta$ with a finite number of polydisks, in which the divisor is a union of coordinate hyperplanes, it is enough to prove the following statement: Let $P$ be the unit polydisk in $\C^n$ and let $v$ be a compactly supported function in $P$. Let $$\psi_\epsilon=\sum \alpha_j\log( |z_j|^2 +\epsilon)$$ where $0\leq \alpha_j<1$. Assume $$\int_P (|v|^2 +|\dbar v|^2)e^{-\psi}\leq 1.$$ Then for $\delta>>\epsilon$ $$\int_{\cup\{|z_j|\leq \delta\}} |v|^2 e^{-\psi_\epsilon} \leq c_\delta$$ where $c_\delta$ tends to zero with $\delta$. To prove this we first estimate the integral over $ |z_1|\leq \delta$ using the one variable Cauchy formula in the first variable $$v(z_1, z')= \pi^{-1}\int v_{\bar\zeta_1}(\zeta_1,z')/(\zeta_1 -z_1)$$ which gives $$|v(z_1, z')|^2 \leq C \int |v_{\bar\zeta_1}(\zeta_1,z')|^2/|\zeta_1 -z_1|.$$ Then multiply by $(|z_1|^2 +\epsilon)^{-\alpha_1}$ and integrate with respect to $z_1$ over $|z_1|\leq \delta$. Use the estimate $$\int_{|z_1|\leq \delta} \frac{1}{(|z_1|^2 +\epsilon)^{\alpha_1}|z_1-\zeta_1|}\leq c_\delta (|\zeta_1|^2+\epsilon)^{-\alpha_1},$$ multiply by $\sum_2^n \alpha_j\log( |z_j|^2 +\epsilon)$ and integrate with respect to $z'$. Repeating the same argument for $z_2, ..z_n$ and summing up we get the required estimate. A generalized Bando-Mabuchi theorem ----------------------------------- As pointed out to me by Robert Berman, Theorems 6.1 and 6.3 lead to versions of the Bando-Mabuchi theorem for ’twisted Kähler-Einstein equations’, [@Szekelyhidi], [@2Berman], and [@2Donaldson]. Let $\theta$ be a positive $(1,1)$-current that can be written $$\theta=i\ddbar\psi$$ with $\psi$ a metric on a $\R$-line bundle $S$. The twisted Kähler-Einstein equation is () =+, for a Kähler metric $\omega$ in the class $c[-(K_X+ S)]$. Writing $\omega=i\ddbar\phi$, where $\phi$ is a metric on the $\R$-line bundle $F:=-(K_X+S)$, this is equivalent to (i)\^n= e\^[-(+)]{}, after adjusting constants. To be able to apply Theorems 6.1 and 6.2 we need to assume that $e^{-\psi}$ is integrable. By this we mean that representatives with respect to a local frame are integrable. When $\theta=[\Delta]$ is the current defined by a divisor, it means that the divisor is klt. Solutions $\phi$ of (6.2) are now critical points of the function $$\G_\psi(\phi):=-\log\int e^{-(\phi+\psi)} -\E(\phi, \chi)$$ where $\chi$ is an arbitary metric on $F$. Here $\psi$ is fixed and we let the variable $\phi$ range over bounded metrics with $i\ddbar\phi\geq 0$. If $\phi_0$ and $\phi_1$ are two critical points, it follows from the discussion in section 2 that we can connect them with a bounded geodesic $\phi_t$. Since $\E$ is affine along the geodesic it follows that $$t\rightarrow -\log\int e^{-(\phi_t+\psi)}$$ is affine along the geodesic and we can apply Theorem 6.1. Assume that $-K_X$ is semipositive and that $H^{0,1}(X)=0$. Assume that $i\ddbar\psi=\theta$, where $e^{-\psi}$ is integrable. Let $\phi_0$ and $\phi_1$ be two bounded solutions of equation (6.3) with $i\ddbar\phi_j\geq 0$. Then there is a holomorphic automorphism, $F$, of $X$, homotopic to the identity, such that $$F^*(\ddbar\phi_1)=\ddbar\phi_0$$ and $$F^*(\theta)=\theta.$$ By Theorem 6.1 there is an $F$ such that $$F^*(\ddbar\phi_1+\ddbar\psi)=\ddbar\phi_0+\ddbar\psi$$ so we just need to see that $F$ preserves $\theta=i\ddbar\psi$. 
But this follows since $\omega^j:=i\ddbar\phi_j$ solves (6.1) and $F^*(\text{Ric}(\omega^1))= \text{Ric}(F^*(\omega^1))$. Thus $$\omega^1 +\theta=\text{Ric}(\omega^1)$$ implies $$\omega^0+F^*(\theta)=\text{Ric}(\omega^0)=\omega^0 +\theta,$$ and we are done. Note that in case $\theta$ is strictly positive we even get absolute uniqueness. This follows from the proof of Theorem 6.1 since both $\phi_t$ and $\phi_t+\psi$ must be geodesics, which forces $\phi_t$ to be linear in $t$ if $i\ddbar\psi>0$. Certainly the assumption on strict positivity can be considerably relaxed here, see the end of the next section for a comment on this. In the same way we get from Theorem 6.3 Assume that $-K_X=-(K_X+S) +S $ where $-(K_X+S)$ is semipositive and $S$ is the $\R$-line bundle corresponding to a klt divisor $\Delta\geq 0$ with simple normal crossings. Assume also that $H^{0,1}(X)=0$. Let $\phi_0$ and $\phi_1$ be two bounded solutions of equation (6.2) with $\theta=[\Delta]$ and with $i\ddbar\phi_j\geq 0$. Then there is a holomorphic automorphism, $F$, of $X$, homotopic to the identity, such that $$F^*(\ddbar\phi_1)=\ddbar\phi_0$$ and $$F^*([\Delta])=[\Delta].$$ A concluding (wonkish) remark on complex gradients =================================================== The curvature formula in Theorem 3.1 is based on a particular choice of the auxiliary $(n-1,0)$ form $v_t$ as the solution of an equation $$\dt v_t=\pi_\perp(\dof u_t).$$ In the case when $\phi_t$ is smooth and $i\ddbar_X\phi_t>0$ one could alternatively choose $\tilde v_t$ as $$\tilde v_t=V_t\rfloor u,$$ where $V_t$ is the complex gradient of $\dof$ defined by $$V_t\rfloor \ddbar_X\phi_t=\dbar\dof.$$ This leads to a different formula for the curvature which is the one used in [@2Berndtsson]: \^E u,u=\_[X\_t]{} c() |u|\^2 e\^[-]{} + (+1)\^[-1]{}v\_t,v\_t, where $\Box$ is the $\dbar$-Laplacian for the metric $i\ddbar_X\phi_t$. The relation between the two formulas is discussed in [@3Berndtsson] in the more general setting of a nontrivial fibration. At any rate, the two choices $v_t$ and $\tilde v_t$ coincide in case the curvature vanishes, as we have seen in section 3. Of course the definition of $\tilde v_t$ makes no sense in our more general setting since we have no metric on $X$ to help us define a complex gradient. Nevertheless, the methods of section 3 can perhaps be seen as giving a way to define a ’complex gradient’ in a nonregular situation. We formulate the basic principle in the next proposition. Let $L$ be a holomorphic line bundle over the compact Kähler manifold $X$, and let $\phi$ be a smooth metric on $L$, not necessarily with positive curvature. Assume $V$ is a holomorphic vector field on $X$ such that $$V\rfloor \ddbar\phi=0.$$ Then $V=0$ provided that $$H^{(0,1)}(X, K_X+L)=0$$ and $$H^{0}(X, K_X+L)\neq 0.$$ We follow the arguments in section 3. Let $u$ be a global holomorphic section of $K_X+L$, and put $$v:=V\rfloor u.$$ Then $v$ is a holomorphic $(n-1,0)$-form and $$\ddbar\phi\wedge v=-(V\rfloor\ddbar\phi)\wedge u=0.$$ Hence $$\dbar\partial^\phi v=-\partial^\phi\dbar v=0.$$ Put $\alpha=v\wedge \omega$ where $\omega$ is the Kähler form. Then $\alpha $ is a smooth, $\dbar$-closed $(n,1)$-form solving $$\dbar\dbar^*_\phi\alpha=0.$$ This means that $\dbar^*_\phi\alpha$ is a holomorphic, hence smooth $(n,0)$-form. Integrating by parts we get $$|\dbar^*_\phi\alpha|^2=0$$ Since we have assumed $H^{(n,1)}=0$, $\alpha=\dbar g$ for some $g$. Then $$\|\alpha\|^2=\langle\alpha,\dbar g\rangle=0$$ so $v$ and hence $V$ are 0. 
This means that holomorphic solutions of $$V\rfloor \ddbar\phi=\dbar\chi$$ are unique, if they exist. Let us finally compare this to our first uniqueness result for twisted Kähler-Einstein equations, Theorem 6.5, and the remark immediately after it (Remark 4). There we noted that in case the twisting term $\theta$ is strictly positive, the automorphism $F$ must be the identity, so that we even get absolute uniqueness, and not just uniqueness up to a holomorphic automorphism. A considerably more general statement follows from Proposition 7.1: For absolute uniqueness it suffices to assume that some multiple of the $\R$-bundle $S$ satisfies the cohomological assumptions in Proposition 7.1, $H^0(X, K_X+mS)\neq 0$ and $H^1(X, K_X+mS)=0$. This is certainly the case (by Kodaira vanishing) if $S>0$, even if $\theta$ itself is not assumed positive. Of course it also holds in many other cases that are not covered by Kodaira's theorem. In this connection, notice also that some kind of regularity of $\phi$ in Proposition 7.1 is necessary, since for completely general metrics the meaning of the operators $\partial^\phi$ and $\dbar^*_\phi$ becomes unclear. This is not just a technical problem. The vector field $z\partial/\partial z$ vanishes at $z=0$ and $z=\infty$ on the Riemann sphere. But the divisor $\{0\}\cup\{\infty\}$ is certainly ample.
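To spell out this concluding example (our reading of the remark; the details are not written out in the text): on $X=\mathbb P^1$ take $L=\mathcal O(2)$, the bundle of the ample divisor $\Delta=\{0\}+\{\infty\}$, and the singular metric $\phi=\log|s|^2$, where $s$ is a holomorphic section vanishing exactly on $\Delta$. Then $K_X+L$ is trivial, so $H^0(X,K_X+L)\neq 0$ and $H^{0,1}(X,K_X+L)=H^{0,1}(X,\mathcal O)=0$, while by the Lelong-Poincaré formula $$i\ddbar\phi=2\pi[\Delta].$$ The holomorphic vector field $V=z\partial/\partial z$ is nontrivial but vanishes on the support of this current, so formally $V\rfloor\ddbar\phi=0$. If Proposition 7.1 applied to such a singular $\phi$ it would force $V=0$, a contradiction; this is why some regularity of $\phi$ is needed.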
--- abstract: 'We introduce and study a new directed sandpile model with threshold dynamics and stochastic toppling rules. We show that particle conservation law and the directed percolation-like local evolution of avalanches lead to the formation of a spatial structure in the steady state, with the density developing a power law tail away from the top. We determine the scaling exponents characterizing the avalanche distributions in terms of the critical exponents of directed percolation in all dimensions.' address: | $^1$Jožef Stefan Institute, P.O. Box 3000, 1001 Ljubljana, Slovenia\ $^2$ Theoretical Physics Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India author: - 'Bosiljka Tadić$^{1,*}$ and Deepak Dhar$^{2,**}$' title: Emergent spatial structures in critical sandpiles --- [2]{} Many extended slowly driven dissipative systems in nature evolve into self-organized critical (SOC) steady states which show long-range spatial and temporal correlations. Since the pioneering work of Bak [*et al.*]{} [@BTW], sandpile models have served as paradigms of SOC systems. A great deal of understanding of SOC has been achieved by numerically studying sandpile models with different evolution rules [@KNWZ]. For a special subclass of these models, i.e., the Abelian Sandpile Models (ASM), several analytical results are available [@ASM]. Recent studies of models with stochastic toppling rules have shown that these models usually belong to a universality class different from deterministic automata; they have robust critical states with respect to changes of a control parameter, and may also exhibit a dynamical phase transition between qualitatively different steady states [@stochastic-spa]. However, the spatial structures in the steady states of these models are much less investigated [@structures]. Due to long-range correlations in the critical states, influence of the boundary can be felt deep inside, and this can give rise to large-scale spatial structures. Indeed, emergent spatial structures are [*sine qua non*]{} for a SOC theory of fractals occurring in nature, e.g., mountain landscapes, river networks, and earthquake fault zones. In this Letter, we propose a new stochastic sandpile model which shows emergent spatial structures in the steady states. The model is a stochastic generalization of the directed ASM [@DR] and contains a probabilistic control parameter $p$. In the case $p=$1 the exponents are known exactly in all dimensions [@DR]. We show that for $p \neq 1$, the model is in a new universality class and can be related to directed percolation (DP) [@DP] problem with a nonuniform concentration profile. A steady state exists only for $p > p^\star$, where $p^\star$ equals the critical threshold for directed site percolation. Above $p^\star $ the system evolves towards a steady state which is arbitrarily close to the generalized DP critical line. We also show that in the critical state of our model a spatial structure results from an interplay of DP-like local evolution rules on the one side, and the dynamic conservation law on the other. We find a power-law density profile, which further enables us to determine the exact expressions for the scaling exponents of avalanches in terms of the DP critical exponents in all dimensions $d$. Our numerical simulations in two dimensions support these conclusions. For concreteness, we consider a square lattice of linear size $L$ with sites $(i,j)$ oriented so that the diagonal direction $(1,1)$ is vertically down. 
A non-negative integer variable, height $h(i,j)$, is attached to each lattice site. Sand grains are added one at a time at a randomly chosen site on the top layer, increasing its height by one. A site becomes [*unstable*]{} if $h(i,j) \geq 2$ and relaxes as follows: With probability $p$, the local height decreases by two, and heights at each of its two downward neighbors increase by one. Otherwise, the heights remain unchanged. In either case, the site is considered as stable at the next time step. Only sites to which at least one particle was added at the preceding time step are checked for toppling. A discrete-time parallel update is applied to all such sites. We apply periodic boundary conditions in the horizontal direction. Two particles leave the system for each toppling occurring at the bottom layer. In the limit $p=1$, the structure of the steady state is known exactly [@DR]: Only configurations with heights $h=0$ and $h=1$ occur, and all such configurations are equally probable. The avalanche clusters are compact. For stochastic toppling, $p \ne 1$, the model has quite a different behavior. It is [*no longer Abelian*]{}, since adding two particles together to a site of height $ h\geq 2$ does not have the same effect as adding them one by one at different time steps. (In the latter case, toppling can occur twice with a finite probability.) With a small probability heights could become arbitrarily large. The avalanche clusters have branches and holes of various sizes. An example is shown in Fig. 1. For small $p$ the average influx of particles per attempted toppling at a site, which is $ \geq 1$, exceeds the average outflux, $2p$, thus there exists a value $p^\star$, such that a steady state is possible only for $p \geq p^\star$. We now argue that $p^\star = p_c$, where $p_c$ is the critical threshold for the directed site percolation problem [@sim]. Suppose that $\cal{C}$ is a stable configuration of the pile, and a particle added at the site $(i,j)$ causes on the average $n_\ell(\cal{C})$ topplings at depth $\ell $ below it. If for any stable configuration $\cal{C' }$, all heights are greater than or equal to corresponding heights in $\cal{C}$ then $ n_\ell(\cal{C}') \geq$ $n_\ell(\cal{C})$. Now, note that for all configurations $\cal{C}$ for which all sites have $h\ge 1$, the distribution of size of avalanches is exactly the same as in the directed [*site-*]{}percolation problem on this lattice. Therefore, for all $p < p_c$, $n_\ell $ decreases exponentially with $\ell $. Hence no topplings occur at large depths, and particles pile up in the upper layers, and thus there is no steady state. Conversely, for $ p > p_c$, the directed percolation avalanche clusters typically form a wedge, and $n_\ell$ increases with $\ell $ for large $\ell $ (see below). Then avalanches in configurations with [*all*]{} heights $h\geq 1$ cause many topplings, and each layer after the avalanche on the average loses particles. Thus if the system ever reaches a state with large density, the number of particles in the system will decrease until at sufficiently many sites heights become low enough so that the propagation of avalanches is affected, and it becomes [*critical*]{}, but not [*super-critical*]{}. Hence the system will have a steady state for all $ p \geq p^{\star}$ = $p_c$. On the square lattice numerical estimate for $p_c$ [@DP] gives $p^\star \approx $ 0.7054853(5). Our numerical simulations support this conclusion. In Fig. 
2 the probability $P(T)$ that an avalanche has durations $\ge T$ is plotted against $T$ for different values of $p$ and $L=$200. (Notice that in directed models $T \equiv \ell $). For large $T$ it varies as $P(T)\sim T^{1-\tau _t}$. Exponential decay of the lower three curves indicates loss of SOC. In the steady states for $p\ge p^\star $ the integrated cluster-size distribution behaves as $D(s) \sim s^{1-\tau _s}$, with the following scaling properties $$D(s,L) = L^{1-\tau _t }{\cal{D}}(sL^{-D_\parallel }) \; , \label{fss}$$ and the scaling relation $(\tau _s-1)\ D_\| = \tau _t-1$ is satisfied. In Fig. 3 the distribution $D(s,L)$ vs. $s$ and its finite-size-scaling plot are shown for $p=p^\star$. We now discuss the structure of the steady state for $p^\star \leq p \mathopen<1$. Let $\rho$ be the probability that a site chosen at random has a nonzero height in the steady state. Then the probability that this site will topple if a single particle is added to it is $P_1 = p\rho$, and the probability that it topples if two particles are added to it is $P_2 = p$. The correlations of heights on the same layer are irrelevant and can be ignored [@correlations]. Therefore, the evolution of an avalanche is the same as in a Domany-Kinzel (DK) cellular automaton model of directed site-bond percolation [@DK]. This implies that, in order for the system to have [*critical*]{} correlations in the bulk, the set of points $(P_1,P_2)$ must lie on the critical line of the DK model. However, as we show below, the dynamic conservation law prevents the avalanche clusters [*in the steady state*]{} from being in the universality class of DP. Particle conservation implies that in the steady state the average number of topplings at each layer equals 1/2. On the other hand, in the case of DP the expected number of growth sites at depth $\ell$ is known to vary as $m \sim \ell^\kappa $ with $\kappa ={{[(d-1)\nu_{\bot}-2\beta}]/{\nu_{\|}}}$, where $\beta $, $\nu _\|$ and $\nu _\bot $ are standard DP critical exponents of the order parameter, and parallel and transverse correlation lengths, respectively. For DP in $d\mathopen<$5 dimensions $\kappa \mathclose> 0$, thus $m$ increases with $\ell $, which is clearly not possible in the steady state. The way these conflicting requirements of particle conservation and locally critical DP-like evolution are satisfied in our model is that the critical steady state develops a spatial structure. The density $\rho$, and hence $P_1$, are not uniform throughout the system, but vary from layer to layer [@profiles]. Let $\rho(\ell)$ be the average density of sites with non-zero height in the $\ell $-th layer. By equating average influx and outflux of particles at a site on $\ell $-th layer, we find that $ \rho(\ell) = [1-(2p-1)f(\ell)]/[2p (1-f(\ell)]$, where $f(\ell )$ is the number of topplings caused by simultaneous addition of two particles at the site. The exactly calculated values of $f(\ell )$ for the first few layers indicate that $\rho (\ell )$ increases with $\ell $. As discussed above, for large $\ell$ the profile reaches the value $\rho^{\star}(p) = P_1^{\star}(p)/p$, where $(P_1^{\star}(p),p)$ is a point on the DP critical line in the $(P_1,P_2)$ parameter space. In Fig. 4, we plot the profile $ \rho$ against $\ell$ obtained by numerical simulations for $p=p^\star $ and $L=200$. The profile is well described by a power law: $$\rho(\ell)= \rho^{\star}(p) - A(p) \ell^{-x}\ , \label{rho_ell}$$ with $\rho^{\star }=$1 and $A$=0.39 for $p=p^\star$, and $x=$0.578. 
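The dynamics defined above is simple enough to reproduce directly. The following sketch is a minimal, illustrative simulation of the stochastic directed sandpile (it is not the code used for the figures): it assumes one concrete labelling of the diagonal lattice, in which the two downward neighbours of a site $(\ell, x)$ are $(\ell+1, x)$ and $(\ell+1, x+1 \bmod L)$, and the lattice size and number of added grains are kept small only for speed.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche(h, p, rng):
    """Add one grain at a random top site and relax the pile.
    Returns (size, duration): number of topplings and deepest layer reached."""
    L = h.shape[0]
    x0 = int(rng.integers(L))
    h[0, x0] += 1
    active = {(0, x0)}                   # sites that received a grain in the last step
    size = duration = 0
    while active:
        received = set()
        for (l, x) in active:
            if h[l, x] >= 2 and rng.random() < p:    # toppling succeeds with probability p
                h[l, x] -= 2
                size += 1
                duration = max(duration, l + 1)
                if l + 1 < L:                        # two grains leave the pile at the bottom layer
                    for xn in (x, (x + 1) % L):      # two downward neighbours, periodic in x
                        h[l + 1, xn] += 1
                        received.add((l + 1, xn))
        active = received                # only sites that just received grains are checked next
    return size, duration

L, p = 48, 0.71                          # p slightly above the DP threshold p* ~ 0.7055
h = np.zeros((L, L), dtype=int)

for _ in range(50000):                   # drive the pile towards the steady state
    avalanche(h, p, rng)

durations = np.array([avalanche(h, p, rng)[1] for _ in range(50000)])
rho = (h >= 1).mean(axis=1)              # occupation density rho(l), layer by layer

print("fraction of avalanches reaching the bottom:", (durations == L).mean())
print("rho(l) near top:", rho[:4], "near bottom:", rho[-4:])
```

Histogramming `durations` gives an estimate of $P(T)$, and the layer-wise occupation `rho` should approach the profile of Eq. (\[rho\_ell\]) once the pile has equilibrated; reliable exponent estimates of course require much larger lattices and statistics than used in this sketch.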
We now show that the profile given by Eq. (\[rho\_ell\]) changes the avalanche statistics in our model, and thus strongly affects the bulk transport. In the bulk, transport of particles is locally described by the DK model with parameters $(P_1(\ell ),P_2)$. For large $\ell$ the system is close to the critical line and the local longitudinal correlation length $\xi (\ell)$ varies as $\xi (\ell) \sim \left[P_1^\star -P_1(\ell )\right]^{-\nu _\|}$. In order for the transport to propagate further to a distance $\ell $, $\xi (\ell)$ must be proportional to $\ell $, i.e., $$\xi (\ell) \approx \ell /B \ . \label{xi_ell}$$ This implies that the exponent $x$ in Eq. (\[rho\_ell\]) is exactly $x=1/\nu _\|$. From the simulation data in $\log(1-\rho (\ell ))$ vs. $\log \ell $ plot we find the slope $x =0.575 \pm 0.005$ (see inset to Fig. 4), leading to $\nu _\| = 1.738 \pm 0.005$, in a good agreement with $\nu _\| $ for DP in two-dimensions [@DP]. The calculation of avalanche exponents for our model reduces to the problem of determining the distribution of cluster-sizes of surface clusters in a directed site-bond percolation model where the concentration of bonds has a power-law profile (\[rho\_ell\]). In the renormalization-group approach [@Binder] $x= 1/\nu_{\|}$ is a marginal case and the cluster exponents may depend on the amplitude $A$. Let $G(R_{\bot},R_{\|})$ be the probability that the site $(R_{\bot},R_{\|})$ topples if a particle is added at (0,0) in the steady state. Since $\rho^{\star}(p)-\rho(\ell ,p) \ll 1$ for large $\ell$, we can show that to leading order of perturbation $$G(R_{\bot},R_{\|})\approx G_0(R_{\bot},R_{\|})\ \exp\left[-\int_1^{R_\|} d\ell/\xi(\ell)\right] \ , \label{grr0}$$ where $G_0(R_{\bot},R_{\|})$ is the two-point correlation function for the critical DP process. Using Eq. (\[xi\_ell\]) we get $$G(R_\bot,R_\|)=G_0(R_\bot,R_\|) R_\|^{-B} \ . \label{grrr}$$ The value of B selected by the steady state is determined by the requirement that the average outflux of particles per avalanche from the $R_{\|}$-th layer equals one, i.e., $\sum_{R_{\bot}} G(R_{\bot},R_\|) \sim 1$. Since in DP the average outflux is $ \sum_{R_{\bot}} G_0(R_{\bot},R_\|) \sim R_{\|}^{{[(d-1)\nu_{\bot}-2\beta}]/{\nu_{\|}}}$, it follows that $$B = {{[(d-1)\nu_{\bot}-2\beta}]/{\nu_{\|}}} \ , \label{B_exps}$$ where $\beta $, $\nu _\|$ and $\nu _\bot $ are as above the DP exponents. Thus both the power-law tail and the amplitude $B$ are expressed in terms of standard DP exponents. These facts, in turn, determine the statistics of avalanche clusters. In addition to the exponents for the distributions of duration, $\tau_t$, and size, $\tau_s$, we also define the anisotropy exponent $\zeta $ for the average transverse extent $R_\bot \sim \ell ^\zeta $ of a cluster of length $\ell$. Near the DP critical line $R_\bot $ is expected to have the scaling behavior as $$R_\bot \sim \ell^{\zeta _{DP}} \phi ([P_1^\star -P_1(\ell)]\ell ^{1/\nu_\|}) \ . \label{trlength}$$ Notice that due to the power-law profile (\[rho\_ell\]), the argument of the scaling function $\phi $ in Eq. (\[trlength\]) remains finite in the limit $\ell \to \infty $, and thus $\zeta = \zeta _{DP} = \nu _\bot /\nu _\|$. In the critical DP, the probability that a perturbation survives up to layer $T$ varies as $P_0(T) \sim T^{-\beta/\nu_{\|}}$. As an expression similar to Eq. (\[grr0\]) applies also to the survival probability in the steady state of our model, we have that $P(T)=P_0(T)T^{-B}$. Using $B$ from Eq. 
(\[B\_exps\]) this gives $$\tau_t = 1+ (d-1)\zeta - \beta/\nu_{\|} \ . \label{tau_t}$$ Using standard scaling arguments for the directed SOC system [@DR] we notice first that the expected number of topplings in a cluster of length $\ell $ scales as $\mathopen<s\mathclose>_\ell \sim \ell^{\tau_t}$, that is, $D_\|=\tau_t$. Together with $(\tau_s-1)D_\| =\tau_t-1$ this then gives the renormalized $\tau _s$ exponent as $$\tau_s = 2-1/\tau_t \ . \label{taus}$$ Inserting the best known numerical values of the exponents for two-dimensional DP [@DP], gives $ \tau_t $= 1.47244, $\tau_s $ = 1.32059, and $\zeta $=0.63261 . We have checked these predictions against numerical simulations of the exponents and fractal dimension $D_\|$. In the inset to Fig. 2 various scaling exponents are plotted versus $p$ in the scaling region $p\ge p^\star $. Away from a small crossover region near the point $p=1$, the obtained values of the exponents are independent of $p$ within numerical error. We find $\tau_t =$1.460 $\pm $0.014, $\tau_s $= 1.313 $\pm $ 0.012, and $\zeta $= 0.624 $\pm $0.014 in fair agreement with the above conclusions. For $d=$3 using Eqs. (\[tau\_t\]-\[taus\]) and known numerical values of DP exponents [@Grassberger] we find $\tau _t=$1.674 and $\tau _s=$1.403. The upper critical dimension of our stochastic model is $d_c=$5, in contrast to $d_c=$3 in the deterministic limit $p=$1. For $d\geq $5, the DP critical exponents are $\beta =$1, $\nu_\|=$1, and $\nu_\bot =$1/2, leading to $B=$0, and thus the exponents have the mean-field values $\tau _t=$2, and $\tau _s$=3/2. In conclusion, we have demonstrated that nearness of the steady states to the directed-percolation critical line and the conservation of number of particles in the bulk are responsible for the emergent spatial structures in our stochastic sandpile model. A power-law density profile has been found and the self-organized criticality which is in a different universality class from the deterministic limit. In all dimensions $d$ the scaling exponents of avalanches have been determined in terms of standard directed percolation critical exponents. The work of B.T. was supported by the Ministry of Science and Technology of the Republic of Slovenia. B.T. thanks Al Corral for his assistance in the graphic program. D.D. would like to acknowledge useful comments on the manuscript by M. Barma and S.N. Majumdar. Electronic address: [email protected] Electronic address: [email protected] P. Bak, C. Tang and K. Wiesenfeld, Phys. Rev. Lett. [**59**]{}, 381 (1987); Phys. Rev. A [**38**]{}, 364 (1988). L. P. Kadanoff, S. R. Nagel, L. Wu and S. M. Zhou, Phys. Rev. A [**39**]{}, 6524 (1989); S. S. Manna, Physica A [**179**]{}, 249 (1991); A. B. Chabra, M. J. Feigenbaum, L. P. Kadanoff, A. J. Kolan, and I. Procaccia, Phys Rev E [**47**]{}, 3099 (1993); B. Tadić and R. Ramaswamy, Phys. Rev. E [**54**]{}, 3157 (1996). D. Dhar, Phys. Rev. Lett. [**64**]{} 1613 (1990); S. N. Majumdar and D. Dhar, Physica [**A 185**]{}, 129 (1992); D. Dhar and S. S. Manna, Phys. Rev. [**E 49**]{}, 2684 (1994). Y.-C. Zhang, Phys. Rev. Lett. [**63**]{}, 470 (1989); S.S. Manna, J. Phys. A [**24**]{}, L363 (1992); S. Maslov and Y.-C. Zhang, Physica A [**223**]{}, 1 (1996); A. Ben-Hur and O. Biham, Phys. Rev. E [**53**]{}, R1317 (1996); S. Lübeck, B. Tadić, and K.D. Usadel, Phys. Rev. E [**53**]{}, 2182 (1996). A. A. Ali, Phys. Rev. E [**52**]{}, R4595 (1995); S. Lübeck, K.D. Usadel, and B. Tadić, in [*Fractal Reviews in the Natural and Applied Sciences*]{}, Ed. M.M. Novak, p. 
47, Chapman and Hall (London) 1995. D. Dhar and R. Ramaswamy, Phys. Rev. Lett. [**63**]{}, 1659 (1989). I. Jensen, J. Phys. A [**29**]{}, 7013 (1996) and refs. therein. See also M. Markošová, M. H. Jensen, K.B. Lauritsen, and K. Sneppen, Phys. Rev. [**E55**]{}, R2085 (1997). The argument becomes exact with the rules modified such that heights at sites on the same layer are randomly redistributed after each avalanche. E. Domany and W. Kinzel, Phys. Rev. Lett. [**53**]{}, 311 (1984). J. G. Brankov, E. V. Ivashkevich and V. B. Priezzhev, J. Phys. (Paris) I [**3**]{}, 1729 (1993) and E. V. Ivashkevich, J. Phys. A [**27**]{}, 3643 (1994) found a density profile in the simpler ASM, with no significant effects on the critical state in the bulk. For a review see F. Igloi, I. Peschel and L. Turban, Adv. Phys. [**42**]{}, 685 (1993). P. Grassberger, J. Phys. A: Math. Gen. [**22**]{}, 3673 (1989).
--- abstract: 'Rotations of microscopic magnetic particles, magnetosomes, embedded into the cytoskeleton and subjected to the influence of an ac magnetic field and thermal noise are considered. Magnetosome dynamics is shown to comply with the conditions of the stochastic resonance under not-too-tight constraints on the character of the particle’s fastening. The excursion of regular rotations attains the value of order of radian that facilitates explaining the biological effects of low-frequency weak magnetic fields and geomagnetic fluctuations. Such 1-rad rotations are effectively controlled by slow magnetic field variations of the order of $200\mbox{\,nT}$.' --- [**Stochastic Dynamics of Magnetosomes in Cytoskeleton**]{}\  \ V.N. Binhi$^1$ and D.S. Chernavskii$^2$\  \ $^1$A.M. Prokhorov General Physics Institute RAS, Moscow, Russia\ $^2$P.N. Lebedev Physics Institute RAS, Moscow, Russia [PACS: [87.50.Mn]{}; [87.10.+e]{}; [87.16.Ac]{}]{} There are many hypothetical mechanisms suggested to explain the biological effects of weak low-frequency magnetic fields. A brief review of the mechanisms may be found in [@binhi03ae] and the detailed discussion in [@binhi02]. At the same time, the physical nature of these effects remains unclear. The basic problem is that the interaction energy of biologically active molecules and the MF at the geomagnetic level is very small [@binhi02b]. It is much smaller than the energy of thermal fluctuations ${\kappa \! T}\approx 4\cdot 10^{-14}\mbox{\,erg}$ at physiological temperatures. However, many organisms are well known to contain submicron magnetic particles. The energy of their turn in a weak magnetic field $H$ is substantially greater than ${\kappa \! T}$. For single-domain magnetite particles of radius $r = 10^{-5}\mbox{\,cm}$ or $100\mbox{\,nm}$ in the geomagnetic field the energy $\mu H \approx vJ H$ equals approximately $24 {\kappa \! T}$, where $\mu$ is the magnetic moment of the particle, $v$ and $J$ are the volume and the saturation magnetization. The cytoplasm near cell membranes features such visco-elastic properties that the turning of a microparticle may serve as a stimulus to cell division or ignite a nerve impulse. Magnetite particles found in the brain tissues of animals and humans are of particular interest: this constitutes one of the possible mechanisms of the weak MF effect on the human organism [@kirschvink85-e]. The nerve tissue of the brain is separated from the circulatory system by the blood-brain barrier which is impermeable for most chemicals. In turn, the circulatory system is separated from the digestive system. Therefore, relatively large ferro- or ferrimagnetic particles cannot penetrate into brain tissue as a pollutant. They are found to have a biogenic origin, i.e. they appear over time as a direct result of the crystallization in brain matter. Biogenic magnetite particles are often called ‘magnetosomes’; they were first discovered in bacteria that displayed magnetotaxis [@blakemore75]. The density of magnetosomes in the human brain is more than $5\cdot 10^6$, and in meninges more than $10^8$ crystals per gram [@kirschvink92b]. In fact, about 90% of the particles measured in this study were 10–$70\mbox{\,nm}$ in size, and 10% were 90–$200\mbox{\,nm}$. The particles were grouped into ensembles of 50–100 crystals. Given the fact that magnetic moment is in direct proportion to the particle’s volume, it is easily seen that the inequality $\mu H < {\kappa \! T}$ is true for particles less than $30\mbox{\,nm}$ in size. 
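The orders of magnitude quoted in this paragraph are easy to check. The short script below is only an illustration (not part of the original analysis): it uses CGS units, the value ${\kappa \! T}\approx 4\cdot 10^{-14}\mbox{\,erg}$ quoted above, a geomagnetic field of $46\mbox{\,$\mu $T}$ ($0.46$ G), and assumes the standard saturation magnetization of magnetite, $J\approx 480$ emu/cm$^3$, which is not stated explicitly in the text.

```python
import math

kT   = 4.0e-14      # thermal energy at physiological temperature [erg], as quoted above
J    = 480.0        # assumed saturation magnetization of magnetite [emu/cm^3]
Hgeo = 0.46         # geomagnetic field, 46 uT expressed in gauss

def muH_over_kT(r_cm):
    """Magnetic energy mu*H of a single-domain magnetite sphere of radius r_cm, in units of kT."""
    v = 4.0 / 3.0 * math.pi * r_cm ** 3   # particle volume
    mu = v * J                            # magnetic moment, mu = v J
    return mu * Hgeo / kT

print(muH_over_kT(1.0e-5))      # r = 100 nm  ->  ~23, the "24 kT" estimate above
print(muH_over_kT(3.0e-6))      # r =  30 nm  ->  ~0.6, i.e. mu*H < kT

# radius at which mu*H = kT (boundary of the superparamagnetic regime)
r_eq = 1.0e-5 * muH_over_kT(1.0e-5) ** (-1.0 / 3.0)
print(r_eq * 1.0e7, "nm")       # ~35 nm, the same order as the ~30 nm threshold quoted above
```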
Due to thermal disturbances, such particles can spontaneously switch their magnetic flux without turns, i.e. they are in a superparamagnetic state. The particles that are several hundred nm and more in size go to multiple-domain states (the energy of domain walls is less than that of the MF produced by a single-domain state): their remanent magnetization may be ignored. These particles experience almost no torque in the magnetic fields under consideration. In this article we consider the dynamics of an idealized ’mean’ particle, the magnetosome with the radius $r \sim 100\mbox{\,nm}$ in a single-domain state. The energy of the magnetosome in the geomagnetic field is $\approx 24 {\kappa \! T}$; when exposed to an additional variable magnetic field $h$, its regular changes are about $(h/H_{\rm geo})24kT$. If these changes exceed thermal fluctuations $\sim {\kappa \! T}/2$, they can cause a biological response. This sets a natural constraint on the MF magnitude capable of affecting a biophysical or biochemical system appreciably: $h \gtrsim 1$–$2\mbox{\,$\mu $T}$. However, for magnetosomes bound to an oscillator system, eigenfrequency of which is close to the frequency of the external field, the biologically detectable level of the MF might be less. This may also take place in the special case of magnetosomes bound to a visco-elastic medium: then thermal fluctuations work to facilitate rather than impede the capability of a weak magnetic stimulus to cause a response. Oscillations of a protein macromolecule (dipole resonator) in a microwave EMF have been studied in [@chernavsky89-e]. The dynamics of oscillating magnetic particles in the ELF MF has not been studied in detail yet. Theoretical evaluations of the magnetoreception mechanism based on magnetosome rotations in MF have been working out by many authors since 1970s [@yorke79]. In known works, the dynamics of magnetosomes was modelled by using the equation of [*free rotations*]{} in a viscous liquid, since the elastic properties of structures to which magnetosomes may be attached were not assessed. Quasi-elastic torque had been considered only in relation to the magnetic moment energy in the constant geomagnetic field. It turns out that explicitly taking into account the elasticity of the medium enables one to describe a stochastic rotational dynamics of magnetosomes that may be used to explain the particularities of magnetoreception of weak and hyperweak MFs. This article considers the dynamics of a magnetite particle embedded in the cytoskeleton. The latter consists of a 3D net of protein fibers of 6 to $25\mbox{\,nm}$ in diameter that include actin filaments, intermediate filaments, and microtubules. The ends of these fibers may be fastened to the membrane surface and to various cell organelles. We assume the fibers may also be fastened to a magnetosome surface normally covered with a bilayer lipid membrane [@gorby88]. This fixes the position of the magnetosome and constrains its rotation to some extent. The stationary orientation of the magnetosome generally does not follow the constant MF direction. The balance of the elastic and ‘magnetic’ torques determines the orientation now. The torque $\bf m$ affecting a particle of the magnetic moment $\boldmath \mu$ in an MF $\bf H$ equals ${\bf m} = \boldmath \mu \times {\bf H}$. 
Here, putting aside the 3D character of the magnetosome rotations, we consider the magnetosome’s motion in the plane of two vectors: the unit vector $\bf n$ of the $x$-axis, with which the vector of magnetosome’s magnetic moment coincides in the absence of the MF (equilibrium position, $\varphi =0$), and the MF vector $\bf H$. The Langevin equation for rotational oscillations of the particle is as follows: $$\label{01} I \ddot{\varphi } + \gamma \dot{\varphi } + k \varphi = -\mu H(t) \sin(\varphi - \varphi_0) + \xi'(t) ~,~~~ \omega_0=\sqrt{k/I}~,$$ where $\varphi$ is the angular displacement, $I$ is the moment of the particle’s inertia, $\gamma$ is the dissipation coefficient, $k$ is the factor of mechanical elasticity resulting from the cytoskeleton fibers’ bending, $\xi'(t)$ is a stochastic torque with the correlation function $\langle \xi'(t) \xi'(t+\Delta t) \rangle = 2\gamma {\kappa \! T}\delta(\Delta t)$, while $\omega_0$ is the eigenfrequency, and $\varphi_0$ is the MF direction. Then, we assume the quantity of fibers fastening the magnetosome to the cytoskeleton may vary from particle to particle and a significant number of magnetosomes are mobile enough to markedly change their orientation in the geomagnetic field. This means the mechanical elasticity due to the fibers’ bending is of the same order as or less than the magnetic elasticity $k \lesssim \mu H \approx 24 {\kappa \! T}$. For magnetite Fe$_3$O$_4$ particles with the substance density $\rho \approx 5.2\mbox{\,g/cm$^3$}$ and radius $r \sim 10^{-5}\mbox{\,cm}$, we derive a value $\omega_0$ in the order of $10^{6}\mbox{\,rad/s}$. A resonance, however, is not possible since the inertia forces are much less than viscous forces: $ I\omega_0 \ll \gamma $. Hereafter, the inertia term in the equation of motion may be ignored. $$\includegraphics[scale=0.5]{figure01.eps}$$ The idea of this work is to study the dynamics of a magnetosome fixed into a [*visco-elastic*]{} cytoskeleton and predominantly oriented in a direction opposite to that of a constant MF; here the case $\varphi_0 = \pi $ is considered. The potential energy of a magnetosome in terms of $\mu H$ in the absence of ac MF $$U = \cos(\varphi) + \frac a2 \varphi^2 ~,~~~ a= \frac{k} {\mu H }$$ is shown in Fig.\[f.1\]. As is seen, for not too large angles at $a<1$ there are two stable equilibrium positions $\varphi_{\pm}$ and the unstable one $\varphi_0 =0$. Within each of the wells of this double-well potential, the motion of the magnetosome demonstrates no peculiarities. This sort of motion has been repeatedly considered in literature. At the same time, due to thermal disturbances, the transitions appear from well to well even with no ac MF signal. Given that, the stochastic turns of the particle take place with considerable angular displacements. A deterministic external force, the ac MF in our case, causes such transitions to be somewhat ordered, the maximum order attained just at the optimal level of the noise. It is essentially the phenomenon of the so-called stochastic resonance (SR) first introduced in [@benzi81] to explain some geophysical processes. So far, the probable manifestation of the SR in the dynamics of magnetosomes has not been investigated. Consider the joint influence of a magnetic signal and a random torque $\xi'(t)$ on a magnetosome. The equation of motion takes the form: $ \gamma \dot{\varphi} -\mu H \sin(\varphi) +k\varphi - \mu h \sin (\Omega t) = \xi'(t)~. 
$ With the designations $$\label{99} h' \equiv \frac {h} {H} ~,~~ \beta \equiv \frac {\gamma \Omega} {\mu H}~,~~ \tau \equiv \frac {\mu H} {\gamma} t~,~~ D\equiv \frac {2{\kappa \! T}} {\mu H}$$ the equation is reduced to $$\label{100} \dot{\varphi} + \partial_{\varphi} U(\varphi, \tau) = \sqrt{D} \xi(\tau)$$ with the potential $$\label{1001} U(\varphi, \tau) = \cos (\varphi) + \frac a2 \varphi^2 - \varphi h' \sin(\beta \tau)~ .$$ Here $\xi(\tau)$ is the centered Gaussian process of unit variance (the identity $\delta(\alpha t)= \delta(t)/|\alpha | $ is used). Several SR theories are known; we use the results of [@mcnamara89], where the general expression has been derived for the power spectrum of oscillations of a bistable system agitated by regular and random signals. The signal-to-noise ratio is determined as the ratio of the spectrum amplitude at the frequency of the regular signal, to the level of noise at the same frequency. For the system (\[100\]) with a general double-well potential the signal-to-noise ratio equals $$\label{101} R_{\rm sn} = \sqrt{|U''(\varphi_0)| U''(\varphi_{\pm})} \, \frac{U_1^2} {D^2} \exp \left( -2 U_0/D \right) ~,$$ where $U_0$ and $U_1$ are the height and modulation amplitude of the potential barrier, and $U''$ is the curvature of the potential at the respective equilibrium points. The function (\[101\]) attains its maximum at the optimal level of noise $D=U_0$. This means there is an interval in the value of $D$, where the signal-to-noise ratio unexpectedly increases along with increasing noise power — it is this that is the signature of SR. $$\includegraphics[scale=0.5]{figure02.eps}$$ Quantities $U_0$ and others of the potential (\[1001\]) have no exact analytical presentations. Here we derive them as expansions over the parameter $1-a$ that is assumed to be a small parameter: $$\varphi_+^2 = 6 (1-a),~~ U_0 = \frac 32 (1-a)^2 ,~~ U_1^2 = 6 h'^2 (1-a) ,~~ U''(0)= a-1 ,~~ U''(\varphi_+) = 2(1-a) ~.$$ Substitution into (\[101\]) leads to the following expression for the signal-to-noise ratio: $$R_{\rm sn} \approx \frac {6\sqrt2 \,h'^2 (1-a)^2} {D^2} \exp\left\{-3 (1-a)^2 /D \right\} ~.$$ This function is plotted in Fig.\[f.2\] at various ac MF amplitudes and values of the noise parameter $D$, which depends on the size of the magnetosome. As is seen, there is a marked interval of the elasticity parameter $a=k/\mu H$, wherein the signal-to-noise ratio is close to unity. The 100-nm magnetosome fixed in the cytoskeleton with elasticity $a= 0.7$–0.9$\mu H$ in the 13-$\mu$T ac MF and 46-$\mu$T geomagnetic field regularly turns at angles of the same order as the chaotic rotations. It is particularly evident for 50-nm particles, that almost all of them are in the SR conditions. 200-nm particles make regular turns at relatively small MFs $\approx 4.6\mbox{\,$\mu $T}$. Although in each of these cases there is no gain in the magnitude of the effective ac MF as compared to the case of a single-well motion, it is important that the rotation excursion is an order higher, about 1 rad. With such excursions it is easier to account for the influence of the magnetosome’s rotations on biochemical processes. Note that in an SR, the signal-to-noise ratio is enhanced because of the reduced coherence of the signal present in the spectrum of magnetosome oscillations as compared to the coherence of the ac MF signal. 
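For orientation, the closed-form estimate for $R_{\rm sn}$ can be evaluated for parameters like those of Fig. \[f.2\]. The snippet below only restates that formula; as assumptions, the noise parameter of a 100-nm magnetosome is taken as $D = 2{\kappa \! T}/\mu H \approx 1/12$ (from $\mu H \approx 24{\kappa \! T}$), and $h' = 13/46$ for the 13-$\mu$T ac field in the 46-$\mu$T geomagnetic field.

```python
import numpy as np

def R_sn(a, hprime, D):
    """Approximate signal-to-noise ratio of the double-well magnetosome oscillator
    (the small-(1-a) expansion derived above)."""
    return (6.0 * np.sqrt(2.0) * hprime**2 * (1.0 - a)**2 / D**2
            * np.exp(-3.0 * (1.0 - a)**2 / D))

D_100  = 2.0 / 24.0          # D = 2 kT / (mu H) for a 100-nm magnetosome (assumed)
hprime = 13.0 / 46.0         # h' = h / H

for a in (0.7, 0.8, 0.9):
    print(a, R_sn(a, hprime, D_100))     # of order unity across a ~ 0.7-0.9, as in Fig. 2

# R_sn is maximal when the noise level D equals the barrier height U0 = (3/2)(1-a)^2
a  = 0.8
U0 = 1.5 * (1.0 - a) ** 2
print("optimal D =", U0, " R_sn at optimum =", R_sn(a, hprime, U0))
```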
Therefore MF signal detection requires a discrimination system — probably, a nonlinear system of biochemical reactions with the characteristic time $\sim 1/\Omega$ — which can ‘make a decision’ as to whether a signal is present in noise. It is of the essence, that this primary mechanism of magnetobiological effects, displaying no frequency selectivity in the ELF range, nevertheless allows one to verify it experimentally. Since the parameter $a=k/\mu H$ depends on the constant MF, the ‘resonance’ on Fig.\[f.2\] will show itself also as a ‘window’ in constant MF values when the effect is possible. Therefore, provided the MF signal transduction to the biochemical level is governed by an SR with magnetosomes of a certain size within limits of about 10–20[%]{}, it follows that the biological effect in the ac MF will take place only in a constant MF near the level $H\sim k/a\mu$. Indeed, when the MF decreases, the potential function transforms into a single-well one and large rotational excursions are no longer possible. When the MF increases, the potential barrier grows and the magnetosome finally remains within one of the two wells. This case also rules out SR manifestation. $$\includegraphics[scale=0.5]{figure03.eps}$$ Apparently, for a portion of magnetosomes, large angular chaotic turns take place in the absence of an ac MF also. If some biochemical reaction depends on these turns, it is evident that it must be sensitive to the condition of a ‘magnetic vacuum’ $h\ll H \ll H_{\rm geo}$. Furthermore, the reaction must be sensitive to small variations of the constant MF, since the probability of transition $W$ from well to well [*exponentially*]{} depends on the barrier height $U_0$, for example in [@mcnamara89]: $$W=\frac1{2\pi} \sqrt{|U''(0)| U''(\varphi_{\pm})} \exp \left( -2 U_0 / D \right) ~.$$ All quantities here, including $U_0$, are functions of the variable $a=k/\mu H$, and hence of $H$. What is of interest is the relative value of the changes in this probability at small variations of the constant MF, i.e., the quantity $$S = -\frac 1W \frac {\textrm{d} W} {\textrm{d} (H/H_{\rm geo})} ~.$$ Since the probability drops with the growth of the barrier height, we use ‘$-$’ to hold positive values for the sensitivity $S$. Shown in Fig.\[f.3\] is the sensitivity $S$ computed at several values of the elasticity of the bond between a mean magnetosome and cytoskeleton. It is seen that in a wide range of elasticities the sensitivity of the relative probability to MF variations near $H_{\rm geo}$ is equal to 10–20. This means a 1% MF change causes 10–20% changes in the transition probability. Assuming 10% changes to be biologically significant, we arrive at the limit of detectable values of the constant MF variations $ \sim 0.005 H_{\rm geo}$ or $0.2\mbox{\,$\mu $T}$. This finding generally does not rule out the possibility of a biological system containing magnetosomes to react to slow geomagnetic fluctuations. The authors gratefully acknowledge M.M.Glaser for improving English style in the article. Grants: RFFBR No.04-04-97298, RFH No.04-03-00069. [99]{} V.N. Binhi and A.V. Savin. Effects of weak magnetic fields on biological systems: Physical aspects. [*Physics–Uspekhi*]{}, 46(3):259–291, 2003. V.N. Binhi. [*Magnetobiology: Underlying Physical Problems*]{}. Academic Press, San Diego, 2002. V.N. Binhi and A.V. Savin. Molecular gyroscopes and biological effects of weak extremely low-frequency magnetic fields. [*Phys. Rev. E*]{}, 65(051912):1–10, 2002. J.L. Kirschvink, D.S. Jones, and B.J. 
MacFadden, editors. [*Magnetite Biomineralization and Magnetoreception in Organisms. A New Biomagnetism*]{}. Plenum, New York, 1985. Blakemore R.P. Magnetotactic bacteria. [*Science*]{}, 190(4212):377–379, 1975. J.L. Kirschvink, A. Kobayashi-Kirschvink, and B.J. Woodford. Magnetite biomineralization in the human brain. [*Proc. Natl. Acad. Sci. USA*]{}, 89(16):7683–7687, 1992. D.S. Chernavsky and Yu.I. Khurgin. [Physical mechanisms responsible for the interaction of protein macromolecules with RF radiation]{}. In N.D. Devyatkov, editor, [*Millimeter Waves in Medicine and Biology*]{}, pages 227–235. Institute of Radio Engineering and Electronics, USSR Academy of Sciences, Moscow, 1989. In Russian. Yorke E.D. A possible magnetic transducer in birds. [*J. Theor. Biol.*]{}, 77(1):101–105, 1979. Gorby Y.A., Beveridge T.J., and Blakemore R.P. Characterization of the bacterial magnetosome membrane. [*J. Bacteriol.*]{}, 170:834–841, 1988. R. Benzi, A. Sutera, and A. Vulpiani. The mechanism of stochastic resonance. [*J. Phys. A*]{}, 14:L453–L457, 1981. McNamara B. and Wiesenfeld K. Theory of stochastic resonance. [*Phys. Rev. A*]{}, 39(9):4854–4869, 1989.
--- author: - 'S.E. Shaw' - 'N. Mowlavi' - 'J. Rodriguez' - 'P. Ubertini' - 'F. Capitanio' - 'K. Ebisawa' - 'D. Eckert' - | \ T. J.-L. Courvoisier - 'N. Produit' - 'R. Walter' - 'M. Falanga' bibliography: - 'igr00291.bib' date: 'Received now / Accepted then' title: 'Discovery of the *INTEGRAL* X/$\gamma$-ray transient IGR J00291+5934: a Comptonised accreting ms pulsar ?' --- Introduction {#sec:intro} ============ IGR J00291+5934 was discovered on 2nd December 2004 [@dominique], during the routine monitoring of IBIS/ISGRI 20–60 keV images of Galactic Plane Scan (GPS) observations at the *INTEGRAL* Science Data Centre (ISDC). In following GPS observations, on 8th December, the source flux remained basically stable at $\sim 8 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ with a marginal monotonic decrease. However, by 11th December, the source flux had reduced by around 50% (see Sec.\[sec:disco\]). The day after the discovery the same sky region was observed by the [*Rossi X-ray Timing Explorer (RXTE)*]{}, which detected a 35 mCrab excess with a coherent pulsation at $\sim$ 598.88 Hz (1.67 ms) and pulsed fraction $\sim$ 6%, making IGR J00291+5934 the fastest known accreting X-ray pulsar [@atel353]. Further analysis of [*RXTE*]{}/Proportional Counter Array data, showed that the source has an orbital period of $147.412 \pm0.006$ min [@atel360]. The [*RXTE*]{} spectrum was consistent with an absorbed power law with an equivalent absorption column density $N_\mathrm{H}\sim 7\times 10^{21}$ cm$^{-2}$, and a photon index of $\sim 1.7$ [@atel353]. Archival [*RXTE*]{}/All Sky Monitor data suggested that the source had also entered in outburst in 1998 and 2001, which may indicate that IGR J00291+5934 has a $\sim 3$ year recurrence time [@atel357]. No such activity was seen in archival *BeppoSAX* and *INTEGRAL* data [e.g. @atel362], although these instruments did not make contemporaneous observations with *RXTE*. Later observations by the *Chandra* X-ray telescope made a more accurate determination of $N_\mathrm{H}=(2.8 \pm 0.4)\times 10^{21}$ cm$^{-2}$ [@atel369]. Observations at radio and optical wavelengths revealed the presence of a transient counterpart at a position consistent with that of the high-energy source, with possible optical emission features [@atel353; @atel355; @atel356; @atel361]. The most accurate optical position has been reported by @atel354, ($\alpha$,$\delta$)=($00^{h}29^{m}03^{s}\!.06, +59^{\circ}34\arcmin 19\arcsec\!\!.0)\pm 0\arcsec\!\!.5$; the source is located in the galactic plane, away from the galactic centre at ($l, b$) = (1200964, -31765). In view of the high-energy behaviour, the presence of pulsations, the short orbital period and other similarities with the object SAX J1808.4–3658 we consider IGR J00291+5934 to be the 6th member of a class of accreting X-ray binaries with weakly magnetised pulsars. *INTEGRAL* {#sec:obs} ---------- ![image](figs/Gl161-f1.eps){width="17cm"} The *INTEGRAL* satellite was launched on 17 October 2002 and contains several instruments dedicated to observing the high-energy sky in the 3 keV–10 MeV band . The prime instruments for this work are the following: the coded mask imager IBIS/ISGRI, which is sensitive in the 15 keV–1 MeV band and has a large $29^{\circ}\times 29^{\circ}$ field of view ; the X-ray monitor JEM-X, which is sensitive from 3–30 keV and has a 132 diameter field of view . The JEM-X instrument consists of two identical telescopes, but for this work, only the JEM-X1 unit was operational. 
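As a brief aside on the source position: the optical position of @atel354 quoted in the Introduction can be converted to galactic coordinates as follows. The snippet uses the `astropy` package, which is of course not part of the original analysis, and assumes the position is given in the J2000/ICRS frame.

```python
from astropy.coordinates import SkyCoord

# optical position of the counterpart (atel354), as quoted in the Introduction
c = SkyCoord('00h29m03.06s', '+59d34m19.0s', frame='icrs')
print(c.galactic)     # -> (l, b) ~ (120.10, -3.18) deg, in the Galactic Plane towards Cassiopeia
```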
Data from the satellite are analysed very quickly after an observation; the ISDC Quick Look Analysis (QLA) pipeline runs continuously on the incoming telemetry . Images are produced by QLA in the following energy bands: 3–10 keV and 10–30 keV (JEM-X); 20–60 keV and 60–200 keV (ISGRI). The images are automatically monitored for new or highly variable sources, which can lead to an automatic alert being issued with a delay of $<$ 2 hours from the end of the observation. All images are also inspected manually by the ISDC Scientist on Duty [@munichiqla]. In this *INTEGRAL* observing period, $\sim$ 30% of the total amount of observing time is split between the Galactic Centre Deep Exposure (GCDE) and the GPS. The GPS are regular pointings in a saw-tooth pattern along the Galactic Plane, between galactic latitude $b = \pm 10^{\circ}$, conducted every $\sim$ 12 days (one *INTEGRAL* revolution is 3 days long). Each GPS pointing, or [*S*cience Window]{} (ScW), lasts 2200 seconds and is separated from the next one by 6$^{\circ}$ . Discovery of {#sec:disco} ============= The ISDC QLA pipeline first suggested that a new X/$\gamma$-ray source had been discovered in ISGRI images of GPS observations, by an automatic alert issued on 2nd December 2004 at 09:00:19 UTC [@dominique]. The alert was triggered because a previously unknown source was detected, at a significance $> 10\sigma$, in a 20–60 keV ISGRI image of GPS pointing 0261-2 (ScW 2 of *INTEGRAL* revolution 0261). This was confirmed at 09:42:03 UTC after the following pointing, 0261-3, by another alert issued at the same sky position. In both cases the alerts were issued approximately 1 hour and 40 minutes after the end of the pointing (see Table \[tab:alerts\] for a summary). Further QLA images showed that the source persisted in the ISGRI QLA images in 0261-3, and had also been detected in the JEM-X1 instrument in 0261-2 (albeit at a level below that required to trigger an automatic alert). Due to the progression of the GPS, the source was not in the instrument field of view for the following pointings. The pointing 0261-2 was also the first pointing of the revolution that could be analysed, since the previous pointing was affected by the passage of the satellite through the Earth’s radiation belts. The next GPS observations, during revolution 0263 on 8th December 2004, also yielded four detections of the source by ISGRI. However, in the following GPS pointing, conducted on 11th December (revolution 0264), the source was observed to have faded in ISGRI, and not detected at all in JEM-X. After the detection, the source was also observed in an already scheduled observation of the CasA/Tycho region and was the subject of an *INTEGRAL* ToO observation, which began on 6th December. The analysis of these data is the responsibility of the respective PIs, and will not be discussed here (Falanga, et al., 2005, in preparation). 
----------- ----------- ---------------------- -------------- -------------- Pointing UTC Start ($\alpha$, $\delta$) $\theta$ $F_{20-60}$ (Rev-ScW) (hh:mm) ($^{\circ}$) ($^{\circ}$) (cps) 0261-2 06:43 (7.26, 59.57) 3.3 7.5$\pm 0.5$ 0261-3 07:23 (7.24, 59.58) 6.7 6.1$\pm 0.5$ 0261-4\* 08:03 (7.3, 59.57) 12.2 7.9$\pm 1.2$ 0263-1\* 06:31 (7.3, 59.57) 10.2 5.5$\pm 0.8$ 0263-2 07:10 (7.28, 59.58) 6.7 4.3$\pm 0.5$ 0263-3 07:50 (7.27, 59.58) 7.6 4.6$\pm 0.6$ 0263-4\* 08:30 (7.3, 59.57) 11.8 5.3$\pm 1.0$ 0264-2\* 06:18 (7.28, 59.59 ) 10.4 1.9$\pm 0.8$ 0264-3\* 06:58 (7.4, 59.5) 4.6 2.2$\pm 0.5$ 0264-4\* 07:38 (7.29, 59.55) 2.3 2.6$\pm 0.5$ 0264-5\* 08:18 (7.2, 59.5) 7.9 2.4$\pm 0.7$ 0261-2 06:43 (7.279, 59.568) 3.3 1.6$\pm 0.2$ ----------- ----------- ---------------------- -------------- -------------- : Detections of IGR J00291+5934 by the ISDC QLA pipeline, based on ISGRI data from single GPS pointings during *INTEGRAL* revolutions 0261 (2nd December 2004), 0263 (8th December 2004) and 0264 (11th December 2004). The detected position, angular distance of the source from the spacecraft pointing axis ($\theta$) and 20–60 keV count-rate ($F$) are shown. The last line shows the detection by JEM-X1 and the 3–10 keV flux. Pointings marked with \* are those where the source was not automatically detected and localised manually. \[tab:alerts\] Analysis of *INTEGRAL* GPS observations {#sec:anal} --------------------------------------- The ISGRI GPS pointings, listed in Table \[tab:alerts\], have been analysed using the standard OSA 4.2 software[^1]. Fig. \[fig:image\] shows the location of the source in an ISGRI mosaic image, made from all 7 pointings of revolutions 0261 and 0263, and the single JEM-X detection. In both instruments the source is very clearly identified with the optical position of @atel354. Within the ISGRI mosaic image the HMXB is also detected; this gives confidence that some problem with the spacecraft pointing is not responsible for falsely identifying a new source and that IGR J00291+5934 is not a misidentification of another nearby object. The most significant detection of IGR J00291+5934 was during the first pointing, 0261-2, and it was possible to construct a composite JEM-X1 and ISGRI spectrum. However, the sensitivity of ISGRI is such that it is hard to constrain a physical fit to this source on the strength of just one pointing. A simple absorbed power-law, with the value of $N_{H} = 0.2\times 10^{22}$ cm$^{-2}$ fixed, gives a reduced $\chi^{2}_{\nu} = 1.1$. The photon index is $1.81\pm0.13$ corresponding to a 5–50 keV flux of $8.3_{-2.6}^{+3.7} \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ in agreement with @atel353. The broad band count-rates of IGR J00291+5934 are also noted in Table \[tab:alerts\]. These show that the flux faded slightly during the course of $\sim$ 10 days, as reported by @atel365. Note that the uncertainties on flux measurements with ISGRI increases with the off-axis angle, $\theta$, as the source flux is not fully coded by the mask. If IGR J00291+5934 is an accreting ms pulsar, then a power-law does not necessarily describe the physics of the source and it is interesting to investigate the possibility of a Comptonised spectrum. To increase the high-energy statistics, an average ISGRI spectrum was made, using the method described in @rodriguez, from those pointings in Table \[tab:alerts\] with $\theta < 10^{\circ}$; this was added to the JEM-X1 spectrum from 0261-2. 
A reasonable fit was obtained with the [CompST]{} model : $\chi^{2}_{\nu} = 0.7$, electron temperature $kT = 25^{+21}_{-7}$ keV and optical depth $\tau = 3.6^{+1.0}_{-1.3}$; the 5–100 keV flux is $8.5^{+0.5}_{-0.2} \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$. However, it should be noted that a simple power-law also gives a valid fit, $\chi^{2}_{\nu} = 1.0$, albeit with a softer photon index of $2.05\pm 0.10$. The drop in the flux above 100 keV confirms the presence of a cut-off in the spectrum at high energies (Fig. \[fig:spec\]). Unfortunately, the source was not bright enough to extend the high-energy spectral information with the *INTEGRAL* spectrometer . The source was, however, detected at a corresponding position to the other instruments, ($\alpha$,$\delta$)=(63,+598) $\pm$ 10, and the measured 20–60 keV flux of $(5.3 \pm 1.6) \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ (assuming a photon index of 1.8) is in good agreement with ISGRI. The ISGRI data have also been searched for pulsations at the 1.67 ms period of @atel353. To maximize the signal to noise ratio, the 20–60 keV band was used, noisy pixels were removed and only those pixels that were fully illuminated by IGR J00291+5934 were selected. The event arrival time was corrected to the solar barycentre and a folded analysis was conducted around the known pulse-frequency of 598.88 Hz. An upper limit for the pulsed amplitude at 598.88 Hz was found to be $\sim$ 20%, which is consistent with the value of  6% by @atel353. Discussion ========== We report the discovery of the fastest known accreting X-ray pulsar, IGR J00291+5934, with *INTEGRAL*. It is likely that IGR J00291+5934 is a low mass X-ray binary (LMXB) system containing a NS pulsar that has been spun up by accretion of material, from a companion star, via an accretion disc. IGR J00291+5934 is one of the fastest X-ray pulsars discovered to date and is second only to PSR B1937+21 [an isolated pulsar showing radio and X-ray pulsations at $P\sim 1.57$ ms; @1982Natur.300..615B; @2001ApJ...554..316T]. The absorbing column measured by @atel369 with *Chandra* is approximately a factor of two lower than the estimate of the galactic total on the same line of sight . Given that the source is 120$^{\circ}$ from the Galactic Centre and assuming that the galactic disk has a radius of 13 kpc, with the Earth 8.5 kpc from the centre then the average density of absorbing material in the source direction is $\sim$ 0.2 cm$^{-3}$. Assuming no local absorption puts an upper limit on the source distance of $\sim$ 3.3 kpc. Although this is a highly simplistic argument, it seems likely that the source is reasonably local. It should also be noted that the Perseus arm of the Milky Way, at $l = 120^{\circ}$, is located approximately 2.5 kpc away [@1993ApJ...411..674T]. Using 3 kpc as an estimate of the source distance, and the 5–100 keV flux quoted in Sec. \[sec:anal\], gives a luminosity for the source of $\sim 0.9 \times 10^{36}$ erg s$^{-1}$. The other five known ms X-ray pulsars (, , , , ), are believed to be old NS (age $\sim 10^{9}$ years), with moderately weak magnetic field, $B\sim 10^{8}$ G [see e.g. @2003Natur.424...42C; @2003Natur.424...44W]. In fact all of them are transient systems with short orbital periods, accreting at very low rates. This implies that the magnetic field of the NS is very weak, which is also suggested by the perceived age of the systems [@1998ApJ...506L..35H; @2002ApJ...576L..49T]. 
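The two numerical estimates in the preceding paragraph can be reproduced explicitly. The sketch below only restates that arithmetic; it assumes the galactic column is exactly twice the *Chandra* value (the text says "approximately a factor of two") and takes 1 kpc $= 3.086\times10^{21}$ cm.

```python
import math

kpc = 3.086e21                       # cm
NH_src = 2.8e21                      # cm^-2, Chandra-measured column towards the source
NH_gal = 2.0 * NH_src                # assumption: galactic total is exactly twice the measured value

# path length through a 13-kpc-radius disc, Sun at 8.5 kpc from the centre, towards l = 120 deg
R_disc, R_sun, l = 13.0, 8.5, math.radians(120.0)
b = -2.0 * R_sun * math.cos(l)       # quadratic s^2 + b s + c = 0 for the distance s to the disc edge
c = R_sun**2 - R_disc**2
s = (-b + math.sqrt(b**2 - 4.0 * c)) / 2.0
n_mean = NH_gal / (s * kpc)          # mean density of absorbing material along the line of sight

print("path to disc edge : %.1f kpc" % s)                        # ~6.5 kpc
print("mean density      : %.2f cm^-3" % n_mean)                 # ~0.3 cm^-3, same order as the ~0.2 quoted above
print("distance limit    : %.1f kpc" % (NH_src / n_mean / kpc))  # ~3.2 kpc, cf. the ~3.3 kpc quoted above

# luminosity at the assumed 3 kpc, from the 5-100 keV flux of the CompST fit
d, F = 3.0 * kpc, 8.5e-10            # cm, erg cm^-2 s^-1
print("L(5-100 keV)      : %.1e erg/s" % (4.0 * math.pi * d**2 * F))   # ~0.9e36 erg/s
```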
The *INTEGRAL* high-energy spectral information is limited with this very small data set, but is consistent with properties of other ms X-ray pulsars, with $\tau \sim 3$ and $k_{B}T \sim 20$ keV. The upper limit on the high-energy pulsations, coupled with the peak energy release at $\sim$ 24 keV, could be consistent with a Comptonised flux being emitted from a hot plasma near the inner part of the disc. Some of this plasma may be channeled towards the NS by the magnetic field, resulting in the pulsed part of the spectrum. In this model, the seed photons would be supplied by the cold intermediate part of the disc rather than the higher temperature NS black-body emission. It is remarkable that in all of these objects the pulse fraction is of the same order ($\sim 6$%). IGR J00291+5934, shares many common characteristics with the other objects, particularly SAX J1808.4–3658. The latter has a relatively similar orbital period of $\sim$ 2 hours [@1998Natur.394..346C], a $\sim$ 2 year recurrence time of the outburst and is also the only other known ms pulsar for which a radio counterpart was detected during outburst [@1999ApJ...522L.117G]. It is reasonable to assume that is an old NS that has been spun up by the accretion of material, with a magnetic field of the same order as the other ms pulsars. The spectral analysis of reveals that this source is more similar to XTE J1814–338 than the 3 others for which evidence of black body radiation have been seen. However, this may simply be due to the high lower boundary of the JEM-X detector. This is reinforced by the fact that black body emission has been detected in SAX J1808.4–3658 with detectors allowing a broader coverage towards the low energies [e.g. @2003Natur.424...44W]. The spectral analysis of these objects reveal that they are not so different to other NS/pulsar LMXBs, in the sense that the emission processes are thought to originate through thermal + Comptonised processes. It is therefore quite puzzling that only some of these systems show persistent coherent pulsations at the NS spin period. The fact that all ms X-ray pulsars have very short orbital periods may be a clue to why these systems do or do not show persistent pulsations [see recent review by @aph0501264]. Despite being the fastest accreting ms pulsar to date, it is interesting to note that the period remains significantly higher than 1 ms. This paper is based on observations made with the ESA *INTEGRAL* project. The authors thank A.Paizis for useful comments in the preparation of this paper. SES thanks PPARC for financial support. The useful comments and timely response of the anonymous referee were greatly appreciated. [^1]: The OSA software can be obtained from *www.isdc.unige.ch*.
--- abstract: 'We discuss black holes in an effective theory derived from a superstring model, which includes a dilaton field, a gauge field and the Gauss-Bonnet term. Assuming U(1) or SU(2) symmetry for the gauge field, we find four types of spherically symmetric solutions, i.e., a neutral, an electrically charged, a magnetically charged and a “colored” black hole, and discuss their thermodynamical properties and fate via the Hawking evaporation process. For neutral and electrically charged black holes, we find critical point and a singular end point. Below the mass corresponding to the critical point, no solution exists, while the curvature on the horizon diverges and a naked singularity appears at the singular point. A cusp structure in the mass-entropy diagram is found at the critical point and black holes on the branch between the critical and singular points become unstable. For magnetically charged and “colored" black holes, the solution becomes singular just at the end point with a finite mass. Because the black hole temperature is always finite even at the critical point or the singular point, we may conclude that the evaporation process will not be stopped even at the critical point or the singular point, and the black hole will move to a dynamical evaporation phase or a naked singularity will appear.' address: 'Department of Physics, Waseda University, Shinjuku-ku, Tokyo 169, Japan' author: - 'Takashi Torii[^1], Hiroki Yajima[^2] and Kei-ichi Maeda[^3]' title: 'Dilatonic Black Holes with Gauss-Bonnet Term' --- Introduction {#sec:Introduction} ============ One of the most fascinating dreams for all physicists is the unification of all fundamental forces, i.e., electromagnetic, weak, strong and gravitational interactions. The electromagnetic and weak interactions are successfully unified in the Weinberg-Salam theory. The strong interaction is described by quantum chromodynamics (QCD) and is likely to be unified with the Weinberg-Salam theory into a grand unified theory in the context of gauge theory. The gravitational interaction, however, is not yet included, in spite of a great deal of effort. The most promising candidate for a unified theory of all interactions is a superstring theory, which may unify everything without any divergences. Although such a unified theory may become important in the strong gravity regime, however, little work has been done on extreme situations such as a black hole or the early universe. Since methods to study the strong gravity regime in a string theory are not well developed, most analysis have been performed by using an effective field theory inspired by a string theory, which contains the leading or next to leading order terms of the inverse string tension $\alpha'$. One such application is string cosmology. Some puzzles in Einstein cosmology might be solved with a string theory. For example, while the singularity theorem demand that the universe has an initial singularity in the Einstein gravity, a string inspired model can remove it and provide a non-singular cosmology[@ST; @EM]. Another application is black hole physics. The first study was made by Gibbons and one of the present authors in the Einstein-Maxwell-Dilaton (EMD) system[@GM] and the same solution was also discussed in a different coordinate system in ref.[@Garfinkle]. They found a static spherically symmetric black hole solution (GM-GHS solution) with dilaton hair. Since the dilaton hair cannot appear without electromagnetic hair, it is classified as secondary hair. 
After this work, many solutions were discussed in various models. Thus the following question naturally arises: How are the black hole solutions affected by the next to leading order term in $\alpha'$, in particular by the higher-curvature term? This was first considered independently of a string theory. Wheeler analyzed the effect of the Gauss-Bonnet (GB) term without a dilaton field[@Wheeler]. When the dilaton field is absent, the GB term does not give any contribution in a four dimensional spacetime because it becomes a surface term and gives a topological invariant. Then he studied black holes in spacetime with more than four dimensions. These solutions are called dimensionally continuum black holes[@dimcon]. Callan et al.[@CMP] discussed black hole solutions in the theory with a higher-curvature term $R_ {\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}$ and a dilaton field, and Mignemi and Stewart[@MS] took both the GB term and a dilaton field into account in four dimensional spacetime. In their work, field variables are expanded by the inverse string tension $\alpha'$ and the first order terms of $\alpha'$ are taken into account. Using this perturbation, they constructed analytic solutions. There are also some studies which clarify the effect of an axion field as well as the dilaton field, a U(1) gauge field and the GB term[@AGB1; @AGB2; @AGB3]. Either dyon solutions or axisymmetric stationary solutions are analyzed because the axion field becomes trivial if the gauge field does not have both electric and magnetic charges and the spacetime is static and spherically symmetric. In all these models containing a higher-curvature term, a perturbative approach was used. Assuming the GB higher curvature term, recently, Kanti et. al. calculated a neutral solution without such a perturbation and found interesting properties[@KMRTW]. More recently Alexeyev and Pomazanov also discussed the internal structure of these solutions[@AP]. When we turn to another aspect of a unified theory, we find a non-Abelian gauge field. One of the most important facts about black holes with a non-Abelian gauge field is that we have the so-called colored black hole, which was found in Einstein-Yang-Mills system[@Vol] soon after the discovery of a particle-like Bartnik-McKinnon(BM) solution[@Bar]. These solutions can exist by a balance between the attractive force of gravity and the repulsive force of the Yang-Mills (YM) field. If gravity is absent, such non-trivial structures cannot exist. In this sense, these solutions are of a new type. Although both the BM particle and the colored black hole are found to be unstable against radial linear perturbation[@Gal], they showed us a new aspect of black hole physics and forced us to reconsider the black hole no-hair conjecture. After these solutions were discussed, a variety of self-gravitating structures and black hole solutions with a non-Abelian field were found in static spherically symmetric spacetime[@Tor2; @Tor3; @Tach]. The Skyrmion[@Sky; @Dro] or the Skyrme black hole[@Luc; @Dro; @Tor] in Einstein-Skyrme system, the particle solution with a massive Proca field or the Proca black hole[@Gre] in Einstein-Proca system, the monopole[@'tH; @LNW; @Bre; @Tach] or the black hole in a monopole[@LNW; @Bre; @Aic; @Tach] in the Einstein-Yang-Mills-Higgs (real triplet) system and the sphaleron[@Das; @Gre] or sphaleron black hole[@Gre] in Einstein-Yang-Mills-Higgs(complex doublet) system have been discovered. 
A particular class of the Skyrme black hole, the Proca black hole and the monopole black hole turns out to be stable against radial perturbations. In particular the monopole black hole can be a counterexample of the black hole no-hair conjecture[@Tach], because it is highly stable and is formed through the Hawking evaporation process from the Reissner-Nordström black hole. We also investigated the dilatonic BM particle and the dilatonic colored black hole solution in the Einstein-Yang-Mills-Dilaton (EYMD) system[@Tor; @dil], which are direct extensions of the GM-GHS solution. Although those non-Abelian fields may be expected in some unified theories, such black holes must be very small and then some other contributions such as the GB term and/or the moduli field may also play an important role in their structure, if the fundamental theory is described by a string model. Donets and Gal’tsov showed that a particle-like solution does not exist in the EYMD system with the GB term[@DG]. Then, they assume a numerical constant $\beta$ in front of the GB term, where $\beta =1$ corresponds to the effective sting theory. They showed that there is a critical value $\beta_{cr} =0.37$, beyond which no particle-like solution exists. However, we expect that a black hole solution can exist in this system, so we also study the case of a SU(2) YM field. In this paper, then, we study black holes in a theory inspired by a string theory, i.e., in a model with a dilaton field, a gauge field, and the GB curvature term, and discuss their properties. This paper is organized as follows. We outline the a model and field equations in section 2, and present various types of new solutions (neutral, electrically charged, magnetically charged and “colored” black holes) in section 3. In section 4 we study the thermodynamical properties of those black holes. Section 5 includes discussions and some remarks. A Model and Field Equations {#sec:Model-Equations} =========================== We shall consider the model given by the action $$S = \int d^4x \sqrt{-g}\left[\frac1{2\kappa^2}R -\frac1{2\kappa^2}\left( \nabla \phi\right)^2 -\frac16e^{-2\gamma\phi} H^2 +\frac{\alpha '}{16\kappa^2} \mbox{e}^{-\gamma \phi} \left(\hat{R}^2 - {\rm Tr}{\!\!\mbox{ \boldmath $F$}}^2 \right) \right], \label{2-10}$$ where $\kappa^2=8\pi G$, and discuss a spherically symmetric, static solution. This type of action comes from low-energy limit of the heterotic string theory[@Gross]. The dilaton field is $\phi$ and $\gamma =\sqrt{2}$ is the coupling constant of the dilaton field to the gauge field. $H$ is a three form expressed as $$H = dB + \frac{\alpha'}{8\kappa} \left(\Omega_{3L}-\Omega_ {3Y}\right), \label{2-15}$$ where $B_{\mu\nu}$ is the antisymmetric field in the gravitational multiplet. $\Omega_{3L}$ and $\Omega_{3Y}$ are the Lorentz and gauge Chern-Simon terms respectively; $$\begin{aligned} \Omega_{3L} & = & {\rm tr} \left( \omega \wedge R - \frac13 \omega \wedge \omega \wedge \omega \right), \\ \Omega_{3Y} & = & {\rm Tr} \left( {\!\!\mbox{ \boldmath $A$}}\wedge {\!\!\mbox{ \boldmath $F$}}- \frac13 {\!\!\mbox{ \boldmath $A$}}\wedge {\!\!\mbox{ \boldmath $A$}}\wedge {\!\!\mbox{ \boldmath $A$}}\right), \label{2-17} \end{aligned}$$ where the traces are taken over the Lorentz and gauge indices. $\omega_{a\mu\nu} = e_{(a)}^\rho e_{\mu}^{(b)} \nabla_\rho e_{\nu (b)}$ is the spin connection with vierbein $ e_{\mu}^{(a)}$. 
Using the dual of the Bianchi identity, we can rewrite a part of the action $$\begin{aligned} -\frac16 e^{-2\gamma\phi} H_{\mu\nu\rho} H^{\mu\nu\rho} & = & -\frac{1}{4\kappa^2} e^{2\gamma\phi} \partial_{\mu} b_{KR} \partial^{\mu} b_{KR} \nonumber \\ && ~~~ +\frac{\alpha'}{32\kappa^2} \left[ b_{KR} \epsilon^ {\rho\sigma\mu\nu} \left(R_{\alpha\beta\rho\sigma} R_{\mu\nu}^ {\;\;\alpha\beta} +{\rm Tr} {\!\!\mbox{ \boldmath $F$}}_{\rho\sigma}{\!\!\mbox{ \boldmath $F$}}_{\mu\nu} \right) \right]. \label{2-57}\end{aligned}$$ The pseudoscalar field $b_{KR}$ is the Kalb-Ramond axion. If a spacetime is static and spherically symmetric and the gauge field does not have dyons but only an electric or a magnetic charge, which is the situation we will discuss here, then the second term of equation (\[2-57\]) vanishes. Then the axion field can be regarded as a massless scalar field. Such a scalar field, however, is trivial because of the black hole no hair theorem[@Bek]. Hence we can drop the axion field here. ${\!\!\mbox{ \boldmath $F$}}$ is the field strength of the gauge field expressed by its potential ${\!\!\mbox{ \boldmath $A$}}$. If it is a $U(1)$ gauge field (the electromagnetic field), then we consider only a electrically or magnetically charged black hole, i.e., $${\!\!\mbox{ \boldmath $A$}}= a dt ~~~{\rm and} ~~~~{\!\!\mbox{ \boldmath $F$}}= {da \over dr} ~ dr\wedge dt .$$ For $SU(2)$ gauge field, we assume the Witten ansatz[@Wit], which is the most generic form of a spherically symmetric SU(2) YM potential. The YM potential becomes $${\!\!\mbox{ \boldmath $A$}}=a{\!\!\mbox{ \boldmath $\tau$}}_r dt+b{\!\!\mbox{ \boldmath $\tau$}}_r dr +\left[ d{\!\!\mbox{ \boldmath $\tau$}}_{\theta} -(1+w) {\!\!\mbox{ \boldmath $\tau$}}_{\phi} \right] d\theta +\left[ (1+w){\!\!\mbox{ \boldmath $\tau$}}_{\theta} +d{\!\!\mbox{ \boldmath $\tau$}}_{\phi} \right] \sin \theta d \phi , \label{3-30}$$ where $a$, $b$, $d$ and $w$ are functions of time and the radial coordinates, $t$ and $r$. We have adopted the polar coordinate description $({\!\!\mbox{ \boldmath $\tau$}}_r, {\!\!\mbox{ \boldmath $\tau$}}_{\theta}, {\!\!\mbox{ \boldmath $\tau$}}_{\phi})$, i.e. $$\begin{aligned} {\!\!\mbox{ \boldmath $\tau$}}_r & = & \frac1{2i} [{\!\!\mbox{ \boldmath $\sigma$}}_1 \sin \theta \cos \phi + {\!\!\mbox{ \boldmath $\sigma$}}_2 \sin \theta \sin \phi + {\!\!\mbox{ \boldmath $\sigma$}}_3 \cos \theta] , \label{3-40} \\ {\!\!\mbox{ \boldmath $\tau$}}_{\theta} & = & \frac1{2i} [{\!\!\mbox{ \boldmath $\sigma$}}_1 \cos \theta \cos \phi + {\!\!\mbox{ \boldmath $\sigma$}}_2 \cos \theta \sin \phi - {\!\!\mbox{ \boldmath $\sigma$}}_3 \sin \theta] , \label{3-50} \\ {\!\!\mbox{ \boldmath $\tau$}}_{\phi} & = & \frac1{2i} [-{\!\!\mbox{ \boldmath $\sigma$}}_1 \sin \phi + {\!\!\mbox{ \boldmath $\sigma$}}_2 \cos \phi] , \label{3-60} \end{aligned}$$ whose commutation relations are $$\left[ {\!\!\mbox{ \boldmath $\tau$}}_a, {\!\!\mbox{ \boldmath $\tau$}}_b \right] ={\!\!\mbox{ \boldmath $\tau$}}_c \hspace{10mm} a, b, c, = r, \theta , ~{\rm or }~~\phi. \label{3-70}$$ Here ${\!\!\mbox{ \boldmath $\sigma$}}_i \hspace{3mm} (i=1,2,3)$ denote the Pauli spin matrices. We can eliminate $b$ using a residual gauge freedom. In the static case, the part of the YM equations is integrated as $d= Cw$ where $C$ is an integration constant. We can set $C=0$ i.e. $d \equiv 0$ without loss of generality. The remaining functions $a$ and $w$ depend only on the radial coordinate $r$. 
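The cyclic commutation relations (\[3-70\]) can be verified directly from the definitions (\[3-40\])$\sim$(\[3-60\]). The short sympy sketch below is our own consistency check, not part of the original analysis; all names in it are ours.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
I = sp.I

# Pauli matrices
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -I], [I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

# Polar-coordinate generators tau_r, tau_theta, tau_phi of Eqs. (3-40)-(3-60)
tau_r = (s1*sp.sin(theta)*sp.cos(phi) + s2*sp.sin(theta)*sp.sin(phi) + s3*sp.cos(theta)) / (2*I)
tau_t = (s1*sp.cos(theta)*sp.cos(phi) + s2*sp.cos(theta)*sp.sin(phi) - s3*sp.sin(theta)) / (2*I)
tau_p = (-s1*sp.sin(phi) + s2*sp.cos(phi)) / (2*I)

def comm(a, b):
    return a*b - b*a

# Check the cyclic relations [tau_a, tau_b] = tau_c of Eq. (3-70)
for lhs, rhs in [(comm(tau_r, tau_t), tau_p),
                 (comm(tau_t, tau_p), tau_r),
                 (comm(tau_p, tau_r), tau_t)]:
    assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 2)
print("commutation relations (3-70) verified")
```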
As a result, we obtain a simplified spherically symmetric YM potential as $${\!\!\mbox{ \boldmath $A$}}= a(r) {\!\!\mbox{ \boldmath $\tau$}}_r dt -\left[1+w(r)\right]{\!\!\mbox{ \boldmath $\tau$}}_{\phi} d\theta +\left[1+w(r)\right]{\!\!\mbox{ \boldmath $\tau$}}_{\theta} \sin \theta d\phi. \label{3-80}$$ Substituting this into ${\!\!\mbox{ \boldmath $F$}}=d{\!\!\mbox{ \boldmath $A$}}+{\!\!\mbox{ \boldmath $A$}}\wedge {\!\!\mbox{ \boldmath $A$}}$, we find the field strength: $$\begin{aligned} {\!\!\mbox{ \boldmath $F$}}&=& {d a \over d r} {\!\!\mbox{ \boldmath $\tau$}}_rdr\wedge dt +{d a \over d r} {\!\!\mbox{ \boldmath $\tau$}}_{\phi} dr\wedge d\theta +{d a \over d r} {\!\!\mbox{ \boldmath $\tau$}}_{\theta} dr\wedge \sin \theta d\phi \nonumber\\ & & \mbox{ } -\left(1-w^2\right) {\!\!\mbox{ \boldmath $\tau$}}_rd\theta \wedge \sin \theta d\phi +aw{\!\!\mbox{ \boldmath $\tau$}}_{\theta} dt \wedge d\theta +aw{\!\!\mbox{ \boldmath $\tau$}}_{\phi}dt\wedge \sin \theta d\phi. \label{3-90}\end{aligned}$$ Comparing (\[3-90\]) with the field strength of the U(1) gauge field, we find that $a$ and $w$ play the roles of an electric and a magnetic potentials, respectively. This expression can be used for U(1) gauge field if we formally set ${\!\!\mbox{ \boldmath $\tau$}}_r =1$ and ${\!\!\mbox{ \boldmath $\tau$}}_{\theta}={\!\!\mbox{ \boldmath $\tau$}}_{\phi}=0$. The GB term, $\hat{R}^2$, is defined by $$\hat{R}^2 = R_{\mu \nu \rho \sigma} R^{\mu \nu \rho \sigma} - 4R_{\mu \nu} R^{\mu \nu} + R^2. \label{2-20}$$ This combination is introduced to cancel anomalies and has the advantage that the higher derivatives of metric functions do not appear in the field equations. Setting $\alpha'/\kappa^2 = 1/\pi g^2$, $g$ is regarded as a gauge coupling constant. A numerical constant $\beta$ is introduced in Ref. [@DG] to find a non-trivial particle-like solution, but we fix it to be unity because we are interested in the effective string theory. Because of our ansatz, the metric is of the Schwarzschild type, $$ds^2=-\left(1-\frac{2Gm}r\right){\rm e}^{-2\delta}dt^2 +\left(1-\frac{2Gm}r\right)^{-1}dr^2 +r^2 (d\theta^2 +\sin^2\theta d\phi^2). \label{2-30}$$ The mass function $m=m(r)$ and the lapse function $\delta=\delta (r)$ depend on only the radial coordinate $r$. Varying the action (\[2-10\]) and substituting ansätze (\[3-80\]) and (\[2-30\]), we find the field equations; $$\begin{aligned} & & \delta ' = h^{-1} \left[ -\frac{1}{2} \tilde{r} \phi^{\prime 2} -\frac{{\rm e}^{-\gamma \phi}}{4\tilde{r}} \left\{ {\rm e}^{2\delta} B^{-2} \tilde{a}^2 w^2+w^{\prime 2} +\frac{2\tilde{m}}{\tilde{r}} (\gamma^2 \phi^{\prime 2} -\gamma \phi^{\prime \prime} ) \right\} \right], \label{3-100} \\ & & \tilde{m}'= h^{-1} \left[ \frac{1}{4} B \tilde{r}^2 \phi^{\prime 2} +\frac{{\rm e}^{-\gamma \phi}}{16} \left\{ {\rm e}^{2\delta} \left(\tilde{r}^2 \tilde{a}^{\prime 2} +2B^{-1} \tilde{a}^2 w^2 \right) +2Bw^{\prime 2} +\frac{(1-w^2)^2}{\tilde{r}^2} \right. \right. \nonumber \\ & & \;\;\;\;\;\;\;\; \left. \left. 
+4B \frac{2\tilde{m}}{\tilde{r}} (\gamma^2 \phi^{\prime 2} -\gamma \phi^{\prime \prime} ) +8\gamma \phi' \frac{\tilde{m}}{\tilde{r}^2} \left(B-\frac{\tilde{m}}{\tilde{r}} \right) \right\}\right], \label{3-110} \\ & & \left[ {\rm e}^{-\delta} \tilde{r}^2 B \phi' \right]' -\frac{{\rm e}^{-\gamma \phi}}{8} \gamma {\rm e}^{-\delta} \tilde{r}^2 \left[{\rm e}^{2\delta} \left\{ \tilde{a}^{\prime 2} +\frac{2 \tilde{a}^2 w^2}{\tilde{r}^2} B^{-1} \right\} -\left\{ \frac{2 w^{\prime 2} }{\tilde{r}^2} B +\frac{(1-w^2)^2}{\tilde{r}^4} \right\} \right. \nonumber \\ & & \;\;\;\;\;\;\;\; \left. +\frac{4}{\tilde{r}^2} \left\{2f^2 +B (\delta' f +f') \right\} +\frac{4}{\tilde{r}^2} (\delta' f -f') \right] = 0, \label{3-120} \\ & & \left[ {\rm e}^{\delta} \tilde{r}^2{\rm e}^{-\gamma \phi}\tilde {a}' \right]' -2{\rm e}^{\delta} {\rm e}^{-\gamma \phi} B^{-1} \tilde{a} w^2 = 0, \label{3-130} \\ & & \left[ {\rm e}^{-\delta} B {\rm e}^{-\gamma \phi} w' \right]' +{\rm e}^{\delta} {\rm e}^{-\gamma \phi} B^{-1} \tilde{a}^{\prime 2} w +{\rm e}^{-\delta} {\rm e}^{-\gamma \phi} \frac{w(1-w^2)}{\tilde{r}^{2}} = 0. \label{3-140} \end{aligned}$$ Here we have used the dimensionless variables; $\tilde{r}=r/\sqrt{\alpha'}, \;\; \tilde{m}=Gm/\sqrt{\alpha'}$ and $\tilde{a}=a \sqrt{\alpha'}$. A prime in the field equations denotes the derivative with respect to $\tilde{r}$, and $$\begin{aligned} B & =& 1-\frac{2\tilde{m}}{\tilde{r}},\\ h & = & 1+\frac{{\rm e}^{-\gamma \phi}}{2\tilde{r}} \gamma \phi' \left(B-\frac{\tilde{m}}{\tilde{r}} \right), \\ f & = & h^{-1} \left[ \frac{\tilde{m}}{\tilde{r}^2} + \frac{1}{4} B \tilde{r} \phi^{\prime 2} -\frac{{\rm e}^{-\gamma \phi}}{16\tilde{r}} \left\{{\rm e}^{2\delta} \left(\tilde{r}^2 \tilde{a}^{\prime 2} -2B^{-1} \tilde{a}^2 w^2 \right) -2Bw^{\prime 2} +\frac{(1-w^2)^2}{\tilde{r}^2} \right\} \right] . \label{3-150} \end{aligned}$$ Note that those equations can be applied for a $U(1)$ gauge field by setting $$\begin{aligned} w & \equiv & \pm 1, \\ \tilde{a}w & \equiv & 0 , \label{3-15}\end{aligned}$$ with $\tilde{a}$ being a non-trivial potential. The latter condition (\[3-15\]) corresponds to the vanishing of the self-interaction due to the non-Abelian term. As for boundary conditions for the metric functions on the event horizon and at spatial infinity, we impose following three ansätze: \(i) Asymptotic flatness at spatial infinity[@chuu1], i.e., as $r \to \infty$, $$\begin{aligned} m(r) & \to & M= {\rm finite}, \label{2-130} \\ \delta(r) & \to & 0 \label{2-140}.\end{aligned}$$ \(ii) The existence of a regular horizon $r_H$, i.e., $$\begin{aligned} 2Gm_H & = & r_H, \label{2-150} \\ \delta_H & < & \infty. \label{2-160}\end{aligned}$$ \(iii) The nonexistence of singularities outside the event horizon, i.e., for $r>r_H$, $$2Gm(r) < r. \label{2-170}$$ The subscript $H$ is used for the values at the event horizon $r=r_H$. As for the field functions we have $$\begin{aligned} \phi & \to & 0, \label{2-111} \\ a & \to & 0, \\ w & \to & \left\{ \begin{array}{rl} \pm 1, & (\mbox{globally magnetically neutral solution}). \\ 0, & (\mbox{globally magnetically charged solution}). \end{array} \right. \label{3-105}\end{aligned}$$ as $r \to \infty$ and impose their finiteness on the horizon. These conditions guarantee that the total energy of the present system is finite[@chuu2]. The boundary conditions of the field functions on the event horizon depend on the gauge group. Hence we will discuss them individually in the next section. 
New Dilatonic Black Holes with Gauss-Bonnet Term {#sec:Dilatonic_Black_Hole} ================================================ In this section we present the solutions of the field equations (\[3-100\])$\sim$(\[3-140\]). We classify them into four types by their gauge charge, i.e. neutral, electrically charged, magnetically charged and "colored" black holes. All solutions require numerical analysis. Neutral Black Hole ------------------ First we consider the case without any gauge field, which is the simplest. The solution is a Schwarzschild-type black hole modified by the GB term coupled to a dilaton field. The gauge field should then be set to $$\begin{aligned} \tilde{a} & \equiv & 0, \\ w & \equiv & \pm 1,\end{aligned}$$ which satisfies the field equations (\[3-130\]) and (\[3-140\]). From Eq. (\[3-120\]), we find the following relation for the dilaton field on the event horizon $$\phi^{\prime 2}_H - \frac{\phi'_H}{A\gamma} +3 =0 , \label{2-240}$$ where $A=e^{-\gamma \phi_H}/4\tilde{r}_H^2$. The quadratic equation (\[2-240\]) has two roots, $$\phi'_{\pm} = \frac{1\pm \sqrt{1-12A^2\gamma^2 }}{2A\gamma}. \label{2-250}$$ Hence, for each value of $\phi_H$ at a fixed event horizon $\tilde{r}_H$, we have two possible boundary values $\phi'_{\pm}$. We integrate the field equations (\[3-100\])$\sim$(\[3-120\]) from the horizon $r=r_H$ with the boundary conditions (\[2-150\])$\sim$(\[2-170\]) and (\[2-250\]). Since the equation of the dilaton field (\[3-120\]) becomes singular on the event horizon, we expand the equations and variables in power series of $\tilde{r}-\tilde{r}_H$ to guarantee regularity at the horizon, and use their analytic solutions for the first step of the integration. We show the behavior of the field functions of neutral black holes with three different radii of the event horizon in Fig. 1. We find black hole solutions with a regular horizon only when we choose $\phi'_H = \phi'_+$. For smaller black holes, the dilaton field varies more rapidly than for the larger ones. This means that stringy effects become more important for a smaller black hole, as we expected. We see that the mass function decreases first near the horizon and then increases afterward, approaching a finite value. There is a region where the effective energy density, which is defined by $(dm/dr)/4\pi r^2$, becomes negative. From Eq. (\[3-110\]), $\tilde{m}'$ is evaluated on the event horizon as $$\tilde{m}'_H= -\frac{A\gamma\phi'_H}{2(1-A\gamma\phi'_H)} = -\frac{1}{6} \phi^{\prime 2}_H , \label{2-260}$$ which is negative definite. Hence the function $m$ always decreases in the vicinity of the event horizon. The fact that the energy density becomes negative is one of the essential points for the existence of these neutral black holes[@KMRTW]. Regarding the GB term as a source term of the Einstein equations and applying an analysis similar to the one used to prove the no-hair theorem for a scalar field, we can show that if the time-time component of the energy-momentum tensor (the effective energy density) is positive everywhere, no non-trivial solution can exist. However, in our situation this is not the case, so we find a new solution even when the gauge field is absent. We show the $M$-$r_H$ relations in Fig. 2. Note that there is an end point for each branch, where $\phi'_+$ and $\phi'_-$ coincide. Using this fact, we can prove that $\phi^{\prime \prime}_H$ and $\delta^{\prime}_H$ diverge there.
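The horizon boundary values (\[2-250\]) and the condition for their existence, $1-12A^2\gamma^2\geq 0$, are easy to evaluate numerically. The following minimal sketch is ours (the function name, the example numbers and the use of numpy are assumptions for illustration, not part of the original code):

```python
import numpy as np

def horizon_dilaton_slopes(phi_H, r_H, gamma=np.sqrt(2.0)):
    """Roots phi'_+/- of Eq. (2-240) on the horizon for the neutral case.

    phi_H : dilaton value on the horizon
    r_H   : horizon radius in units of sqrt(alpha')
    Returns (phi_plus, phi_minus), or None when 1 - 12 A^2 gamma^2 < 0,
    i.e. when no regular horizon boundary value exists.
    """
    A = np.exp(-gamma * phi_H) / (4.0 * r_H**2)
    disc = 1.0 - 12.0 * (A * gamma)**2
    if disc < 0.0:
        return None
    root = np.sqrt(disc)
    return (1.0 + root) / (2.0 * A * gamma), (1.0 - root) / (2.0 * A * gamma)

# A large horizon gives two well-separated real roots; shrinking r_H drives
# them together until the discriminant vanishes, which is the end point of
# the branch where phi'_+ and phi'_- coincide.
print(horizon_dilaton_slopes(phi_H=0.0, r_H=3.0))
```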
We have also shown that $R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}$ diverges on the event horizon, which means a naked singularity appears at the end point of the branch[@KMRTW]. We shall call this point the singular point S. Near the singular point, we also find a critical point C, which gives a lower bound for the black hole mass, i.e., below which no solution exists (see Fig. 3(b)). Since this solution is a modification of the Schwarzschild black hole, we find no other black hole solutions without a gauge field. No naked singularity appears for the solution at the critical point. We have two black hole solutions in the mass range of $M_C<M<M_S$, where $M_C=6.02771M_{PL}/g$ and $M_S=6.02813M_{PL}/g$ are the masses of the critical and singular solutions, respectively. As we will see later, the stability of these black holes changes at the critical point C, which indicates that the singular end-point solution is unstable. When we discuss the evolution of the black holes, it will also be important that there exists a regular critical solution whose mass is smaller than that of the singular solution. Electrically Charged Black Hole ------------------------------- Electrically charged black hole solutions with the U(1) gauge field can be obtained by setting $ w = \pm 1, \tilde{a}w = 0$ in Eqs. (\[3-100\])$\sim$(\[3-140\]). Then the equation for the electric potential $\tilde{a}$ is integrated once to give $$\tilde{a}' ={\rm e}^{-\delta} {\rm e}^{\gamma \phi} \frac{Q_e}{\tilde{r}^2}, \label{2-120}$$ where $Q_e$ is a constant of integration and denotes a normalized electric charge. The physical charge is given as $gQ_e$. From Eq. (\[3-120\]), we find the following relation for the dilaton field on the event horizon $$\begin{aligned} & & 2A^4\gamma^4 e^{2\gamma\phi_H}Q_e^2 \phi_H^ {\prime 3} - \left[A^2 \gamma e^{2\gamma\phi_H} Q_e^2 \left(2A^2\gamma^2 +5A\gamma^2+1\right)-2A\gamma \right] \phi_H^{\prime 2} \nonumber \\ & & \;\;\;\; + \left[A^4\gamma^2 e^ {4\gamma\phi_H}Q_e^4 + A e^{2\gamma\phi_H}Q_e^2 \left(2A^2 \gamma^2+4A\gamma^2 +1 \right) -2 \right] \phi'_H \nonumber \\ & & \;\;\;\; + \left[\frac{A^3 \gamma}{2} e^ {4\gamma\phi_H} Q_e^4 -A\gamma e^{2\gamma\phi_H} Q_e^2 \left(6A+1\right) +6 A\gamma \right] =0 . \label{2-180}\end{aligned}$$ There are three roots of $\phi'_H$, $\phi'_1<\phi'_2< \phi'_3$. We can, however, obtain a regular solution only for $\phi'_H=\phi'_2$. This is understood from the fact that in the limit of $Q_e \rightarrow 0$, we recover the previous condition (\[2-240\]) and $\phi'_3=\infty$. We plot the field functions of the solutions with $Q_e=1.0$ in Fig. 3. Their behavior is qualitatively almost the same as in the neutral case. We show the $M$-$r_H$ relations of black holes with $Q_e=0.4$ and $1.0$ in Fig. 2. As in the neutral black hole case, we also find the critical and singular points, C and S (the end point of the branch). Fig. 2(c) is a magnification around the critical point C. No singular behavior appears for the critical solution, and below the critical point no solution exists. At the singular point, a naked singularity appears. The existence of a critical mass is also known for a Reissner-Nordström black hole with a fixed charge. In that case the outer and inner horizons coincide and the black hole becomes extreme at the critical mass. Our solution curve with a constant charge in the $M$-$r_H$ diagram is also vertical at the critical point.
However, we will see in the next section that our critical point C has different thermodynamical properties from those of the extreme point of the Reissner-Nordström black hole. Magnetically Charged Black Hole ------------------------------- Next we turn to a magnetically charged solution. In the present subsection we extend the gauge field from the U(1) to the SU(2) group. This can be done by setting $$\begin{aligned} \tilde{a} & \equiv & 0, \\ w & \equiv & 0.\end{aligned}$$ Note that it is not the 't Hooft-Polyakov type, which is obtained through a spontaneous symmetry breaking by a Higgs field as in Refs. [@'tH; @LNW; @Bre; @Aic; @Tach], but the Wu-Yang type solution[@WY]. It is a kind of dual of the U(1) electrically charged solution. The value of the magnetic charge is quantized as $Q_m=1.0$. On the event horizon, Eq. (\[3-120\]) becomes $$\begin{aligned} && \left[A^3 \gamma^3 \left(2A+1\right) +A \gamma \left(A-2\right) \right]\phi_H^{\prime 2} - \left[ A^2\gamma^2 \left(A^2+2A+ 2\right) + \left(A-2\right)\right] \phi_H' \nonumber \\ && ~~~~~~~~~~~~~~~~~~~~~~ - \frac{A\gamma}{2} (A^2-12A+10)=0. \label{3-115} \end{aligned}$$ We again have two roots $\phi'_{\pm}$ but find a regular solution only for $\phi'_H=\phi'_+$. The behavior of the field functions is similar to that of the neutral and electrically charged solutions. We plot the $M$-$r_H$ relation in Fig. 4. We discuss its properties in the next subsection together with the "colored" case. Dilatonic Colored Black Hole with Gauss-Bonnet Term --------------------------------------------------- Here we adopt the 't Hooft-Polyakov ansatz $\tilde{a}\equiv 0$, i.e., a purely magnetic YM field strength. In the SU(2) Einstein-Yang-Mills system, the no-hair theorem for a spherical monopole or dyons was proved[@NohairEYM1; @NohairEYM2]. It states that there exists no static, spherically symmetric regular non-Abelian black hole solution which has a non-zero global Yang-Mills magnetic charge, with or without an electric charge. Although it is not clear whether this kind of no-hair theorem holds in the present case, we set $a=0$ in this paper. We obtain the following relation on the horizon from Eq. (\[3-120\]), $$C_1 \phi_H^{\prime 2} + C_2 \phi_H' +C_3 =0 , \label{3-116}$$ where $$\begin{aligned} C_1 & = & A^2 \gamma (1-w_H^2)^2 \left(2A^2\gamma^2 +A\gamma^2+1\right)-2A\gamma, \label{3-125} \\ C_2 & = & -A^4 \gamma^2 (1-w_H^2)^4 - A (1-w_H^2)^2 \left(2A^2\gamma^2+2A\gamma^2 +1\right) + 1, \label{3-135} \\ C_3 & = & -\frac{A^3 \gamma}{2} (1-w_H^2)^4 + A\gamma (1-w_H^2)^2 \left(6A+1\right) -6 A\gamma . \label{3-145} \end{aligned}$$ This again has two roots $\phi'_{\pm}$, and we find a non-trivial solution only with the value $\phi'_{+}$, the same as before. The YM equation (\[3-140\]) on the event horizon becomes $$w'_H = -\frac{w_H(1-w_H^2)}{2f(r_H)}. \label{3-160}$$ Then $w'_H$ is also determined by $\phi_H$ and $w_H$. Now we have only one shooting parameter, $w_H$, which will be fixed by an iterative integration with the boundary condition (\[3-105\]). Here we assume $w_H >0$ without loss of generality, since the field equations are symmetric under the sign change of $w$. With the above conditions, we solve Eqs. (\[3-100\]) $\sim$ (\[3-120\]) and (\[3-140\]) numerically and find discrete families of regular black hole solutions which are characterized by the node number $n$ of the YM potential, just as for the colored black hole[@Vol]. We show some of the solutions with $n=1$ in Fig. 5.
We shall call those solutions "colored" black holes as well. Since the YM field damps faster than $\sim 1/r^2$, those black holes have no global color charge related to the gauge field, just like the colored black hole. Here we use double quotation marks to distinguish our new solutions from the original colored black hole. The dilaton field and metric functions are similar to those of the other solutions discussed in the previous subsections. There is a region where the effective energy density becomes negative. The characteristic feature is that the YM potential $w$ is almost scale invariant. This can be understood as follows. We shall normalize the variables by $\tilde{r}_H$ to see the scale invariance, i.e., $\hat{r}=\tilde{r}/\tilde{r}_H$, $\hat{m}=\tilde{m}/\tilde{r}_H$, $\hat{a}=\tilde{r}_H \tilde{a}$. In the limit of $\tilde{r}_H \to \infty$ the field equations become $$\begin{aligned} & & \delta^{\hat{\prime}} = -\frac{1}{2} \hat{r} \phi^{{\hat{\prime}} 2} +O \left( \frac1{\tilde{r}_H^2} \right), \label{3-170} \\ & & \hat{m}^{\hat{\prime}} = \frac{1}{4} \left(1-\frac{2\hat{m}}{\hat{r}} \right) \hat{r}^2 \phi^ {{\hat{\prime}} 2} +O \left(\frac1{\tilde{r}_H^2}\right), \label{3-180} \\ & & \left[ e^{-\delta} \hat{r}^2 \left(1-\frac{2\hat{m}}{\hat{r}}\right) \phi^{\hat{\prime}} \right]^{\hat{\prime}} +O \left(\frac1{\tilde{r}_H^2}\right) =0, \label{3-190} \\ & & \left[ e^{-\delta} e^{-\gamma \phi} \left(1-\frac{2\hat{m}}{\hat{r}} \right) w^{\hat{\prime}} \right]^{\hat{\prime}} +e^{-\delta} e^{-\gamma \phi} \frac{w(1-w^2)}{\hat{r}^2} +O \left(\frac1{\tilde{r}_H^2}\right) =0. \label{3-200} \end{aligned}$$ A prime with a hat denotes the derivative with respect to $\hat{r}$. Eqs. (\[3-170\]) $\sim$ (\[3-190\]) are decoupled from the YM field and are the same as those for Einstein gravity with a massless scalar field. Because of the no-hair theorem, we find just a Schwarzschild solution with $\phi=0$. The YM field equation (\[3-200\]) is then the same as that in a fixed Schwarzschild background spacetime, i.e., $$\hat{r} (\hat{r} -2 \hat{M}) w^{\hat{\prime} \hat{\prime}} +2\hat{M} w^{\hat{\prime}} +w (1-w^2) = 0. \label{3-210}$$ The non-Abelian YM field can have a nontrivial configuration although it makes no contribution to the black hole structure. Then the solution $w = w^{\ast} (\hat{r}) = w^{\ast} \left(\tilde{r}/\tilde{r}_H \right) = w^{\ast} \left(r/r_H \right)$ is scale invariant. From our analysis this scale invariance is still found approximately even for small black holes. The configuration of the YM potential appears to be described by almost the same function of $r/r_H$. The metric function $\delta$ for a small black hole (e.g. $\tilde{r}_H= 1.4007$) varies so rapidly that the above argument may seem to be invalid. This may be understood as follows. $\delta$ is decoupled from the other field functions even in the original basic equations. As a result the YM potential $w$ is not affected by $\delta$ (see Eq. (\[3-210\])). Furthermore, if $m(r)$ and $e^{-\gamma \phi}$ are almost constant, which is confirmed by our numerical solutions, then the equation for $w(r)$ turns out to be Eq. (\[3-210\]), resulting in the scale invariance of $w(r)$. A similar behavior is found for large black holes in the EYMD system[@Tor]. We show the $M$-$r_H$ relations of the neutral, the magnetically charged ($Q_m =1.0$) and the "colored" black holes with node number $n=1$ in Fig. 6. There exists a lower bound for the mass in each branch.
Since the solution does not exist for $r_H \to 0$, our result is consistent with [@DG], which states that there are no particle-like solutions in the present system. Fig. 6(b) is a magnification around the critical point for the magnetically charged and "colored" black holes. Although the neutral and the electrically charged branches have a turning point, which is a critical point, the magnetically charged and "colored" branches do not. The lowest mass corresponds to the end point, where $\phi'_{\pm}$ coincide. Although we have not proved it analytically, we expect that a naked singularity appears at this end point, as for the neutral solutions, because $\delta '$ for the black hole near the critical mass appears to diverge on the horizon from Fig. 4(d) and in our numerical solutions. Hence the critical point coincides with the singular point S. Thermodynamical Properties and Stability {#sec:Thermodynamics} ====================================== In this section we investigate the thermodynamical properties of the dilatonic black holes with the GB term and then analyze their stability. We also discuss their fates via the Hawking evaporation process. Temperature, Entropy and Stability ---------------------------------- The GM-GHS solutions and the non-Abelian black holes have an interesting thermodynamical property. That is, a discontinuity of the heat capacity of the GM-GHS solution appears, depending on the coupling constant of the dilaton field $\gamma$. Its critical value is just the value we use in this paper[@GM]. Similar properties were obtained in the EYMD system[@Tor]. Our new black holes may possess similar interesting properties, which is the reason why we investigate the thermodynamical properties here. Although black hole thermodynamics in non-Einstein theories is not well studied, we can define the temperature and the entropy of the black holes in our model and show that they obey the first law of thermodynamics[@Wald]. The Hawking temperature is given as $$T = \frac{1}{4\pi r_H} e^{-\delta_H} \left[1-2 \tilde{m}'_H \right], \label{4-10}$$ for the metric (\[2-30\]). The inverse temperature $\beta \equiv 1/T$ versus the gravitational mass is shown in Fig. 7. We show the branches of the neutral, the electrically charged ($Q_e=1.0$), the magnetically charged ($Q_m=1.0$) and the "colored" black holes with $n=1$. For comparison we also plot the GM-GHS solution, which has the same temperature as the Schwarzschild black hole in the $\gamma=\sqrt{2}$ case. Comparing the branch of the neutral black holes with that of the Schwarzschild black holes, we can infer that the GB term has the tendency to raise the temperature. This can be confirmed by examining the other branches. For example, the dilatonic colored black holes in the EYMD system have a lower temperature than that of the Schwarzschild black holes in the mass range of Fig. 7 [@Tor]. However, the new solutions with "color charge" lie below the branch of the Schwarzschild black holes. This behavior comes from the contribution of the $\tilde{m}^{\prime}$ term in Eq. (\[4-10\]). $\tilde{m}^{\prime}$ has the minimum value $\tilde{m}^{\prime}=-1/2$ at the singular point in the neutral case. As for the electrically charged black hole, we can see the effect of the GB term from a different point of view. The thermodynamical properties of the GM-GHS solutions change drastically as the coupling constant $\gamma$ shifts across $\gamma=\sqrt{2}$.
For $\gamma<\sqrt{2}$ the temperature is always lower than that of the Schwarzschild black hole and it vanishes in the extreme limit, while for $\gamma>\sqrt{2}$ the temperature is always higher than that of the Schwarzschild black hole and it diverges in the extreme limit. For $\gamma=\sqrt{2}$, which is also the applicable value in our case, the temperature coincides with the Schwarzschild case and it is finite even in the extreme limit. However, the behavior of the temperature of the electrically charged solutions is similar to that of the GM-GHS solution with $\gamma> \sqrt{2}$. The critical value of $\gamma$ in the system including the GB term must be different from $\gamma=\sqrt{2}$, if such a value exists at all. The effect of the GB term also appears when we discuss the heat capacity. It is always negative for all the new black holes found here, in spite of the existence of the gauge field. Hence the GB term produces a large effect on the thermodynamical properties. We also calculate the entropy of our new solutions. Although the entropy is given by a quarter of the area of the event horizon in the Einstein theory, this relation cannot be applied in non-Einstein theories. Here we adopt the entropy proposed by Wald[@Wald], which originates from the Noether charge of the system. This entropy has several desirable properties. For example, it can be defined in a covariant way in all diffeomorphism-invariant theories, which include the present model, and it obeys the first law of black hole thermodynamics for an arbitrary perturbation of a stationary black hole. The explicit form of the entropy is given by $$S=-2\pi \int_{\Sigma} E_R^{\; \mu\nu\rho\sigma} \epsilon_{\mu\nu}\epsilon_{\rho\sigma}, \label{en1}$$ where $\Sigma$ is the event horizon 2-surface, $\epsilon_{\mu\nu}$ denotes the binormal to $\Sigma$ and the integral is taken with respect to the induced volume element on $\Sigma$. $E_R^{\; \mu\nu\rho\sigma}$ is defined as $$E_R^{\; \mu\nu\rho\sigma}= \frac{\partial {\cal L}}{\partial R_{\mu\nu\rho\sigma}}, \label{en2}$$ where ${\cal L}$ is the Lagrangian density. Substituting the present model (\[2-10\]) into (\[en1\]) and (\[en2\]), we obtain the simple form $$S=\frac{A_H}{4} \left(1+\frac{\alpha'}{2r_H^2} e^{-\gamma \phi_H} \right), \label{en3}$$ where $A_H$ is the area of the event horizon. This expression is also true for the $\gamma=0$ case, in which we find only the trivial Schwarzschild solution. Although the GB term is a total divergence for $\gamma=0$ and does not give any contribution to the field equations, the black hole entropy defined here acquires the additional constant $\pi \alpha '/2$ via a surface term. Fig. 8(a) shows the plots of $S/S_{\rm Sch}$ for the new solutions. The entropy of the neutral black hole is always larger than that of the Schwarzschild solution with $\gamma=0$, although the area of the neutral black holes is smaller than that of Schwarzschild solutions with the same mass, as seen from Fig. 2. Since $\phi_H$ is negative for the neutral solutions in Fig. 1(a), the second term of Eq. (\[en3\]) makes the entropy much larger. Similarly, the charged black holes have larger entropy than that of the Reissner-Nordström solutions. We depict the $M$-$S$ diagram in Fig. 8(c), which shows the same property as in the Einstein theory, namely that the entropy becomes smaller when the system includes the gauge field. For the neutral and electrically charged cases, we find the critical point C and have two different black hole solutions with the same mass.
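As a quick arithmetic check of Eq. (\[en3\]) (ours, not part of the original derivation), setting $\gamma=0$ and using $A_H=4\pi r_H^2$ indeed reproduces the constant offset $\pi\alpha'/2$ mentioned above: $$S\Big|_{\gamma=0}=\frac{A_H}{4}\left(1+\frac{\alpha'}{2r_H^2}\right) =\frac{A_H}{4}+\pi r_H^2\,\frac{\alpha'}{2r_H^2} =\frac{A_H}{4}+\frac{\pi\alpha'}{2}.$$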
Since the entropy may be a good indicator for the stability[@Tor2; @Tor3; @Tach], we show the fine structure of the $M$-$S^{\ast}$ diagram near the critical point in Fig. 8(b) for the neutral black hole. We subtract from the entropy $S(M)$ a linear function which passes through an appropriate point A and the end point S, in order to show the structure near the critical point clearly. Hence only the relative values on the vertical axis are significant. We find a cusp structure at the critical point C. The cusp reminds us of catastrophe theory. Catastrophe theory is a mathematical tool established by Thom to explain a variety of changes of state in nature, in particular discontinuous changes of state which eventually occur in spite of gradual changes of the parameters of a system[@Thom]. It is widely applied in various research fields, and in particular we showed that this theory is applicable to the stability analysis of various types of non-Abelian black holes[@Tor3; @Tach]. As in the previous non-Abelian black hole case, we adopt the mass and the entropy of a black hole as the control parameter and the potential function, respectively. The existence of the cusp structure then shows that the stability changes at this point. Hence we can conclude that the [*upper*]{} branch, AC, is more stable than the [*lower*]{} branch, CS, since the entropy is the potential function. As a result, the singular point S becomes unstable. Although the stability of the upper branch AC is not confirmed from Fig. 8(c), because the catastrophe-theoretic method gives us only the relative stability, we expect that it is stable since no other solution branch exists. It can also be understood from the following argument. We know the stability of the black hole solutions in both the EMD and EYMD systems. As in Ref. [@DG], if we have a numerical constant $\beta$ in front of the GB term, we may take $\beta$ as another control parameter. If we follow a series of black hole solutions from $\beta = 0$ to $1$, we find that our present solutions on the AC branch must be stable. The two curves AC and CS do not intersect at a non-zero angle but become tangent to each other. Since the temperature is expressed by $T=dM/dS$ from the first law of thermodynamics, the temperature is continuous at this critical point. This is consistent with Fig. 7. Finally, it must be noted, however, that it has not been proved whether the entropy defined by Eq. (\[en1\]) satisfies the second law of black hole thermodynamics. In particular, in our case there is a region where the effective energy density becomes negative. Fate of Dilatonic Black Holes ----------------------------- We find that the temperature of the black hole is finite over the whole mass range. Hence we expect that our black holes will not stop evaporating via Hawking radiation. We then expect the following fates for our black holes: When the neutral or the electrically charged black hole reaches the critical mass C, the system must shift to another phase which cannot be described as a static spherically symmetric regular black hole. For the magnetically charged and the "colored" black hole, the evaporation proceeds until the spacetime reaches the singular point S, and a naked singularity will eventually appear on the hypersurface where the event horizon was located. However, we have to study the evaporation process more carefully. Recall that the temperature of the extreme black hole with $\gamma >\sqrt{2}$ in the EMD system is infinite.
Although the naive expectation is that this leads to an infinite emission rate, Holzhey and Wilczek showed that the potential, through which created particles travel away to infinity, grows infinitely high in the extreme limit, and hence it is expected that the emission rate could be suppressed to a finite value[@HHW]. However, our numerical analysis shows that this expectation is incorrect[@Koga]. In our case, if the potential barrier becomes infinitely large at the critical point C or at the singular point S, the emission rate might be suppressed to zero even though the temperature of the black hole remains finite. Then evaporation stops and the black hole cannot reach the critical or the singular point. Therefore we have to calculate the potential barrier for some field in the background of our new solutions. Here we examine a neutral massless scalar field $\Phi$. It obeys the Klein-Gordon equation; $$\Phi_{,\mu}^{\; ;\mu} =0. \label{4-1}$$ The scalar field is expanded in harmonics. We study one mode of $$\Phi = \frac{\chi(r)}{r} Y_{lm}(\theta, \phi) e^{-i\omega t}. \label{4-2}$$ Eq. (\[4-1\]) becomes separable, and then the radial equation can be written as $$\left[ \frac{d^2}{dr^{\ast 2}} + \omega^2 - V^2 (r) \right] \chi(r^ {\ast}) =0, \label{4-3}$$ where $r^{\ast}$ is the tortoise coordinate defined as $$\frac{d}{dr^{\ast}} \equiv \left(1-\frac{2m}{r} \right) e^{-\delta} \frac{d}{dr} \label{4-4}$$ and $V^2 (r)$ is the potential; $$V^2 (r) = \frac{e^{-2\delta}}{r^2} \left(1-\frac{2m}{r} \right) \left[ l(l+1) +\frac{2m}{r} -2m' -r\delta' \left(1-\frac{2m}{r} \right) \right]. \label{4-5}$$ We calculate the potential of the new black holes. We plot the ratio of the black hole temperature to the maximum of the potential for the neutral solution (Fig. 9). We also plot the Schwarzschild black hole case for comparison. All plots for the neutral solutions have larger values than those of the Schwarzschild black holes. This result means that the evaporation process of the neutral solution may be faster than in the Schwarzschild case. Of course, the width of the potential is another factor which determines the evaporation rate. However, it does not differ much from that of the Schwarzschild black hole (see Fig. 10). Hence we conclude that the black hole continues to lose its mass, even when the solution approaches the critical point or the singular point. We may then be faced with a naked singularity, or a transition to a new black hole state (in the neutral or the electrically charged case). Since we have no black hole solutions below the critical mass, the new state might be a time-dependent evaporating black hole or just a naked singularity, which was previously hidden behind the event horizon. This remains an open question. Concluding Remarks {#sec:Remarks} ================== We have studied dilatonic black holes with a GB term and found four new types of solutions, i.e., the neutral, the electrically charged, the magnetically charged and the "colored" black holes. The structures of these black holes and the field configurations are almost the same. This may be because the GB term becomes dominant in the basic equations.
As a result, the following previously-unknown properties are found with the GB term:\ (1) The effective energy density becomes negative near the event horizon, which may be responsible for the existence of a new type of black hole.\ (2) The neutral or the electrically charged black holes have the critical point C, below which no solution exists, and the singular point S, where a naked singularity appears.\ (3) For the magnetically charged or the "colored" black holes there is no critical point but only the singular point S, where the black hole has a finite minimum mass.\ (4) The entropy is calculated. It takes a minimum value at the critical point C for the neutral and electrically charged black holes. We also find a cusp structure in the $M$-$S$ diagram, which means that the stability changes at the critical point C. Since we do not find a cusp structure for the magnetically charged and "colored" black holes, their stability does not change.\ (5) The black hole temperature is always finite, even at the critical or the singular point. The heat capacity is always negative, as for the Schwarzschild black hole, even in the electrically or magnetically charged case. This is because the mass of the extreme black hole without the GB term, which is given by $GM_{extreme}/\sqrt{\alpha '}=g^2 Q_e$ for a fixed charge, is much smaller than the lower bound for the present black hole mass. Hence the charge per unit mass for our black holes is rather small.\ (6) From the finiteness of the temperature and the study of the effective potential for a massless scalar field, we may conclude that the evaporation process will not stop even at the critical point or the singular point. Hence it seems inevitable that the black hole shifts to another state, such as a dynamical evaporation phase, or that a naked singularity will appear. We should mention several further points. There may be a dependence on the metric frame. In this paper we have worked only in the Einstein frame. However, some results may change if we go to the string frame, since the conformal transformation includes the nontrivial dilaton field. Therefore we have studied the system again in the string frame. For example, we plot the $M$-$r_H$ relation of the neutral black holes in the string frame in Fig. 11. It is known that the gravitational mass and the inertial mass are not equivalent in the string frame. Here we use the former in Fig. 11. We find similar structures: the existence of a critical point and a singular point. However, the present critical point is not the same as that in the Einstein frame. Since the stability should not change with the choice of frame, we expect that the entropy will not take a minimum value at this critical point and that no cusp structure will appear there in the $M_{string}$-$S_{string}$ diagram. For consistency of the stability analysis, the entropy should instead be minimized at the critical point found in the Einstein frame, even when expressed in the string frame. The next point is that the solution curve becomes vertical at the critical point in the $M$-$r_H$ diagram. Hence the rate of change of the size of a black hole becomes infinite there, i.e., if a black hole emits just one particle in the evaporation process (which is in fact possible even at the critical point, because the temperature and the effective potential are finite), the size of the black hole will change drastically. This means that the quasi-static approximation is broken and the thermodynamical approach may also break down near the critical point.
Thus we have to study this process taking the back reaction into account. Finally, our results are obtained by using the model (\[2-10\]) which includes only the leading terms of the expansion parameter $\alpha'$. This expansion may not be valid for Planck scale black holes. The existence and behavior of our black holes near the critical or the singular point will be modified. Hence there is a possibility that the critical and singular point are removed by taking into account the higher or all orders in $\alpha'$. It might turn out that a particle-like solution found by Donets and Gal’tsov for $\beta<0.37$ exists in a string theory ($\beta=1$), which could be important in discussion about the final state of a black hole evaporation or the information loss problem. – Acknowledgments – We would like to thank J. Koga and T. Tachizawa for useful discussion and R. Easther for his critical reading of our paper. This work was supported partially by the Grant-in-Aid for Scientific Research Fund of the Ministry of Education, Science and Culture (No. 06302021 and No. 06640412), by the Grant-in-Aid for JSPS Fellows (No. 074767), and by the Waseda University Grant for Special Research Projects. [99]{} I. Antoniadis, J. Rizos and K. Tamvakis, Nucl. Phys. B [**415**]{}, 497 (1994). R. Easther and K. Maeda, WU-AP/58/96, hep-th/9605173. G. W. Gibbons and K. Maeda, Nucl. Phys. B [**298**]{}, 741 (1988). D. Garfinkle, G. T. Horowitz and A. Strominger, Phys. Rev. D [**43**]{}, 3140 (1991). J. T. Wheeler, Nucl. Phys. B [**273**]{}, 732 (1986). D. L. Wiltshire, Phys. Rev. D [**38**]{}, 2445 (1988); B. Whitt, Phys. Rev. D [**38**]{}, 3000 (1988); R. C. Myers and J. Z. Simon, Phys. Rev. D [**38**]{}, 2434 (1988); M. Banados, C. Teitelboim and J. Zanelli, Phys. Rev. D [**49**]{}, 975 (1994). C. G. Callan, R. C. Myers and M. J. Perry, Nucl. Phys. B [**311**]{}, 673 (1988/89). S. Mignemi and N. R. Stewart, Phys. Rev. D [**47**]{}, 5259 (1993). B. A. Campbell, N. Kaloper and K. A. Olive, Phys. Lett. B [**285**]{}, 199 (1992). B. A. Campbell, N. Kaloper, R. Madden and K. A. Olive, Nucl. Phys. B [**399**]{}, 137 (1993). S. Mignemi, Phys. Rev. D [**51**]{}, 934 (1995). P. Kanti, N. E. Mavromatos, J. Rizos, K. Tamvalis and E. Winstanley, hep-th/9511071; gr-qc/9606008. S. O. Alexeyev and M. V. Pomazanov, hep-th/9605106. M. S. Volkov and D. V. Galt’sov, Pis’ma Zh. Eksp. Teor. Fiz. [ **50**]{}, 312 (1989); Sov. J. Nucl. Phys. [**51**]{}, 747 (1990); P. Bizon, Phys. Rev. Lett. [**64**]{}, 2844, (1990); H. P. Künzle and A. K. Masoud-ul-Alam, J. Math. Phys. [**31**]{}, 928 (1990). R. Bartnik and J. McKinnon, Phys. Rev. Lett. [**61**]{}, 141. D. V. Galt’sov and M. S. Volkov, Phys. Lett. A [ **162**]{}, 14 (1992); N. Straumann and Z.-H. Zhou, Phys. Lett. B [**234**]{}, 33 (1990); Z.-H. Zhou and N. Straumann, Nucl. Phys. B [**234**]{}, 180 (1991); P. Bizon, Phys. Lett. B [**259**]{}, 53 (1991); P. Bizon and R. M. Wald, Phys. Lett. B [**259**]{}, 173 (1991). As a review paper see K. Maeda, Jounal of the Korean Phys. Soc. [**28**]{}, S468, (1995). K. Maeda, T. Tachizawa, T. Torii and T. Maki, Phys. Rev. Lett. [**72**]{}, 450, (1994); T. Torii, K. Maeda and T. Tachizawa, Phys. Rev. D [**51**]{}, 1510, (1995). T. Tachizawa, K. Maeda, and T. Torii, Phys. Rev. D [**51**]{}, 4054 (1995). T. H. R. Skyrme, Proc. Roy. Soc. London, [**260**]{}, 127 (1961); J. Math. Phys. [**12**]{}, 1735 (1971). S. Droz, M. Heusler and N. Straumann, Phys. Lett. B [**268**]{}, 371 (1991); P. Bizon and T. Chmaj, Phys. Lett. B [**297**]{}, 55 (1992). H. C. 
Luckock and I. Moss, Phys. Lett. B [**176**]{}, 314 (1986); H. C. Luckock, [*String Theory and Quantum Gravity*]{} ed. by H. J. de Vega and N. Sanchez, (World Scientific, 1987), p.455. T. Torii and K. Maeda, Phys. Rev. D [**48**]{}, 1643 (1993). B. R. Greene, S. D. Mathur and C. M. O’Neill, Phys. Rev. D [**47**]{}, 2242 (1993). G. ’tHooft, nucl. Phys. B [**79**]{}, 276 (1974); A. M. Polyakov, Pis’ma Zh. Eksp. Teor. Fiz. [**20**]{}, 430 (1974). K. -Y. Lee, V. P. Nair and E. Weinberg, Phys. Rev. Lett. [**68**]{}, 1100 (1992); M. E. Ortiz, Phys. Rev. D [**45**]{}, R2586 (1992). P. Breitenlohner, P. Fargács and D. Maison, Nucl. Phys. B [**385**]{}, 357 (1992). P. C. Aichelburg and P. Bizon, Phys. Rev. D [**48**]{}, 607 (1993). R. Dashen, B. Hasslacher and A. Neveu, Phys. Rev. D [**10**]{}, 4138 (1974); N. S. Manton, Phys. Rev. D [**28**]{}, 2019 (1983); F. R. Klinkhamer and N. S. Manton, Phys. Rev. D [**30**]{}, 2212 (1984). G. Lavrelashvili and D. Maison, Phys. Lett. B [**295**]{}, 67 (1992); Nucl. Phys. B [**410**]{}, 407 (1993); E. E. Donets and D. V. Gal’tsov, Phys. Lett. B [**302**]{}, 411 (1993). E. E. Donets and D. V. Gal’tsov, Phys. Lett. B [**352**]{}, 261 (1995). D. Gross and J. H. Sloan, Nucl. Phys. B [**291**]{}, 41 (1987). J. d. Bekenstein, Phys. Rev. D [**5**]{}, 1239;,2403 (1972); [*ibid*]{} [**51**]{}, R6608 (1995). E. Witten, Phys. Lett. [**38**]{}, 121 (1977). In our analysis, we set $$\begin{aligned} \delta _H & = & 0, \nonumber \\ \delta (r) & \to & \delta_{\infty} = {\rm const.} \;\;\; {\rm as} \;\;\; r \to \infty, \nonumber \end{aligned}$$ instead of the condition (\[2-140\]) in order to simplify the numerical calculations. This difference in $\delta$ is recovered by rescaling the time coordinate as $t \rightarrow t e^{\delta_{\infty}/2}$. Since the value of the dilaton field $\phi$ is not fixed up to a constant because the constant difference can be absorbed in the radial coordinate by rescaling, we can relax the condition (\[2-111\]). For example, we can assume that the dilaton field approaches to some unknown constant value $\phi_{\infty}$ as $r\to \infty$. Introducing $\bar{\phi}$ defined by $\bar{\phi} \equiv \phi -\phi_{\infty}$, and rescaling the variables as $\bar{r} = {\rm e}^{-\gamma \phi_{\infty}/2} \tilde{r}$, $\bar{m} = {\rm e}^{-\gamma \phi_{\infty}/2} \tilde{m}$ and $\bar{a} = {\rm e}^{\gamma \phi_{\infty}/2} \tilde{a}$, we find our boundary condition. However, in our numerical analysis, because we have fixed a charge of black hole in each branch, we have to search for the value of $\phi _H$ by use of an iteration method such that $\phi_\infty$ vanishes. Otherwise $a$ must be rescaled if $\phi_\infty \neq 0$, giving a different value of charge. Hence $\phi_H$ is not a free parameter but a shooting parameter in the electrically charged black hole case. T. T. Wu and C. N. Yang, [*Properties of matter under unusual conditions*]{}, (Interscience, New York, 1969), p.349. A. A. Ershov and D. V. Gal’tsov, Phys. Lett. A [**150**]{}, 159 (1993). P. Bizon and Q. T. Popp, Class. Quantum Grav. [**9**]{}, 193 (1992). R. M. Wald, Phys. Rev. D [**48**]{}, R3427 (1993); V. Iyer and R. M. Wald, Phys. Rev. D [**50**]{}, 846 (1994). R. Thom, [*Structural Stability and Morphogenesis*]{}, Benjamin (1975). C. F. E. Holzhey and F. Wilczek, Nucl. Phys. B. [ **380**]{}, 447 (1992). J. Koga and K. Maeda, Phys. Rev. D [**52**]{}, 7066 (1995). 
[^1]: electronic mail: [email protected]

[^2]: electronic mail: [email protected]

[^3]: electronic mail: [email protected]
--- author: - 'Zhong-Rui Bai' - 'Hao-Tong Zhang' - 'Hai-Long Yuan' - 'Guang-Wei Li' - 'Jian-Jun Chen' - 'Ya-Juan Lei' - 'Hui-Qin Yang' - 'Yi-Qiao Dong' - Gang Wang - 'Yong-Heng Zhao' date: 'Received  2009 month day; accepted  2009  month day' title: Sky Subtraction for LAMOST --- Introduction ============ Multi-object spectroscopy with optical fibers, a major advance for astronomical observation due to its ability to simultaneously observe many more objects than traditional long-slit spectroscopy, has been routinely carried out for over three decades. Unlike in slit or multi-slit systems, in multi-object fiber spectroscopy the sky spectrum cannot be sampled closely adjacent to the object either on the focal plane or on the CCD. This difference makes both the observation and the data reduction strategy of multi-fiber spectroscopy differ from those of slit spectroscopy. The standard procedure of sky subtraction for multi-object fiber spectroscopy involves using a subset of fibers (sky fibers) to measure the sky background simultaneously with the object fibers ([@Wyse92]; [@Watson98]). A master sky spectrum is constructed from the sky fibers and then subtracted from each object+sky spectrum. In practice, the sky subtraction accuracy is considered good in the range 1%-2% ([@Elston89]; [@Cuby94]). The limitation comes from various sources, including focal-ratio degradation of the fibers, internal scattered light, variation of the sky, telecentricity effects ([@Wynne93]), cross talk from adjacent fibers and poor determination of the fiber transmittance ([@Elston89]; [@Watson98]). A number of astronomers have explored techniques to improve the sky subtraction. Observational strategies such as beam-switching ([@barden93]; [@Puech14]; [@Rodrigues12]) and nod-and-shuffle (N+S, [@GBK01]; [@Sharp10]) can help to eliminate the throughput difference between fibers and obtain higher sky subtraction accuracy, but an extra cost in exposure time or CCD space is inevitable. For the standard observation mode, strong night sky emission lines are often used to calibrate the relative transmission of the fibers, reaching a sky subtraction accuracy better than 2% (e.g. [@Lissandrini94]). Principal component analysis (PCA) is another well-established technique that has been applied to sky subtraction for fiber spectroscopy in the past ten years, after its first demonstration by [@Kurtz00]. [@Wild05] presented a technique to remove the residual OH features based on the PCA of the residuals of the sky-subtracted sky spectra in SDSS DR2, and achieved a dramatic improvement in the quality of a large fraction of SDSS spectra, particularly for fainter objects such as high-redshift quasars. [@Sharp10] demonstrated that PCA is more efficient than the N+S technique for observations in the sky-limited regime with durations of 10-100 h. [@Soto16] introduced ZAP, an approach to sky subtraction based on PCA, which is likely to be a useful tool for substantially improving the sky subtraction accuracy. In this paper we describe the sky subtraction technique for the Guo Shou Jing Telescope (a.k.a. LAMOST, [@cui12]). The technique has been integrated into the LAMOST 2-dimensional (2D) Pipeline v2.6 and applied to LAMOST Data Release 3 (DR3) and later. The telescope and instrument characteristics relevant to sky subtraction are introduced in Sec. 2. The sky subtraction methodology for LAMOST is presented in Sec. 3. The sky subtraction accuracy is analysed in Sec. 4. The discussion and the conclusion are given in Sec.
5 and 6, respectively. LAMOST ====== LAMOST Instruments and Observation ---------------------------------- LAMOST is a special Schmidt telescope which provides both a large aperture (effective aperture of 3.6m-4.9m) and a wide field of view (FoV hereafter) of 5$^{\circ}$ ([@cui12]). A total of 4000 optical fibers are accommodated on the focal plane, each of which is 320 microns in diameter, equivalent to 3.3 arcseconds on the sky. Each fiber is driven by a fiber positioning unit containing two stepping motors, by which all fibers can be positioned simultaneously in less than 10 minutes. The fibers are grouped into 16 spectrographs, in each of which the light beam is split into a blue arm (370-590nm) and a red arm (570-900nm) by a dichroic mirror and then recorded by a 4k$\times$4k CCD camera in each arm. The spectral resolution is about 2.5Å in the blue arm and 4Å in the red arm. To optimize the observing efficiency and mitigate the fiber cross talk, targets in the LAMOST survey are grouped into bright, medium and faint plans according to their $r$-band magnitude. The on-site astronomer decides which one to execute based on the moon phase and the weather conditions. Fainter plans are always observed on darker nights with better weather conditions. Multiple exposures, usually three, are taken to obtain a sufficient signal-to-noise ratio (S/N) and to remove cosmic rays. The typical exposure times of one sub-exposure for the bright, medium and faint plans are 600, 900 and 1800 seconds, respectively. Twilight flats are taken at zenith both in the evening and in the morning to correct the instrument differences between fibers, and three Mercury-Cadmium-Neon-Argon arc-lamp frames are taken at the beginning, the middle and the end of the observing night. The 5$^{\circ}$ diameter FoV, or 20 square degrees, is divided into 16 pieces; each piece (250 fibers, 1.25 square degrees) is fed into one spectrograph, as shown in Fig.\[lamostfp\]. On dark nights, the night sky shows stable gradients on scales of degrees ([@Wyse92]), so the spatial variation of the sky within one spectrograph is insignificant. Twenty of the 250 fibers, distributed homogeneously both on the sky and on the CCD, are dedicated to sampling the sky spectrum. The traditional sky subtraction method is performed spectrograph by spectrograph. The master sky spectrum is constructed from those sky fibers, as shown in the following sections. Dark Night Sky Spectrum at LAMOST Site -------------------------------------- A typical dark night sky spectrum observed by LAMOST is shown in Fig.\[skysample\]. Except for a few emission lines that come from artificial light pollution, most of the distinctive features of the night-sky spectrum, including the continuum, the absorption lines and most of the emission lines, are due to natural processes. The continuum of the night sky is contributed by zodiacal light, starlight, extragalactic light, and reflected solar light ([@Benn98]). The airglow, emitted by various processes of atoms and molecules in the upper atmosphere, produces the \[OI\]5577Å, 6300Å and 6363Å lines, the O band at 8600-8700Å and NaD 5890-5896Å, as well as the OH bands in the red and IR, known as the Meinel bands ([@Meinel50]). The light pollution mostly comes from street lamps, including mercury lamps and sodium lamps. For LAMOST, the mercury streetlights produce strong narrow lines at 3651Å, 3663Å, 4047Å, 4358Å, 5461Å, 5770Å and 5791Å, and the sodium lamps contribute to the NaD lines, while other Na lines are relatively weak.
LAMOST Instrumental Effects --------------------------- ### Instrument difference \[fibertrans\] To subtract the sky spectrum with sky fibers, an accurate calibration of the relative instrument differences from the telescope mirror to the CCD pixels is very important. Generally, these differences include the vignetting effect of the telescope; the throughput difference between fibers, which may be caused either by intrinsic differences due to various reasons (e.g. fiber length, polishing of the end face, etc.) or by misalignments between the fiber and the optical axis of the input light beam ([@Wyse92]); the vignetting effect of the spectrograph; and the pixel-to-pixel difference of the CCD. Usually the correction is achieved by using a uniformly illuminated flat field, either the twilight sky or a screen at the telescope pupil. But for LAMOST, a specially designed reflecting Schmidt telescope with a very large field of view, the difference cannot be corrected directly by either kind of flat field. As shown in [@Xue08], since the aperture of LAMOST changes with the telescope pointing, the vignetting effect is as large as 30% across the LAMOST field, or 10% for the spectrographs at the edge of the LAMOST field, depending on the target declination and hour angle. Considering that the twilight sky is only considered to be homogeneous within degrees around the zenith, and that the observation time during twilight is very limited, no twilight flat can compensate for this position-dependent effect. It is likewise impossible to build a large dome flat screen at the telescope pupil, which is the 4.5-meter Schmidt reflector, and to illuminate it uniformly at the same pointing as the observation. So the twilight flat is only taken to correct the instrument response along the wavelength direction and the instrument differences that are relatively stable with time and position. On the other hand, twisting, bending and stress on the fiber will change the focal ratio degradation of the light beam at the fiber output end, leading to a change of the fiber throughput. Unfortunately, LAMOST suffers from this effect when the fiber positioner moves the fiber to a new position. Fig.\[fluxratio\_fp\] shows the result of a test of the fiber throughput changing with fiber positions on January 19th, 2013. During the test, the telescope pointing and the focal plane position were fixed; a dome flat screen illuminated by incandescent lamps was put in front of the focal plane; dome flat-field exposures were taken for two sets of fiber positions in turn, so that neighboring exposures are of different fiber positions and every other exposure is of the same fiber position; in total, 14 exposures were taken for each fiber position. Since the position difference of each fiber between neighbouring exposures is relatively small, the flat-field brightness difference between the two positions of the same fiber could be ignored, so the only cause of the difference between neighbouring exposures of the same fiber should be the stress put on the fiber by the fiber positioner. The statistics of the flux ratios between each pair of neighbouring exposures and of the flux ratios between exposures of the same fiber position show that the throughput uncertainty caused by the fiber positioner is about 4.8%, much larger than the Poisson noise uncertainty (0.2%), which will leave large sky subtraction residuals if not corrected properly.
As shown above, the fiber-to-fiber throughput difference depends on the telescope pointing and the position of the fiber, and it cannot be corrected by a twilight flat field. Currently, the only possible solution is to calibrate with the strong sky emission lines that go through the same light path as the target, such as \[OI\]5577Å in the blue and some of the OH lines in the red.

### Image shift

Variations of the ambient temperature and of gravity (if the spectrograph moves with the telescope) induce instabilities of the spectrograph, and thus image shifts in both the spatial and dispersion directions, which lead to trace and wavelength calibration errors and finally to bad sky subtraction. For LAMOST, the image shift is mainly caused by the temperature variation and by the gradual consumption of the liquid nitrogen, whose weight rests directly on the CCD camera. As shown in Table \[t\_waveshift\], most of the shifts in the wavelength direction during the whole night are smaller than 0.1Å, while certain spectrographs (e.g. spectrograph 16) show larger shifts. It can also be seen from the table that the shift is not homogeneous during the night. Apart from taking an arc image between each pair of exposures, using the sky emission lines is a cheaper but robust solution to calibrate the shift.

Methodology
===========

Sky subtraction is one of the final steps in the LAMOST 2D data reduction pipeline, but it depends on the quality of the previous steps. The spectra are extracted from the raw science data using the fiber trace obtained from the flat field frame. The initial wavelength solution is obtained from the arc-lamp frame and the initial fiber-to-fiber transmittances are estimated from the twilight flat field spectra. The sky emission lines are used to fine-tune the wavelength solution and the fiber-to-fiber transmittances. After that, the master sky spectra are created from the sky sampling fibers and subtracted from the object spectra. The object spectra are flux-calibrated, and the different exposures are co-added and interpolated to a logarithmically-spaced wavelength scale, $\Delta\lg\lambda=10^{-4}$. Finally, PCA sky subtraction is performed on the co-added spectra in the wavelength range of 7200-9000Å, where most sky emission lines lie. This paper focuses on the sky subtraction, skipping other steps like flux extraction, arc-lamp wavelength calibration, flat-fielding, flux calibration and spectra co-addition, which will be described in detail in a forthcoming paper (Bai et al., in preparation). We start from the extracted flux, assuming that the initial wavelength solution and the initial fiber transmittance correction have already been applied.

Sky Emission Lines Identification
---------------------------------

The sky emission lines can be easily identified after the initial wavelength calibration with the arc lamp. In the blue arm, the sky emission lines are relatively sparse and the street light lines are too weak to offer a reliable calibration, so only the strong airglow line \[OI\]5577Å is used. In the red arm, there are many strong emission lines, such as the OH bands. It is not easy to identify single lines in this region, yet after a careful comparison between the observed spectra and the literature ([@Osterbrock96]; [@Osterbrock97]), 13 single lines and 6 doublets are selected, as listed in Table \[shiftline\].
For each doublet, the intensities of the two lines are similar and their separation is less than 0.5Å, so that they can be treated as single lines at the LAMOST resolution ($\sim4$Å); only the average wavelength of the two lines is then adopted in the table.

For the $i$th selected line with wavelength $\lambda_i$ in an individual fiber, the profile is fitted with a Sérsic function and a linear background within $\pm8$Å around the line center $\lambda_i$: $$\label{eq:1} f(\lambda)=\alpha{e}^{-\frac{|\lambda-{\lambda}_{i}|^\delta}{\delta\gamma^\delta}}+a\lambda+b,$$ where $f(\lambda)$ is the flux corrected by the twilight flat, $\alpha$, $\gamma$ and $\delta$ are the parameters of the Sérsic function and $a\lambda+b$ is the fitted background continuum. The intensity of the line is the sum of the continuum subtracted segment: $$\label{eq2} F_i=\int^{\lambda_i+8}_{\lambda_i-8}{\left[f(\lambda)-a\lambda-b\right]}d\lambda.$$ As noticed by [@bai17] and [@Li15], some of the LAMOST emission line profiles cannot be perfectly fitted by a Sérsic function due to optical aberration and distortion. The Sérsic function is therefore only used to derive an accurate wing of the emission line and the background function; it does not enter equation \[eq2\] directly.

Wavelength Calibration with Sky Lines {#waveshift}
-------------------------------------

Since the image shift varies slightly from fiber to fiber, the wavelength solution is corrected fiber by fiber. The wavelength shift of a sky line is defined as the difference between the literature wavelength $\hat{\lambda}_{i}$ and the initial line center $\lambda_i$: $$\Delta{\lambda_{i}}=\hat{\lambda}_{i}-\lambda_i.$$ The $\Delta{\lambda_{i}}$ are then fitted with a linear function of ${\lambda}_{i}$: $$\Delta{\lambda_{i}}=m{\lambda}_{i}+n,$$ where the coefficients $m$ and $n$ are derived by solving the above equations with the least-squares method. Finally, the updated wavelength solution is obtained by $$\lambda'={\lambda}+m{\lambda}+n,$$ where $\lambda'$ is the updated wavelength. For the blue arm, $m$ is set to zero since only \[OI\]5577Å is used. An example of the wavelength correction of the red arm is shown in Fig.\[skylineshift\].

Fiber Transmittance Correction
------------------------------

As described in Section \[fibertrans\], the relative fiber throughput varies with the telescope pointing and the fiber position; this can be calibrated using the intensities of the sky emission lines $F$, calculated in Eq.(\[eq2\]). For the $i$th line in a fiber, the scale factor $s(i)$ can be calculated as: $$s(i)=\frac{F(i)}{<F(i)>},$$ where $F(i)$ is the line flux obtained from Eq.(\[eq2\]) and $<F(i)>$ is the median of $F(i)$ over all fibers. For the blue arm, the scale factor is the relative scale of the $[OI]5577$Å  line. For the red arm, the median of $s(i)$ over the 19 lines listed in Table \[shiftline\] is adopted as the final scale factor. The flat field corrected spectra are then divided by the scale factor.

Fig.\[skyscale\] shows an example of the distribution of the scale factors on October 26th, 2016. The scale factors in both the blue and the red arms show similar non-Gaussian distributions with a standard deviation of about 0.058, while the ratio of the scale factor of the blue arm to that of the red arm is close to a Gaussian distribution with a standard deviation of 0.028. This indicates that the uncertainty of the fiber throughput induced by the telescope vignetting effect and the fiber positioner is about 5.8%, while the accuracy of the scale factor correction is about 2.8%.
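The line centres used for the wavelength correction and the line fluxes used for the scale factors both come from the same per-line fit (Eq. \[eq:1\]). The sketch below illustrates that fit on one synthetic line, treating the line centre as an additional free parameter so that its offset from the literature wavelength gives the shift; all numerical values, the noise level and the starting guesses are assumptions for the example only.

```python
import numpy as np
from scipy.optimize import curve_fit

def line_model(lam, alpha, gamma, delta, lam0, a, b):
    """Sersic-like emission-line profile on a linear background, cf. Eq. (1)."""
    return alpha * np.exp(-np.abs(lam - lam0) ** delta / (delta * gamma ** delta)) + a * lam + b

# Synthetic +-8 A segment around the OH 9-4 P1 line at 7794.112 A (illustrative values).
rng = np.random.default_rng(1)
lam = 7794.112 + np.arange(-8.0, 8.0, 0.5)
flux = line_model(lam, 800.0, 1.8, 2.2, 7794.20, -0.1, 900.0) + rng.normal(0.0, 5.0, lam.size)

# Fit the profile; the fitted centre gives the wavelength shift of this line in this
# fiber, and the fitted linear background (a, b) is removed before integrating.
p0 = [flux.max() - flux.min(), 2.0, 2.0, 7794.112, 0.0, float(np.median(flux))]
bounds = ([0.0, 0.1, 0.5, lam.min(), -np.inf, -np.inf],
          [np.inf, 10.0, 10.0, lam.max(), np.inf, np.inf])
popt, _ = curve_fit(line_model, lam, flux, p0=p0, bounds=bounds)
alpha, gamma, delta, lam0, a, b = popt

dlam = 7794.112 - lam0                      # shift between literature and fitted centre
F = np.trapz(flux - (a * lam + b), lam)     # line intensity as in Eq. (2)
print(f"wavelength shift = {dlam:+.3f} A, integrated line flux = {F:.1f}")
```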
As measured from the data, the uncertainty of the sky emission line intensity is about 1.5% for \[OI\] 5577Å and for any single OH line. Since $s_r$ is derived from the median of 19 OH lines, the uncertainty of the median is approximately $1.5\%/\sqrt{19}{\approx}0.35\%$. From twilight flat fields observed on two adjacent days, the uncertainty of the twilight sky flat is about 1.6%. In total, the combined uncertainty of $s_b/s_r$ is about $\sqrt{1.5^2+0.35^2+2\times{}1.6^2}{\approx}2.7\%$, consistent with the accuracy of the scale factor correction.

Master Sky
----------

After the wavelength solution and the fiber transmittances are fine-tuned with the sky emission lines, a master sky spectrum is created from the sky fibers in the same spectrograph, using a B-spline fitting procedure similar to that of the SDSS 2D pipeline (see the SDSS data reduction pipeline $idlspec2d$, [@Bolton07]). The spectra of the sky fibers are treated as fluxes in discrete pixels; pixels from different sky spectra are aligned together in order of their wavelength. The master spectrum is fitted in 2 dimensions, with a cubic B-spline function in the wavelength direction, allowing the B-spline coefficients to vary with the fiber number. Bad pixels are rejected during the fitting. The B-spline function is then interpolated back to each fiber, giving the final sky spectrum, which is subtracted from the object spectrum. Fig. \[bsplinefit\] shows an example of a master sky spectrum and the sky subtraction residual.

PCA Sky Subtraction
-------------------

After the master sky is subtracted, each spectrum is flux calibrated, and the different exposures are combined and interpolated to a logarithmically-spaced wavelength scale, i.e. $\Delta\log\lambda=10^{-4}$. PCA is performed on the combined spectra in the range of 7200-9000Å, where the OH sky emission lines dominate. For each spectrograph, about 20 sky subtracted sky spectra are used to generate the PCA components. Both the sky and object spectra are first continuum subtracted using a rolling median filter to remove large-scale structures, then the eigenvectors and eigenvalues of the PCA are derived from the $\sim$20 sky residual spectra. For each spectrum, a projection coefficient is calculated for each eigenvector, and the sum of the 20 most significant principal eigenvectors weighted by the projection coefficients is adopted to derive the sky residual spectrum. This residual is then removed from the spectrum and the median filtered continuum is added back. Details of the PCA sky subtraction can be found in [@Wild05] and [@Sharp10].

Sky Subtraction Accuracy
========================

The sky subtraction routine described here is part of the LAMOST 2D Pipeline v2.6 and has been applied to all LAMOST data later than Data Release 3. As the sky subtraction is performed in two stages (i.e. the master sky stage and the PCA stage), the accuracies of the two stages are analysed separately.

Wavelength Calibration Accuracy
-------------------------------

The typical error of the sky line calibration within one observation is about 0.07Å, as shown in Fig \[skylineshift\]. It is not clear whether our calibration is stable between observations, so it is necessary to test it by measuring the radial velocity (RV) variations of stars. There are quite a lot of stars in the LAMOST database with multiple observations taken on more than one day, and their radial velocities can be used to indicate the stability of our wavelength calibration.
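The stability test described next reduces to forming all $\mathrm{C}_n^2$ pairs of repeated observations for every star. A toy sketch of the pairing step with made-up radial velocities is given below; the actual analysis fits a Gaussian to the core of the resulting $\Delta RV$ distribution rather than taking a simple median.

```python
import numpy as np
from itertools import combinations
from collections import defaultdict

# Hypothetical catalogue rows: (star identifier, measured RV in km/s).
observations = [("star_a", 12.3), ("star_a", 17.1), ("star_a", 10.9),
                ("star_b", -45.2), ("star_b", -41.0),
                ("star_c", 3.3)]

rvs_by_star = defaultdict(list)
for star, rv in observations:
    rvs_by_star[star].append(rv)

# Every two observations of the same star form a "pair": C(n, 2) pairs per star.
delta_rv = [abs(v1 - v2)
            for rvs in rvs_by_star.values() if len(rvs) > 1
            for v1, v2 in combinations(rvs, 2)]

print(f"{len(delta_rv)} pairs, median |dRV| = {np.median(delta_rv):.2f} km/s")
```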
A search for repeated observations of F, G and K stars with S/N over 20 in LAMOST DR3 results in 689897 spectra of 301106 stars. Every two observations of the same star are defined as a “pair”, so there are $\mathrm{C}_n^2$ pairs for a star with $n$ observations. For each pair, $\Delta{}RV$ is defined as $|v_1-v_2|$, where $v_1$ and $v_2$ are the measured RVs of the two observations, respectively. The distributions of the RV differences and of the number of repeat observations are shown in Fig.\[rvcompare\]. The standard deviation of a Gaussian fit to the core of the $\Delta{}RV$ distribution is 4.47km/s, consistent with the typical wavelength calibration uncertainty (0.07Å).

Sky Subtraction Accuracy of a Single Frame {#secskyres}
------------------------------------------

The most straightforward way to estimate the sky subtraction accuracy is to measure the residuals of the sky subtracted sky spectra. The absolute sky residual is dominated by the shot noise of the original sky flux. A relative sky residual is therefore defined as the ratio of the absolute sky residual to the sky flux: $$r_s(\lambda)=\frac{f_{r}(\lambda)}{f(\lambda)}, \label{eqskyres}$$ where $f_{r}(\lambda)$ is the absolute residual of the sky spectrum and $f(\lambda)$ is the original sky flux. For the sky emission lines, since the typical FWHM of a single sky line is 3-4Å, pixels within $\pm$[3]{}Å around the line center are used to calculate the residuals of the sky emission lines. In the blue arm, \[OI\] 5577Å  is measured, while in the red arm 10 strong lines, including 7714Å, 7750Å, 7794Å, 7821Å, 7853Å, 7913Å, 7964Å, 7993Å, 8025Å and 8062Å, are adopted. For the sky continuum, the flux in an individual pixel is much lower than that of the emission lines. To suppress the shot noise, instead of using the counts of individual pixels in Eq.(\[eqskyres\]), the averages of the continuum in the 5470-5560Å and 6000-6200Å regions (see Fig.\[skysample\]) are adopted for the blue and the red arms, respectively: $$r_s(\lambda)=\frac{\overline{f_{r}(\lambda)}}{\overline{f(\lambda)}}.\label{eq:cave}$$

As an example, Fig. \[skyresidualsample\] shows the distribution of the relative sky residuals on September 20th, 2016. The dispersion of the residuals, which can be estimated from the $\sigma$ of a Gaussian fit to the histogram, is an indicator of the sky subtraction accuracy. As can be seen in Fig.\[skyresidualsample\], the histogram of the \[OI\] 5577 emission line residuals is well described by a Gaussian function except in the tail of the histogram, which is mostly caused by the profile difference between the master sky spectra and the aberration-distorted profiles of individual fibers. In the OH line region, the scatter is larger, since not all of the 10 lines selected to measure the residuals belong to the line list in Table \[shiftline\] used to determine the scale factor; in addition, the residuals of non-single lines are affected by neighbouring lines. The residuals in the continuum are consistent with those of the corresponding emission lines, indicating that the scale factor works well in the continuum region.

The LAMOST survey is carried out in both bright and dark nights. In moonlit nights, the relative strength of the sky emission lines to the sky continuum is weaker than in darker nights, due to the increase of the background brightness. To investigate the dependency of the sky subtraction accuracy on the moon light, we have traced $\sigma$ for all the LAMOST sky subtracted sky spectra in each night before December 31, 2016, and the results are shown in Fig.\[skyresidual\].
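As a concrete illustration of Eqs. (\[eqskyres\]) and (\[eq:cave\]), the sketch below evaluates the relative residual of one emission line (pixels within $\pm3$Å of the line centre) and of the averaged blue continuum window for a synthetic sky spectrum and its residual; the wavelengths, fluxes and residual model are placeholder values, not LAMOST data.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.arange(5400.0, 5700.0, 1.0)

# Synthetic sky spectrum: a flat continuum plus the [OI]5577 line, and a
# residual left after subtracting an imperfect master sky (placeholder model).
sky = 100.0 + 3000.0 * np.exp(-0.5 * ((lam - 5577.3) / 1.5) ** 2)
residual = 0.02 * sky * rng.standard_normal(lam.size)      # ~2% relative residual

# Emission-line residual: individual pixels within +-3 A of the line centre (eqskyres).
line_mask = np.abs(lam - 5577.3) <= 3.0
r_line = residual[line_mask] / sky[line_mask]

# Continuum residual: averages over the 5470-5560 A window (eq:cave).
cont_mask = (lam >= 5470.0) & (lam <= 5560.0)
r_cont = residual[cont_mask].mean() / sky[cont_mask].mean()

print(f"line residual sigma = {r_line.std():.3f}, continuum residual = {r_cont:.4f}")
```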
There is no obvious systematic trend of the sky continuum residual $\sigma$ with moon phase. The average values of $\sigma$ for the blue and the red sky continuum residuals are comparable to those in Fig.\[skyresidualsample\], but with a much larger scatter than the $\sigma$ of the emission lines. There are three reasons for the large scatter. The first is that the exposure time varies between 10 and 30 minutes, so both the continuum and the sky emission line used to calibrate the continuum vary by a factor of 3, and thus the relative noise changes by a factor of $\sqrt{{3}\times{2}}=2.45$. The second is that both the sky continuum and the emission lines change from time to time. The third is that the sky continuum does not necessarily vary in the same way as the sky emission lines, especially in bright nights, though the difference should be small inside an individual spectrograph (see Section \[scvar\]). A check of the points with large scatter in dark and grey nights in the upper panel of Fig.\[skyresidual\] shows that they come from observations with short exposures, where both the sky continuum and the emission lines are relatively weak. The large scatter of the continuum residuals in bright nights follows the trend of the emission lines at the corresponding moon phase; it is caused by the short exposure times and the dramatic variation of the moon light background, and thus by the increase of the relative uncertainty of the sky emission lines.

The relative residuals of the sky emission lines are much smaller in dark nights than those of the continuum, with a median level of 1.5% for \[OI\] 5577 and 2.2% for the OH lines, but they rise noticeably when the moon phase is between the 9th and the 21st day, up to 3% for \[OI\] 5577 and 4.5% for the OH lines in full moon nights. Considering that the \[OI\] 5577 emission line is less affected by neighboring lines than the OH lines in the red, and that the region used to calculate the scatter in the blue is the region around \[OI\]5577 itself, the smaller residual of the blue arm is easy to understand.

Improvement from PCA {#accmes}
--------------------

The PCA sky subtraction focuses on the OH emission lines in the range of 7200-9000Å, where the residuals of these lines decrease significantly. Three examples with different S/N are shown in Fig. \[pcasample\]. To quantify the contribution of the PCA subtraction, the spectra of 7,522 F-type stars observed on Feb 20, 2016 are analyzed. For F stars, there are few stellar features in the wavelength range 7720-8100Å, while there are abundant sky emission lines in the same region, so the smoothness of the continuum can be used to evaluate the quality of the sky subtraction. The smoothness is defined as: $$a_t=RMS(\frac{f_{r}-c}{c}), \label{objacc}$$ where $f_r$ is the co-added object spectrum, $c$ is the pseudo-continuum derived by a rolling median filter and $RMS$ is short for Root Mean Square. Fig.\[pcacomp\] shows the ratio of the RMS $a_t$ after the PCA sky subtraction to that before the PCA sky subtraction. All of the stars show a smaller RMS after the PCA sky subtraction. The peak occurs at about 75%, which indicates that PCA improves the sky subtraction of the OH lines by about 25%. Incorporating the results of Fig.\[skyresidual\], the medians of the final sky subtraction residuals of the OH lines in the dark and the bright nights can reach as low as 1.7% and 3.4%, respectively.

Comparison with SDSS
--------------------

SDSS ([@York00]), similar to LAMOST in resolution and wavelength coverage, is the most successful low resolution multi-fiber spectroscopic program.
SDSS has a higher system efficiency than LAMOST, 18% and 20% in the blue and red arms, respectively ([@Stoughton02]). Considering the efficiency difference, the signal of the same object in the two surveys is different, so it is hard to tell which survey shows the smaller sky subtraction residual relative to the object continuum. Instead of comparing coincident targets in both surveys, it is more reasonable to compare targets with both a similar strength of the sky emission lines and similar counts of object flux. As shown in the top left panel of Fig.\[tosdss\], about 11000 F stars are selected from both SDSS DR10 and LAMOST DR3 with a similar integral strength of \[OI\]5577Å  and a similar object continuum intensity. Checking the same sample in the red shows that the OH sky emission lines in LAMOST are much stronger than those in SDSS, so we only keep those LAMOST spectra with the strength of OH 7913Å comparable to that of SDSS, rejecting about 7000 spectra with stronger OH lines, as shown in the top right panel of Fig.\[tosdss\]. When the intensities of the sky emission lines are similar in the blue part of the two surveys, the sky continuum of LAMOST is brighter than that of SDSS, thus the S/N of the LAMOST stellar continuum in 5500-5560Å  is lower than that of SDSS. In the red, the sky continuum intensity is similar to that of SDSS, so the S/N of the two surveys are similar.

As noticed in the middle row of Fig.\[tosdss\], the S/N of the two surveys depart from each other in the high object flux region; the reason is as follows. The S/N of both LAMOST and SDSS spectra is defined as: $$S/N=flux*\sqrt{invvar},$$ where $invvar$ is the inverse variance, which is accumulated during the data processing. In the SDSS image processing, it is defined as: $$invvar=\frac{1}{f+(g*n_{rd})^2+10^{-4}f^2},$$ where $f$ is the object flux in counts, $g$ is the gain and $n_{rd}$ is the read-out noise. The third term in the denominator simply limits the S/N to below 100. In the LAMOST image processing, the inverse variance is simply $$invvar=\frac{1}{f+(g*n_{rd})^2}.$$ This difference in the estimation makes SDSS spectra have a lower S/N than LAMOST spectra at the same flux, especially when the flux is high.

According to Eq.(\[objacc\]), the residuals in the emission line regions of the blue (5570-5585Å  around \[OI\]5577) and the red (7700-8100Å in the OH line region) parts are calculated for each star. The residuals vs their object continuum intensities are plotted in the bottom panels of Fig.\[tosdss\]. The LAMOST spectra show smaller sky emission line subtraction residuals than SDSS in both the blue and the red part of the spectra, and the difference is more significant in the red than in the blue. As indicated in the middle left panel of Fig.\[tosdss\], in the blue part the S/N of the LAMOST spectra is lower than that of SDSS, so the actual performance of the LAMOST sky subtraction should be even better relative to SDSS if the two were compared in the same situation. Two reasons why LAMOST performs better than SDSS in sky subtraction may be: the LAMOST sky sampling fibers are denser than those of SDSS both on the sky (25 fibers per 1.25 square degrees vs 32 fibers per 7 square degrees) and on the CCD image (25 per 250 fibers vs 32 per 640 fibers), so the sky spectrum is better represented in LAMOST; and the PCA sky subtraction, which contributes a lot to reducing the sky subtraction residuals in the red part of the LAMOST spectra, is not used for the SDSS spectra. Many other details in software and hardware may also explain the different performance of the two surveys, but it is hard to make a comprehensive comparison.
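The effect of the extra term is easy to verify numerically. The sketch below evaluates both inverse-variance definitions given above for a range of fluxes; the gain and read-out noise values are arbitrary placeholders, not the actual instrument parameters.

```python
import numpy as np

def snr_sdss(f, gain=1.0, n_rd=5.0):
    """S/N with the SDSS-style inverse variance; the 1e-4*f^2 term caps S/N near 100."""
    invvar = 1.0 / (f + (gain * n_rd) ** 2 + 1e-4 * f ** 2)
    return f * np.sqrt(invvar)

def snr_lamost(f, gain=1.0, n_rd=5.0):
    """S/N with only shot noise and read-out noise in the variance."""
    invvar = 1.0 / (f + (gain * n_rd) ** 2)
    return f * np.sqrt(invvar)

for flux in (1e2, 1e4, 1e6):
    print(f"flux={flux:9.0f}  SDSS S/N={snr_sdss(flux):6.1f}  LAMOST S/N={snr_lamost(flux):7.1f}")
```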
Discussion
==========

The final sky subtraction accuracy is affected by many factors in the data reduction process. Most of them can be corrected by the schemes above, but certain problems cannot be solved currently, such as the moon light background and the variation of the PSF.

Sky Subtraction in Moonlit Nights {#scvar}
---------------------------------

The sky emission line calibration of the fiber throughput is based on the assumption that the sky background is homogeneous and that the intensity of the sky continuum is proportional to that of the emission lines. Generally, the assumption holds well in dark nights within a one degree field of view, which is comparable to the field of view of a single LAMOST spectrograph. In moonlit nights, the moonlight scattered by the atmosphere produces a background gradient, which must be taken into account in the sky subtraction.

The sky brightness at the LAMOST site is about V=17 mag/arcsec$^2$ at a lunar age of about 14 days ([@Yao13]), which is close to full moon. Considering the size of the LAMOST fiber, 3.3" in diameter, the moon light dominates the sky background, with V=14.85 mag within a single fiber aperture. Meanwhile, the faintest targets observed in bright nights are about 14 mag, 2.2 times the background brightness. The typical flux for a 10-minute exposure of a V=14 mag star is about 11000 counts/pix, while the sky background is about 5000 counts/pix, which means that a 2% sky background residual will lead to a 1% uncertainty in the object flux. Thus a 4% sky background gradient across the spectrograph field will lead to a 1% systematic sky subtraction bias, which is acceptable given that the sky subtraction residual is usually larger than 2%, as in Fig.\[skyresidual\].

The sky brightness gradient depends strongly on the angular distance between the target field and the moon. If the sky background gradient inside a single spectrograph is to be limited to less than 4%, we can define a secure distance to the moon, beyond which the current sky subtraction scheme is still suitable. With a moonlit night sky brightness model, Yao et al. (2013) calculated the typical sky brightness distribution for the LAMOST site. According to the model, the secure angular distances for sky brightness gradients of 2%, 3% and 4% are derived for different moon phases, as shown in Fig.\[mdis\]. The angular distance needs to be more than $33^{\circ}$ to obtain a gradient of less than 4%. Most LAMOST observations, both the regular survey and test observations in full moon nights, satisfy this condition. A better sky subtraction scheme incorporating the moon light brightness model will be future work for the LAMOST 2D pipeline.

Optical PSF Variations
----------------------

Sky subtraction with a master sky spectrum requires that the shape of the PSF be similar in different fibers, which is usually not the case due to imperfect spectrograph optics such as optical aberration and distortion, or due to the irregular shape of the fiber output end (e.g. due to poor coupling between the fiber alignment and the slit). The sky subtraction residual caused by the profile differences in the master sky subtraction cannot be completely removed by the PCA sky subtraction either. To further improve the sky subtraction, future work such as a careful re-alignment of the fibers and the slit, and the introduction of spectral extraction methods with 2D de-convolution algorithms (e.g. [@Bolton10], [@Li15]), will be necessary.
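The bias estimate quoted above can be reproduced with a few lines of arithmetic; the interpretation that a master sky built for the whole spectrograph is off by at most half of the edge-to-edge gradient is our assumption for this illustration, and all counts are the representative values from the text.

```python
# Representative numbers quoted in the text (counts per pixel, 10-minute exposure).
object_flux = 11000.0          # V ~ 14 mag star
sky_flux = 5000.0              # moonlit sky background

gradient = 0.04                # assumed edge-to-edge sky gradient across one spectrograph
sky_residual = 0.5 * gradient * sky_flux   # master sky off by up to half the gradient
bias = sky_residual / object_flux

print(f"uncorrected sky residual ~ {sky_residual:.0f} counts/pix "
      f"-> ~{100 * bias:.1f}% systematic bias on the object flux")
```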
Conclusion
==========

Sky subtraction, which is related to almost every step of the data reduction process, is the most important indicator of the performance of a 2D multi-fiber data reduction pipeline. The key algorithms and the results of the sky subtraction in the LAMOST 2D pipeline are presented here. Due to the special characteristics of the LAMOST telescope (vignetting that varies with the telescope pointing) and defects in hardware manufacturing and installation (the force added by the fiber positioner to the fibers), the throughput of LAMOST varies with the telescope pointing and the fiber position. This leads to the failure of the traditional flat field correction, leaving the sky emission line throughput calibration as the currently most reliable method. Sky emission lines are also used to fine-tune the wavelength shift of about 0.1Å  caused by the instability of the spectrographs. After the subtraction of the fine-tuned master sky spectra, the red part of the object spectra is processed by the PCA sky subtraction to further remove the emission line residuals of the OH bands.

Overall, the wavelength calibration accuracy is about 4.5km/s according to the RV measurements of repeated observations of the same stars. In dark nights, the medians of the sky residuals are about 1.5% and 1.7% for \[OI\]5577 and the OH lines, respectively. In moonlit nights, the residuals rise to 3% and 3.4% for \[OI\]5577 and the OH lines respectively, due to the decrease of the exposure time. For the sky continuum, the typical relative residuals are about 3% (Fig.\[skyresidualsample\]). As pointed out in Section \[scvar\], the systematic bias of the sky subtraction can be limited to within 1% if the distance between the object and the moon is larger than $33^{\circ}$. Our final sky subtracted spectra show smaller residuals in the sky emission line regions than SDSS spectra, according to the analysis of the F-type stars.

**Acknowledgements**

We thank the reviewers for their advice. This work is supported by the National Natural Science Foundation of China (NSFC) (Grant No. 11503054), the NSFC Key Program (Grant No. 11333004) and the National Key Basic Research Program of China (Program 973; Grant No. 2014CB845700).

Guoshoujing Telescope (the Large sky Area Multi-Object fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.

Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.

  spectrograph       01       02       03       04       05       06       07       08       09       10       11       12       13       14       15       16
  -------------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- --------
  Blue 0-1         -0.010    0.057   -0.064    0.020    0.016    0.013    0.018   -0.104    0.046    0.100   -0.067    0.036    0.004    0.046    0.064   -0.032
  Blue 0-2         -0.033    0.081   -0.089   -0.007    0.033    0.023    0.005   -0.104    0.044    0.152   -0.074    0.046    0.013    0.032    0.114   -0.021
  Red 0-1           0.003    0.116   -0.010   -0.026    0.038   -0.084    0.083   -0.094   -0.017    0.097   -0.108    0.031   -0.012    0.068   -0.028   -0.166
  Red 0-2           0.009    0.164    0.011   -0.040    0.046   -0.127    0.118   -0.023   -0.068    0.140   -0.121    0.043   -0.026    0.063    0.028   -0.183

  : Image shifts in the wavelength direction (in Å) between the arc-lamp frames taken during the night, for the blue and red arms of the 16 spectrographs.\[t\_waveshift\]

  $\lambda$(Å)   source      Pattern
  -------------- ----------- -------------------------
  5577.334       OI          single
  6300.304       OI          single
  6363.780       OI          single
  6863.955       OI          single
  6923.220       OH 7-2 P1   single
  7316.282       OH 8-3 P1   single
  7340.885       OH 8-3 P1   single
  7369.366       OH 8-3 P2   blend 7369.248 7369.483
  7401.858       OH 8-3 P2   blend 7401.688 7402.029
  7750.640       OH 9-4 Q1   single
  7794.112       OH 9-4 P1   single
  7821.503       OH 9-4 P1   single
  7993.332       OH 5-1 P1   single
  8025.810       OH 5-1 P2   blend 8025.668 8025.952
  8399.170       OH 6-2 P1   single
  8465.358       OH 6-2 P2   blend 8465.208 8465.509
  8885.850       OH 7-3 P1   single
  8943.395       OH 7-3 P2   single
  8958.084       OH 7-3 P2   blend 8957.922 8958.246
  9001.346       OH 7-3 P2   blend 9001.115 9001.577

  : Sky lines used to measure the image shift and the scale factor.\[shiftline\]

![LAMOST fiber division scheme. The fibers on the focal plane are divided into 16 regions, each of which feeds one spectrograph. The colored circles indicate the positions of the fibers and the solid lines are the borders of the spectrographs. The IDs of the spectrographs are marked in the corresponding regions.[]{data-label="lamostfp"}](ms2017_0033fig1.eps){width="95.00000%"}

![Moonless night sky spectrum taken by LAMOST on September 12th, 2015. The spectra of the blue and the red arm are indicated by the colors. The x-axis is in original CCD pixels but wavelength calibrated.[]{data-label="skysample"}](ms2017_0033fig2.eps){width="95.00000%"}

![Flux ratios of the dome flat field. The solid lines show the flat field flux ratios between different fiber positions, while the dot-dashed lines are between the same fiber position. The blue and the red lines are from the red and the blue arm, respectively. The $\sigma$ of the Gaussian fits of the solid lines are 4.77% and 4.83% for the blue and the red, respectively, while those of the dot-dashed lines are 0.280% and 0.283%. []{data-label="fluxratio_fp"}](ms2017_0033fig3.eps){width="50.00000%"}

![An example of the wavelength corrections in a single frame.
The blue and red lines show the measured wavelengths of the sky emission lines based on the arc-lamp solution and on the sky emission line fine-tuned wavelengths, respectively, while the black lines are the wavelengths from the literature. The distances of the blue and red lines relative to the black ones are exaggerated 100 times to show the details. The average difference between the initial wavelength solution and the corrected wavelength is $0.13\pm 0.07$Å, which is typical for the correction. The large residuals seen in the plot are caused by bad fibers or bad pixels.[]{data-label="skylineshift"}](ms2017_0033fig4.eps){width="95.00000%"}

![Left: Distribution of the scale factors of the blue and the red arms. Right: Distribution of the ratio of the blue scale factors to the red ones. The standard deviations are as indicated.[]{data-label="skyscale"}](ms2017_0033fig5.eps){width="95.00000%"}

![An example of sky subtraction for spectra taken on Dec. 31st, 2016. Upper: Fluxes from 20 sky fibers of one spectrograph, corrected by the sky emission lines. The black crosses show all 20 sky spectra and the solid red line shows the best fitted master sky. The discrepancy between different sky fibers is too small to be discerned at the current scale. Lower: The residuals of the sky subtracted sky spectra.[]{data-label="bsplinefit"}](ms2017_0033fig6.eps){width="95.00000%"}

![Left: Histogram of the repeat observations in the LAMOST database; the x axis indicates the number of repeat observations. Right: Distribution of the radial velocity differences of the repeat observations; the blue curve is the Gaussian fit of the “core” of the distribution.[]{data-label="rvcompare"}](ms2017_0033fig7.eps){width="95.00000%"}

![Residuals of sky subtracted sky spectra taken on Sep. 20th, 2016. The left column is the residual of the average continuum (see equation \[eq:cave\]) in 5470-5560Å and 6000-6200Å for the blue arm and the red arm, respectively. The right column is the residual of the individual pixels of the sky emission lines. Each histogram is fitted by a Gaussian function, as shown by the blue curve in each plot. The total number of pixels and the $\sigma_r$ of the fit are marked in the plots.[]{data-label="skyresidualsample"}](ms2017_0033fig8.eps){width="95.00000%"}

![Relative sky subtraction residual $\sigma_r$ vs moon phase. Upper panel: $\sigma_r$ of the average sky continuum. Lower panel: $\sigma_r$ of the individual pixels of the sky emission lines. The blue and the red symbols denote the blue and the red arm, respectively. The x-axis is the day of the moon phase, where 0 denotes new moon and 14.7 denotes full moon. []{data-label="skyresidual"}](ms2017_0033fig9.eps){width="95.00000%"}

![Three example spectra with different S/N in the 7200Å-9000Å  band. In each panel, the master sky subtracted spectrum is shown in red and the PCA sky subtracted spectrum in blue. []{data-label="pcasample"}](ms2017_0033fig10.eps){width="95.00000%"}

![The distribution of the ratio of the sky subtraction residual of 7522 F-type stars after the PCA sky subtraction to that before the PCA sky subtraction. The red line is the histogram and the blue line is the Gaussian fit of the histogram.[]{data-label="pcacomp"}](ms2017_0033fig11.eps){width="50.00000%"}

![ Comparison of the sky emission line subtraction residuals of SDSS and LAMOST F-type stars. The blue symbols denote LAMOST DR3 and the red symbols represent spectra of SDSS DR10 in all panels. The stars are selected with similar \[OI\]5577Å  integral intensity and similar object continuum flux.
The object flux, indicated on the x axis of all panels, is calculated as the average counts per angstrom of the stellar continuum in 5500-5560Å and 8070-8090Å for the blue and the red part, respectively. Top: the stellar continuum intensity vs the integral flux of the sky emission line, \[OI\]5577Å and OH 7913Å for the blue and the red part, respectively. Middle: the stellar continuum intensity vs the S/N in the blue and the red part, respectively. Bottom: the stellar continuum intensity vs the relative residuals of the sky emission lines, \[OI\]5577Å and the OH lines in 7700-8100Å for the blue and the red part, respectively. Note that both the x and y axes are in log scale, so the error bars are not symmetric. []{data-label="tosdss"}](ms2017_0033fig12.eps){width="95.00000%"}

![Angular distance between the LAMOST field center and the moon vs the moon phase. The curves represent the angular distance limits beyond which the moon light brightness gradient inside an individual spectrograph is less than the value marked above each curve. The x-axis is the day of the moon age, where 0 denotes new moon and 14.7 denotes full moon. []{data-label="mdis"}](ms2017_0033fig13.eps){width="95.00000%"}

---
abstract: |
    We describe the recently introduced extremal optimization algorithm and apply it to target detection and association problems arising in pre-processing for multi-target tracking. Extremal optimization is based on the concept of self-organized criticality, and has been used successfully for a wide variety of hard combinatorial optimization problems. It is an approximate local search algorithm that achieves its success by utilizing avalanches of local changes that allow it to explore a large part of the search space. It is somewhat similar to genetic algorithms, but works by selecting and changing bad components of a bit-representation of a candidate solution. The algorithm is based on processes of self-organization found in nature. The simplest version of it has no free parameters, while the most widely used and most efficient version has one parameter. For extreme values of this parameter, the method reduces to hill-climbing and random walk searches, respectively.

    Here we consider the problem of pre-processing for multiple target tracking when the number of sensor reports received is very large and arrives in large bursts. In this case, it is sometimes necessary to pre-process reports before sending them to tracking modules in the fusion system. The pre-processing step associates reports with known tracks (or initializes new tracks for reports on objects that have not been seen before). It could also be used as a pre-processing step before clustering, [*e.g.*]{}, in order to test how many clusters to use. The pre-processing is done by solving an approximate version of the original problem. In this approximation, not all pair-wise conflicts are calculated. The approximation relies on knowing how many such pair-wise conflicts are necessary to compute. To determine this, we use results on phase transitions occurring when coloring (or clustering) large random instances of a particular graph ensemble.
author:
- |
    Pontus Svenson\
    Department of Data and Information Fusion\
    Division of Command and Control Systems,\
    Swedish Defence Research Agency\
    SE 172 90 Stockholm, Sweden\
bibliography:
- 'ref.bib'
title: 'Extremal optimization for sensor report pre-processing'
---

Introduction
============

Tracking is one of the basic functionalities required in any data fusion system. A tracker module takes a number of observations of an object and uses them to update our current knowledge of its most probable position. Things get more complicated when there are many targets present, since most standard trackers [@BlackmanS:ModernTracking] rely on knowing exactly from which object a sensor report stems. In order to be able to use such modules when there are many targets present and they are near each other, reports must first be pre-processed into clusters that correspond to the same target.

In this paper, we present two new ideas for such clustering:

1.  We introduce an approximative way of calculating the cost-matrix that is needed for clustering.

2.  We use the extremal optimization method to cluster these approximated problems.

Clustering is not only used as pre-processing for trackers. In situations where the flow of reports is too small to be able to track objects, clustering of all old and new reports is used to present a situation picture to the user. Clustering is also an essential part of the aggregation problem, which deals with reducing the amount of information presented to users by aggregating objects into platoons, companies, and task-forces.
The main reason for using extremal optimization for the pre-processing proposed here is that it is a very fast method that is able to give approximate answers at any time during execution. Another reason for using it is that it is very easy to extend the method to dynamically add new reports; the algorithm does not need to be restarted when a new burst of reports arrives. This pre-processing will become an even more important problem in the future, as newer, more advanced sensor systems are used. The presence of large numbers of swarming sensors, for instance, will lead to a much larger number of reports received simultaneously than with current sensor systems. Still, the benefit in computation time gained by using this approximation must be weighed against the errors that are inevitably introduced by it.

This paper is outlined in the following way. Section \[extremal\] describes the background of the extremal optimization method and explains how to implement it for clustering. In the following section \[preprocess\], the method for calculating the approximate cost-matrix is described. Section \[experiments\] presents the results of some experiments, while the paper concludes with a discussion and some suggestions for future work.

Extremal optimization {#extremal}
=====================

Nature has inspired several methods for solving optimization problems. Among the most prominent examples of such methods are simulated annealing [@johnsonaragonmcgeochschevon], neural nets ([*e.g.*]{}, [@hertzkroghpalmer]), and genetic algorithms ([*e.g.*]{}, [@mitchell96]). These methods all rely on encoding a possible solution to a problem as a bit-string or a list of spin values. A fitness of this string or spin configuration is then defined in such a way that an extremum of it corresponds to the solution of the original problem one wants to solve. Recently, the way ants and other social insects communicate to find food has inspired a completely new set of optimization methods called swarm intelligence [@bonabeu-book]. A similar method is extremal optimization, which, like swarm intelligence, is based on self-organization.

Self-organization is the process by which complicated processes in nature take place without any central control. An example is how ants find food: they do this by communicating information on where good food sources are by laying pheromone (smell) paths that attract other ants. Other examples include the occurrence of earthquakes and avalanches [@bak:book]. Extremal optimization is based on models used for simulating such systems. The Bak-Sneppen model, which is used to study evolution, consists of $N$ variables ordered in a chain and given random fitnesses. In each time-step, the variable with the worst fitness is chosen. Its fitness and those of its neighbors are then replaced with new random values. Interestingly, this simple model leads to cascading avalanches of changing fitnesses that share some of the characteristics of natural processes, such as the extinction of species. Extremal optimization uses such avalanches of change to efficiently explore large search landscapes.

Extremal optimization requires defining a fitness $\lambda_i$ for each variable in the problem. The fitness can be seen as the local contribution of variable $i$ to the total fitness of the system. In each time-step, the variables are ranked according to their local fitness. The basic version of extremal optimization then selects the variable with the worst fitness in each time-step and changes it.
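A minimal simulation of the Bak-Sneppen chain makes this avalanche mechanism concrete; the chain length, the number of steps, and the quoted critical threshold of roughly 0.667 for the one-dimensional model are illustrative values, not taken from the paper.

```python
import numpy as np

def bak_sneppen(n=200, steps=20000, seed=0):
    """Minimal Bak-Sneppen chain: repeatedly replace the least-fit site and
    its two neighbours with new random fitnesses (periodic boundaries)."""
    rng = np.random.default_rng(seed)
    fitness = rng.random(n)
    selected = []
    for _ in range(steps):
        worst = int(np.argmin(fitness))
        selected.append(fitness[worst])
        for j in (worst - 1, worst, (worst + 1) % n):
            fitness[j] = rng.random()
    return np.array(selected)

sel = bak_sneppen()
# After a transient, the selected (worst) fitness rarely exceeds a critical
# threshold of roughly 0.667: the hallmark of the self-organized critical state.
print("late-time 95th percentile of the selected fitness:",
      round(float(np.percentile(sel[-5000:], 95)), 3))
```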
In contrast to a greedy search, the variable is always (randomly) changed, even if this reduces the fitness of the system. The more advanced $\tau$-EO instead changes the variable of rank $k$ with probability proportional to $k^{-\tau}$. The value of $\tau$ that is best to use depends on the problem at hand. For detailed descriptions of the algorithm and its uses for some optimization problems, see [@boettcherpercus; @boettcher2001; @boettcherpercus:cec; @boettcherpercusgrigni; @informs].

For a given problem, the $\tau$ that gives the best balance between ergodic exploration and greedy descent should be used. For extreme values of $\tau$, the method degenerates: $\tau= 0$ corresponds to a random walk, which is far too ergodic in that it will wander all over the search space, while $\tau=\infty$ leads to a greedy search, which quickly gets stuck in local minima. By this process, extremal optimization successively changes bad variables. In this way, it is actually more similar to evolution than genetic algorithms are. Extremal optimization stresses the importance of avoiding bad solutions rather than finding good ones. The method has been applied with great success to a wide variety of problems, including graph partitioning [@boettcher:pregraph; @boettcher] and coloring [@boettcher:col], image alignment [@image] and studies of spin glasses [@bethe; @stiffness]. The method does not seem to work well for problems with a large number of connections between the variables.

The original extremal optimization algorithm achieves its emergent behavior of finding a good solution without any fine-tuning. When using the $\tau$-extension, it is necessary to find a good enough value of $\tau$ to use, but there are some general guide-lines for this. The method of updating the worst variable causes large fluctuations in the solution space. It is by utilizing these fluctuations to quickly search large areas of the energy landscape that the method gains its efficiency. Experimentally, runs of extremal optimization often follow the same general behavior. First, there is a surprisingly quick relaxation of the system to a good solution. This is then followed by a long period of slow improvements. The results obtained in the experiments described in section \[experiments\] also follow this pattern.

[**Extremal optimization**]{}

1.  Initialize variables randomly

2.  While time $<$ maximum time

    1.  Calculate local fitness $\lambda_i$ for each variable $i$

    2.  Sort $\lambda_i$

    3.  Select $k$’th largest $\lambda_i$ with probability $p_k \sim k^{-\tau}$ ($\tau$-EO)

    4.  OR select largest $\lambda_i$ (standard EO)

    5.  Change selected variable, regardless of how this affects cost

    6.  If new configuration has lowest cost so far

        1.  Store current configuration as best

3.  Output best configuration as answer

To use the method for clustering, we take the following simple steps. Using $x_i$ to denote report $i$, write the cost function as a sum of pair-wise conflicts $$C = \sum_{i<j} C(i,j) \delta_{x_i}^{x_j} .$$ The local fitness $\lambda_i$ is easily seen to be $$\lambda_i = \sum_{j\neq i} C(i,j) \delta_{x_i}^{x_j} .$$ Pseudo-code for the algorithm is shown in figure \[pseudo\]. Implementing the algorithm requires power-law distributed random numbers. These are best obtained by pre-computing two lists of numbers.
One list contains the powers $$a_k = k^{-\tau} ,$$ while the other is the cumulative sum of $a_k$, $$b_n = \sum_{k\leq n} a_k .$$ The list $b_n$ is needed in order to be able to handle a dynamically changing number of reports to cluster. The power-law distribution when there are $n$ reports is now given by $$p_k^n = \frac{a_k}{b_n} .$$ Selecting the appropriate $k$ is done by comparing successive terms with a uniformly distributed random number in the standard way, to determine in which interval of $p_k^n$ it fell.

A way of speeding up the extremal optimization method is to use a heap instead of a sorted list for storing the $\lambda_i$. A heap is a balanced binary tree in which each node has a higher value than each of its children, but there is no requirement that the children are ordered. Since insertions and deletions in a heap can be done in logarithmic time, a factor $n$ is gained in speed. For some problems, it has been shown that the error introduced by not sorting the $\lambda_i$ completely is negligible [@boettcherpercus]. In the implementation used here, this optimization was not used, in order to be able to test the correctness of using the approximate cost matrix $c_{ij}$ without introducing possible errors from using a heap.

Clustering as pre-processing for tracking {#preprocess}
=========================================

The general clustering or association problem can be formulated as a minimization problem. Introducing the notation $x_i=a$ when report $i$ is placed in cluster $a$, this can be written as $$\min_{\{x_i\}} C(\{x_i\}) , \label{eqfull}$$ where $C$ denotes the cost of a configuration. The cost includes terms that give the cost of placing reports together, and also the cost of not placing reports together.

Clustering problems are often solved in an approximate way by ignoring multi-variable interactions and focusing on pair-wise conflicts. For an example of how to use Dempster-Shafer theory to derive such an approximation, see [@johan]. The approximation of including only pair-wise conflicts leads to the expression $$C({x_i}) \approx \sum_{i<j} C(i,j) \delta_{x_i}^{x_j} . \label{eq2}$$ It would in principle be possible to include also higher-order terms in this equation; for instance, a term giving the cost of placing three given reports in the same (or different!) clusters could be included.

Using equation \[eq2\] makes it possible to solve the clustering problem by mapping it onto a spin model and using any of the many optimization methods devised for such models. It is also computationally much less demanding than using the full equation \[eqfull\], since only $N^2$ terms in the $C(i,j)$ matrix need to be calculated. For some applications, however, even these $N^2$ conflicts might require too much processing power to calculate. This is the case when a large number of reports arrive in sudden bursts. If the reports are to be used to track many objects, they first need to be assigned to the correct tracker. (Note: we assume here that the trackers used are single-target trackers. If one uses multi-target trackers based on random sets or probability hypothesis density, no such association needs to be done. Ideas for such trackers have been described in, among others, [@sidenbladhFUSION03; @mahler00].) Another motivation for trying to find an alternative to calculating all $N^2$ conflicts comes from the increasing importance of distributed architectures for fusion systems.
If the reports are distributed across a large number of nodes, we want to minimize the amount of data that needs to be transmitted across the network. If the clustering is also done using distributed computing, we need to calculate as few pair-wise conflicts as possible.

In this paper, we present an alternative to clustering such bursts of reports using the full cost function or the pair-wise cost-matrix introduced in equation \[eq2\]. The method is based on the observation that the clustering problem must have a solution, and that for many problems there is a sharp threshold in solvability when some parameter is varied [@hogghubermanwilliams]. For the problems considered in this paper, the relevant phase transition to look at is the one occurring for the graph coloring (or clustering) problem. For random graphs with average degree $\gamma$, the solvability of a randomly chosen instance is here determined by comparing $\gamma$ with a critical parameter $\gamma_c$: for $\gamma < \gamma_c$, almost all graphs are colorable ([*i.e.*]{}, they can be clustered without cost), while for larger values of $\gamma$, the probability that such a clustering exists vanishes. For more information on this phase-transition, the reader is referred to [@hogghubermanwilliams]. Information on analytical ways of determining $\gamma_c$ can be found in [@achlioptas:thesis], while numerical values are given in [@nprelax]. Note that the value of $\gamma_c$ depends on the number of clusters $k$; for large values of $k$, various approximations can be used.

Given a set ${x_i}$ of $N$ reports, we therefore propose that not all pair-conflicts are calculated. Instead, we randomly select $M=\frac{1}{2} \gamma N$ edges and calculate the conflict for these. This gives a sparse matrix $c_{ij}$ that is a subset of the full $C(i,j)$ matrix: whenever $c_{ij}$ is non-zero, it is equal to $C(i,j)$, but there might be entries in $C(i,j)$ that are not represented in $c_{ij}$. The matrix $c_{ij}$ is thus a member of the $\mathcal{G}(N,M)$ ensemble of random graphs, but the exact values of its edges are determined by the full cost matrix $C(i,j)$. We stress again that the purpose of doing this is to avoid costly computations. If it is cheap to compute all entries in $C(i,j)$, this should of course be done. If, however, each of these computations involves sophisticated processing, such as the calculation of Dempster-Shafer conflicts between complicated belief functions, then a factor $\frac{N}{\gamma}$ in computation time is gained by only calculating $c_{ij}$.

The approximate matrix $c_{ij}$ is then used for clustering the reports. The solution that is returned is an approximate solution of the complete problem only if $c_{ij}$ faithfully captures all the important structure of $C(i,j)$. In order for it to do this, we want to use a $\gamma$ that is as close to the phase-transition as possible. Problems near the phase-transition are the maximally constrained problems, which lends credibility to the correctness of the approximation $c_{ij}$.

Experiments
===========

The experiments performed all use a similar scenario. There are 3 enemy objects that are seen simultaneously by a large number of sensors. The sensors involved could be, for instance, ground-sensor networks [@iam] or packs of swarming UAVs [@gaudiano03a]. In the figures presented here, bursts of 100 reports are pre-processed for further transmission to tracking modules. Since $\gamma_c(3)\approx 4.6$, we present results for $\gamma=3$, 4, and 5.
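To make the procedure concrete, the following is a minimal sketch (not the implementation used for the experiments) that builds a sparse conflict set of $M=\frac{1}{2}\gamma N$ random pairs and clusters the reports with $\tau$-EO using the rank probabilities $p_k \sim k^{-\tau}$. The unit conflict costs and the number of iterations are illustrative choices; $\tau=1.5$ is the value used in the experiments below.

```python
import numpy as np

def tau_eo_cluster(conflicts, n_reports, n_clusters=3, tau=1.5, steps=20000, seed=0):
    """Minimal tau-EO clustering sketch. `conflicts` is a dict {(i, j): cost}
    holding only the randomly selected pair-wise conflicts (the sparse c_ij)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(n_clusters, size=n_reports)         # random initial assignment

    ranks = np.arange(1, n_reports + 1)                   # pre-computed p_k ~ k^(-tau)
    p = ranks.astype(float) ** -tau
    p /= p.sum()

    neighbours = [[] for _ in range(n_reports)]           # adjacency built from c_ij
    for (i, j), c in conflicts.items():
        neighbours[i].append((j, c))
        neighbours[j].append((i, c))

    def local_fitness(i):                                 # conflict carried by report i
        return sum(c for j, c in neighbours[i] if x[j] == x[i])

    def total_cost():
        return sum(c for (i, j), c in conflicts.items() if x[i] == x[j])

    best_x, best_cost = x.copy(), total_cost()
    for _ in range(steps):
        lam = np.array([local_fitness(i) for i in range(n_reports)])
        order = np.argsort(-lam)                          # rank 1 = worst (largest conflict)
        pick = order[rng.choice(n_reports, p=p)]          # rank k chosen with prob ~ k^-tau
        x[pick] = (x[pick] + rng.integers(1, n_clusters)) % n_clusters   # always change it
        cost = total_cost()
        if cost < best_cost:                              # keep the best-so-far configuration
            best_x, best_cost = x.copy(), cost
    return best_x, best_cost

# Toy usage: N reports and M = gamma*N/2 randomly chosen unit conflicts.
N, gamma = 100, 3.0
rng = np.random.default_rng(1)
pairs = set()
while len(pairs) < int(gamma * N / 2):
    i, j = rng.integers(N, size=2)
    if i != j:
        pairs.add((int(min(i, j)), int(max(i, j))))
conflicts = {pair: 1.0 for pair in pairs}
assignment, cost = tau_eo_cluster(conflicts, N)
print("best cost found:", cost)
```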
We ran several different simulations for each parameter setting. Some of the figures show results of a single sample, while others show averaged results. Two kinds of averages were performed. First, a number of different sets of sensor reports were generated. Second, a number of different approximate cost-matrices $c_{ij}$ were calculated for a given set of sensor reports. Changing the number of averages did not affect the results. Each of the figures shows two curves as functions of time: the cost/fitness of the current solution and the best cost obtained so far. For each scenario, a true cost matrix $C(i,j)$ was determined and used for calculating $c_{ij}$ and for comparing the results of our method with the true solution. A real implementation would of course not calculate $C(i,j)$; it was used here only to test our method.

For $\gamma=3$, below the phase transition, figures \[g3eo\] and \[g3t15\] show the difference between the standard extremal optimization algorithm and $\tau$-EO. The value of $\tau$ to use was determined using some experimentation; it depends on the structure of the problem. It can be seen quite clearly that $\tau$-EO is better. The figures show the result of a typical run, with no averaging being done. Figure \[g3t15ave\] shows the behavior for $\gamma=3$ when averaged over 10 different problems. Here, for each problem the algorithm has also been restarted 10 times using different approximative cost-matrices.

What happens when increasing $\gamma$ can be clearly seen by comparing figures \[g3eo\] and \[g4eo\], which both show one run with the standard extremal optimization algorithm, but for $\gamma=3$ and 4, respectively. A single run with $\tau$-EO for $\gamma=4$ and $\tau=1.5$ is shown in figure \[g4t15\], while the results averaged over 10 problems and 10 graphs are shown in figure \[g4t15ave\].

Now we turn to results for $\gamma=5$, that is, above the phase transition. Here the approximate cost matrices should give problems that are not solvable. The experiments show that optimization of the approximate cost-matrices using $\tau$-EO gives a reasonably good, but not perfect, approximation to the true solution. More detailed studies of the results of tracking based on this method need to be performed before the method can be evaluated.

It is necessary to make many more tests to see which approximation of the true cost matrix should be used. Here we chose to use a matrix that is as close to the phase transition as possible. The motivation for this was that the problems should definitely be solvable. By construction, this means that the average properties of the cost matrix obtained from the sensor networks should have $\gamma < \gamma_c$. However, it is not completely clear that this should be valid for a specific instance of the cost matrix from a specific instance of the scenario. This thus needs to be confirmed in future work.

Discussion and future work
==========================

In conclusion, this paper used extremal optimization to do pre-processing of sensor reports before they are used by single-target trackers. When receiving large bursts of sensor reports from sensor networks or swarming UAVs, traditional methods for clustering and association may be too slow. The approach presented in this paper has two components:

1.  Since it costs too much to calculate the complete $N \times N$ cost matrix when $N$ is large and the cost of placing two reports in the same cluster is complicated to compute, it is beneficial to use an approximate cost matrix.
By taking advantage of results on phase transitions occurring for clustering problems on random graphs, it is possible to determine how many pair-wise conflicts must be calculated in order to get a fair approximation of the full cost matrix.

2.  We used the extremal optimization algorithm to solve the approximate problem.

The method presented here can be extended in a number of ways. The extremal optimization method has been extended to punish variables that are flipped often [@middleton]. This leads to more efficient exploration of rugged energy landscapes. It would be interesting to see how this affects the run-time and the quality of solutions for the pre-processing method presented here.

The dynamic clustering briefly mentioned in the Introduction was not implemented in the experiments described here, but the only change needed in the algorithm is that the $\lambda_i$ must be re-calculated taking into account also the new reports. Since they have to be re-calculated in every time-step anyway, the dynamic version of the method has no extra overhead compared to the static one.

The extremal optimization method is a general-purpose algorithm. Among its advantages is that it is an any-time algorithm, [*i.e.*]{}, it always has a “best so far” solution that can be output and used. It would be interesting to investigate the behavior of extremal optimization for resource allocation and scheduling problems. In particular, extremal optimization should be a better alternative than genetic algorithms for many fusion applications.

The method used to determine the approximate cost matrix is the simplest possible: it relies only on the most basic information about the problem and on phase transitions in the $\mathcal{G}(N,M)$ ensemble. By using more information on the structure of graphs near the phase transition, it might be possible to get better approximations. Another approach would be to try to capture the characteristics of the complete cost matrix for specific configurations of sensors and use this information to get a better approximation. We caution the reader that while $c_{ij}$ seems to be a good approximation of $C(i,j)$ for the problems tested in this paper, this might not always be the case, and more thorough investigations of this need to be done using real data.

It might also be possible to use more characteristics of the problem to construct the approximate cost-matrix $c_{ij}$. Examples of this could be using more advanced random graph models than $\mathcal{G}(N,M)$, or using dynamically obtained information on the configuration of our sensors or on the last known formation of the enemy objects. In order to extend the method in the ways outlined above, it should be tested using sensor data from large-scale fusion demonstrator systems, such as IFD03 [@ifd03].
--- abstract: 'A review of the calculation of the leading three-loop electroweak corrections to the $\rho$-parameter, ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ and the $W$-boson mass is presented. The heavy Higgs mass expansion and the renormalization are discussed.' ---

[^1]

[Radja Boughezal]{}\ [*Institut für Theoretische Physik und Astrophysik, Universität Würzburg, D-97074 Würzburg, Germany*]{}\ and\ [Bas Tausk]{}\ [*Fakultät für Mathematik und Physik, Albert-Ludwigs-Universität Freiburg,\ D-79104 Freiburg, Germany*]{}

Introduction {#sec:intro}
============

The Electroweak Standard Model (EWSM) is in good agreement with all experimentally known phenomena of electroweak origin, with the exception of the evidence of neutrino mixing. The only ingredient predicted by this model that has not been seen yet is the Higgs particle. The direct search at LEP provided us with a lower limit on the mass of the Standard Model Higgs-boson $m_H$, excluding the region below $114.4~$GeV [@Barate:2003sz]. The global fit of the experimental data to the Standard Model, based on confronting theoretical predictions for electroweak observables with their experimental values, favours a light Higgs-boson ($m_H \leq 186~$GeV at one-sided $95\%$ Confidence Level (CL) [@LEP]).\
However, there is a discrepancy of about $3.2\,\sigma$ between the two most precise measurements of the electroweak mixing angle ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$, which is used to set stringent bounds on the Higgs-boson mass. The measurement based on the leptonic asymmetry parameter $A_l$ at SLD, together with the $W$-boson mass measurement from the Tevatron and LEP, points to a light Higgs-boson with a mass slightly below the lower bound from the direct searches. On the other hand, the measurement based on the $b$-quark forward-backward asymmetry $A_{FB}^{0,b}$ at LEP favours a relatively heavy Higgs-boson with a mass around $500~$GeV [@LEP]. Since the center of mass energies of present colliders do not allow us to probe the region of a heavy Higgs-boson, the sensitivity of radiative corrections to low energy electroweak observables to a heavy Higgs-boson mass becomes an important tool in setting limits on $m_H$ [@LEP].\
As the Yukawa couplings are very small, the Higgs-dependent effects are limited to corrections to the vector-boson propagators. They give rise to shifts, e.g., in the $\rho$-parameter, the $W$-boson mass, and in the effective leptonic weak mixing angle ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$. These shifts are often parametrized by S, T and U or $\epsilon_1,\,\epsilon_2, \, \epsilon_3$ [@Eidelman:2004wy].\
For a light Higgs-boson, the Higgs mass dependence of theoretical predictions is mainly due to one-loop radiative corrections to the gauge-boson propagators, which grow logarithmically with $m_H$ [@Veltman:1976rt]. However, because the Higgs self-interaction is proportional to $m_H^2$, higher order radiative corrections which grow like powers of $m_H$ could become important if the Higgs-boson mass is much larger than the Z-boson mass.\
At the two-loop level, the leading corrections are proportional to $m_H^2$, but the numerical coefficient of these terms turns out to be very small [@vanderBij2; @Barbieri:1993ra], and therefore they are not important for $m_H$ less than a few TeV.
However it has been suggested that the smallness of the two-loop corrections may be somewhat accidental [@akhoury], therefore important effects might first appear only at the three-loop level.\ The situation was only clarified recently by an explicit leading three-loop calculation for three precision variables, the $\rho$-parameter, the electroweak mixing angle ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$, and the $W$-boson mass $M_W$ [@Boughezal:2004ef-2005].\ The electroweak $\rho$-parameter is a measure of the relative strengths of neutral and charged-current interactions in four-fermion processes at zero momentum transfer. In the Standard Model, at tree level, it is related to the W and Z boson masses by: $$\label{eq:rhotree} \rho = \frac{M_W^2}{c_W^2\,M_Z^2} = 1 \, ,$$ where $c_W = \cos \theta_W$. Including higher order corrections modifies this relation into $$\rho = \frac{1}{1 - \Delta\rho}.$$ Here $\Delta\rho$ parametrises all higher loop corrections which are sensitive to the existence of a heavy Higgs particle. The leading one- and two-loop corrections, which grow logarithmically and quadratically with $m_H$ respectively, have been calculated [@Veltman:1976rt; @vanderBij:1983bw].\ The sine of the effective leptonic weak mixing angle ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ is defined in terms of the couplings of the $Z$-boson to leptons. The complete electroweak fermionic corrections to ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ at the two-loop level are known [@Awramik:2004ge-Hollik:2005va]. Recently, the Higgs-dependent electroweak two-loop bosonic contributions to this observable have been completed [@Hollik:2005ns]. For the $W$-boson mass, both the fermionic and the bosonic corrections have been calculated at the two-loop level [@Awramik:2003rn].\ In this contribution we review the leading three-loop bosonic corrections to the $\rho$-parameter, ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ and $M_W$ [@Boughezal:2004ef-2005], which grow like $m_H^4$ in the large Higgs mass limit.\ The calculation is organized in such a way that the leading contributions come only from self energy corrections to the gauge boson propagators, often referred to as [*oblique*]{} corrections in the literature, and not from vertex or box diagrams. This is achieved by our choice of renormalization scheme. We should mention at this point that the renormalization is performed up to the two-loop level only, removing sub-divergences from the gauge boson self-energies, but not yet the overall divergences. This is due to the fact that the three-loop counter terms cancel in $\Delta^{(3)}\rho$, $\Delta^{(3)}\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}$ and $\Delta^{(3)} M_W$. The renormalized self-energies are then related to physical observables through the formalism of S, T and U parameters, which was developed by Peskin and Takeuchi to describe the effect of heavy particles on electroweak precision observables [@Peskin]. They are defined in terms of the transverse gauge boson self-energies at zero momentum transfer and their first derivative w.r.t their momentum. These self-energies are not observable individually and may still contain ultraviolet divergences. 
However, the three combinations $$\begin{aligned} \label{eq:Sdef} S &\equiv& \frac{4 s_W^2 c_W^2}{\alpha} \left( \Sigma^{\prime ZZ}_T - \frac{c_W^2-s_W^2}{c_W s_W} \Sigma^{\prime AZ}_T - \Sigma^{\prime AA}_T \right) \\ \label{eq:Tdef} T &\equiv& \frac{1}{\alpha M_W^2} \left( c_W^2 \Sigma^{ZZ}_T - \Sigma^{WW}_T \right) \\ \label{eq:Udef} U &\equiv& \frac{4 s_W^2}{\alpha} \left( \Sigma^{\prime WW}_T - c_W^2 \Sigma^{\prime ZZ}_T - 2 c_W s_W \Sigma^{\prime AZ}_T - s_W^2 \Sigma^{\prime AA}_T \right)\end{aligned}$$ where $s_W=\sin\theta_W$ and $\alpha$ is the fine structure constant, are finite and observable. Using this formalism, the shifts to $\rho,\,{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ and $M_W$ are parametrised as: $$\begin{aligned} \label{eq:deltarho} &&\Delta\rho = \alpha \, T \,, {\nonumber}\\ \label{eq:deltamw} &&\Delta M_W = \frac{\alpha M_W}{2(c_W^2-s_W^2)} \left( - \frac{1}{2} S + c_W^2 T + \frac{c_W^2-s_W^2}{4s_W^2} U \right) \,, {\nonumber}\\ \label{eq:deltasin} &&\Delta{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}\,=\, \frac{\alpha}{c_W^2-s_W^2} \left( \frac{1}{4} S - s_W^2 c_W^2 T \right) \,. \end{aligned}$$

Calculation and renormalization
===============================

The bare gauge boson self-energies are decomposed into transverse and longitudinal components according to $$\Sigma^{X}_{\mu\nu}(p) = \left(g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^2}\right) \Sigma^{X}_T(p^2) + \frac{p_{\mu}p_{\nu}}{p^2} \, \Sigma^{X}_L(p^2) \, ,$$ where $X = AA, AZ, ZZ, WW$. The scalar functions $\Sigma^{X}_{T}(p^2)$ are then expanded in a Taylor series in their momentum $p$ up to order $p^2$. Higher order terms in $p^2$ are suppressed by powers of the heavy Higgs boson mass. For the $T$ parameter, only the constant term of this expansion (i.e., $p^2 = 0$) is required. After this step, we are left with vacuum integrals which, in general, depend on three different scales: $m_H,\,M_W$ and $M_Z$. In order to separate the dependence of these integrals on the large scale $m_H$ from the small scales $M_W$ and $M_Z$ and extract the leading $m_H$ terms, we perform an asymptotic large mass expansion following the method of the expansion by regions [@smirnovbook]. The expansion is constructed by considering different regions in loop momentum space, distinguished by the set of propagator momenta which are large or small in those regions. In each region, a Taylor expansion of all propagators in the small masses and in the small momenta of that region is performed. Typically, the expansion generates extra scalar products of loop momenta in the numerator and higher powers of denominators, as compared to the original diagrams. The resulting expression is then integrated over the whole loop momentum space. For the three-loop vacuum topology shown in Fig. \[mercedes\], there are $15$ regions in loop momentum space to consider (see [@Boughezal:2004ef-2005] for more details). The distribution of large and small masses in each diagram determines the number of regions that contribute to the corresponding integral up to the leading terms we are interested in. For example, only four regions give a non-vanishing contribution to the diagram shown in Fig. \[WWEXPREGexample\], up to order $m_H^4$. They correspond to:

- the region where all internal momenta are large. In this case one expands the propagators in $M_{\phi}$ only; the result is a one-scale three-loop integral.

- the region where $k_1$ is small.
Here we expand in $M_\phi$ and $k_1$, which leads to the product of a one-loop integral depending on $M_{\phi}$ times a two-loop integral that depends on $m_H$.

- the region where $k_1+k_2$ is small. After expanding in $M_{\phi}$ and $k_1+k_2$ we get, similarly to the previous case, a product of one- and two-loop integrals.

- the region where $k_1+k_2+k_3$ is small. Here again we expand in $M_\phi$ and $k_1+k_2+k_3$ and get a product of one- and two-loop integrals.

All the other regions produce either scaleless integrals, which are zero in dimensional regularization, or terms which do not have enough powers of $m_H$ to contribute to the leading order.

[*Fig. \[mercedes\]: the three-loop vacuum topology.*]{}

From the expansion in the $15$ regions we get two kinds of integrals:

- factorizable diagrams, which are products of one- (two-) loop vacuum integrals depending on $m_H$ and two- (one-) loop vacuum integrals depending on $M_W$ and $M_Z$.

- non-factorizable three-loop vacuum integrals, which are either single-scaled, depending on $m_H$, or double-scaled, depending on $M_W$ and $M_Z$.

[*Fig. \[WWEXPREGexample\]: an example diagram with internal $\phi^{-}$ lines, used to illustrate the expansion by regions.*]{}

The Integration-By-Parts (IBP) method [@ibp] is then used to reduce all the vacuum integrals to the master ones. We classify the single-scaled three-loop non-factorizable integrals into ten different kinds, depending on the distribution of masses in the propagators. Their reduction to a small set of master integrals [@Broadhurst:1998rz; @Fleischer:1999mp] was done in two ways. On the one hand, reduction formulae based on the Integration-By-Parts identities have been constructed. On the other hand, the Automatic Integral Reduction package $AIR$ [@Anastasiou:2004vj] was used as a cross-check. The latter was also used to reduce the double-scaled three-loop non-factorizable integrals. Explicit formulae for their master integrals are not needed, as they all cancel in the sum over all diagrams.\
The longitudinal parts of the gauge boson self-energies are related to the self-energies of the Goldstones and mixings between gauge bosons and Goldstones by a set of Ward identities. We have verified that these Ward identities are satisfied by the full (i.e., including tadpoles) unrenormalised self-energies up to order $p^2$. As we have mentioned earlier, leaving out vertex and box contributions requires a proper way of renormalizing. Our renormalization conditions are fixed in such a way that the renormalization removes all the terms of order $m_H^2$ and $m_H^4$ from the one- and two-loop gauge boson self-energies, the charged and neutral Goldstone self-energies and the mixings between gauge bosons and Goldstones. This ensures that no two- or three-loop vertex or box graphs containing such self-energies as subgraphs can give corrections that grow like $m_H^2$ or $m_H^4$ in the large Higgs mass limit (see Fig. \[vertexZZSE\]).
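To illustrate the kind of identity that underlies the IBP reduction mentioned above, consider the simplest one-loop analogue of the vacuum integrals treated here; this textbook example is included only for orientation and is not one of the three-loop integrals of the actual calculation. For the massive tadpole $T(a) := \int \frac{d^d k}{(2\pi)^d}\, (k^2-m^2)^{-a}$, integrating a total derivative gives $$0 = \int \frac{d^d k}{(2\pi)^d}\, \frac{\partial}{\partial k^{\mu}} \left[ \frac{k^{\mu}}{(k^2-m^2)^{a}} \right] = \int \frac{d^d k}{(2\pi)^d} \left[ \frac{d}{(k^2-m^2)^{a}} - \frac{2\,a\,k^2}{(k^2-m^2)^{a+1}} \right] \, ,$$ and writing $k^2 = (k^2-m^2) + m^2$ yields the recursion $$(d-2a)\, T(a) = 2\,a\,m^2\, T(a+1) \, ,$$ so that every $T(a)$ with $a \ge 1$ reduces to the single master integral $T(1)$. The three-loop identities used in the reduction described above are of the same type, but relate integrals with several propagators and masses and therefore call for an automated treatment such as $AIR$.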
[*Fig. \[vertexZZSE\]: a vertex graph containing a gauge boson self-energy subgraph.*]{}

As a check on the renormalization, we have verified that the renormalized longitudinal photon self-energy and photon-Z mixing are zero.

Results and conclusion
======================

The shifts to the electroweak precision observables $\rho$, ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ and $M_W$ relative to their tree level values, expressed in terms of $\alpha,\, G_F $ and $M_Z$, are given by $$\begin{aligned} && \rho = \frac{1}{1 - \Delta\rho}\,,\\ && {\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}= \Delta{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}+ \frac{1}{2} -\sqrt{\frac{1}{4} - \frac{\pi\alpha}{\sqrt{2}\,G_F M_Z^2}}\, ,\\ && M_W = \Delta M_W + M_Z \sqrt{\frac{1}{2} +\sqrt{\frac{1}{4} - \frac{\pi\alpha}{\sqrt{2}\,G_F M_Z^2}}} \, , \end{aligned}$$ with $\Delta\rho$, $\Delta{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ and $\Delta M_W$ defined in (\[eq:deltasin\]) in terms of the parameters $S$, $T$ and $U$. While the $U$-parameter vanishes in the approximation where only quartic terms or higher powers in $m_H$ are kept at the three-loop level, the $S$- and the $T$-parameters give the following contributions $$\begin{aligned} && S^{(3)} = \frac{1}{4\pi} {\left(\frac{g^2}{16\pi^2}\right)}^2 \frac{m_H^4}{M_W^4} \left(\, 1.1105 \, \right) \,,\\ && T^{(3)} = \frac{1}{4\pi c_W^2} {\left(\frac{g^2}{16\pi^2}\right)}^2 \frac{m_H^4}{M_W^4} \left(\, -1.7282 \, \right) \,.\end{aligned}$$ Using $\,g^2 \,=\, e^2/s_W^2 \,=\, 4\pi\alpha/s_W^2$ for the weak coupling constant, with $s_W^2 \,=\, 0.23$, the shifts to $\rho$, ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ and $M_W$ are $$\begin{aligned} && \Delta^{(3)}\rho = -8.3\times 10^{-9}\times m_H^4/M_W^4\,,\\ && \Delta^{(3)}{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}= 4.6\times 10^{-9}\times m_H^4/M_W^4 \,,\\ && \Delta^{(3)} M_W = -6.3\times 10^{-4}~\mbox{MeV}\times m_H^4/M_W^4 .\end{aligned}$$

[*Fig. \[fig:rho\]: one-, two- and three-loop shifts in $\rho$ as a function of $m_H/M_W$; the curves show $\Delta^{(1)}$, $\Delta^{(1)}+\Delta^{(2)}$ and $\Delta^{(1)}+\Delta^{(2)}+\Delta^{(3)}$.*]{}

[*Fig. \[fig:sinth\]: shifts in ${\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ as a function of $m_H/M_W$; the curves show $\Delta^{(1)}$, $\Delta^{(1)}+\Delta^{(2)}$ and $\Delta^{(1)}+\Delta^{(2)}+\Delta^{(3)}$.*]{}
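To get a feeling for the size of these three-loop terms, the coefficients quoted above can be evaluated for a few values of $m_H/M_W$. The short script below is purely illustrative: the numbers are the fitted coefficients from the text, while the chosen mass ratios and variable names are ours.

```python
# Size of the leading three-loop shifts, using the coefficients quoted above.
for r in (5.0, 10.0, 25.0):                 # r = m_H / M_W
    r4 = r ** 4
    d_rho  = -8.3e-9 * r4                   # Delta^(3) rho
    d_sin2 =  4.6e-9 * r4                   # Delta^(3) sin^2(theta_eff^lept)
    d_mw   = -6.3e-4 * r4                   # Delta^(3) M_W, in MeV
    print(f"m_H/M_W = {r:4.1f}: d_rho = {d_rho:+.2e}, "
          f"d_sin2 = {d_sin2:+.2e}, d_MW = {d_mw:+8.1f} MeV")
```

For $m_H/M_W = 10$, for example, this gives $\Delta^{(3)}\rho \approx -8\times 10^{-5}$, $\Delta^{(3)}{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}} \approx 5\times 10^{-5}$ and $\Delta^{(3)} M_W \approx -6$ MeV.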
The Higgs mass dependence of the $\rho$-parameter and $\Delta{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ is shown in Figs. \[fig:rho\] and \[fig:sinth\]. It turns out that the sign of the leading three-loop corrections to $\Delta\rho$, $\Delta{\sin^2\theta_{\mbox{eff}}^{\mbox{lept}}}$ and $\Delta M_W$ is the same as the sign of the one-loop contributions. The original question that motivated these calculations was whether inclusion of the three-loop corrections with strong interactions could lead to an effect mimicking the one-loop effects of a light Higgs boson. The result of the investigations shows that this is highly unlikely. As the signs of the three-loop corrections are the same as those of the one-loop corrections, with increasing Higgs mass the three-loop terms only make the effects grow faster, instead of partially cancelling the one-loop corrections. Therefore the presence of a strongly interacting heavy Higgs-sector appears to be extremely unlikely, and the electroweak precision data can indeed be taken as a strong indication for a light Higgs boson sector.

Acknowledgement {#acknowledgement .unnumbered}
===============

R. B. would like to thank the organizers of the conference for the pleasant atmosphere. This work was supported by the Sofja Kovalevskaja Award of the Alexander von Humboldt Foundation sponsored by the German Federal Ministry of Education and Research and the DFG within the project “(Nicht)-perturbative Quantenfeldtheorie”.

[05]{} R. Barate [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett. B [**565**]{} (2003) 61 \[arXiv:hep-ex/0306033\]. The LEP Collaborations ALEPH, DELPHI, L3 and OPAL, the LEP Electroweak Working Group, the SLD Electroweak Group and the SLD Heavy Flavor Group, arXiv:hep-ex/0509008. S. Eidelman [*et al.*]{} \[Particle Data Group\], Phys. Lett. B [**592**]{} (2004) 1. M. J. G. Veltman, Acta Phys. Polon. B [**8**]{} (1977) 475. J. J. van der Bij, Nucl. Phys. B [**248**]{} (1984) 141;\ J. J. van der Bij, Nucl. Phys. B [**267**]{} (1986) 557. J. van der Bij and M. J. G. Veltman, Nucl. Phys. B [**231**]{} (1984) 205. R. Barbieri, P. Ciafaloni and A. Strumia, Phys. Lett. B [**317**]{} (1993) 381. R. Akhoury, J. J. van der Bij and H. Wang, Eur. Phys. J. C [**20**]{} (2001) 497 \[arXiv:hep-ph/0010187\]. R. Boughezal, J. B. Tausk and J. J. van der Bij, Nucl. Phys. B [**713**]{} (2005) 278 \[arXiv:hep-ph/0410216\];\ R. Boughezal, J. B. Tausk and J. J. van der Bij, Nucl. Phys. B [**725**]{} (2005) 3 \[arXiv:hep-ph/0504092\]. M. Awramik, M. Czakon, A. Freitas and G. Weiglein, Phys. Rev. Lett. [**93**]{} (2004) 201805 \[arXiv:hep-ph/0407317\];\ W. Hollik, U. Meier and S. Uccirati, arXiv:hep-ph/0507158. W. Hollik, U. Meier and S. Uccirati, arXiv:hep-ph/0509302. M. Awramik, M. Czakon, A. Freitas and G. Weiglein, Phys. Rev. D [**69**]{} (2004) 053006 \[arXiv:hep-ph/0311148\]. M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. [**65**]{} (1990) 964;\ M. E. Peskin and T. Takeuchi, Phys. Rev. D [**46**]{} (1992) 381. V. A. Smirnov, “Applied asymptotic expansions in momenta and masses,” Springer-Verlag, Berlin (2002). F. V. Tkachov, Phys. Lett. [**B100**]{} (1981) 65;\ K. G. Chetyrkin and F. V. Tkachov, Nucl. Phys. [**B192**]{} (1981) 159. C. Anastasiou and A. Lazopoulos, JHEP [**0407**]{}, 046 (2004) \[arXiv:hep-ph/0404258\]. D. J. Broadhurst, Eur. Phys. J. C [**8**]{} (1999) 311 \[arXiv:hep-th/9803091\]. J. Fleischer and M. Y. Kalmykov, Phys. Lett.
B [**470**]{} (1999) 168 \[arXiv:hep-ph/9910223\]. [^1]: Presented at the XXIX conference ‘Matter to the Deepest’, Ustron (Poland), September 2005. Published in Acta Phys. Polon. B [**36**]{} (2005) 3275, ©Acta Physica Polonica
--- abstract: | This work is devoted to almost sure and moment exponential stability of regime-switching jump diffusions. The Lyapunov function method is used to derive sufficient conditions for stabilities for general nonlinear systems; which further helps to derive easily verifiable conditions for linear systems. For one-dimensional linear regime-switching jump diffusions, necessary and sufficient conditions for almost sure and $p$th moment exponential stabilities are presented. Several examples are provided for illustration. [Keywords.]{} Regime-switching jump diffusion, almost sure exponential stability, $p$th moment exponential stability, Lyapunov exponent, Poisson random measure. [Mathematics Subject Classification.]{} 60J60, 60J75, 47D08. author: - 'Zhen Chao[^1], Kai Wang[^2], Chao Zhu[^3],and Yanling Zhu[^4]' title: 'Almost Sure and Moment Exponential Stability of Regime-Switching Jump Diffusions' --- Introduction {#sect-introduction} ============ Applications of stochastic analysis have emerged in various areas such as financial engineering, wireless communications, mathematical biology, and risk management. One of the salient features of such systems is the coexistence of and correlation between continuous dynamics and discrete events. Often, the trajectories of these systems are not continuous: there is day-to-day jitter that causes minor fluctuations as well as big jumps caused by rare events arising from, e.g., epidemics, earthquakes, tsunamis, or terrorist atrocities. On the other hand, the systems often display qualitative changes. For example, as demonstrated in [@BW87], the volatility and the expected rate of return of an asset are markedly different in the bull and bear markets. Regime-switching diffusion with Lévy type jumps naturally captures these inherent features of these systems: the Lévy jumps are well-known to incorporate both small and big jumps ([@APPLEBAUM; @Cont-Tankov]) while the regime switching mechanisms provide the qualitative changes of the environment ([@MaoY; @YZ-10]). In other words, regime-switching diffusion with Lévy jumps provides a uniform and realistic yet mathematically tractable platform in modeling a wide range of applications. Consequently increasing attention has been drawn to the study of regime-switching jump diffusions in recent years. Some recent work in this vein can be found in [@Xi-09; @Zong-14; @Yin-Xi-10; @ShaoX-14; @ZhuYB-15] and the references therein. Regime-switching jump diffusion processes can be viewed as jump diffusion processes in random environments, in which the evolution of the random environments is modeled by a continuous-time Markov chain or more generally, a continuous-state-dependent switching process with a discrete state space. Seemingly similar to the usual jump diffusion processes, the behaviors of regime-switching jump diffusion processes can be markedly different. For example, [@YinZ13 Section 5.6] illustrates that two stable diffusion processes can be combined via a continuous-time Markov chain to produce an unstable regime-switching diffusion process. See also [@Costa-13] for similar observations. This paper aims to investigate almost sure and moment exponential stability for regime-switching diffusions with Lévy type jumps. This is motivated by the recent advances in the investigations of stability of regime-switching jump diffusions in [@Yin-Xi-10; @Zong-14] and the references therein. 
In [@Yin-Xi-10; @Zong-14], the Lévy measure $\nu$ on some measure space $(U,\mathfrak U)$ is assumed to be a finite measure with $\nu(U) < \infty$. Consequently, in these models, the jump mechanism is modeled by compound Poisson processes and there are finitely many jumps in any finite time interval. In contrast, in our formulation, the Lévy measure $\nu$ on $(\R^{n}-\{0\}, \B(\R^{n}-\{0\}))$ merely satisfies $\int_{\R^{n}-\{0\}} (1\wedge |z|^{2})\nu({\mathrm{d}}z)< \infty$ and hence it is not necessarily finite. This formulation allows the possibility of infinite number of “small jumps” in a finite time interval. Indeed, such “infinite activity models” are studied in the finance literature, such as the variance gamma model in [@Seneta-04] and the normal inverse Gaussian model in [@Barn-98]. See also the recent paper [@Barn-13] for energy spot price modeling using Lévy processes. Our focus of this paper is to study almost sure and moment exponential stabilities of the equilibrium point $x=0$ of regime-switching jump diffusion processes. To this end, we first observe the “nonzero” property, which asserts that almost all sample paths of all solutions to starting from a nonzero initial condition will never reach the origin with probability one. This phenomenon was first established for diffusion processes in [@khasminskii-12] and later extended to regime-switching diffusions in [@MaoY; @YZ-10] under the usual Lipschitz and linear growth conditions. For processes with Lévy type jumps, additional assumptions are needed to handle the jumps to obtain the “nonzero” property. For instance, [@ApplS-09] and [@Wee-99] contain different sufficient conditions. The differences are essentially on the assumptions concerning the jumps. Here we propose a different sufficient condition than those in [@ApplS-09; @Wee-99] for the “nonzero” property for regime-switching jump diffusion. We show in Lemma \[lem-0-inaccessible\] that the “nonzero” property holds under the usual Lipschitz and linear growth conditions on the coefficients of together with Assumption \[assump-jump\]. Note that it is quite easy to verify Assumption \[assump-jump\] in many practical situations; see, for example, the discussions in Remark \[rem-concerning-assumption-jump\]. With the “nonzero” property at our hands, we proceed to obtain sufficient conditions for almost sure and $p$th moment exponential stabilities of the equilibrium point of nonlinear regime-switching jump diffusions. Similar to the related results in [@ApplS-09] for jump diffusions, these sufficient conditions for stability are expressed in terms of the existence of appropriate Lyapunov functions. The details are spelled out in Theorems \[thm-as-exp-stable\] and \[thm-moment-stab\], and Corollary \[coro-as-exp-stable\]. Also, as observed in [@Costa-13; @YinZ13; @YZ-10] for regime-switching diffusions, our results demonstrate that the switching mechanism can contribute to the stabilization or destabilization of jump diffusion processes. Next we show in Theorem \[thm-moment-as-stable\] that $p$th ($p\ge 2$) moment exponential stability implies almost sure exponential stability for regime-switching jump diffusions under a certain integrability condition on the jump term. Such a result has been established for diffusions in [@khasminskii-12], jump diffusions in [@ApplS-09], and regime-switching diffusions in [@MaoY]. In addition, we derive a sufficient condition for $p$th moment exponential stability using $M$-matrices in Theorem \[thm-moment-stab-m-matrix\]. 
The aforementioned general results are then applied to treat linear regime-switching jump diffusions. For one-dimensional systems, we obtain necessary and sufficient conditions for almost sure and $p$th moment exponential stabilities in Propositions \[prop-as-exp-stable-1d\] and \[prop-moment-exp-stab-1d\], respectively. For the multidimensional system, we present verifiable sufficient conditions for almost sure and moment exponential stability in Propositions \[prop-exp-stab-creterion-b\], \[cor-p-stab\], and \[prop-as-moment-stab-M-matrix\]. To illustrate the results, we also study several examples in Section \[sect:examples\]. The remainder of the paper is organized as follows. After a brief introduction to regime-switching jump diffusion processes in Section \[sect-formulation\], we proceed to deriving sufficient conditions for almost sure and $p$th moment exponential stabilities of the equilibrium point of the nonlinear system in Section \[sect:nonlinear\]. Section \[sect-linear\] treats stability of the equilibrium point of linear systems. Finally we conclude the paper with conclusions and remarks in Section \[sect-conclusion\]. To facilitate the presentation, we introduce some notation that will be used often in later sections. Throughout the paper, we use $x'$ to denote the transpose of $x$, and $x'y$ or $x\cdot y$ interchangeably to denote the inner product of the vectors $x$ and $y$. If $A$ is a vector or matrix, then $|A| : = \sqrt{\tr(AA')}$, $\| A\|:= \sup\{|Ax|: x\in \R^{n}, |x| =1\}$, and $A\gg 0$ means that every element of $A$ is positive. For a square matrix $A $, $\rho(A)$ is the spectral radius of $A$. Moreover if $A$ is a symmetric square matrix, then $\lambda_{\max}(A) $ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$, respectively. For sufficiently smooth function $\phi: \R^n \to \R$, $D_{x_i} \phi= \frac{\partial \phi}{\partial x_i}$, $D_{x_ix_j} \phi= \frac{\partial^2 \phi}{\partial x_i\partial x_j}$, and we denote by $D\phi =(D_{x_1}\phi, \dots, D_{x_n}\phi)'\in \R^{n}$ and $D^2\phi =(D_{x_ix_j}\phi) \in \R^{n\times n}$ the gradient and Hessian of $\phi$, respectively. For $k \in \mathbb N$, $C^{k}(\R^{n})$ is the collection of functions $f: \R^{n }\mapsto \R$ with continuous partial derivatives up to the $k$th order while $C^{k}_{c} (\R^{n})$ denotes the space of $C^{k}$ functions with compact support. If $B $ is a set, we use $B^o$ and $I_B$ to denote the interior and indicator function of $B$, respectively. Throughout the paper, we adopt the conventions that $\sup \emptyset =-\infty$ and $\inf \emptyset = + \infty$. Formulation {#sect-formulation} =========== Let $(\Omega, {\ensuremath{\mathcal F}}, {\left\{{\ensuremath{\mathcal F}}_{t}\right\}}_{t\ge 0}, {\ensuremath{\mathbb P}})$ be a filtered probability space satisfying the usual condition on which is defined an $n$-dimensional standard ${\ensuremath{\mathcal F}}_t$-adapted Brownian motion $W{(\cdot)}$. Let ${\left\{\psi(\cdot) \right\}}$ be an ${\ensuremath{\mathcal F}}_t$-adapted Lévy process with Lévy measure $\nu(\cdot)$. Denote by $N(\cdot,\cdot)$ the corresponding ${\ensuremath{\mathcal F}}_t$-adapted Poisson random measure defined on $\R_+ \times \R^n_0$: $$N(t,U):= \sum_{0 < s \le t}I_{U}( \Delta \psi_{s} )= \sum_{0< s \le t} I_{U}(\psi(s)- \psi(s-)),$$ where $t \ge 0$ and $U $ is a Borel subset of $\R^{n}_{0}=\R^{n}-{\left\{0\right\}}$. 
The compensator $\tilde N$ of $N$ is given by $$\label{eq-N-tilde} \tilde N({\mathrm{d}}t,{\mathrm{d}}z):= N({\mathrm{d}}t,{\mathrm{d}}z) - \nu({\mathrm{d}}z){\mathrm{d}}t.$$ Assume that $W{(\cdot)}$ and $N(\cdot,\cdot)$ are independent and that $\nu{(\cdot)}$ is a Lévy measure satisfying $$\label{levy-measure} \int_{\R^n_0} (1 \wedge {\left\vertz\right\vert}^2)\nu({\mathrm{d}}z) < \infty,$$ where $a_1\wedge a_2 =\min \{a_1, a_2\}$ for $a_1,a_2 \in \R $. We consider a stochastic differential equation with regime-switching together with Lévy-type jumps of the form $$\label{sw-jump-diffusion} \begin{aligned} {\mathrm{d}}X(t) = & \, b(X(t ),{\alpha}(t)){\mathrm{d}}t + \sigma(X(t ),{\alpha}(t )){\mathrm{d}}W(t) \\ & \ + \int_{\R ^n_0} \gamma(X(t-),{\alpha}(t-),z)\tilde N({\mathrm{d}}t,{\mathrm{d}}z), \ \ t \ge 0, \end{aligned}$$ with initial conditions $$\label{swjd-initial} X(0)=x_0 \in \R ^n, \ \ {\alpha}(0) = {\alpha}_0\in {\ensuremath{\mathcal M}},$$ where $b(\cdot,\cdot) : \R ^n \times {\ensuremath{\mathcal M}}\mapsto \R ^{n}$, $\sigma (\cdot,\cdot): \R ^n\times {\ensuremath{\mathcal M}}\mapsto \R ^{n\times n}$, and $ \gamma (\cdot,\cdot,\cdot): \R ^n \times {\ensuremath{\mathcal M}}\times \R ^n_0 \mapsto \R ^{n}$ are measurable functions, and ${\alpha}{(\cdot)}$ is a switching component with a finite state space ${\ensuremath{\mathcal M}}:={\left\{1, \dots, m\right\}}$ and infinitesimal generator $Q= (q_{ij}(x))\in \R^{m\times m}$. That is, ${\alpha}{(\cdot)}$ satisfies $$\label{Q-gen}{\ensuremath{\mathbb P}}{\left\{{\alpha}(t+ \delta)=j| X(t) =x, {\alpha}(t)=i,{\alpha}(s),s\le t\right\}}=\begin{cases}q_{ij}(x) \delta + o(\delta),&\hbox{ if }\ j\not= i,\\ 1+ q_{ii}(x)\delta + o(\delta), &\hbox{ if }\ j=i, \end{cases}$$ as $\delta \downarrow 0$, where $q_{ij}(x)\ge 0$ for $i,j\in {\ensuremath{\mathcal M}}$ with $j\not= i$ and $ q_{ii}(x)=-\sum_{j\not= i}q_{ij}(x) <0$ for each $i\in {\ensuremath{\mathcal M}}$. The evolution of the discrete component ${\alpha}{(\cdot)}$ in can be represented by a stochastic integral with respect to a Poisson random measure; see, for example, [@Skorohod-89]. In fact, for $x\in \R^n$ and $i,j \in {\ensuremath{\mathcal M}}$ with $j\not=i$, let $\Delta_{ij}(x)$ be the consecutive left-closed, right-open intervals of the half real line $\R_{+} : = [0,\infty)$, each having length $q_{ij}(x)$. In case $q_{ij}(x) =0$, we set $\Delta_{ij}(x) =\emptyset$. Define a function $h: \R^n \times {\ensuremath{\mathcal M}}\times \R \mapsto \R$ by $$\label{h-def} h(x,i,z)=\sum^m_{j=1} (j-i) I_{\{z\in\Delta_{ij}(x)\}}.$$ Then the evolution of the switching process can be represented by the stochastic differential equation $$\label{eq:jump-mg} {\mathrm{d}}{\alpha}(t) = \int_{\R_{+}} h(X(t-),{\alpha}(t-),z) {N_1}({\mathrm{d}}t,{\mathrm{d}}z),$$ where ${N_{1}}({\mathrm{d}}t,{\mathrm{d}}z)$ is a Poisson random measure (corresponding to a random point process $\mathfrak p{(\cdot)}$) with intensity ${\mathrm{d}}t \times \lambda({\mathrm{d}}z)$, and $\lambda{(\cdot)}$ is the Lebesgue measure on $\R$. Denote the compensated Poisson random measure of $N_1{(\cdot)}$ by $\tilde N_{1}({\mathrm{d}}t, {\mathrm{d}}z): = N_{1}({\mathrm{d}}t, {\mathrm{d}}z)-{\mathrm{d}}t\times \lambda ({\mathrm{d}}z)$. Throughout this paper, we assume that the Lévy process $\psi{(\cdot)}$, the random point process $\mathfrak p{(\cdot)}$, and the Brownian motion $W{(\cdot)}$ are independent. 
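Although it is not needed for the analysis below, it may help intuition to see how a path of the regime-switching jump diffusion above can be generated numerically. The sketch below makes simplifying assumptions that are not imposed in this paper: the generator $Q$ is constant, the Lévy measure is finite with total mass `nu_rate` (so the jump integral is of compound Poisson type), and the compensator is absorbed into the drift through a user-supplied `jump_mean(x, i)` $= \int \gamma(x,i,z)\,\nu({\mathrm{d}}z)$. All function and variable names are illustrative.

```python
import numpy as np

def simulate_rsjd(b, sigma, gamma, jump_mean, sample_jump, nu_rate, Q,
                  x0, alpha0, T=1.0, dt=1e-3, rng=None):
    """Euler-type sketch of a regime-switching jump diffusion.

    Assumptions (for this sketch only): Q is a constant generator matrix,
    the Levy measure nu is finite with total mass nu_rate, sample_jump()
    draws a mark z ~ nu / nu_rate, and jump_mean(x, a) = int gamma(x, a, z) nu(dz),
    which compensates the jumps as in the compensated-measure form of the SDE.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, a = np.asarray(x0, dtype=float).copy(), alpha0
    path = [(0.0, x.copy(), a)]
    for k in range(1, int(T / dt) + 1):
        # drift, compensator of the jump measure, and diffusion increment
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x + (b(x, a) - jump_mean(x, a)) * dt + sigma(x, a) @ dw
        # compound-Poisson jumps arriving at rate nu_rate
        for _ in range(rng.poisson(nu_rate * dt)):
            x = x + gamma(x, a, sample_jump())
        # switching of the discrete component with rates Q[a, j]
        if rng.random() < -Q[a, a] * dt:
            others = [j for j in range(len(Q)) if j != a]
            rates = np.array([Q[a, j] for j in others])
            a = int(rng.choice(others, p=rates / rates.sum()))
        path.append((k * dt, x.copy(), a))
    return path
```

For instance, taking $b(x,i)=a_i x$, $\sigma(x,i)=b_i x\, I$ and $\gamma(x,i,z)=c_i(z) x$ with a finite $\nu$ reproduces, up to discretization error, paths of the linear models studied in Section \[sect-linear\].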
We make the following assumptions throughout the paper: \[assump-0\] Assume $$\label{0-equilibrium} b(0, i) = \sigma(0,i) = \int_{\R_{0}^{n} } \gamma(0, i, z) \nu({\mathrm{d}}z) = 0\; \text{ for all } i\in {\ensuremath{\mathcal M}}.$$ \[assump-ito\] For some positive constant $\kappa$, we have $$\begin{aligned} \label{ito-condition} & \begin{aligned} & {\left\vertb(x,i)- b(y,i)\right\vert}^2 + {\left\vert\sigma(x,i)- \sigma(y,i)\right\vert}^2 \\ & \qquad \qquad \qquad + \int_{\R ^n_0} {\left\vert\gamma(x,i,z)- \gamma(y,i,z)\right\vert}^2\nu({\mathrm{d}}z) \le \kappa {\left\vertx-y\right\vert}^2, \end{aligned}\\ \label{eq2-ito-condition} &\qquad \int_{\R^n_0} \bigl[{\left\vert{\gamma}(x,i,z)\right\vert}^2 + |x\cdot {\gamma}(x,i,z)| \bigr] \nu({\mathrm{d}}z) \le \kappa |x|^{2} \end{aligned}$$ for all $x,y \in \R ^n$ and $i\in {\ensuremath{\mathcal M}}={\left\{1, \dots, m\right\}}$, and that $$\label{eq-q-bdd} \sup {\left\{q_{ij}(x): x \in \R ^{n}, i\not= j \in {\ensuremath{\mathcal M}}\right\}} \le \kappa < \infty.$$ Under Assumptions \[assump-0\] and \[assump-ito\], $X(t) \equiv 0$ is an equilibrium point of . Moreover, in view of [@ZhuYB-15], for each initial condition $(x_0,{\alpha}_0) \in \R ^n \times {\ensuremath{\mathcal M}}$, the system represented by and (or equivalently, and ) has a unique strong solution $(X{(\cdot)},{\alpha}{(\cdot)})=(X^{x_0,{\alpha}_0}{(\cdot)},{\alpha}^{x_0,{\alpha}_0}{(\cdot)})$; the solution does not explode in finite time with probability one. In addition, the generalized Itô lemma reads $$\label{eq:ito} f(X(t),{\alpha}(t)) -f(x_0,{\alpha}_0) = \int^t_0 {\mathcal L}f(X(s-),{\alpha}(s-)) {\mathrm{d}}s + M_1^f(t) + M_2^f(t) +M_3^f(t),$$ for $f\in C^{2}_{c} (\R^{n}\times{\ensuremath{\mathcal M}})$, where $\mathcal L$ is the operator associated with the process $(X,{\alpha})$ defined by: $$\label{op-defn}\begin{aligned} {\ensuremath{\mathcal{L}}}f(x,i) = &\, D f(x,i)\cdot b(x,i) + \frac{1}{2}\tr((\sigma\sigma')(x,i)D^2 f(x,i)) + \sum_{j\in {\ensuremath{\mathcal M}}} q_{ij}(x) [f(x,j)-f(x,i)] \\ &\ + \int_{\R^n_{0}} \![f(x+ \gamma(x,i,z),i)-f(x,i)- D f(x,i)\cdot\gamma(x,i,z) ] \nu({\mathrm{d}}z), \, (x,i)\in \R^{d}\times {\ensuremath{\mathcal M}}, \end{aligned}$$ and $$\begin{aligned} M_1^f(t) & = \int^t_0 D f(X(s-),{\alpha}(s-)) \cdot \sigma(X(s-),{\alpha}(s-)) {\mathrm{d}}W(s), \\ M_2^f(t) & = \int_0^t \int_{\R_+} \big[ f( X(s-), {\alpha}(s-)+ h(X(s-),{\alpha}(s-),z)) -f(X(s-),{\alpha}(s-))\big] \tilde N_{1}({\mathrm{d}}s, {\mathrm{d}}z), \\ M_3^f(t) & = \int_0^t \int_{\R^n_0} \!\left[f(X(s-) + \gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))- f(X(s-),{\alpha}(s-))\right]\tilde N({\mathrm{d}}s, {\mathrm{d}}z). \end{aligned}$$ Similar to the terminologies in [@khasminskii-12], we have The equilibrium point of is said to be 1. [*almost surely exponentially stable*]{} if there exists a $\delta > 0$ independent of $(x_0,\alpha_0)\in \R^{n}_{0} \times {\ensuremath{\mathcal M}}$ such that $$\limsup_{t\to\infty} \frac{1}{t}\log |X^{x_0,\alpha_0}(t)| \le -\delta\, \text{ a.s.}$$ 2. 
[*exponentially stable in the $p$th moment*]{} if there exists a $\delta > 0$ independent of $(x_0,\alpha_0)\in \R^{n}_{0} \times {\ensuremath{\mathcal M}}$ such that $$\limsup_{t\to\infty} \frac{1}{t}\log {\ensuremath{\mathbb E}}[|X^{x_0,\alpha_0}(t)|^{p}] \le -\delta.$$ To study stability of the equilibrium point of , we first present the following “nonzero” property, which asserts that almost all sample paths of all solutions to starting from a nonzero initial condition will never reach the origin. This phenomenon was first established for diffusion processes in [@khasminskii-12] and later extended to regime-switching diffusions in [@MaoY; @YZ-10] under fairly general conditions. For processes with Lévy type jumps, additional assumptions are needed to handle the jumps. \[assump-jump\] Assume there exists a constant $\varrho >0$ such that $$\begin{aligned} \label{eq-no-jump-to-0} |x+ \gamma(x,i,z)| \ge \varrho |x|, \text{ for all }(x,i) \in \R^{n}_{0}\times {\ensuremath{\mathcal M}}\text{ and }\nu\text{-almost all }z\in \R^{n}_{0}. \end{aligned}$$ \[rem-concerning-assumption-jump\] From [@MaoY; @YZ-10], we know that under Assumptions \[assump-0\] and \[assump-ito\], a regimes-switching diffusion without jumps cannot “diffuse” from a nonzero state to zero a.s. Assumption \[assump-jump\] further prevents the process $X$ of jumps from a nonzero state to zero. Also a sufficient condition for is $$2x\cdot \gamma(x,i,z) + |\gamma(x,i,z)|^{2} \ge 0,$$ for $\nu$-almost all $z\in \R_{0}^{n}$ and all $(x,i) \in \R^{n}\times {\ensuremath{\mathcal M}}$. Indeed, under such a condition, we have $|x + \gamma(x,i,z)|^{2 } = |x|^{2} + 2x\cdot \gamma(x,i,z) + |\gamma(x,i,z)|^{2} \ge |x|^{2} $ for $\nu$-almost all $z\in \R_{0}^{n}$. This, of course, implies . \[lem-0-inaccessible\] Suppose Assumptions \[assump-0\], \[assump-ito\], and \[assump-jump\] hold. Then for any $(x,i) \in \R^{n}_{0}\times {\ensuremath{\mathcal M}}$, we have $$\label{eq-path-0} {\ensuremath{\mathbb P}}_{x,i} \{X(t) \neq 0 \text{ for all } t \ge 0 \} =1.$$ Consider the function $V(x,i): = |x|^{-2}$ for $x\neq 0$ and $i\in {\ensuremath{\mathcal M}}$. Direct calculations reveal that $D V(x,i)= -2|x|^{-4} x, $ and $ D^2 V(x,i)=-2 |x|^{-4}I +8 |x|^{ -6} x x'. $ Next we prove that for all $x,y\in \R^{n}$ with $x\neq 0$ and $|x+y| \ge \varrho |x| $, we have $$\label{eq-|x|-2-estimate} V(x+y,i) - V(x,i) - DV(x,i)\cdot y = \frac{1}{|x+y|^{ 2}} - \frac{1}{|x|^{ 2}} + \frac{2 x' y}{|x|^{4}} \le K \frac{ |y|^{2}+ |x'y|}{|x|^{4}},$$ where $K$ is some positive constant. Let us prove in several cases: [**Case 1: $x'y \ge 0$**]{}. In this case, it is easy to verify that for any $\theta\in [0,1]$, we have $|x+ \theta y |^{2} = |x|^{2} + 2 \theta x'y + \theta^{2}|y|^{2} \ge |x|^{2}$. Therefore we can use the Taylor expansion with integral reminder to compute $$\begin{aligned} |x+y|^{-2} - |x|^{-2} + 2 |x|^{-4} x' y & = \int_{0}^{1} \frac12 y\cdot D^{2} V(x+ \theta y) y \, {\mathrm{d}}\theta \\ & = \int_{0}^{1} \biggl[- \frac{|y|^{2}}{|x+ \theta y|^{4}}+ 4\frac{y'(x+ \theta y)(x+ \theta y)' y}{|x+ \theta y |^{6}} \biggr]{\mathrm{d}}\theta\\ & \le 4\int_{0}^{1} \frac{|y|^{2}}{|x+ \theta y|^{4}} {\mathrm{d}}\theta \le 4\int_{0}^{1} \frac{|y|^{2}}{|x|^{4}} {\mathrm{d}}\theta = \frac{4 |y|^{2}}{|x|^{4}}.\end{aligned}$$ [**Case 2: $x'y < 0$ and $2x'y + |y|^{2} \ge 0$**]{}. 
In this case, we have $|x+y|^{2} = |x|^{2} + 2x'y+ |y|^{2} \ge |x|^{2}$ and hence $|x+y|^{-2} - |x|^{-2} \le 0$; which together with $x'y \le 0$ implies that $$|x+y|^{-2} - |x|^{-2} + 2 |x|^{-4} x' y \le 0.$$ [**Case 3: $x'y < 0$ and $2x'y + |y|^{2} < 0$**]{}. In this case, we use the bound $|x+y| \ge \varrho |x| $ to compute $$\begin{aligned} |x+y|^{-2} - |x|^{-2} + 2 |x|^{-4} x' y & = \frac{1}{ |x+y|^{2}} - \frac{1}{|x|^{ 2}} -\frac{|y|^2}{|x|^{4}}+ \frac{2 x'y+|y|^2}{|x|^{4}} \\ & =\frac{-2 x'y}{|x|^{2} |x+y|^{2}} -\frac{|y|^{2}}{|x|^{2} |x+y|^{2}} -\frac{|y|^2}{|x|^{4}} + \frac{2 x'y+|y|^{2}}{|x|^{4}} \\ & \le \frac{2 |x'y|}{\varrho^{2}|x|^{4}}. \end{aligned}$$ Combining the three cases gives . Observe that of Assumption \[assump-jump\] implies that if $x\neq 0$, then $x+ c(x,i,z) \neq 0$ for $\nu$-almost all $z \in \R_{0}^{n}$. Therefore we use Assumptions \[assump-0\] and \[assump-ito\] and to compute $$\begin{aligned} \label{eq-op-calculation} \nonumber {\ensuremath{\mathcal{L}}}V(x,i) & = -2 {\left\vertx\right\vert}^{-4}x \cdot b(x,i) + \frac{1}{ 2}\tr\left[ \sigma \sigma'(x,i) |x|^{-6} {\left(}-2 {\left\vertx\right\vert}^2 I + 8 x x'{\right)}\right]\\ \nonumber & \quad + \int_{\R^{n}_{0}}\big[ |x + \gamma(x,i,z)|^{-2} - |x|^{-2} + 2 |x|^{-4} x \cdot \gamma(x,i,z) \big]\nu({\mathrm{d}}z)\\ \nonumber & \le 2 \kappa |x|^{-2} + 4 |\sigma(x,i)|^{2} |x|^{-4} + K |x|^{-4} \int_{\R_{0}^{n}}\bigl[ |\gamma(x,i,z)|^{2}+ |x\cdot\gamma(x,i,z)|\bigr]\nu({\mathrm{d}}z)\\ & \le K |x|^{-2} = K V(x,i), \end{aligned}$$ where $K$ is a positive constant. Now consider the process $ (X,{\alpha})$ with initial condition $(X(0),{\alpha}(0)) = (x,i) \in \R_{0}^{n}\times {\ensuremath{\mathcal M}}$. Define for $0< {\varepsilon}< |x| < R$, $\tau_{{\varepsilon}}: = \inf\{ t\ge 0: |X(t) | \le {\varepsilon}\}$ and $\tau_{R} : = \inf\{ t\ge 0: |X(t)| \ge R\}$. Then allows us to derive $$\begin{aligned} {\ensuremath{\mathbb E}}_{x,i}& [e^{-K (t\wedge \tau_{{\varepsilon}} \wedge \tau_{R})}V(X(t\wedge \tau_{{\varepsilon}} \wedge \tau_{R}),{\alpha}(t\wedge \tau_{{\varepsilon}} \wedge \tau_{R}))] \\ & = V(x,i) + {\ensuremath{\mathbb E}}_{x,i} \biggl[ \int_{0}^{t\wedge \tau_{{\varepsilon}} \wedge \tau_{R}} e^{-K s} (-K + {\ensuremath{\mathcal{L}}}) V(X(s),{\alpha}(s)){\mathrm{d}}s\biggr] \\ & \le V(x,i) = |x|^{-2}, \ \ \text{ for all }t\ge 0.\end{aligned}$$ Note that on the set $\{ \tau_{{\varepsilon}} < t \wedge \tau_{R}\}$, we have $V(X(t\wedge \tau_{{\varepsilon}} \wedge \tau_{R}),{\alpha}(t\wedge \tau_{{\varepsilon}} \wedge \tau_{R})) = |X(t\wedge \tau_{{\varepsilon}} \wedge \tau_{R})|^{-2} \ge {\varepsilon}^{-2}.$ Thus it follows that $$e^{-K t} {\varepsilon}^{-2} {\ensuremath{\mathbb P}}_{x,i}\{ \tau_{{\varepsilon}} < t \wedge \tau_{R}\} \le {\ensuremath{\mathbb E}}_{x,i} [e^{-K (t\wedge \tau_{{\varepsilon}} \wedge \tau_{R})}V(X(t\wedge \tau_{{\varepsilon}} \wedge \tau_{R}),{\alpha}(t\wedge \tau_{{\varepsilon}} \wedge \tau_{R}))] \le |x|^{-2}.$$ It is well known that under Assumptions \[assump-0\] and \[assump-ito\], the process $X $ has no finite explosion time and hence $\tau_{R} \to \infty$ a.s. as $R \to \infty$. Therefore for any $t> 0$, we have ${\ensuremath{\mathbb P}}_{x,i}\{\tau_{{\varepsilon}} < t \} \le e^{K t} {\varepsilon}^{2} |x|^{-2} $. Passing to the limit as ${\varepsilon}\downarrow 0$, we obtain ${\ensuremath{\mathbb P}}_{x,i}\{ \tau_{0} < t\} =0$ for any $t> 0$, where $\tau_{0}: = \inf\{t\ge 0: X(t) =0\}$. This gives and hence completes the proof. 
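A simple situation in which Assumption \[assump-jump\] is immediate is the scalar linear jump coefficient $\gamma(x,i,z) = c_i(z)\,x$, the form used for the linear models later in the paper. In that case $$|x+\gamma(x,i,z)| = |1+c_i(z)|\,|x| \, ,$$ so the inequality in Assumption \[assump-jump\] holds with $\varrho = \inf_{i\in{\ensuremath{\mathcal M}}} \inf_{z} |1+c_i(z)|$ (essential infimum in $z$) provided the $c_i$ stay uniformly away from $-1$. Likewise, the sufficient condition of Remark \[rem-concerning-assumption-jump\] reads $2x\cdot \gamma(x,i,z) + |\gamma(x,i,z)|^{2} = \bigl(2c_i(z)+c_i(z)^{2}\bigr)|x|^{2}\ge 0$, which holds whenever $c_i(z)\ge 0$ or $c_i(z)\le -2$.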
Stability of Nonlinear Systems: General Results {#sect:nonlinear} =============================================== This section is devoted to establishing sufficient conditions in terms of the existence of appropriate Lyapunov functions for stability of the equilibrium point of the system . Section \[subsect:as-stab\] considers almost surely exponential stability while Section \[sect:pth-moment-stab\] studies $p$th moment exponential stability and demonstrates that $p$th moment exponential stability implies almost surely exponential stability under certain conditions. Finally we present a sufficient condition for stability using $M$-matrices in Section \[sect:M-matrix\]. Almost Sure Exponential Stability {#subsect:as-stab} --------------------------------- \[thm-as-exp-stable\] Suppose Assumptions \[assump-0\], \[assump-ito\], and \[assump-jump\] hold. Let $V: \R^n\times\mathcal{M} \mapsto\R^+ $ be such that $V(\cdot, i)\in C^{2}(\R^{n})$ for each $i\in {\ensuremath{\mathcal M}}$. Suppose there exist $p>0$, $c_1(i) >0$, $c_2(i) \in \R$, and nonnegative constants $c_3(i) $, $c_4(i) $ and $c_5(i) $ such that for all $x\not=0$ and $i\in {\ensuremath{\mathcal M}}$, - $c_1(i) |x|^{p}\leq V(x,i),$ - ${\ensuremath{\mathcal{L}}}V(x,i)\leq c_2(i) V(x,i),$ - $| DV(x,i)\cdot \sigma(x,i)|^2\geq c_3(i) V(x,i)^2$, - $\displaystyle\int_{\R_0^{n}}\biggl[\log\left(\frac{V(x+\gamma(x,i,z),i)}{V(x,i)}\right)-\frac{V(x+\gamma(x,i,z),i)}{V(x,i)}+1\biggr]\nu({\mathrm{d}}z)\leq-c_4(i) $, - $\displaystyle\sum_{j\in \mathcal{M}}q_{ij}(x)\left(\log V(x,j)-\frac{V(x,j)}{V(x,i)}\right)\leq -c_5(i) .$ Then $$\label{eq1-X-as-exp-stab} \limsup_{t\rightarrow \infty}\frac{1}{t}\log |X(t)|\leq \frac{1}{p} \max_{i\in {\ensuremath{\mathcal M}}}\bigl\{c_{2}(i)- 0.5 c_{3}(i) -c_{4}(i) -c_{5}(i)\bigr \} =:\delta \;\text{ a.s.}$$ In particular, if $\delta<0$ then the trivial solution of is a.s. exponentially stable. Conditions (iv) and (v) of Theorem \[thm-as-exp-stable\] are natural because of the following observations. At one hand, using the elementary inequality $\log y\leq y-1$ for $y>0$ we derive $$\displaystyle\int_{\R_0^{n}}\biggl[\log\left(\frac{V(x+\gamma(x,i,z),i)}{V(x,i)}\right)-\frac{V(x+\gamma(x,i,z),i)}{V(x,i)}+1\biggr]\nu({\mathrm{d}}z)\leq 0;$$ this leads us to assume that the left-hand side of the above equation is bounded above by a nonpositive constant $-c_4(i) $ in condition (iv); however, the constant may depend on $i\in{\ensuremath{\mathcal M}}$. On the other hand, the inequality $\log y\leq y-1$ for $y>0$ also leads to $$\log V(x,j)-\frac{V(x,j)}{V(x,i)}\leq \log V(x,i)-1.$$ Then it follows that for every $ i\in \mathcal{M}$, $$\aligned & \sum_{j\in \mathcal{M}}q_{ij}(x)\left(\log V(x,j)-\frac{V(x,j)}{V(x,i)}\right) \\ & \quad =\sum_{j\not=i}q_{ij}(x)\left(\log V(x,j)-\frac{V(x,j)}{V(x,i)}\right)+q_{ii}(x)(\log V(x,i)-1)\\ &\quad \leq \sum_{j\not=i}q_{ij}(x)\left(\log V(x,i)-1\right)+q_{ii}(x)(\log V(x,i)-1)= 0. \endaligned$$ In view of this observation, a nonpositive constant $-c_{5}(i)$ in condition (v) is therefore reasonable; again, this constant may depend on $i\in {\ensuremath{\mathcal M}}$. In fact, the constants $c_{1}(i), \dots, c_{5}(i) $ in Conditions (i)–(iv) may all depend on $i\in {\ensuremath{\mathcal M}}$; this allows for some extra flexibility for the selection of the Lyapunov function $V$ and more importantly, the sufficient condition for a.s. exponential stability in Theorem \[thm-as-exp-stable\] and Corollary \[coro-as-exp-stable\]. 
It is also worth pointing out that conditions (i)–(iv) are similar to those in Theorem 3.1 of [@ApplS-09]. Condition (v) is needed so that we can control the fluctuations of $\frac1t \log|X(t)|$ due to the presence of regime switching. The proof of Theorem \[thm-as-exp-stable\] is a straightforward extension of that of Theorem 3.1 of [@ApplS-09]; some additional care is needed due to the presence of regime-switching. For completeness and also to preserve the flow of presentation, we relegate the proof to Appendix \[sect-appendix\].

\[coro-as-exp-stable\] In addition to the conditions of Theorem \[thm-as-exp-stable\], suppose also that the discrete component ${\alpha}$ in and is an irreducible continuous-time Markov chain with an invariant distribution $\pi = (\pi_{i}, i \in {\ensuremath{\mathcal M}})$, then can be strengthened to $$\label{eq2-X-as-exp-stab} \limsup_{t\rightarrow \infty}\frac{1}{t}\log |X(t)|\leq \frac{1}{p} \sum_{i\in {\ensuremath{\mathcal M}}}\pi_{i} [c_{2}(i) - 0.5\, { c_{3}(i) }- c_{4}(i) -c_{5}(i)]\; \text{ a.s.}$$

This follows from applying the ergodic theorem for continuous-time Markov chains (see, for example, [@Norris-98 Theorem 3.8.1]) to the right-hand side of : $$\begin{aligned} \lim_{t\to\infty} & \frac{1}{t} \int_{0}^{t} \bigl[c_{2}({\alpha}(s))- 0.5 c_{3}({\alpha}(s)) -c_{4}({\alpha}(s)) -c_{5}({\alpha}(s))\bigr]{\mathrm{d}}s \\ & = \sum_{i=1}^m\pi_i[c_{2}(i) - 0.5\, { c_{3}(i) }- c_{4}(i) -c_{5}(i)]\; \text{ a.s.} \end{aligned}$$ Then follows directly.

Exponential $p$th-Moment Stability {#sect:pth-moment-stab}
----------------------------------

\[thm-moment-stab\] Suppose Assumptions \[assump-0\] and \[assump-ito\] hold. Let $p, c_{1}, c_{2}, c_{3}$ be positive constants. Assume that there exists a function $V:\R^{n}\times {\ensuremath{\mathcal M}}\mapsto \R^{+}$ such that $V(\cdot,i)\in C^{2} (\R^{n})$ for each $i\in {\ensuremath{\mathcal M}}$ satisfying

- $c_{1} |x|^{p} \le V(x,i) \le c_{2} |x|^{p}$,

- ${\ensuremath{\mathcal{L}}}V(x,i)\leq -c_3V(x,i),$

for all $(x,i) \in \R^{n}\times {\ensuremath{\mathcal M}}$. Let $(X(0),{\alpha}(0)) = (x,i) \in\R^{n}\times {\ensuremath{\mathcal M}}$. Then we have

- ${\ensuremath{\mathbb E}}[|X(t)|^{p}] \le \frac{c_{2}}{c_{1}} |x|^{p} e^{-c_{3} t}. $ In particular, the equilibrium point of is exponentially stable in the $p$th moment with Lyapunov exponent less than or equal to $-c_{3}$.

- Assume in addition that $p \in (0, 2]$. Then there exists an almost surely finite and positive random variable $\Xi$ such that $$\label{eq new-as-exp stable} |X(t)|^{p} \le \frac{\Xi}{c_{1}} e^{-c_{3}t} \text{ for all }t \ge 0 \text{ a.s.}$$ In particular, the equilibrium point of is almost surely exponentially stable with Lyapunov exponent less than or equal to $-\frac{c_{3}}{p}$.

The proof of part (a) is very similar to that of Theorem 3.1 in [@Mao-99]; see also Theorem 4.1 in [@ApplS-09]. For brevity, we shall omit the details here. Part (b) is motivated by [@khasminskii-12 Theorem 5.15]. For any $t \ge 0$ and $(x,i) \in\R^{n}\times {\ensuremath{\mathcal M}}$, we consider the function $f(t,x,i): = e^{c_{3}t}V(x,i)$.
Condition (i) and Lemma 3.1 of [@ZhuYB-15] imply that $${\ensuremath{\mathbb E}}[f(t,X(t),{\alpha}(t))] = {\ensuremath{\mathbb E}}[e^{c_{3}t} V(X(t),{\alpha}(t))]\le c_{2 }e^{c_{3} t } {\ensuremath{\mathbb E}}[ |X(t)|^{p}] < \infty.$$ On the other hand, thanks to Itô’s formula, we have for all $0 \le s < t$ $$\begin{aligned} f&(t,X(t),{\alpha}(t)) \\ & = f(s,X(s),{\alpha}(s))+ \int_{s}^{t}e^{c_{3}r } (c_{3} + {\ensuremath{\mathcal{L}}})V(X(r),{\alpha}(r)){\mathrm{d}}r \\ & \quad+ \int_{s}^{t}e^{c_{3}r }D V(X(r),{\alpha}(r)) \cdot \sigma(X(r),{\alpha}(r)) {\mathrm{d}}W(r) \\ & \quad + \int_{s}^{t}\int_{\R_+} e^{c_{3}r }\big[ V( X(r-), {\alpha}(r-)+ h(X(r-),{\alpha}(r-),z))-V(X(r-),{\alpha}(r-))\big] \tilde N_{1}({\mathrm{d}}r, {\mathrm{d}}z)\\ &\quad+ \int_{s}^{t}\int_{\R^{n}_{0}}e^{c_{3}r } [V(X(r-) + \gamma(X(r-),{\alpha}(r-),z), {\alpha}(r-)) - V(X(r-),{\alpha}(r-))] {\widetilde}N({\mathrm{d}}r, {\mathrm{d}}z)\\ & \le f(s,X(s),{\alpha}(s))+ \int_{s}^{t}e^{c_{3}r }D V(X(r),{\alpha}(r)) \cdot \sigma(X(r),{\alpha}(r)) {\mathrm{d}}W(r) \\ & \quad + \int_{s}^{t}\int_{\R_+} e^{c_{3}r }\big[ V( X(r-), {\alpha}(r-)+ h(X(r-),{\alpha}(r-),z))-V(X(r-),{\alpha}(r-))\big] \tilde N_{1}({\mathrm{d}}r, {\mathrm{d}}z)\\ &\quad+ \int_{s}^{t}\int_{\R^{n}_{0}}e^{c_{3}r } \big[V(X(r-) + \gamma(X(r-),{\alpha}(r-),z), {\alpha}(r-)) - V(X(r-),{\alpha}(r-))\big] {\widetilde}N({\mathrm{d}}r, {\mathrm{d}}z),\end{aligned}$$ where we used condition (ii) to obtain the inequality. Let $\tau_{n}: = \inf\{t\ge0: |X(t)| \ge n\} $. Then we have $\lim_{n \to \infty}\tau_{n}= \infty$ a.s. and ${\ensuremath{\mathbb E}}[f(t\wedge\tau_{n}, X(t \wedge \tau_{n}), {\alpha}(t\wedge \tau_{n})) |{\ensuremath{\mathcal F}}_{s}] \le f(s\wedge\tau_{n}, X(s \wedge \tau_{n}), {\alpha}(s\wedge \tau_{n}))$ a.s. Passing to the limit as $n\to \infty$, and noting that $f$ is positive, we obtain from Fatou’s lemma that ${\ensuremath{\mathbb E}}[ f(t,X(t),{\alpha}(t))|{\ensuremath{\mathcal F}}_{s}] \le f(s,X(s),{\alpha}(s))$ a.s. Therefore it follows that the process $\{f(t,X(t),{\alpha}(t)), t \ge 0 \}$ is a positive supermartingale. The martingale convergence theorem (see, for example, Theorem 3.15 and Problem 3.16 in [@Karatzas-S]) then implies that $f(t,X(t),{\alpha}(t))$ converges a.s. to a finite limit as $t \to \infty$. Consequently there exists an a.s. finite and positive random variable $\Xi$ such that $$\sup_{t \ge 0}\{ e^{c_{3}t} V( X(t),{\alpha}(t))\} = \sup_{t \ge 0} \{ f(t, X(t),{\alpha}(t))\} \le \Xi < \infty, \text{ a.s.}$$ Furthermore, it follows from condition (i) that $|X(t)|^{p} \le \frac{1}{c_{1}} V(X(t),{\alpha}(t))$. Putting this observation into the above displayed equation yields . \[thm-moment-as-stable\] Let Assumptions \[assump-0\] and \[assump-ito\] hold. Suppose the equilibrium point of is $p$th moment exponentially stable for some $p \ge 2$ and that for some positive constant $\hat \kappa$, we have $$\label{eq-pth-moment-as-stable} \int_{\R^{n}_{0}} {\left\vert\gamma(x,i,z)\right\vert}^{p}\nu(dz) \le \hat \kappa |x|^{p}, \quad (x,i) \in \R^{n} \times {\ensuremath{\mathcal M}}.$$ Then the equilibrium point of is almost surely exponentially stable. The proof of Theorem \[thm-moment-as-stable\] is very similar to the proofs of Theorem 4.2 of [@ApplS-09] and Theorem 5.9 of [@MaoY] and is deferred to Appendix \[sect-appendix\]. Note that Theorem 4.4 in [@ApplS-09] requires a condition (Assumption 4.1) similar to to hold for all $q \in [2, p]$. 
Here we observe that it is enough to have for a single $p$, as long as $p \ge 2$. Also notice that in the special case when $p=2 $, then is already contained in Assumption \[assump-ito\]. Criteria for Stability Using $M$-Matrices {#sect:M-matrix} ----------------------------------------- In this subsection, we assume that in , $Q(x) = Q\in \R^{m\times m}$, a constant matrix. Consequently, the switching component ${\alpha}(\cdot)$ in is a continuous-time Markov chain. Let us also assume \[assump-m-matrix\] There exist a positive number $p> 0$ and a positive definite matrix $G\in \S^{n\times n}$ such that for all $x\neq 0$ and $i \in {\ensuremath{\mathcal M}}$, we have $$\begin{aligned} \label{eq1-moment-stable-M} & {\langle}Gx, b(x,i) {\rangle}+ \frac{1}{2} {\langle}\sigma(x,i), G \sigma(x,i) {\rangle}\le \varrho_{i} {\langle}x, Gx{\rangle}, \\ \label{eq2-moment-stable-M} & ({\langle}x, G\sigma(x,i) {\rangle})^{2} \begin{cases} \le \delta_{i} ({\langle}x, G x {\rangle})^{2}, & \text{ if } p \ge 2, \\ \ge \delta_{i} ({\langle}x, G x {\rangle})^{2}, & \text{ if } 0 < p < 2, \end{cases} \intertext{and}\label{eq3-moment-stable-M} & \int_{\R_{0}^{n}}\biggl[ \biggl( \frac{{\langle}x+\gamma(x,i,z), G (x+ \gamma(x,i,z)){\rangle}}{{\langle}x, Gx{\rangle}}\biggr)^{\frac p2} - 1 -\frac{p{\langle}\gamma(x,i,z), Gx{\rangle}}{{\langle}x, Gx{\rangle}} \biggr] \nu({\mathrm{d}}z) \le \lambda_{i},\end{aligned}$$ where $\varrho_{i}, \delta_{i},$ and $\lambda_{i}$, $i\in {\ensuremath{\mathcal M}}$ are constants. Corresponding to the infinitesimal generator $Q$ of and $p> 0$, $G\in \S^{n\times n}$ in Assumption \[assump-m-matrix\], we define an $m\times m$ matrix $$\label{eq-matrix-A(pG)} {\ensuremath{\mathcal A}}: ={\ensuremath{\mathcal A}}(p, G) = \textrm{diag}(\theta_{1}, \dots, \theta_{m}) - Q,$$ where $\theta_{i}: = -[p\varrho_{i} + p (p-2) \delta_{i} + \lambda_{i} ] $, $ i =1, \dots, m. $ \[thm-moment-stab-m-matrix\] Suppose Assumptions \[assump-0\], \[assump-ito\], \[assump-jump\], and \[assump-m-matrix\] hold and that the matrix ${\ensuremath{\mathcal A}}$ defined in is a nonsingular $M$-matrix, then the equilibrium point of is exponentially stable in the $p$th moment. In addition, if either $p\in (0, 2]$ or $p > 2$ with valid, then then the equilibrium point of is a.s. exponentially stable. Recall from [@MaoY] that a square matrix $A= (a_{ij})\in \R^{n\times n}$ is a nonsingular $M$-matrix if $A$ can be expressed in the form $A= s I- G$ with some $G\ge0$ and $s> \rho(G)$, where $I$ is the identity matrix and $\rho(G)$ denotes the spectral radius of $G$. Since ${\ensuremath{\mathcal A}}$ of is a nonsingular $M$-matrix, by Theorem 2.10 of [@MaoY], there exists a vector $(\beta_{1}, \dots, \beta_{m})' \gg 0$ such that $(\bar \beta_{1}, \dots, \bar \beta_{m})' : = {\ensuremath{\mathcal A}}(\beta_{1}, \dots, \beta_{m})' \gg 0 $. Componentwise, we can write $$\bar \beta_{i} : = \theta_{i} \beta_{i} - \sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} \beta_{j} > 0, \quad i \in {\ensuremath{\mathcal M}}.$$ Define $V(x,i) : = \beta_{i} (x' Gx)^{\frac p2}$ for $(x,i)\in \R^{n}\times {\ensuremath{\mathcal M}}$. Then condition (i) of Theorem \[thm-moment-stab\] is trivially satisfied. 
Moreover, we can use Assumption \[assump-m-matrix\] to compute $$\begin{aligned} \mathcal L V(x,i) & = \beta_{i} p (x' Gx)^{\frac p2 - 1} {\langle}b(x,i), Gx{\rangle}+ \sum_{j=1}^{m} q_{ij} \beta_{j} (x' Gx)^{\frac p2} \\ & \quad+ \frac12 \beta_{i} p (x' Gx)^{\frac p2-2} \tr\big(\sigma(x,i) \sigma'(x,i) [(p-2) Gxx'G + x' Gx G] \big)\\ & \quad+ \beta_{i} \int_{\R_{0}^{n}} \Big[ ({\langle}x+ \gamma(x,i,z), G (x+ \gamma(x,i,z)){\rangle})^{\frac p2} - (x' Gx)^{\frac p2} \\ & \qquad \qquad \qquad \qquad \qquad - p (x' Gx)^{\frac p2 - 1} {\langle}\gamma(x,i,z), Gx{\rangle}\Big] \nu({\mathrm{d}}z) \\ & \leq \sum_{j=1}^{m} q_{ij} \beta_{j} (x' Gx)^{\frac p2} + \beta_{i} \bigl[ p\varrho_{i} + p (p-2) \delta_{i} + \lambda_{i} \bigr] (x' Gx)^{\frac p2} \\ & = \Biggl[ \sum_{j=1}^{m} q_{ij} \beta_{j} - \theta_{i}\beta_{i} \Biggr] (x' Gx)^{\frac p2} = -\bar \beta_{i} (x' Gx)^{\frac p2} \le - \varsigma V(x,i),\end{aligned}$$ where $\varsigma = \min_{1 \le i \le m} \frac{\bar \beta_{i}}{\beta_{i}} > 0$. This verifies condition (ii) of Theorem \[thm-moment-stab\]. Therefore by Theorem \[thm-moment-stab\], part (a), we conclude that the equilibrium point of is exponentially stable in the $p$th moment. The assertion on a.s. exponential stability follows from Theorem \[thm-moment-stab\] part (b) for the case $p\in (0,2]$ and Theorem \[thm-moment-as-stable\] for the case $p > 2$. Stability of Linear Markovian Regime-Switching Jump Diffusion Systems {#sect-linear} ===================================================================== In this section, we consider a linear regime-switching jump diffusion $$\label{linearsystem} \begin{aligned} {\mathrm{d}}X(t) = A({\alpha}(t)) X(t) {\mathrm{d}}t + B({\alpha}(t)) X(t) {\mathrm{d}}W( t) + \int_{\R^{n}_{0}} C({\alpha}(t-),z) X(t-) \tilde N({\mathrm{d}}t, {\mathrm{d}}z), \end{aligned}$$ where ${\alpha}(\cdot) $ is an irreducible continuous-time Markov chain taking values in ${\ensuremath{\mathcal M}}= \{ 1, \dots, m\}$. Consequently we assume that $q_{ij}(\cdot)$ in are constants for all $i,j \in {\ensuremath{\mathcal M}}$. In addition, unless otherwise mentioned, we assume that ${\alpha}(\cdot)$ has an invariant distribution $\pi = (\pi_{i}, i\in {\ensuremath{\mathcal M}})$ throughout the section. In , for each $i \in {\ensuremath{\mathcal M}}$ and $z\in \R^{n}_{0}$, $A_{i} = A(i), B_{i}= B(i)$ and $C_{i}(z) = C(i,z)$ are $n\times n$ matrices satisfying the following condition $$\label{eq0-Ci(z)-cond}\begin{aligned} & \max_{i\in {\ensuremath{\mathcal M}}}\int_{\R_{0}^{n}}\bigl[ | C_{i}(z)|^{2}+ | C_{i}(z)|\bigr]\nu({\mathrm{d}}z) < \infty, \text{ and } \\ & {\langle}\xi, (I + C_{i}(z)')(I+C_{i}(z))\xi {\rangle}\ge \varrho^{2}|\xi|^{2}, \text{ for all }\xi \in \R^{n} \text{ and }\nu\text{-almost all } z\in \R^{n}_{0}, \end{aligned}$$ where $\varrho$ is a positive constant. Apparently satisfies Assumption \[assump-0\]. In addition, the first equation of guarantees that Assumption \[assump-ito\] is satisfied as well. Finally, since $|x+ C_{i}(z) x| = |(I+C_{i}(z)) x| = |{\langle}x, (I + C_{i}(z)')(I+C_{i}(z))x {\rangle}|^{\frac12} $, the uniform ellipticity condition on the matrix $(I + C_{i}(z)')(I+C_{i}(z))$ in the second equation of implies that Assumption \[assump-jump\] holds. We will deal with almost sure exponential stability in Section \[subsect:as-stab-linear\] and moment exponential stability in Section \[subsect:pth-moment-stab-linear\]. In both sections, we will treat one-dimensional and multidimensional systems separately. 
Finally Section \[sect:examples\] presents several examples. Almost Sure Exponential Stability {#subsect:as-stab-linear} --------------------------------- ### One Dimensional System Let us first consider the one-dimensional regime-switching jump diffusion $$\label{eq-SDE-1} \begin{aligned} {\mathrm{d}}x(t) = a(\alpha(t)) x(t) {\mathrm{d}}t + b(\alpha(t)) x(t) {\mathrm{d}}W( t) + \int_{\R_{0}} c(\alpha(t-),z) x(t-) \tilde N({\mathrm{d}}t, {\mathrm{d}}z). \end{aligned}$$ Suppose for each $i\in {\ensuremath{\mathcal M}}$, $a_{i} = a(i)$, $b_{i} =b(i)$ are real numbers, and $c_{i}(\cdot) = c(i,\cdot)$ is a measurable function from $\R_{0}$ to $\R-\{-1\}$ satisfying . Notice that can be regarded as an extended jump type Black-Scholes model with regime switching; this is motivated by the jump diffusion models in [@Cont-Tankov; @cont2005] as well as the regime-switching models as in [@BW87; @Zhang]. \[prop-as-exp-stable-1d\] Suppose $$\label{eq-1d-nu-condition} \max_{i\in \mathcal{M}} \biggl\{ \int_{\R_0}\big(\log|1+c_i(z)|)^2\nu({\mathrm{d}}z) + \int_{\R_0}\bigl|\log|1+c_i(z)|-c_i(z)\bigr| \nu({\mathrm{d}}z) \biggr\} <\infty,$$then the solution to satisfies the following property: $$\label{eq-Ci(z)-cond} \aligned \lim_{t\rightarrow+\infty}\frac{1} {t}\log|x(t)|=\delta: = \sum_{i=1}^m\pi_i\biggl[a_i-\frac12b^2_i+\int_{\R_0}\big(\log|1+c_i(z)|-c_i(z)\big)\nu({\mathrm{d}}z)\biggr]\;\text{ a.s.} \endaligned$$ In particular, the equilibrium point of is almost surely exponentially stable if and only if $\delta<0$. As in the proof of Theorem \[thm-as-exp-stable\], we need only to consider the case when $x(t) \neq 0$ for all $t \ge 0$ with probability 1. Let $x(0) = x\not =0$ and ${\alpha}(0) = i\in {\ensuremath{\mathcal M}}$. Then by Itô’s formula we have $$\label{wk1} \aligned \log|x(t)| & = \log |x| + \int_0^t\biggl[a(\alpha(s))-\frac{1}{2}b^2(\alpha(s))\\ & \qquad\qquad \qquad \hfill +\int_{\R_0}[\log|1+c(\alpha(s-),z)|-c(\alpha(s-),z)]\nu({\mathrm{d}}z)\biggr]{\mathrm{d}}s + M_{1}(t) + M_{2}(t), \endaligned$$ where $$\begin{aligned} M_1(t) = \int_0^t b(\alpha(s)){\mathrm{d}}W(s), \text{ and } M_{2}(t) = \int_0^t\int_{\R_0}\log|1+c(\alpha(s-),z)|\widetilde{N}({\mathrm{d}}s,{\mathrm{d}}z).\end{aligned}$$ Obviously $M_1$ is a martingale vanishing at 0 with quadratic variation $ \langle M_1, M_1\rangle_{t}=\int_0^tb^2(\alpha(s)){\mathrm{d}}s\leq t \max_{i\in {\ensuremath{\mathcal M}}} b_{i}^{2}. $ On the other hand, implies that $M_{2}$ is a martingale vanishing at $0$. In addition, the quadratic variation of $M_{2}$ is given by $$\aligned \langle M_2, M_2\rangle_{t}=\int_0^t\int_{\mathbb{R}_0}(\log|1 + c(\alpha(s-),z)|)^{2}\nu({\mathrm{d}}z){\mathrm{d}}s\le Kt, \endaligned$$ where $K= \max_{i\in {\ensuremath{\mathcal M}}} \int_{\R_{0}} (\log |1+c_{i}(z)|)^{2}\nu ({\mathrm{d}}z) < \infty$. Therefore we can apply the strong law of large numbers for martingales (see, for example, [@MaoY Theorem 1.6]) to conclude $$\lim_{t\rightarrow+\infty}\frac{M_1(t)}t=0 \,\,\; \text{and}\,\;\lim_{t\rightarrow+\infty}\frac{M_2(t)}t=0\; \text{ a.s.}$$ Then the ergodic theorem for continuous-time Markov chain leads to the desired assertion. ### Multidimensional Systems Let us now focus on the multidimensional system . As before, we suppose that the discrete component ${\alpha}$ in and is an irreducible continuous-time Markov chain with an invariant distribution $\pi = (\pi_{i}, i \in {\ensuremath{\mathcal M}})$. 
For notational simplicity, define the column vector $\mu=(\mu_1,\mu_2,\dots,\mu_m)'\in\R^{m}$ with $$\label{eq-mu-i-defn} \mu_i := \mu_{i}(G) ={1 \over 2}\lambda_{\max} (GA_i G^{-1} + G^{-1}A_i'G + G^{-1} B_{i}' G^{2} B_{i} G^{-1}),$$ where $G\in \S^{n\times n}$ is a positive definite matrix. Also let $$\label{eq-beta-defn} \beta:=-\pi\mu= - \sum^m_{i=1} \pi_i \mu_i.$$ Then it follows from Lemma A.12 of [@YZ-10] that the equation $$\label{eq-zeta-defn} Q \zeta= \mu + \beta\one$$ has a solution $\zeta=(\zeta_1,\zeta_2,\dots,\zeta_m)'\in\R^{m}$, where $\one: = (1,1,\dots, 1)' \in \R^{m}$. Thus we have from that $$\label{eq-com-0}\mu_i- \sum_{j=1}^m q_{ij}\zeta_j = -\beta,\quad i\in {\ensuremath{\mathcal M}}.$$ Before we state the main result of this section, let us introduce some more notation. If $A$ is a square matrix, then $\rho(A)$ denotes the spectral radius of $A$. Furthermore, if $A$ is symmetric, we denote $$\begin{aligned} \label{eq-lambda-hat-defn} & \widehat{\lambda}(A):= \begin{cases} \lambda_{\max}(A), & \hbox{if } \lambda_{\max}(A)<0, \\ 0\vee\lambda_{\min}(A), & \hbox{otherwise}. \end{cases}\end{aligned}$$ \[prop-exp-stab-creterion-b\] The trivial solution of is a.s. exponentially stable if there exist a positive definite matrix $G\in \S^{n\times n}$, positive numbers $h_i $ and $p$ such that $h_i- p \zeta_{i} >0$ for each $i\in {\ensuremath{\mathcal M}}$ and that $$\label{eq-exp-stab-creterion} \sum_{i\in {\ensuremath{\mathcal M}}} \pi_{i} [c_{2}(i) -0.5c_3(i) - c_{4}(i) -c_{5}(i)]< 0,$$ where $$\begin{aligned} \nonumber c_{2}(i)&: = p\mu_i +\frac{p-2}8 {\Lambda}^2(GB_iG^{-1}+G^{-1}B_i'G)- \frac{p}{h_i-p\zeta_{i}} (\mu_{i}+\beta)+ \frac{1}{h_i-p\zeta_i}\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} h_{j}+ \eta_{i} , \\ \nonumber c_{3}(i)&: =\frac{ p^2}4\widehat{\lambda}^2(GB_iG^{-1}+G^{-1}B_i'G),\\ \label{eq-c2-5} c_{4}(i)& : = - \biggl\{ 0\wedge\int_{\R_{0}^{n}} \biggl[\frac{p}{2}\log{\lambda_{\max}(G^{-1}(I+ C_{i}(z))' G^{2} (I + C_{i}(z))G^{-1})} \\ \nonumber & \qquad \qquad\qquad \ - \bigl( \lambda_{\min}(G^{-1}(I+ C_{i}(z))' G^{2} (I + C_{i}(z))G^{-1})\bigr)^{\frac{p}{2}}+1\biggr]\nu({\mathrm{d}}z)\biggr\}, \\ \nonumber c_{5}(i)& : = - \sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} \left(\log (h_j- p \zeta_{j}) - \frac{h_j-p \zeta_{j}}{h_i- p \zeta_{i}}\right), \end{aligned}$$ in which $\mu_{i}$, $\beta$, and $\zeta_{i}$ are defined in , , and , respectively, and $$\begin{aligned} \label{eq-rho-i-defn} & \eta_{i}: = \int_{\R_{0}^{n}} \Bigl[ \bigl ( {\lambda_{\max}( G^{-1} (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) G^{-1}}\bigr)^{\frac{p }{2}} - 1 \\ \nonumber & \qquad \qquad\qquad - \frac{p}{2} \lambda_{\min} (G C_{i}(z) G^{-1} + G^{-1} C_{i}'(z) G)\Bigr] \nu({\mathrm{d}}z), \\ &\label{eq-Lambda-defn} \Lambda(GB_iG^{-1}+G^{-1}B_i'G):= \begin{cases} \widehat{\lambda}(GB_iG^{-1}+G^{-1}B_i'G), & \text{ if } 0 < p \le 2, \\ \rho (GB_iG^{-1}+G^{-1}B_i'G), & \text{ if } p > 2. \end{cases}\end{aligned}$$ In the above, we require that the integrals with respect to $\nu$ in and are well-defined. \[rem-about-zeta-choice\] Note that the constants $c_{2}(i)$ and $c_{5}(i)$ in the statement of Proposition \[prop-exp-stab-creterion-b\] actually depend on the choice of the solution $\zeta$ to equation . Nevertheless, for notational simplicity, we write $c_{2}(i), c_{5}(i)$ instead of $c_{2}(i;\zeta), c_{5}(i;\zeta)$. 
Since $Q$ is a singular matrix, and $\pi(\mu+ \beta \one) =0$, in view of Lemma A.12 of [@YZ-10], has infinitely many solutions and any two solutions $\zeta^{1}, \zeta^{2}$ of satisfy $\zeta^{1}-\zeta^{2} = \varrho \one$ for some $\varrho \in \R$. Hence Proposition \[prop-exp-stab-creterion-b\] and in particular can be strengthened as: If $$\label{eq1-exp-stab-creterion} \min{\left\{\sum_{i\in \mathcal{M}}\pi_i\big[c_2(i)-0.5 \,c_3(i)-c_4(i)-c_5(i)\big]\Big| \zeta\in \R^{m}, Q\zeta=\mu + \beta\one\right\}}<0,$$ then the trivial solution of is a.s. exponentially stable. The proof of Proposition \[prop-exp-stab-creterion-b\] follows from a direct application of Theorem \[thm-as-exp-stable\] and Corollary \[coro-as-exp-stable\]. The idea is to construct an appropriate Lyapunov function $V$ that satisfies conditions (i)–(v) of Theorem \[thm-as-exp-stable\]. To preserve the flow of presentation, we arrange the proof to the Appendix \[sect-appendix\]. Next we present a sufficient condition for a.s. exponential stability for the equilibrium point of a linear stochastic differential equation [*without switching*]{}. \[coro1-as.exp-stab\] Let $i\in {\ensuremath{\mathcal M}}$. Suppose there exist a positive definite matrix $G_{i}\in \S^{n\times n}$ and a positive number $p\in (0, 2]$ such that $$\label{eq-SDEi-exp-stable} \tilde{c}_{2}(i) -0.5c_3(i) - c_{4}(i) < 0,$$ where $c_{3}(i), c_{4}(i)$ are defined in , and $$\begin{aligned} \tilde c_{2}(i)&: = p \mu_{i}+\frac{p-2}8{\Lambda}^2(G_{i}B_iG_{i}^{-1}+G_{i}^{-1}B_i'G_{i}) + \eta_{i}, \end{aligned}$$ where $\mu_{i}, \eta_{i}$, and ${\Lambda}(G_{i}B_iG_{i}^{-1}+G_{i}^{-1}B_i'G_{i})$ are similarly defined in , and respectively, then the equilibrium point of the stochastic differential equation $$\label{eq-SDEi} \begin{aligned} {\mathrm{d}}X^{(i)}(t) = A(i) X^{(i)}(t) {\mathrm{d}}t + B(i) X^{(i)}(t) {\mathrm{d}}W( t) + \int_{\R^{n}_{0}} C(i,z) X^{(i)}(t-) \tilde N({\mathrm{d}}t, {\mathrm{d}}z), \end{aligned}$$ is a.s. exponentially stable. In addition, if $G_{i}= G$ and holds for every $i\in {\ensuremath{\mathcal M}}$, then the equilibrium point of is a.s. exponentially stable. This follows from Proposition \[prop-exp-stab-creterion-b\] directly. $p$th Moment Exponential Stability {#subsect:pth-moment-stab-linear} ---------------------------------- ### One-Dimensional System As in Section \[subsect:as-stab-linear\], let us first derive a necessary and sufficient condition for the $p$th moment exponential stability for the one-dimensional linear system . To this end, we need to introduce some notations. Let $\mathcal P$ be the set of probability measures on the state space ${\ensuremath{\mathcal M}}$; then under the irreducibility and ergodicity assumptions, the empirical measure of the continuous-time Markov chain ${\alpha}(\cdot)$ satisfies the large deviation principle with the rate function $$\label{eq-rate-fn-MC} \mathbf I(\mu) : = -\inf_{u_{1}, \dots, u_{m} > 0} \sum_{i,j\in {\ensuremath{\mathcal M}}} \frac{\mu_{i} q_{ij} u_{j}}{u_{i}},$$ where $\mu=(\mu_{1}, \dots, \mu_{m})\in \mathcal P$; we refer to [@DonsV-75] for details. It is known that $\mathbf I(\mu)$ is lower semicontinuous and $\mathbf I(\mu)=0$ if and only if $\mu= \pi$. 
In addition, by virtue of [@Zong-14], if $a=(a_{1}, \dots, a_{m})' \in \R^{m}$, then we have $$\label{eq-Lambda(a)} \Upsilon(a):= \lim_{t\to \infty} \frac{1}{t} \log\bigg({\ensuremath{\mathbb E}}\bigg[\exp\bigg\{\int_{0}^{t} a({\alpha}(s)) {\mathrm{d}}s \bigg\}\bigg]\bigg) = \sup_{\mu \in \mathcal P} {\left\{\sum_{i\in {\ensuremath{\mathcal M}}} a_{i} \mu_{i} -\mathbf I(\mu)\right\}}.$$ Note that $\sum_{i \in {\ensuremath{\mathcal M}}}a_{i}\pi_{i} \le \Upsilon(a) \le \max_{i\in {\ensuremath{\mathcal M}}} a_{i}.$ \[prop-moment-exp-stab-1d\] Assume the conditions of Proposition \[prop-as-exp-stable-1d\]. In addition, assume that there exists some $p>0$ such that for each $i\in {\ensuremath{\mathcal M}}$, $\int_{\R_{0}} {\left\vert|1+ c(i,z)|^{p} - p c(i,z) -1\right\vert}\nu({\mathrm{d}}z) < \infty.$ Denote $f=(f(1), \dots, f(m))$ with $$f(i) = f_{p}(i) = pa(i)+ \frac{1}{2}p(p-1)b^2(i) + \int_{\R_{0}} [|1+ c(i,z)|^{p} - p c(i,z) -1]\nu({\mathrm{d}}z).$$ Then we have $$\label{eq-Lypunov exponent} \lim_{t\to \infty} \frac{1}{t} \log({\ensuremath{\mathbb E}}[|x(t)|^{p}]) = \Upsilon(f),$$ where $\Upsilon(f)$ is similarly defined as in . Therefore, the trivial solution of is $p$th moment exponentially stable if and only if $\Upsilon(f) < 0$. See Appendix \[sect-appendix\]. ### Multidimensional System Now let’s focus on establishing a sufficient condition for the $p$-th moment exponential stability of the trivial solution of the multidimensional system . In view of Theorem \[thm-moment-stab\] and the calculations in Proposition \[prop-exp-stab-creterion-b\], we have the following proposition: \[cor-p-stab\] If there exist a positive definite matrix $G\in \S^{n\times n}$, positive numbers $p$ and $h_{i}, i \in {\ensuremath{\mathcal M}}$ such that $$\label{eq-pth-moment-stab-condition} \delta: = \min\biggl\{ \max_{i\in {\ensuremath{\mathcal M}}} c(i; h, \zeta)\Big| \zeta \in \R^{m}, Q\zeta = \mu + \beta \one, h_{i} - p \zeta_{i} > 0 \text{ for each } i \in {\ensuremath{\mathcal M}}\biggr \} < 0,$$ then the equilibrium point of is exponentially stable in the $p$th moment with Lyapunov exponent less than or equal to $\delta$, where $\mu \in \R^{m}$ and $\beta \in \R$ are defined in and , respectively, and $$c(i; h, \zeta) : = p\mu_i +\frac{p-2}8{\Lambda}^2(GB_iG^{-1}+G^{-1}B_i'G)- \frac{p}{h_i-p\zeta_{i}} (\mu_{i}+\beta)+ \frac{1}{h_i-p\zeta_i}\sum\limits_{j\in {\ensuremath{\mathcal M}}} q_{ij} h_{j}+ \eta_{i},$$ in which $\eta_{i}$ and ${\Lambda}(GB_iG^{-1}+G^{-1}B_i'G)$ are defined in and , respectively. Let $p, h= (h_{1}, \dots, h_{m})' $ and $G$ be as in the statement of the corollary and consider the function $V(x,i) = (h_{i} - p \zeta_{i}) (x' G^{2} x)^{p/2} $, $(x,i) \in \R^{n }\times {\ensuremath{\mathcal M}}$. Then we have $$0 < \min_{i\in {\ensuremath{\mathcal M}}} (h_{i} - p \zeta_{i})( \lambda_{\min}(G^{2}))^{p/2} |x|^{p}\le V(x,i) \le \max_{i\in {\ensuremath{\mathcal M}}} (h_{i} - p \zeta_{i})( \lambda_{\max}(G^{2}))^{p/2} |x|^{p}.$$ Moreover, the detailed calculations in the proof of Proposition \[prop-exp-stab-creterion-b\] reveal that $${\ensuremath{\mathcal{L}}}V(x,i) \le c(i; h, \zeta) V(x,i) \le \max_{i \in {\ensuremath{\mathcal M}}}c(i; h, \zeta) V(x,i).$$ Then condition and Theorem \[thm-moment-stab\] lead to the conclusion. Finally we apply Theorem \[thm-moment-stab-m-matrix\] to derive a sufficient condition for a.s. and moment exponential stability for the equilibrium point of . 
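In practice, the nonsingular $M$-matrix condition in Theorem \[thm-moment-stab-m-matrix\] and in Proposition \[prop-as-moment-stab-M-matrix\] below can be checked numerically once the constants $\theta_{i}$ and the generator $Q$ are at hand. A minimal sketch in Python follows; it relies on the standard equivalent characterization that a matrix with nonpositive off-diagonal entries is a nonsingular $M$-matrix exactly when it is invertible with an entrywise nonnegative inverse, and the values of `theta` below are purely illustrative.

```python
import numpy as np

def is_nonsingular_M_matrix(A, tol=1e-10):
    """Check whether A (assumed to have nonpositive off-diagonal entries)
    is a nonsingular M-matrix, via invertibility with a nonnegative inverse."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):        # not a Z-matrix, so not an M-matrix
        return False
    try:
        A_inv = np.linalg.inv(A)
    except np.linalg.LinAlgError:     # singular matrix
        return False
    return bool(np.all(A_inv >= -tol))

# Illustrative data only: Q is the generator used in the examples below,
# while the theta_i are hypothetical placeholders.
Q = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -2.0,  0.0],
              [ 4.0,  0.0, -4.0]])
theta = np.array([5.0, 4.0, 6.0])
A = np.diag(theta) - Q

if is_nonsingular_M_matrix(A):
    # A vector beta >> 0 with A @ beta >> 0, as used in the proof above.
    beta = np.linalg.solve(A, np.ones_like(theta))
    print("nonsingular M-matrix; beta =", beta)
else:
    print("the M-matrix condition fails for these theta_i")
```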
Note that in Proposition \[prop-as-moment-stab-M-matrix\] below, the infinitesimal generator $Q=(q_{ij})$ of the continuous-time Markov chain ${\alpha}$ need not be irreducible and ergodic. \[prop-as-moment-stab-M-matrix\] Suppose that there exist a positive constant $p$ and a positive definite matrix $G\in \S^{n\times n}$ such that the $m\times m$ matrix $\mathcal A:=\mathrm{diag}(\theta_{1},\dots, \theta_{m}) -Q$ is a nonsingular $M$-matrix. Then the equilibrium point of is $p$th moment exponentially stable, where for each $i\in {\ensuremath{\mathcal M}}$, $$\begin{aligned} \theta_{i} &: = -\biggl[\frac{p}{2}\lambda_{\max}(GA_{i}G^{-1} +G^{-1}A_{i}'G + G^{-1}B_{i}' G^{2} B_{i}G^{-1}) \\ & \qquad\qquad + \frac{p(p-2)}{4} \rho^{2}(GB_{i}G^{-1} +G^{-1}B_{i}'G)+ \eta_{i} \biggr ], \end{aligned}$$ and $ \eta_{i}$ is defined in . If, in addition, either $p \in (0, 2]$ or else $p > 2$ with $$\max_{i\in{\ensuremath{\mathcal M}}}\int_{\R_{0}^{n}}|C_{i}(z)|^{p}\nu({\mathrm{d}}z) < \infty,$$ then the equilibrium point of is also a.s. exponentially stable. The proof of Proposition \[prop-as-moment-stab-M-matrix\] consists of straightforward verifications of – of Assumption \[assump-m-matrix\]. Theorem \[thm-moment-stab-m-matrix\] then leads to the assertions on almost sure and moment stability. Again we defer the proof to Appendix \[sect-appendix\].

Examples {#sect:examples}
--------

In this example, we consider the one-dimensional linear system given in , in which $\alpha \in {\ensuremath{\mathcal M}}= \{ 1,2,3\}$ is a continuous-time Markov chain with generator $Q= \begin{bmatrix} -3 & 1 & 2\\ 2 & -2 & 0\\ 4 & 0 & -4 \\ \end{bmatrix},$ $a_{1}=4,a_{2} = 2,a_{3}=3$, $b_{1}=1,b_{2}=3, b_{3}=1 $, and $c_{i}(z)=1\wedge z^2 $ for $i=1,2,3$. In addition, suppose that the characteristic measure of the Poisson random measure $N$ is given by the Lévy measure $\nu ({\mathrm{d}}z) = \frac{{\mathrm{d}}z}{ |z|^{4/3}}$, $z\in \R_{0}$. Note that $\nu$ is an infinite Lévy measure, i.e., $\nu(\R_{0}) =\infty$. By direct computations, we get $$\delta=\sum_{i=1}^3 \pi_i\biggl[a_i-\frac12b^2_i+\int_{\R_0}\big(\log|1+c_i(z)|-c_i(z)\big)\nu({\mathrm{d}}z)\biggr]= -0.2867.$$ Then Proposition \[prop-as-exp-stable-1d\] implies that the trivial solution of is almost surely exponentially stable. However, if the jumps are excluded from the system , that is, if $c_i(z)=0$ for $i =1,2,3$, then $$\sum_{i=1}^3 \pi_i \biggl(a_i-\frac12b_i^2\biggr)=1.75,$$ which implies that the trivial solution of is almost surely exponentially unstable. This example indicates that the jumps can contribute to the stability of the equilibrium point.
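The numbers quoted in this example can be reproduced with a few lines of code. A minimal sketch in Python is given below; it reads the Lévy measure as $\nu({\mathrm{d}}z) = {\mathrm{d}}z/|z|^{4/3}$ on both half-lines of $\R_{0}$, an interpretation under which the quoted value of $\delta$ is recovered, and the choice of quadrature routine is incidental.

```python
import numpy as np
from scipy.integrate import quad

# Data of the one-dimensional example.
Q = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -2.0,  0.0],
              [ 4.0,  0.0, -4.0]])
a = np.array([4.0, 2.0, 3.0])
b = np.array([1.0, 3.0, 1.0])

# Invariant distribution: solve pi Q = 0 together with sum(pi) = 1.
M = np.vstack([Q.T, np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(M, rhs, rcond=None)[0]
print("pi    =", np.round(pi, 4))        # approximately (0.5, 0.25, 0.25)

# Jump correction: c_i(z) = 1 ^ z^2 for every state, nu(dz) = dz / |z|^(4/3).
def integrand(z):
    if z == 0.0:
        return 0.0
    c = min(1.0, z * z)
    return (np.log1p(c) - c) / abs(z) ** (4.0 / 3.0)

jump = 2.0 * (quad(integrand, 0.0, 1.0)[0] + quad(integrand, 1.0, np.inf)[0])

delta = float(pi @ (a - 0.5 * b ** 2)) + jump
print("delta =", round(delta, 4))        # matches the quoted -0.2867 up to rounding
```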
\[example1\] Consider the linear system $$\label{eq-ex-linearsystem} \begin{aligned} {\mathrm{d}}X(t) = A({\alpha}(t)) X(t) {\mathrm{d}}t + B({\alpha}(t)) X(t) {\mathrm{d}}W( t) + \int_{\R_{0}} C({\alpha}(t-),z) X(t-) \tilde N({\mathrm{d}}t, {\mathrm{d}}z), \end{aligned}$$ in which $\alpha \in {\ensuremath{\mathcal M}}= \{ 1,2,3\}$ is a continuous-time Markov chain with generator $Q= \begin{bmatrix} -3 & 1 & 2\\ 2 & -2 & 0\\ 4 & 0 & -4 \\ \end{bmatrix},$ $N$ is a Poisson random measure on $[0,\infty)\times \R_{0}$ whose corresponding Lévy measure is given by $\nu({\mathrm{d}}z)=\frac12(e^{-z}\wedge e^z){\mathrm{d}}z, z \in \R$, and $$\begin{aligned} & A_1=\left[ \begin{array}{ccc} 10 & 1 & 8 \\ -3 & 10 & 2 \\ -1 & -8 & 12 \\ \end{array} \right],\;\; & A_2=\left[ \begin{array}{ccc} 17 & 5 & 8\\ -1 & 11 & -3\\ 4 & -5 & 13\\ \end{array} \right],\ \quad & A_3=\left[ \begin{array}{ccc} 10 & -4 & 12 \\ 8 & 10 & -8\\ 3 & -9 & 11\\ \end{array} \right],\;\;\\ & B_1= \left[ \begin{array}{ccc} 1 & 2 & 0 \\ -2 & 1 & 4 \\ -1 & -2 & 1 \\ \end{array} \right], & B_2=\left[ \begin{array}{ccc} -1 & 2 & 1 \\ -3 & 1 & 1 \\ 2 & -1 & 1 \\ \end{array} \right],\ \quad & B_3=\left[ \begin{array}{ccc} 1 & 2 & 2\\ -1 & 1 & 4 \\ -3 & -2 & 1 \\ \end{array} \right],\\ & C_{i}(z) = 0 \in \R^{3\times 3}. \end{aligned}$$ Here $\nu$ belongs to the class of double exponential distributions; we refer to [@Kou-02] for applications of such distributions in math finance. Let us take $$\aligned G=& \left[ \begin{array}{ccc} 3 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \\ \end{array} \right],\;\; h=[20, 20, 20],\;\; p=0.1. \endaligned$$ Then by direct calculation, we get $$\aligned \pi=&(0.5, 0.25, 0.25),\;\;\mu=(23.7194, 34.0899, 28.3542)',\ \ & \beta= -27.4707,\;\;c_3 = c_4 = \mathbf{0}, \endaligned$$ and $$\label{eq-ex1-calcuation} \min_{\zeta\in D}\sum_{i\in {\ensuremath{\mathcal M}}}\pi_i[c_2(i)-c_5(i)]=2.7422>0,$$ where $D:=\{\zeta=(\zeta_1, \zeta_2, \zeta_3)\in \R^3| \;Q\zeta=\mu+\beta\one\}$, and the minimizer in is given by $\zeta=(131.65, 112, 142.5264)'$, and $$\aligned &c_2(1)-c_5(1)=6.5169 ,\;\;c_2(2)-c_5(2)=9.8252,\;\;c_2(3)-c_5(3)= 2.7433. \endaligned$$ Thus we cannot apply Proposition \[prop-exp-stab-creterion-b\] to determine the almost surely exponential stability of the trivial solution of . Next we observe that $$\sum_{i\in \mathcal{M}}\pi_i\left[\lambda_{\min}(B_iB_i')+\frac12\lambda_{\min}(A_i+A_i')-\max\big\{\lambda_{\min}^2(B_iB_i'),\lambda_{\max}^2(B_iB_i')\big\}\right]=0.3541.$$ Therefore by virtue of Theorem 4.3 in [@KZY07], the trivial solution of is unstable in probability, which, in turn, implies that the trivial solution cannot be almost surely exponential stable. \[example2\] Again consider the linear system with the same $Q, A_{i}, B_{i}, G, h, $ and $p$ as given in Example \[example1\], but with $$C_1=\left[ \begin{array}{ccc} 15 & 1 & 2\\ -1 & 9 & 1\\ 7 & -1 & 10\\ \end{array} \right], \;\;\\C_2=\left[ \begin{array}{ccc} 20 & 6 & -3\\ 1 & 14 & 2\\ 3 & 2 & 8\\ \end{array} \right], \;\;\\C_3=\left[ \begin{array}{ccc} 7 & 1 & 4\\ 2 & 10 & 1\\ -1 & 5 & 11\\ \end{array} \right].$$ Then we have $$\aligned \pi=&(0.5, 0.25, 0.25),\;\;\mu=(23.7194, 34.0899, 28.3542)',\ \ & \beta= -27.4707,\;\;c_3 = c_4 = \mathbf{0}, \endaligned$$ and $$\label{eq-ex2-calculation} \min_{\zeta\in D}\sum_{i\in \mathcal{M}}\pi_i[c_2(i)-c_5(i)]=-0.0939,$$ where the minimizer of is $\zeta=(116.2209, 112.9113, 116)'$, and $$\aligned &c_2(1)-c_5(1)=-0.3687 ,\;\;c_2(2)-c_5(2)=-0.1647 ,\;\;c_2(3)-c_5(3)= 0.3463. 
\endaligned$$ Therefore, thanks to Proposition \[prop-exp-stab-creterion-b\], the trivial solution of is almost surely exponentially stable. A comparison between Examples \[example1\] and \[example2\] shows that in some cases, the jumps can suppress the growth of the solution. In addition, we notice that the switching mechanism also contributes to the almost sure exponential stability.

Conclusions and Further Remarks {#sect-conclusion}
===============================

Motivated by the emerging applications of complex stochastic systems in areas such as finance and energy spot price modeling, this paper is devoted to almost sure and $p$th moment exponential stabilities of regime-switching jump diffusions. The main results include sufficient conditions for almost sure and $p$th moment exponential stabilities of the equilibrium point of nonlinear and linear regime-switching jump diffusions. For general nonlinear systems, the sufficient conditions for stability are expressed in terms of the existence of appropriate Lyapunov functions, from which we also derive a condition using $M$-matrices. In addition, we show that $p$th moment exponential stability implies almost sure exponential stability. For one-dimensional linear regime-switching jump diffusions, we obtain necessary and sufficient conditions for almost sure and $p$th moment exponential stabilities. For the multidimensional system, we present verifiable sufficient conditions in terms of the eigenvalues of certain matrices for stability. Several examples are provided to illustrate the results. In this work, the switching component $\alpha$ has a finite state space. A relevant question is: Can we allow $\alpha $ to have a countably infinite state space? In addition, the jump part is driven by a Poisson random measure associated with a Lévy process. A worthwhile future effort is to treat systems in which the random driving force is an alpha-stable process that has finite $p$th moment with $p<2$. This requires more work and careful consideration.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank the anonymous reviewers for their useful comments and suggestions. The research of Zhen Chao was supported in part by the NSFC (No. 11471122), the NSF of Zhejiang Province (No. LY15A010016) and ECNU reward for Excellent Doctoral Students in Academics (No. xrzz2014020). The research of Kai Wang and Yanling Zhu was supported in part by the NSF of Anhui Province (No. 1708085MA17 and No. 1508085QA13), the Key NSF of Education Bureau of Anhui Province (No. KJ2013A003) and the Support Plan of Excellent Youth Talents in Colleges and Universities in Anhui Province (2014). The research of Chao Zhu was supported in part by the NSFC (No. 11671034) and the Simons Foundation (award number 523736).

Several Technical Proofs {#sect-appendix}
========================

Recall that thanks to Lemma \[lem-0-inaccessible\], for every $(x_{0},{\alpha}_{0}) \in \R_{0}^{n}\times {\ensuremath{\mathcal M}}$, $X(t) := X^{x_{0},{\alpha}_{0}}(t) \not=0$ for all $t\geq 0$ a.s. Let $U(x,i)=\log V(x,i)$ for $(x,i)\in \R^{n}_{0} \times {\ensuremath{\mathcal M}}$.
Since $$DU(x,i)=\frac{DV(x,i)}{V(x,i)} \text{ and } D^2U(x,i)= \frac{D^2V(x,i)}{V(x,i)}-\frac{DV(x,i) DV(x,i)'}{V^2(x,i)},$$we have $$\begin{aligned} \nonumber {\ensuremath{\mathcal{L}}}& U(x,i) \\ & \nonumber = \frac{\langle D V(x,i), b(x,i)\rangle}{V(x,i)} + \frac{1}{2 V(x,i) } \tr \bigl ( \sigma\sigma'(x,i) D^{2} V(x,i) \bigr) - \frac{|\langle DV(x,i),\sigma(x,i)\rangle|^2}{2V^2(x,i)} \\ \nonumber & \ + \sum_{j\in {\ensuremath{\mathcal M}}} q_{ij}(x) \log V(x,j) + \int_{\R_0^n}\biggl[\log\frac{V(x +\gamma(x,i,z),i)}{V(x, i)}-\frac{ DV(x,i) \cdot \gamma(x,i,z)}{V(x,i)}\biggr]\nu({\mathrm{d}}z){\mathrm{d}}s \\ \nonumber & = \frac{ {\ensuremath{\mathcal{L}}}V(x,i)}{V(x,i)} - \frac{|\langle DV(x,i),\sigma(x,i)\rangle|^2}{2V^2(x,i)} + \sum_{j\in \mathcal{M}}q_{ij}(x)\left(\log V(x,j)-\frac{V(x,j)}{V(x,i)}\right) \\ \label{eq-LU-calculation} & \ + \int_{\R_0^n}\biggl[\log\frac{V(x +\gamma(x,i,z),i)}{V(x, i)} - \frac{V(x +\gamma(x,i,z),i)}{V(x,i)}+1\biggr] \nu({\mathrm{d}}z).\end{aligned}$$ Now we apply Itô’s formula to the process $U(X(t), {\alpha}(t))$: $$\begin{aligned} \label{eq-Ito-U} U& (X(t), {\alpha}(t)) = U(x_{0},{\alpha}_{0}) + \int_{0}^{t} {\ensuremath{\mathcal{L}}}U(X(s ),{\alpha}(s) ) {\mathrm{d}}s + M(t), \end{aligned}$$ where $M(t) = M_{1}(t) + M_{2}(t) + M_{3}(t)$, and $$\begin{aligned} M_{1}(t) & = \int_{0}^t\frac{\langle DV(X(s), {\alpha}(s)), \sigma(X(s),{\alpha}(s)) \rangle}{V(X(s),{\alpha}(s))} {\mathrm{d}}W(s), \\ M_{2}(t) & = \int_0^{t}\int_{\R_0^n}\log \frac{V(X(s-)+\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))}\widetilde{N}({\mathrm{d}}s,{\mathrm{d}}z), \\ M_{3}(t) & = \int_0^{t}\int_{\R}\log \frac{V(X(s-),{\alpha}(s-) + h(X(s-), {\alpha}(s-), y ) )}{V(X(s-),{\alpha}(s-))}\widetilde{N}_{1}({\mathrm{d}}s,{\mathrm{d}}y).\end{aligned}$$ By the exponential martingale inequality [@APPLEBAUM Theorem 5.2.9], for any $k\in \mathbb N$ and $ \theta \in ( 0, 1)$, we have $$\begin{aligned} {\ensuremath{\mathbb P}}\biggl\{ \sup_{ 0\le t \le k} \biggl[ & M (t) - \frac{\theta} {2} \int_{0}^{t}\frac{|\langle DV(X(s), {\alpha}(s)), \sigma(X(s),{\alpha}(s))\rangle|^{2}}{|V(X(s),{\alpha}(s))|^{2}} {\mathrm{d}}s \\ & \qquad\qquad - f_{1,\theta}(t)- f_{2,\theta}(t) \biggr] > \theta \sqrt k \biggr\} \le e^{-\theta^{2} \sqrt{k}},\end{aligned}$$ where $$\begin{aligned} f_{1,\theta}(t) & = \frac{1}{\theta} \int_{0}^{t}\int_{\R_{0}^{n} } \biggl[ \biggl(\frac{V(X(s-)+\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))} \biggr)^{\theta} - 1 \\ &\qquad \qquad - \theta \log \frac{V(X(s-)+\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))} \biggr]\nu({\mathrm{d}}z){\mathrm{d}}s, \\ f_{2,\theta}(t) &= \frac{1}{\theta} \int_{0}^{t}\int_{\R } \biggl[ \biggl(\frac{V(X(s-),{\alpha}(s-) + h(X(s-), {\alpha}(s-), y ) )}{V(X(s-),{\alpha}(s-))} \biggr)^{\theta} - 1 \\ &\qquad\qquad - \theta \log \frac{V(X(s-),{\alpha}(s-) + h(X(s-), {\alpha}(s-), y ) )}{V(X(s-),{\alpha}(s-))}\biggr] \lambda({\mathrm{d}}y) {\mathrm{d}}s. \end{aligned}$$ We can verify that $\sum_{k} e^{-\theta^{2} \sqrt{k}}< \infty$. 
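One quick way to see this is the integral test: since $t \mapsto e^{-\theta^{2}\sqrt t}$ is decreasing, $$\sum_{k=1}^{\infty} e^{-\theta^{2} \sqrt{k}} \le \int_{0}^{\infty} e^{-\theta^{2}\sqrt t}\, {\mathrm{d}}t = 2\int_{0}^{\infty} u\, e^{-\theta^{2} u}\, {\mathrm{d}}u = \frac{2}{\theta^{4}} < \infty.$$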
Therefore the Borel-Cantelli lemma implies that there exists an $\Omega_{0} \subset \Omega$ with ${\ensuremath{\mathbb P}}(\Omega_{0}) =1$ such that for every $\omega \in \Omega_{0}$, there exists an integer $k_{0} = k_{0}(\omega)$ so that for all $k \ge k_{0}$ and $0 \le t \le k$, we have $$\begin{aligned} \label{eq-M(t)-bound} M(t) \le &\, \frac{\theta} {2} \int_{0}^{t}\frac{|\langle DV(X(s), {\alpha}(s)), \sigma(X(s),{\alpha}(s))\rangle|^{2}}{|V(X(s),{\alpha}(s))|^{2}} {\mathrm{d}}s + \theta \sqrt k + f_{1,\theta}(t) + f_{2,\theta}(t). \end{aligned}$$ Now putting and into , it follows that for all $\omega \in \Omega_{0}$ and $0 \le t \le k$, we have $$\begin{aligned} \nonumber U& (X(t), {\alpha}(t)) - U(x_{0},{\alpha}_{0})\\ \nonumber & \le \int_{0}^{t} \frac{{\ensuremath{\mathcal{L}}}V(X(s), {\alpha}(s))}{ V(X(s), {\alpha}(s))} {\mathrm{d}}s - \frac{1 -\theta}{2} \int_{0}^{t}\frac{|\langle DV(X(s), {\alpha}(s)), \sigma(X(s),{\alpha}(s))\rangle|^{2}}{|V(X(s),{\alpha}(s))|^{2}} {\mathrm{d}}s \\ \nonumber & \quad + \theta \sqrt k + f_{1,\theta}(t) + f_{2,\theta}(t) + \int_{0}^{t} \sum_{j\in \mathcal{M}}q_{{\alpha}(s), j}(X(s))\biggl(\log V(X(s),j)-\frac{V(X(s),j)}{V(X(s),{\alpha}(s))}\biggr) {\mathrm{d}}s \\ \nonumber & \quad + \int_{0}^{t} \int_{\R_0^n}\biggl[\log\frac{V(X(s-) +\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-), {\alpha}(s-))} + 1 \\ \nonumber & \qquad \qquad \qquad - \frac{V(X(s-) +\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))}\biggr] \nu({\mathrm{d}}z) {\mathrm{d}}s\\ \label{eq-U(Xalpha(t))} & \le \int_{0}^{t} \biggl[c_{2}({\alpha}(s))- \frac{1 -\theta}{2}c_{3}({\alpha}(s)) -c_{4}({\alpha}(s)) -c_{5}({\alpha}(s))\biggr]{\mathrm{d}}s+ \theta \sqrt k + f_{1,\theta}(t) + f_{2,\theta}(t).\end{aligned}$$ Next we argue that for any $t \ge 0$, $ f_{1,\theta}(t) + f_{2,\theta}(t) \to 0$ as $\theta \downarrow 0$. To this end, we first use the elementary inequality $e^a\geq a+1$ for $a\in\R$ to obtain $$\begin{aligned} \frac{1}{\theta}& \biggl[ \biggl(\frac{V(x+\gamma(x,i,z),i )}{V(x,i )} \biggr)^{\theta} - 1 - \theta \log \frac{V(x+\gamma(x,i,z),i )}{V(x,i )} \biggr] \ge 0\, \text{ for any }(x,i) \in \R^{n}\times {\ensuremath{\mathcal M}}.\end{aligned}$$ Next the inequality $x^r\leq 1+r(x-1)$ for $0\leq r\leq 1$ and $x >0$ leads to $$\begin{aligned} \frac{1}{\theta}& \biggl[ \biggl(\frac{V(x+\gamma(x,i,z),i )}{V(x,i )} \biggr)^{\theta} - 1 - \theta \log \frac{V(x+\gamma(x,i,z),i )}{V(x,i )} \biggr] \\ & \le \frac{1}{\theta} \biggl[ 1+ \theta \biggl(\frac{V(x+\gamma(x,i,z),i )}{V(x,i )} -1 \biggr) -1- \theta \log \frac{V(x+\gamma(x,i,z),i )}{V(x,i )} \biggr] \\ & = \frac{V(x+\gamma(x,i,z),i )}{V(x,i )} -1 - \log \frac{V(x+\gamma(x,i,z),i )}{V(x,i )};\end{aligned}$$ notice that the last expression in the above equation is nonnegative thanks to the inequality $a - 1 - \log a \ge 0$ for $a> 0$. Next by virtue of , we can slightly modify the proof of Lemma 3.3 in [@ApplS-09] to obtain $$\begin{aligned} \int_{0}^{t} \int_{\R_{0}^{n} } \biggl[& \frac{V(X(s-)+\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))} - 1 \\ & - \log \frac{V(X(s-)+\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))} \biggr] \nu({\mathrm{d}}z) {\mathrm{d}}s < \infty \; \text{ a.s.}\end{aligned}$$ In addition, we can verify that $\lim_{\theta \downarrow 0} [\frac{1}{\theta} (a^{\theta} -1) - \log a] = 0$ for $a > 0$. 
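Indeed, writing $a^{\theta} = e^{\theta \log a}$ for fixed $a > 0$, we have $$\frac{1}{\theta}\bigl(a^{\theta} - 1\bigr) - \log a = \frac{e^{\theta \log a} - 1}{\theta} - \log a = \frac{\theta \log a + O(\theta^{2})}{\theta} - \log a = O(\theta) \to 0 \quad \text{as } \theta \downarrow 0.$$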
Then the dominated convergence theorem leads to $$\begin{aligned} \lim_{\theta \downarrow 0} f_{1,\theta}(t) & = \int_{0}^{t} \int_{\R_{0}^{n}} \lim_{\theta \downarrow 0} \frac{1}{\theta} \biggl[ \biggl(\frac{V(X(s-)+\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))} \biggr)^{\theta} - 1 \\ &\qquad \quad - \theta \log \frac{V(X(s-)+\gamma(X(s-),{\alpha}(s-),z),{\alpha}(s-))}{V(X(s-),{\alpha}(s-))} \biggr]\nu({\mathrm{d}}z){\mathrm{d}}s = 0.\end{aligned}$$ On the other hand, using , we can readily verify that $$\begin{aligned} \int_{0}^{t}\int_{\R } \biggl[ & \frac{V(X(s-),{\alpha}(s-) + h(X(s-), {\alpha}(s-), y ) )}{V(X(s-),{\alpha}(s-))} - 1 \\ &\qquad - \log \frac{V(X(s-),{\alpha}(s-) + h(X(s-), {\alpha}(s-), y ) )}{V(X(s-),{\alpha}(s-))}\biggr] \lambda({\mathrm{d}}y) {\mathrm{d}}s < \infty\; \text{ a.s. } \end{aligned}$$ Therefore using exactly the same argument as above, we derive $\lim_{\theta \downarrow 0} f_{2,\theta}(t) =0 $. Now passing to the limit as $\theta \downarrow 0$ in leads to $$\label{eq-U-prelimit} U (X(t), {\alpha}(t)) - U(x_{0},{\alpha}_{0})\le \int_{0}^{t} \bigl[c_{2}({\alpha}(s))- 0.5 c_{3}({\alpha}(s)) -c_{4}({\alpha}(s)) -c_{5}({\alpha}(s))\bigr]{\mathrm{d}}s$$ for all $\omega \in \Omega_{0}$, $k\ge k_{0} = k_{0}(\omega) $ and $0 \le t \le k$. Recall that $U(x,i) = \log V(x,i)$. Then inserting condition (i) into yields that for almost all $\omega\in \Omega,$ $k\geq k_0 ,$ and $ k-1\leq t\leq k$, we have $$\begin{aligned} \label{eq2-U-prelimit} \nonumber \frac{1}{t}& [p \log|X(t)| + \log c_{1}({\alpha}(t)) ] \le \frac1t\log V(X(t),{\alpha}(t))\\ \nonumber & \le \frac{1}{t} \int_{0}^{t} \bigl[c_{2}({\alpha}(s))- 0.5 c_{3}({\alpha}(s)) -c_{4}({\alpha}(s)) -c_{5}({\alpha}(s))\bigr]{\mathrm{d}}s +\frac{\log V(x_{0},{\alpha}_{0})}{t}\\ \nonumber & \le \frac{1}{t} \int_{0}^{t} \bigl[c_{2}({\alpha}(s))- 0.5 c_{3}({\alpha}(s)) -c_{4}({\alpha}(s)) -c_{5}({\alpha}(s))\bigr]{\mathrm{d}}s +\frac{\log V(x_{0},{\alpha}_{0})}{k-1}\\ & \le \max_{i\in {\ensuremath{\mathcal M}}}\bigl\{c_{2}(i)- 0.5 c_{3}(i) -c_{4}(i) -c_{5}(i) \bigr \} +\frac{\log V(x_{0},{\alpha}_{0})}{k-1};\end{aligned}$$ the last inequality yields by letting $t\to \infty$. Fix some $(x_{0},{\alpha}_{0}) \in \R^{n}_{0}\times {\ensuremath{\mathcal M}}$ and denote by $(X(t),{\alpha}(t))$ the unique solution to – with initial condition $(X(0),{\alpha}(0)) = (x_{0},{\alpha}_{0})$. Suppose that for some $\varrho > 0$, we have $$\limsup_{t\to \infty} \frac1 t \log {\ensuremath{\mathbb E}}[|X(t)|^{p}] \le -\varrho < 0.$$ Then for any $\varrho > {\varepsilon}> 0$, there exists a positive constant $T$ such that ${\ensuremath{\mathbb E}}[|X(t)|^{p}] \le e^{-(\varrho -{\varepsilon}) t}$ for all $ t \ge T. $ This, together with Lemma 3.1 of [@ZhuYB-15], implies that there exists some positive number $M$ so that $$\label{eq-p-moment-bdd} {\ensuremath{\mathbb E}}[|X(t)|^{p}] \le M e^{-(\varrho -{\varepsilon}) t}\; \text{ for all } t \ge 0.$$ Let $\delta > 0$. 
Then we have for any $k \in \mathbb N$, $$\label{eq0-moment-as} \begin{aligned} {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)|^{p}\biggr] & \le 4^{p}{\ensuremath{\mathbb E}}\biggl[ |X({(k-1)\delta})|^{p} + {\sup_{(k-1)\delta \le t \le k\delta}}\biggl| \int_{{(k-1)\delta}}^{t} b(X(s),{\alpha}(s)) {\mathrm{d}}s\biggr|^{p} \\ &\qquad + {\sup_{(k-1)\delta \le t \le k\delta}}\biggl| \int_{{(k-1)\delta}}^{t} \sigma(X(s),{\alpha}(s)) {\mathrm{d}}W(s)\biggr|^{p} \\ & \qquad + {\sup_{(k-1)\delta \le t \le k\delta}}\biggl| \int_{{(k-1)\delta}}^{t}\int_{\R^{n}_{0}} \gamma(X(s-),{\alpha}(s-),z) \tilde N({\mathrm{d}}s, {\mathrm{d}}z) \biggr|^{p} \biggr]. \end{aligned}$$ Using and , we have $$\label{eq1-moment-as}\begin{aligned} {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}\biggl| \int_{{(k-1)\delta}}^{t} b(X(s),{\alpha}(s)) {\mathrm{d}}s\biggr|^{p}\biggr] & \le {\ensuremath{\mathbb E}}\biggl[\biggl| \int_{{(k-1)\delta}}^{k \delta} |b(X(s),{\alpha}(s))| {\mathrm{d}}s\biggr|^{p} \biggr] \\ & \le (\sqrt \kappa \delta)^{p} {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)|^{p}\biggr], \end{aligned}$$ where $\kappa > 0$ is the constant appearing in . On the other hand, by the Burkhoder-Davis-Gundy inequality (see, e.g., Theorem 2.13 on p. 70 of [@MaoY]) and , , we have $$\begin{aligned} \label{eq2-moment-as} \nonumber {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}\biggl| \int_{{(k-1)\delta}}^{t} \sigma(X(s),{\alpha}(s)) {\mathrm{d}}W(s)\biggr|^{p} \biggr] & \le C_{p} {\ensuremath{\mathbb E}}\biggl[ \biggl( \int_{{(k-1)\delta}}^{k \delta} |\sigma(X(s),{\alpha}(s))|^{2} {\mathrm{d}}s \biggr)^{p/2} \biggr] \\ & \le C_{p} (\kappa\delta)^{p/2} {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)|^{p}\biggr],\end{aligned}$$ where $C_{p}$ is a positive constant depending only on $p$. Next we use Kunita’s first inequality (see, e.g., Theorem 4.4.23 on p. 265 of [@APPLEBAUM]), , and to estimate $$\label{eq3-moment-as} \begin{aligned} {\ensuremath{\mathbb E}}&\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}\biggl| \int_{{(k-1)\delta}}^{t}\int_{\R^{n}_{0}} \gamma(X(s-),{\alpha}(s-),z) \tilde N({\mathrm{d}}s, {\mathrm{d}}z) \biggr|^{p} \biggr] \\ & \le D_{p} {\ensuremath{\mathbb E}}\biggl[ \int_{{(k-1)\delta}}^{k\delta} \biggl(\int_{\R_{0}^{n}} |\gamma(X(s-),{\alpha}(s-),z)|^{2} \nu({\mathrm{d}}z) \biggr)^{p/2} {\mathrm{d}}s\biggr]\\ & \quad + D_{p} {\ensuremath{\mathbb E}}\biggl[ \int_{{(k-1)\delta}}^{k\delta} \int_{\R_{0}^{n}} |\gamma(X(s-),{\alpha}(s-),z)|^{p} \nu({\mathrm{d}}z) {\mathrm{d}}s\biggr]\\ & \le D_{p} [ \kappa^{p/2} + \hat \kappa]\delta {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)|^{p}\biggr], \end{aligned}$$ where $\hat \kappa > 0$ is the constant appearing in and $D_{p}$ is a positive constant depending only on $p$. Now we plug , , and into to derive $$\begin{aligned} & {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)|^{p}\biggr] \\ &\ \le 4^{p}{\ensuremath{\mathbb E}}[ |X({(k-1)\delta})|^{p}] + 4^{p}\big((\sqrt \kappa \delta)^{p} + C_{p} (\kappa\delta)^{p/2}+ D_{p} [\kappa^{p/2} + \hat \kappa]\delta \big) {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)|^{p}\biggr]. 
\end{aligned}$$ Now we choose a $\delta > 0$ sufficiently small so that $$4^{p}\big((\sqrt \kappa \delta)^{p} + C_{p} (\kappa\delta)^{p/2}+ D_{p} [ \kappa^{p/2} + \hat \kappa]\delta \big) < \frac{1}{2}.$$ Then it follows from that $$\label{eq-sup X(t)p mean} {\ensuremath{\mathbb E}}\biggl[ {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)|^{p}\biggr] \le 2M 4^{p} e^{-(\varrho -{\varepsilon}) (k-1)\delta}.$$ The rest of the proof uses the same arguments as those in the proof of Theorem 5.9 of [@MaoY]. For completeness, we include the details here. Thanks to , we have from the Chebyshev inequality that $${\ensuremath{\mathbb P}}\biggl\{\omega\in \Omega: {\sup_{(k-1)\delta \le t \le k\delta}}|X(t)| > e^{-(\varrho-2{\varepsilon})(k-1)\delta/p} \biggr\} \le 2M 4^{p} e^{-{\varepsilon}(k-1)\delta}.$$ Then by the Borel-Cantelli lemma, there exists an $\Omega_{0}\subset \Omega$ with ${\ensuremath{\mathbb P}}(\Omega_{0}) =1$ such that for all $\omega\in \Omega_{0}$, there exists a $k_{0} = k_{0}(\omega) \in {\ensuremath{\mathbb N}}$ such that $${\sup_{(k-1)\delta \le t \le k\delta}}|X(t, \omega)| \le e^{-(\varrho-2{\varepsilon})(k-1)\delta/p},\quad \text{ for all } k \ge k_{0}= k_{0}(\omega).$$ Consequently for all $\omega\in \Omega_{0}$, if $(k-1 )\delta \le t \le k\delta$ and $k\ge k_{0}(\omega)$, we have $$\frac1t \log(|X(t,\omega)|) \le -\frac{(\varrho-2{\varepsilon})(k-1)\delta}{pt} \le -\frac{(\varrho-2{\varepsilon})(k-1) }{pk}.$$ This implies that $$\limsup_{t\to\infty}\frac1t \log(|X(t,\omega)|) \le -\frac{\varrho-2{\varepsilon}}{p}, \quad\text{ for all } \omega\in \Omega_{0}.$$ Now letting ${\varepsilon}\downarrow 0$, we obtain that $\limsup_{t\to\infty} \frac{1}{t} \log (|X(t)|) \le -\frac{\varrho}{p}$ a.s. This completes the proof. We need to find an appropriate Lyapunov function $V$ so that all conditions of Theorem \[thm-as-exp-stable\] are satisfied. In addition, since ${\alpha}$ is an irreducible continuous-time Markov chain with a stationary distribution $\pi= (\pi_{i},i\in {\ensuremath{\mathcal M}})$, the assertion on a.s. exponential stability under follows from Corollary \[coro-as-exp-stable\]. To this end, let $G\in \S^{n\times n} $ and $p>0$ be as in the statement of the proposition. We consider the Lyapunov function $$V(x,i) = (h_i-p\zeta_i)(x'G^2 x)\sp {p /2}, \ \ (x,i)\in \R^{n}\times {\ensuremath{\mathcal M}},$$ where $h_i>0$ is such that $h_i-p\zeta_i>0.$ Let us now verify that $V$ satisfies conditions (i)–(v) of Theorem \[thm-as-exp-stable\]. It is readily seen that for each $i\in{\ensuremath{\mathcal M}}$, $V(\cdot,i)$ is continuous, nonnegative, and vanishes only at $x=0$.
Also observe that condition (i) of Theorem \[thm-as-exp-stable\] is satisfied with $c_{1}(i): = (h_i-p\zeta_i)( \lambda_{\min}(G^{2}))^{\frac{p}{2}}.$ We can verify for $x \neq 0$ that $$\begin{aligned} & DV(x,i) =(h_i-p\zeta_i) p (x'G^2 x) \sp {p /2 -1} G^{2} x, \\ & D^{2}V(x,i) = (h_i-p\zeta_i) p (x'G^2 x) \sp {p /2 -2}[(p -2) G^{2} x x' G^{2} + x'G^{2} x G^{2}].\end{aligned}$$ Then we compute $$\begin{aligned} \label{eq-L-V-computation} \lefteqn{ \frac{1}{h_i-p\zeta_i} {\ensuremath{\mathcal{L}}}V(x,i) }\\ \notag & = p (x'G^2 x) \sp {\frac{p}{2} -1} x' G_i^{2} A_{i}x + \frac{1}{2} \tr \big( p (x'G^2 x) \sp {\frac{p}{2} -2}[(p -2) G^{2} x x' G^{2} + x'G^{2} x G^{2}] B_{i}x x' B_{i}'\big) \\ \notag & \ \ +\int_{\R_{0}^{n}} \Big[ (x' (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) x)^{\frac{p}{2}} - ( x' G^{2} x)^{\frac{p}{2}} - p ( x' G^{2} x)^{\frac{p}{2} -1} x'G^{2} C_{i}(z) x\Big] \nu({\mathrm{d}}z) \\ \notag & \ \ + \sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} \frac{h_j-h_i-p \zeta_{j}+p\zeta_i}{h_i-p\zeta_i} (x'G^2 x) \sp {\frac{p}{2} } \\ \notag & = p (x'G^2 x) \sp {p /2 } \biggl[ \frac{x' (G^{2} A_{i} + A_{i}' G^{2} + B_{i}' G^{2} B_{i}) x }{2 x'G^2 x} + (p -2) \frac{ (x' B_{i}' G^{2} x)^{2}}{2(x'G^2 x)^{2}} + \frac{1}{p(h_i-p\zeta_i)}\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} h_{j}\\ \notag & \ \ - \frac{1}{h_i-p\zeta_i}\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij}\zeta_{j} +\int_{\R_{0}^{n}} \biggl[ \frac{(x' (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) x)^{\frac{p}{2}}}{ p (x'G^2 x) \sp {\frac{p}{2} }} - \frac{1}{p } - \frac{ x'G^{2} C_{i}(z) x}{x'G^2 x}\biggr]\nu({\mathrm{d}}z) \biggr].\end{aligned}$$ Note that $$\begin{aligned} \nonumber \frac{x' (G^{2} A_{i} + A_{i}' G^{2} + B_{i}' G^{2} B_{i}) x }{2 x'G^2 x} & = \frac{x' G (G A_{i} G^{-1} + G^{-1} A_{i}' G + G^{-1} B_{i}' G^{2} B_{i} G^{-1}) Gx}{2 | Gx|^{2}} \\ \label{eq1-L-V-computation} & \le \frac{1}{2}\lambda_{\max}(G A_{i} G^{-1} + G^{-1} A_{i}' G + G^{-1} B_{i}' G^{2} B_{i} G^{-1})= \mu_{i}.\end{aligned}$$ In addition, we have $$\label{eq2-L-V-computation} \begin{aligned} \frac{p-2}{2} \biggl(\frac{ x' B_{i}' G^{2} x}{x'G^2 x}\biggr)^{2} & =\frac{p-2}{8} \biggl(\frac{ x'G (G^{-1} B_{i}' G + G B_{i} G^{-1})G x}{x'G^2 x}\biggr)^{2} \\ & \leq \begin{cases} \frac{p-2}{8} \widehat\lambda^2(G^{-1} B_{i}' G+G B_{i} G^{-1} ), & \text{ if } 0 < p \le 2, \\ \frac{p-2}{8} \rho^{2} (G^{-1} B_{i}' G+G B_{i} G^{-1} ), & \text{ if }p > 2. \end{cases} \\ & = \frac{p-2}{8} \Lambda^{2} (G^{-1} B_{i}' G+G B_{i} G^{-1} ) . 
\end{aligned}$$ On the other hand, since $$\begin{aligned} \frac{(x' (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) x)^{\frac{p}{2}}}{ (x'G^2 x) \sp {\frac{p}{2} }} &= \biggl( \frac{x' (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) x}{x'G^2 x} \biggr)^{\frac{p}{2}} \\ & \le \bigl ( {\lambda_{\max}( G^{-1} (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) G^{-1})} \bigr)^{\frac{p}{2}},\end{aligned}$$ and $$\frac{ x'G^{2} C_{i}(z) x}{x'G^2 x} = \frac{x' G (G C_{i}(z) G^{-1} + G^{-1} C_{i}'(z) G) Gx}{2 |Gx|^{2}} \ge \frac{1} {2} \lambda_{\min} (G C_{i}(z) G^{-1} + G^{-1} C_{i}'(z) G),$$ it follows that $$\begin{aligned} \nonumber & \int_{\R_{0}^{n}} \biggl[ \frac{(x' (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) x)^{\frac{p}{2}}}{ p (x'G^2 x) \sp {\frac{p}{2} }} - \frac{1}{p } - \frac{ x'G^{2} C_{i}(z) x}{x'G^2 x}\biggr]\nu({\mathrm{d}}z) \\ \nonumber &\ \le \frac{1}{p } \int_{\R_{0}^{n}} \biggl[ \bigl ( {\lambda_{\max}( G^{-1} (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) G^{-1}}\bigr)\bigr)^{\frac{p }{2}} \\ \nonumber & \qquad \qquad \qquad - 1 - \frac{p}{2} \lambda_{\min} (G C_{i}(z) G^{-1} + G^{-1} C_{i}'(z) G)\biggr] \nu({\mathrm{d}}z)\\ \label{eq3-L-V-computation} & \ = \frac{\eta_{i}}{p }.\end{aligned}$$ Then upon putting the estimates – into , we have $$\begin{aligned} {\ensuremath{\mathcal{L}}}V(x,i) & \le (h_i-p \zeta_{i}) (x'G^2 x) \sp {\frac{p}{2} }\cdot\biggl[ p\mu_i +\frac{p-2}8{\Lambda}^2(GB_iG^{-1}+G^{-1}B_i'G)\\ & \qquad \qquad \qquad\qquad \qquad \qquad + \frac{1}{h_i-p\zeta_i}\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} h_{j}- \frac{p}{h_i-p\zeta_i}\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij}\zeta_{j}+ \eta_{i}\biggr] \\ & = V(x,i) \biggl[ p\mu_i+\frac{p-2}8 {\Lambda}^2(GB_iG^{-1}+G^{-1}B_i'G) \\ & \qquad \qquad \qquad \qquad \qquad \qquad + \frac{1}{(h_i-p\zeta_i)}\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} h_{j} - \frac{p}{h_i-p\zeta_{i}} (\mu_{i}+\beta) + \eta_{i} \biggr], \end{aligned}$$ where we used to derive the last step. Thus condition (ii) of Theorem \[thm-as-exp-stable\] is satisfied with $c_{2}(i) = p\mu_i +\frac{p-2}8{\Lambda}^2(GB_iG^{-1}+G^{-1}B_i'G)- \frac{p}{h_i-p\zeta_{i}} (\mu_{i}+\beta)+ \frac{1}{(h_i-p\zeta_i)}\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} h_{j}+ \eta_{i}$. In view of $$\aligned |\langle DV(x,i),B_ix\rangle|^2= & \ p^2V^2(x,i)\biggl(\frac{x'G^2B_ix}{x'G^2x}\biggr)^2= \frac{p^{2}}{4}V^2(x,i)\biggl(\frac{x'G(GB_i G^{-1}+ G^{-1}B_{i}' G ) Gx}{x'G^2x}\biggr)^2 \\ \geq &\ \frac{ p^2}4\widehat{\lambda}^2(GB_iG^{-1}+G^{-1}B_i'G)V^2(x,i), \endaligned$$ condition (iii) of Theorem \[thm-as-exp-stable\] is satisfied as well. Note that $$\begin{aligned} &\int_{\R_{0}^{n}} \biggl[\log\left(\frac{V(x+C_i(z)x,i)}{V(x,i)}\right)-\frac{V(x+C_i(z)x,i)}{V(x,i)}+1\biggr]\nu({\mathrm{d}}z)\\ & \ = \int_{\R_{0}^{n}} \biggl[\frac{p}{2}\log\frac{x' (I+ C_{i}(z))' G^{2} (I + C_{i}(z))x}{ x' G^{2}x }-\frac{(x' (I+ C_{i}(z))' G^{2} (I + C_{i}(z))x)^{\frac{p}{2}}}{(x' G^{2}x)^{\frac{p}{2}} }+1\biggr]\nu({\mathrm{d}}z)\\ & \ \le 0\wedge \int_{\R_{0}^{n}} \biggl[\frac{p}{2}\log{\lambda_{\max}(G^{-1}(I+ C_{i}(z))' G^{2} (I + C_{i}(z))G^{-1})} \\ & \qquad \qquad- \bigl( \lambda_{\min}(G^{-1}(I+ C_{i}(z))' G^{2} (I + C_{i}(z))G^{-1})\bigr)^{\frac{p}{2}}+1\biggr]\nu({\mathrm{d}}z)\\ & \ = -c_{4}(i).\end{aligned}$$ This establishes condition (iv). Likewise, we can verify condition (v) as follows.
$$\begin{aligned} &\sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} \left(\log V(x,j)-\frac{V(x,j)}{V(x,i)}\right) \\ & \ \ = \sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} \left(\log (h_j- p \zeta_{j}) +\frac{p}{2}\log (x'G^{2}x) - \frac{h_j-p \zeta_{j}}{h_i- p \zeta_{i}}\right) \\ & \ \ = \sum_{j\in {\ensuremath{\mathcal M}}} q_{ij} \left(\log (h_j- p \zeta_{j}) - \frac{h_j-p \zeta_{j}}{h_i- p \zeta_{i}}\right) = - c_{5}(i).\end{aligned}$$ Thus we have verified all conditions of Theorem \[thm-as-exp-stable\] and hence in view of Corollary \[coro-as-exp-stable\], implies the desired conclusion. This proof is motivated by the proofs of Theorem 5.24 in [@MaoY] and Theorem 3.3 in [@Zong-14]. As in the proof of Proposition \[prop-as-exp-stable-1d\], let us assume $x(0) = x \neq 0$ and ${\alpha}(0) = i \in {\ensuremath{\mathcal M}}$. Then by the Itô formula, we have $$\begin{aligned} & |x(t)|^{p} \\ & \ = |x|^{p} \exp\biggl\{\int_0^t\biggl[pa(\alpha(s))-\frac{p}{2}b^2(\alpha(s)) +p\int_{\R_0}[\log|1+c(\alpha(s-),z)|-c(\alpha(s-),z)]\nu({\mathrm{d}}z)\biggr]{\mathrm{d}}s \\ & \qquad \qquad \qquad + \int_0^t p b(\alpha(s)){\mathrm{d}}W(s) + \int_0^t\int_{\R_0}p \log|1+c(\alpha(s-),z)|\widetilde{N}({\mathrm{d}}s,{\mathrm{d}}z) \biggr\}\\ & \ = |x|^{p} \exp\biggl\{\int_0^t f({\alpha}(s)) {\mathrm{d}}s\biggr\} \mathcal E (t), \end{aligned}$$ where $ \mathcal E (t) : = \exp \{ g(t) \}$ with $$\begin{aligned} g(t) & = \int_0^t p b(\alpha(s)){\mathrm{d}}W(s) - \frac{1}{2} \int_{0}^{t} p^{2} b^{2}({\alpha}(s)){\mathrm{d}}s + \int_0^t\int_{\R_0}p \log|1+c(\alpha(s-),z)|\widetilde{N}({\mathrm{d}}s,{\mathrm{d}}z) \\ & \qquad\qquad -\int_{0}^{t}\int_{\R_{0}}[|1+ c({\alpha}(s-),z)|^{p}-p \log|1+ c({\alpha}(s-),z)| -1] \nu({\mathrm{d}}z){\mathrm{d}}s. \end{aligned}$$ For each $t \ge 0$, let $\G_{t}: =\sigma({\alpha}(s): 0\le s \le t)$, $\G:=\bigvee_{t\ge 0} \G_{t}$, and $\mathcal H_{t}: = \sigma(W(s), N([0,s)\times A), 0\le s \le t, A \in \B(\R_{0}))$. Denote $\mathcal D_{t}: = \G \bigvee \mathcal H_{t}$. Let $\{ \tau_{k}, k =1,2, \dots\}$ denote the sequence of switching times for the continuous-time Markov chain ${\alpha}(\cdot)$; that is, we define $\tau_{1}: = \inf\{t \ge 0: {\alpha}(t) \neq {\alpha}(0) \} $ and $\tau_{k+1} : = \inf\{t \ge \tau_{k}: {\alpha}(t) \neq {\alpha}(\tau_{k}) \}$ for $k =1, 2,\dots$ It is well-known that $\tau_{k}\to \infty$ a.s. as $k\to \infty$. Write $\tau_{0}: =0$. 
Then we can compute $$\begin{aligned} {\ensuremath{\mathbb E}}[ |x(t)|^{p}] & = {\ensuremath{\mathbb E}}\Biggl[|x|^{p} \sum_{k=0}^{\infty}I_{ \{\tau_{k}\le t < \tau_{k+1}\}} \exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(t) \Biggr]\\ & = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[ |x|^{p} {\ensuremath{\mathbb E}}\biggl[ I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(t) \Big| \mathcal D_{\tau_{k}}\biggr]\biggr]\\ & = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[ |x|^{p} I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(\tau_{k}) {\ensuremath{\mathbb E}}\bigl[ \exp\{ g(t) - g(\tau_{k})\} \big | \mathcal D_{\tau_{k}}\bigr] \biggr].\end{aligned}$$ Note that on the event $ \{\tau_{k}\le t < \tau_{k+1}\}$, we have $$\begin{aligned} g(t) -g(\tau_{k}) & = \int_{\tau_{k}}^t p b(\alpha(\tau_{k})){\mathrm{d}}W(s) - \frac{1}{2} \int_{\tau_{k}}^{t} p^{2} b^{2}({\alpha}(\tau_{k})){\mathrm{d}}s \\ & \qquad + \int_{\tau_{k}}^t\int_{\R_0}p \log|1+c({\alpha}(\tau_{k}-),z)|\widetilde{N}({\mathrm{d}}s,{\mathrm{d}}z) \\ & \qquad -\int_{\tau_{k}}^{t}\int_{\R_{0}}[|1+ c({\alpha}(\tau_{k}-),z)|^{p}-p \log|1+ c({\alpha}(\tau_{k}-),z)| -1] \nu({\mathrm{d}}z){\mathrm{d}}s. \end{aligned}$$ Then it follows from the definition of the $\sigma$-algebra $\mathcal D_{\tau_{ k}}$ and Corollary 5.2.2 of [@APPLEBAUM] that $ {\ensuremath{\mathbb E}}\bigl[ \exp\{ g(t) - g(\tau_{k})\} | \mathcal D_{\tau_{k}}\bigr] =1$. Consequently, we have $$\begin{aligned} {\ensuremath{\mathbb E}}& [ |x(t)|^{p}] \\ & = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[ |x|^{p} I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(\tau_{k}) \biggr] \\ & = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[{\ensuremath{\mathbb E}}\biggl[ |x|^{p} I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(\tau_{k}) \Big | \mathcal D_{\tau_{k-1}} \biggr] \biggr] \\ & = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[|x|^{p} I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(\tau_{k-1}) {\ensuremath{\mathbb E}}[\exp\{g(\tau_{k}) - g(\tau_{k-1}) | \mathcal D_{\tau_{k-1}} \} ] \biggr].\end{aligned}$$ As argued before, we have $ {\ensuremath{\mathbb E}}[\exp\{g(\tau_{k}) - g(\tau_{k-1}) | \mathcal D_{\tau_{k-1}} \} ] =1$ and hence $${\ensuremath{\mathbb E}}[ |x(t)|^{p}] = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[ |x|^{p} I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(\tau_{k-1}) \biggr].$$ Continue in this fashion and we derive that $$\begin{aligned} {\ensuremath{\mathbb E}}[ |x(t)|^{p}] & = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[ |x|^{p} I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} {\ensuremath{\mathcal E}}(\tau_{0}) \biggr] \\ & = \sum_{k=0}^{\infty} {\ensuremath{\mathbb E}}\biggl[|x|^{p} I_{ \{\tau_{k}\le t < \tau_{k+1}\}}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} \biggr] \\ & = {\ensuremath{\mathbb E}}\biggl[ |x|^{p}\exp{\left\{\int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\right\}} \biggr].\end{aligned}$$ Then it follows that $$\begin{aligned} \lim_{t\to \infty} 
\frac{1}{t}\log({\ensuremath{\mathbb E}}[ |x(t)|^{p}] ) & = \lim_{t\to \infty} \frac{1}{t} \log(|x|^{p}) + \lim_{t\to \infty} \frac{1}{t} \log\bigg({\ensuremath{\mathbb E}}\bigg[\exp\bigg \{ \int_{0}^{t} f({\alpha}(s)) {\mathrm{d}}s\bigg\}\bigg]\bigg) = \Upsilon(f).\end{aligned}$$ This completes the proof. In view of Theorem \[thm-moment-stab-m-matrix\], we only need to verify Assumption \[assump-m-matrix\] for the positive definite matrix $G^{2}$. But as observed in the proof of Proposition \[prop-exp-stab-creterion-b\], we have $$\begin{aligned} {\langle}G^{2}x, A_{i}x{\rangle}+\frac12{\langle}B_{i}x,G^{2}B_{i}x{\rangle}\le\frac12 \lambda_{\max}(GA_{i}G^{-1} +G^{-1}A_{i}G + G^{-1}B_{i}' G^{2} B_{i}G^{-1}) {\langle}x, G^{2}x{\rangle}, \end{aligned}$$ and shows that $$\begin{aligned} \nonumber & \int_{\R_{0}^{n}} \biggl[ \frac{(x' (I + C_{i}(z) )' G^{2} (I + C_{i}(z)) x)^{\frac{p}{2}}}{ (x'G^2 x) \sp {\frac{p}{2} }} - 1 - p \frac{ x'G^{2} C_{i}(z) x}{x'G^2 x}\biggr]\nu({\mathrm{d}}z) \le \eta_{i}. \end{aligned}$$Finally we observe that $$\frac{({\langle}x, G^{2}B_{i}x{\rangle})^{2}}{({\langle}x, G^{2} x{\rangle})^{2}} = \frac14 \biggl(\frac{x' G (G^{-1} B_{i}'G + GB_{i}G^{-1})G x}{x'G^{2}x} \biggr)^{2} \le \frac14 \rho^{2} (G^{-1} B_{i}'G + GB_{i}G^{-1}).$$ Therefore we have verified – of Assumption \[assump-m-matrix\]. Then the assertions of the proposition follows from Theorem \[thm-moment-stab-m-matrix\] directly. =-3pt Applebaum, D. (2009). , volume 116 of [ *Cambridge Studies in Advanced Mathematics*]{}. Cambridge University Press, Cambridge, second edition. Applebaum, D. and Siakalli, M. (2009). Asymptotic stability of stochastic differential equations driven by [L]{}évy noise. , 46(4):1116–1129. Barndorff-Nielsen, O. E. (1998). Processes of normal inverse [G]{}aussian type. , 2(1):41–68. Barndorff-Nielsen, O. E., Benth, F. E., and Veraart, A. E. D. (2013). Modelling energy spot prices by volatility modulated [L]{}[é]{}vy-driven [V]{}olterra processes. , 19(3):803–845. Barone-Adesi, G. and Whaley, R. E. (1987). Efficient analytic approximation of [A]{}merican option values. , 42(2):301–320. Cont, R. and Tankov, P. (2004). . Chapman & Hall/CRC Financial Mathematics Series. Chapman & Hall/CRC, Boca Raton, FL. Cont, R. and Voltchkova, E. (2005). Integro-differential equations for option prices in exponential [L]{}[é]{}vy models. , 9(3):299–325. Costa, O. L. V., Fragoso, M. D., and Todorov, M. G. (2013). . Probability and its Applications (New York). Springer, Heidelberg. Donsker, M. D. and Varadhan, S. R. S. (1975). Asymptotic evaluation of certain [M]{}arkov process expectations for large time. [I]{}. [II]{}. , 28:1–47; ibid. 28 (1975), 279–301. Karatzas, I. and Shreve, S. E. (1991). , volume 113 of [ *Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, second edition. Khasminskii, R. (2012). , volume 66 of [*Stochastic Modelling and Applied Probability*]{}. Springer, New York, 2nd edition. Khasminskii, R. Z., Zhu, C., and Yin, G. (2007). Stability of regime-switching diffusions. , 117(8):1037–1051. Kou, S. G. (2002). A jump-diffusion model for option pricing. , 48(8):1086–1101. Mao, X. (1999). Stability of stochastic differential equations with [M]{}arkovian switching. , 79(1):45–67. Mao, X. and Yuan, C. (2006). . Imperial College Press, London. Norris, J. R. (1998). , volume 2 of [*Cambridge Series in Statistical and Probabilistic Mathematics*]{}. Cambridge University Press, Cambridge. Reprint of 1997 original. Seneta, E. (2004). 
Fitting the variance-gamma model to financial data. , 41A:177–187. Stochastic methods and their applications. Shao, J. and Xi, F. (2014). Stability and recurrence of regime-switching diffusion processes. , 52(6):3496–3516. Skorokhod, A. V. (1989). , volume 78 of [*Translations of Mathematical Monographs*]{}. American Mathematical Society, Providence, RI. Translated from the Russian by H. H. McFaden. Wee, I.-S. (1999). Stability for multidimensional jump-diffusion processes. , 80(2):193–209. Xi, F. (2009). Asymptotic properties of jump-diffusion processes with state-dependent switching. , 119(7):2198–2221. Yin, G. and Xi, F. (2010). Stability of regime-switching jump diffusions. , 48(7):4525–4549. Yin, G. G. and Zhang, Q. (2013). , volume 37 of [*Stochastic Modelling and Applied Probability*]{}. Springer, New York, second edition. Yin, G. G. and Zhu, C. (2010). , volume 63 of [*Stochastic Modelling and Applied Probability*]{}. Springer, New York. Zhang, Q. (2001). Stock trading: an optimal selling rule. , 40(1):64–87. Zhu, C., Yin, G., and Baran, N. A. (2015). Feynman-[K]{}ac formulas for regime-switching jump diffusions and their applications. , 87(6):1000–1032. Zong, X., Wu, F., Yin, G., and Jin, Z. (2014). Almost sure and [$p$]{}th-moment stability and stabilization of regime-switching jump diffusion systems. , 52(4):2595–2622. [^1]: Department of Mathematics, Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, East China Normal University, 500 Dongchuan Road, Shanghai, 200241, China, Email: [[email protected]]{}. [^2]: Department of Applied Mathematics, Anhui University of Finance and Economics, Bengbu 233030, China, Email: [[email protected]]{}. [^3]: Department of Mathematical Sciences, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, Email: [[email protected]]{}. [^4]: School of International Trade and Economics, University of International Business and Economics, Beijing 100029, China, Email: [[email protected]]{}.
--- abstract: | Protons and antiprotons at collider energies are a source of high energy Weizsäcker–Williams photons. This may open the possibility of studying exclusive photoproduction of heavy vector mesons at energies much larger than possible at the HERA accelerator. Here we present a detailed investigation of exclusive $J/\psi$ photoproduction in proton-proton (RHIC, LHC) and proton-antiproton (Tevatron) collisions. We calculate several differential distributions in $t_1, t_2, y, \phi$, as well as transverse momentum distributions of $J/\Psi$’s. We discuss correlations in the azimuthal angle between outgoing protons or proton and antiproton as well as in the ($t_1, t_2$) space. Unlike in electroproduction experiments, here both colliding beam particles can be a source of photons, and we find large interference terms in azimuthal angle distributions in a broad range of rapidities of the produced meson. We also include the spin–flip parts in the electromagnetic vertices. We discuss the effect of absorptive corrections on various distributions. Interestingly, absorption corrections induce a charge asymmetry in rapidity distributions, and are larger for $p p$ reactions than for the $p \bar p$ case. The reaction considered here constitutes an important irreducible background in recently proposed searches for odderon exchange. [*[dedicated to Kolya Nikolaev on the occasion of his 60th birthday]{}*]{} author: - 'W. Schäfer' - 'A. Szczurek' title: | Exclusive photoproduction of $J/\psi$\ in proton-proton and proton-antiproton scattering --- Introduction ============ The diffractive photoproduction of $J/\psi$–mesons has recently been a subject of thorough studies at HERA [@ZEUS_JPsi; @H1_JPsi], and serves to elucidate the physics of the QCD pomeron and/or the small–$x$ gluon density in the proton (for a recent review and references, see [@INS06]). Being charged particles, protons/antiprotons available at e.g. RHIC, Tevatron and LHC are a source of high energy Weizsäcker–Williams photons, and photoproduction processes are therefore also accessible in hadronic collisions. Hadronic exclusive production mechanisms of mesons at central rapidities in $pp$ collisions were intensively studied in the 1990s at energies of a few tens of GeV [@Central_experiment], and raised much theoretical interest because of their potential for investigating exotic hadronic states (see e.g. [@Central_theory]). Recently there has been interest in describing diffractive exclusive production of heavy scalar [@KMRS04; @PST07_chic] and pseudoscalar [@SPT06_etap] mesons in terms of off-diagonal unintegrated gluon distributions, which may provide insight into the related diffractive production mechanism of the Higgs boson ([@KMR97; @Saclay] and references therein). A purely hadronic mechanism for the exclusive production of $J/\Psi$ mesons in proton-proton and proton-antiproton collisions was suggested as a candidate in searches for yet another exotic object of QCD, the elusive odderon exchange [@SMN91; @BMSC07]. In order to identify the odderon exchange one has to consider all other possible processes leading to the same final channel, which, in the context of odderon searches, constitute an unwanted background. One such process (and perhaps the only one at the level of fully exclusive $J/\Psi$ production) is pomeron-photon or photon-pomeron fusion [@KMR_photon; @KN04; @BMSC07], which we study in this communication at a more detailed level than is available in the literature.
We feel that its role as a background for odderon searches warrants a more detailed analysis, including the energy dependence and differential distributions of the photoproduction mechanism in hadronic collisions. As will be discussed, the process considered here is also interesting in its own right. An important concern of our work is the treatment of absorption effects. More often than not, absorption effects are either completely ignored or included as a multiplicative reduction factor, which, as we shall show in this paper, is simply wrong for many observables (such as distributions in $t_1$ or $t_2$). We think this point deserves wider attention, as it often appears to be forgotten or ignored. We present a detailed analysis of several differential distributions in order to identify the absorption effects. We also put a special emphasis on interference phenomena. We will also discuss more subtle phenomena, such as the spin flip in the electromagnetic vertices and a charge asymmetry, by comparing differential distributions in proton-proton and proton-antiproton exclusive $J/\psi$ production. In this work, we do not include a possible odderon contribution. We wish to stress that the photoproduction mechanism of exclusive $J/\Psi$’s must exist without doubt, and does not die out as energy increases. A related purely hadronic (Odderon) contribution with the same properties has not been unambiguously identified in other experiments, and hence cannot be estimated in a model-independent way. While certain QCD-inspired toy models for CP-odd multigluon $t$-channel exchanges exist, they do not allow reliable calculations of hadronic amplitudes. In practice the magnitude of the corresponding Born amplitude depends strongly on the details of how gluons are treated in the nonperturbative domain, as well as on the modeling of the proton structure. Furthermore, the energy dependence of the full (beyond the Born approximation, and beyond perturbation theory) amplitude is unknown, and it cannot even be excluded that this contribution would vanish with rising energy. It is not the purpose of our paper to discuss such models further; rather, we think that in the search for an odderon one should take further initiative if substantial deviations from the more conservative physics discussed here are found. Amplitudes and cross sections ============================= $2 \to 3$ amplitude {#2_to_3} ------------------- Here we present the necessary formalism for the calculation of amplitudes and cross sections. The basic mechanisms are shown in Fig.\[fig:diagram\_photon\_pomeron\]. ![\[fig:diagram\_photon\_pomeron\] A sketch of the two mechanisms considered in the present paper: $\gamma \Pom$ (left) and $\Pom\gamma$ (right). Some kinematic variables are shown in addition.](Diagram_1.eps){width="\textwidth"} The distinctive feature, when compared to photoproduction in lepton–hadron collisions, is that now both participating hadrons can serve as the source of the photon, and it is necessary to take account of the interference between the two amplitudes. Due to the Coulomb singularities in the photon exchange parts of the amplitude, the electromagnetic vertices involve only very small, predominantly transverse momentum transfers. Their effect is fully quantified by the well–known electromagnetic Dirac– and Pauli form factors of the nucleon.
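The text quotes these form factors without writing them out. For a self-contained sketch, one conventional choice (an assumption on our part, not necessarily the parametrization used in the actual calculation) is the dipole form of the Sachs form factors, from which $F_1$ and $\kappa_p F_2$ follow:

```python
import numpy as np

MP = 0.938          # proton mass [GeV]
MU_P = 2.793        # proton magnetic moment (standard value)
KAPPA_P = MU_P - 1  # anomalous magnetic moment, ~1.79 as quoted in the text below

def G_dipole(Q2, L2=0.71):
    """Standard dipole form factor with scale 0.71 GeV^2 (assumed here)."""
    return 1.0 / (1.0 + Q2 / L2) ** 2

def F1(Q2):
    """Dirac form factor from the Sachs combination; F1(0) = 1."""
    tau = Q2 / (4.0 * MP**2)
    GE, GM = G_dipole(Q2), MU_P * G_dipole(Q2)
    return (GE + tau * GM) / (1.0 + tau)

def F2(Q2):
    """Pauli form factor normalized to F2(0) = 1, matching the kappa_p*F2 notation below."""
    tau = Q2 / (4.0 * MP**2)
    GE, GM = G_dipole(Q2), MU_P * G_dipole(Q2)
    return (GM - GE) / (KAPPA_P * (1.0 + tau))

print(F1(0.0), F2(0.0))   # -> 1.0, 1.0
```

At the very small $|t|$ that dominates the photon pole, $F_1 \approx 1$, which is why the spin-conserving part of the vertex dominates there.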
Regarding the photoproduction amplitude, we try to be as model independent as possible, and take advantage of the precise knowledge of diffractive vector meson production over a broad energy range available from experiments at HERA. The amplitude for the $2 \to 3$ process of Fig.\[fig:diagram\_photon\_pomeron\] can be decomposed as $$\begin{aligned} {\cal M}_{h_1 h_2 \to h_1 h_2 V}^ {\lambda_1 \lambda_2 \to \lambda'_1 \lambda'_2 \lambda_V}(s,s_1,s_2,t_1,t_2) &&= {\cal M}_{\gamma \Pom} + {\cal M}_{\Pom \gamma} \nonumber \\ &&= {\langle {p_1', \lambda_1'} |} J_\mu {| {p_1, \lambda_1} \rangle} \epsilon_{\mu}^*(q_1,\lambda_V) {\sqrt{ 4 \pi \alpha_{em}} \over t_1} {\cal M}_{\gamma^* h_2 \to V h_2}^{\lambda_{\gamma^*} \lambda_2 \to \lambda_V \lambda_2} (s_2,t_2,Q_1^2) \nonumber \\ && + {\langle {p_2', \lambda_2'} |} J_\mu {| {p_2, \lambda_2} \rangle} \epsilon_{\mu}^*(q_2,\lambda_V) {\sqrt{ 4 \pi \alpha_{em}} \over t_2} {\cal M}_{\gamma^* h_1 \to V h_1}^{\lambda_{\gamma^*} \lambda_1 \to \lambda_V \lambda_1} (s_1,t_1,Q_2^2) \, . \nonumber \\ \label{Two_to_Three}\end{aligned}$$ The outgoing protons lose only tiny fractions $z_1,z_2 \ll 1$ of their longitudinal momenta. In terms of their transverse momenta ${\mbox{\boldmath $p$}}_{1,2}$ the relevant four–momentum transfers squared are $t_i = - ({\mbox{\boldmath $p$}}_i^2 + z_i^2 m_p^2)/(1-z_i) \, , i = 1,2$, and $s_1 \approx (1 -z_2) s$ and $s_2 \approx (1-z_1) s$ are the familiar Mandelstam variables for the appropriate subsystems. Due to the smallness of the photon virtualities, denoted by $Q_i^2 = -t_i$, [^1] it is justified to neglect the contribution from longitudinal photons; recall that $\sigma_L(\gamma^* p \to J/\Psi p) /\sigma_T(\gamma^* p \to J/\Psi p) \propto Q^2/m_{J/\Psi}^2$ [@H1_JPsi; @INS06]. Then, the amplitude for emission of a photon of transverse polarization $\lambda_V$ and transverse momentum ${\mbox{\boldmath $q$}}_1 = - {\mbox{\boldmath $p$}}_1$, entering eq.(\[Two\_to\_Three\]), reads: $$\begin{aligned} {\langle {p_1', \lambda_1'} |} J_\mu {| {p_1, \lambda_1} \rangle} \epsilon_{\mu}^*(q_1,\lambda_V) &&= { ({\mbox{\boldmath $e$}}^{*(\lambda_V)} {\mbox{\boldmath $q$}}_1) \over \sqrt{1-z_1}} \, {2 \over z_1} \, \chi^\dagger_{\lambda'} \Big\{ F_1(Q_1^2) - {i \kappa_p F_2(Q_1^2) \over 2 m_p} ( {\mbox{\boldmath $\sigma$}}_1 \cdot [{\mbox{\boldmath $q$}}_1,{\mbox{\boldmath $n$}}]) \Big\} \chi_\lambda \, , \nonumber \\\end{aligned}$$ where ${\mbox{\boldmath $e$}}^{(\lambda)} = -(\lambda {\mbox{\boldmath $e$}}_x + i {\mbox{\boldmath $e$}}_y)/\sqrt{2}$, ${\mbox{\boldmath $n$}}|| {\mbox{\boldmath $e$}}_z$ denotes the collision axis, ${\mbox{\boldmath $\sigma$}}_1/2$ is the spin operator for nucleon $1$, and $\chi_\lambda$ is its spinor. $F_1$ and $F_2$ are the Dirac and Pauli electromagnetic form factors, respectively. Here we have given only the part of the current which gives rise to the logarithmic $dz/z$ longitudinal momentum spectrum of photons and which dominates in the high–energy kinematics considered here. It is worthwhile to recall that, for a massive fermion, this current includes a spin–flip contribution originating from its anomalous magnetic moment, $\kappa_p = 1.79$. Notice that this contribution is suppressed at small transverse momenta. The parametrization of the photoproduction amplitude used in our practical calculations can be found in the Appendix.
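To make these ingredients concrete, below is a minimal, purely illustrative Python sketch of the kinematic relations just quoted together with the photoproduction parametrization of eq.(\[H1\_cross\_section\]) from the Appendix (the numerical fit values are the ones listed there); it is a numerical paraphrase, not the authors' code:

```python
import numpy as np

MP = 0.938      # proton mass [GeV]
MJPSI = 3.097   # J/psi mass [GeV]

def t_of(pt, z):
    """Four-momentum transfer squared, t_i = -(p_i^2 + z_i^2 m_p^2)/(1 - z_i)."""
    return -(pt**2 + z**2 * MP**2) / (1.0 - z)

# Appendix fit parameters of the H1 parametrization for gamma* p -> J/psi p
DSDT0 = 326.0                   # nb/GeV^2, d(sigma)/dt at t = 0, W = W0
W0 = 95.0                       # GeV
B0 = 4.63                       # GeV^-2
ALPHA0, ALPHAP = 1.224, 0.164   # effective trajectory alpha(t) = ALPHA0 + ALPHAP * t
N = 2.486

def dsigma_dt(W, t, Q2=0.0):
    """d(sigma)/dt [nb/GeV^2] of gamma* p -> J/psi p, eq. (H1_cross_section)."""
    alpha_t = ALPHA0 + ALPHAP * t
    return (DSDT0 * (W / W0) ** (4.0 * (alpha_t - 1.0)) * np.exp(B0 * t)
            * (MJPSI**2 / (MJPSI**2 + Q2)) ** N)

# a proton losing z = 1e-3 with pt = 0.05 GeV emits a quasi-real photon (Q^2 ~ 2.5e-3 GeV^2),
# while the forward photoproduction cross section keeps rising with W
Q2 = -t_of(0.05, 1.0e-3)
for W in (95.0, 300.0, 1000.0):
    print(W, Q2, dsigma_dt(W, t=-0.1, Q2=Q2))
```

The tiny $Q^2$ values generated this way illustrate why the longitudinal-photon contribution can safely be neglected.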
Above we already used the assumption of $s$–channel–helicity conservation in the $\gamma^* \to J/\Psi$ transition, which for heavy vector mesons is indeed well justified by experiment [^2] [@ZEUS_JPsi; @H1_JPsi; @INS06]. In summary we present the $2 \to 3$ amplitude in the form of a 2–dimensional vector as $$\begin{aligned} {\mbox{\boldmath $M$}}({\mbox{\boldmath $p$}}_1,{\mbox{\boldmath $p$}}_2) &&= e_1 {2 \over z_1} {{\mbox{\boldmath $p$}}_1 \over t_1} {\cal{F}}_{\lambda_1' \lambda_1}({\mbox{\boldmath $p$}}_1,t_1) {\cal {M}}_{\gamma^* h_2 \to V h_2}(s_2,t_2,Q_1^2) \nonumber \\ && + e_2 {2 \over z_2} {{\mbox{\boldmath $p$}}_2 \over t_2} {\cal{F}}_{\lambda_2' \lambda_2}({\mbox{\boldmath $p$}}_2,t_2) {\cal {M}}_{\gamma^* h_1 \to V h_1}(s_1,t_1,Q_2^2) \, . \end{aligned}$$ The differential cross section of interest is given in terms of ${\mbox{\boldmath $M$}}$ as $$d \sigma = { 1 \over 512 \pi^4 s^2 } | {\mbox{\boldmath $M$}}|^2 \, dy dt_1 dt_2 d\phi \, ,$$ where $y \approx \log(z_1 \sqrt{s}/m_{J/\Psi})$ is the rapidity of the vector meson, and $\phi$ is the angle between ${\mbox{\boldmath $p$}}_1$ and ${\mbox{\boldmath $p$}}_2$. Notice that the interference between the two mechanisms $\gamma \Pom$ and $\Pom \gamma$ is proportional to $e_1 e_2 ({\mbox{\boldmath $p$}}_1 \cdot {\mbox{\boldmath $p$}}_2)$ and introduces a charge asymmetry as well as an angular correlation between the outgoing protons. Clearly, the interference cancels out after integrating over $\phi$, and the so integrated distributions will coincide for $pp$ and $p\bar{p}$ collisions. Absorptive corrections {#Absorption} ---------------------- We still need to correct for a major omission in our description of the production amplitude. Consider for example a rest frame of the proton 2 (the target) of the left panel of Fig \[fig:diagram\_photon\_pomeron\]. Here, the virtual photon may be viewed as a parton of proton 1 (the beam), separated from it by a large distance in impact parameter space. It splits into its $c\bar{c}$–Fock component at a large longitudinal distance before the target, and to obtain the sought for production amplitude we project the elastically scattered $c\bar{c}$ system onto the desired $J/\Psi$–final state [@Kolya_CT]. We entirely neglected the possibility [@Bjorken] that the photon’s spectator partons might participate in the interaction, and destroy the rapidity gap(s) in the final state. Stated differently, for the diffractive final state of interest, spectator interactions do not cancel, and will affect the cross section. As a QCD–mechanism consider the interaction of a $\{c\bar{c}\}_1 \{qqq\}_1$–beam system with the target by multiple gluon exchanges (see Fig \[fig:multigluon\] ). ![ Left: the QCD two–gluon exchange mechanism for the Born–level amplitude. Right: a possible multigluon–exchange contribution that involves uncancelled spectator interactions. The impact parameters relevant for the discussion are indicated.[]{data-label="fig:multigluon"}](Diagram_3.eps){width="\textwidth"} Then, for the $J/\Psi$ final state of interest, the interaction of the $c \bar{c}$–color singlet state is dominated by small dipole sizes $r_s \sim 4/m_{J /\Psi}$ (the scanning radius of [@Kolya_VM]). It can be exhausted by the minimal two–gluon color–singlet exchange, and will be quantified by the color–dipole cross section $\sigma({\mbox{\boldmath $r$}})$ [@Kolya_CT; @NZ_91], respectively its non–forward generalization [@NNPZZ; @INS_Saturation]. 
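The text leaves the dipole cross section $\sigma({\mbox{\boldmath $r$}})$ generic. Purely for orientation, and as an assumption on our part rather than the choice made in the text, a widely used saturation-model parametrization (GBW-type, with approximate fit values) can be sketched as follows:

```python
import numpy as np

SIGMA0 = 23.0    # mb, approximate normalization (assumed, illustrative value)
X0 = 3.0e-4      # approximate (assumed)
LAM = 0.29       # approximate (assumed)

def Qs2(x):
    """Saturation scale squared [GeV^2], Qs^2 = (x0/x)^lambda with Q0 = 1 GeV."""
    return (X0 / x) ** LAM

def sigma_dipole(r, x):
    """Dipole cross section [mb] for a dipole of size r [GeV^-1] at Bjorken x."""
    return SIGMA0 * (1.0 - np.exp(-r**2 * Qs2(x) / 4.0))

# the 'scanning radius' r_s ~ 4/m_J/psi ~ 1.3 GeV^-1 stays far below saturation at moderate x
print(sigma_dipole(4.0 / 3.1, 1.0e-3))   # a few mb
```

The smallness of $\sigma(r_s)$ compared with a typical hadronic cross section is what makes the two-gluon (single-scattering) approximation reasonable for the $c\bar{c}$ pair, whereas the $\{qqq\}$ spectator system interacts with ordinary hadronic strength; this asymmetry is exploited in the absorptive corrections discussed next.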
Let ${\mbox{\boldmath $b$}}_V$ be the tranverse separation of the $J/\Psi$ and the target, and ${\mbox{\boldmath $r$}}$ the size of the $c\bar{c}$–dipole as shown in Fig \[fig:multigluon\]. Then, the $2 \to 3$ amplitude of section \[2\_to\_3\] will involve, besides the vertex for the $p \to \gamma p$ transition, the expectation value ${\langle {J/\Psi} |} \Gamma^{(0)}({\mbox{\boldmath $r$}},{\mbox{\boldmath $b$}}_V) {| {\gamma} \rangle}$ of $$\Gamma^{(0)}({\mbox{\boldmath $r$}}, {\mbox{\boldmath $b$}}_V ) = { 1\over 2} \, \sigma({\mbox{\boldmath $r$}}) \, t_N({\mbox{\boldmath $b$}}_V, B) \, ,$$ where $t_N({\mbox{\boldmath $b$}},B) = \exp(-{{\mbox{\boldmath $b$}}^2/2B } ) / (2 \pi B ) $ is an optical density of the target. Systematic account for the spectator interactions in QCD however is a difficult problem, as one cannot rely on the Abramovsky-Gribov-Kancheli (AGK) [@AGK] cutting rules, when due account for color is taken [@NS_AGK]. To obtain at least a qualitative account of absorptive corrections we restrict ourselves to only a subclass of absorptive corrections, the ’diffractive cut’, which contribution is model independent [@Gribov]. Regarding our $\{c\bar{c}\}_1 \{qqq\}_1$–system, with ${\mbox{\boldmath $b$}}_i$ denoting the constituent quarks’ impact parameters, and ${\mbox{\boldmath $b$}}$ the impact parameter of the beam proton, the absorbed amplitude in impact parameter space will contain $$\begin{aligned} \Gamma({\mbox{\boldmath $r$}}, {\mbox{\boldmath $b$}}_V,{\mbox{\boldmath $b$}}) &&= { 1\over 2} \, \sigma({\mbox{\boldmath $r$}}) \, t_N({\mbox{\boldmath $b$}}_V, B) - {1 \over 4} \sigma({\mbox{\boldmath $r$}}) \sigma_{qqq}(\{{\mbox{\boldmath $b$}}_i\}) t_N({\mbox{\boldmath $b$}}_V, B) t_N({\mbox{\boldmath $b$}},B_{el}) \nonumber \\ &&= \Gamma^{(0)}({\mbox{\boldmath $r$}}, {\mbox{\boldmath $b$}}_V ) \Big( 1 - {{1\over 2}}\sigma_{qqq}(\{{\mbox{\boldmath $b$}}_i\}) t_N({\mbox{\boldmath $b$}}, B_{el}) \Big) \to \Gamma^{(0)}({\mbox{\boldmath $r$}}, {\mbox{\boldmath $b$}}_V ) \cdot S_{el} ({\mbox{\boldmath $b$}}) \, .\end{aligned}$$ In effect, we merely multiply the Born–level amplitude $\Gamma^{(0)}$ by the probability amplitude for beam and target to pass through each other without inelastic interaction. In momentum space, we obtain the absorbed amplitude as $$\begin{aligned} {\mbox{\boldmath $M$}}({\mbox{\boldmath $p$}}_1,{\mbox{\boldmath $p$}}_2) &&= \int{d^2 {\mbox{\boldmath $k$}}\over (2 \pi)^2} \, S_{el}({\mbox{\boldmath $k$}}) \, {\mbox{\boldmath $M$}}^{(0)}({\mbox{\boldmath $p$}}_1 - {\mbox{\boldmath $k$}}, {\mbox{\boldmath $p$}}_2 + {\mbox{\boldmath $k$}}) \nonumber \\ &&= {\mbox{\boldmath $M$}}^{(0)}({\mbox{\boldmath $p$}}_1,{\mbox{\boldmath $p$}}_2) - \delta {\mbox{\boldmath $M$}}({\mbox{\boldmath $p$}}_1,{\mbox{\boldmath $p$}}_2) \, , \label{rescattering_term}\end{aligned}$$ and with $$S_{el}({\mbox{\boldmath $k$}}) = (2 \pi)^2 \delta^{(2)}({\mbox{\boldmath $k$}}) - {{1\over 2}}T({\mbox{\boldmath $k$}}) \, \, \, , \, \, \, T({\mbox{\boldmath $k$}}) = \sigma^{pp}_{tot}(s) \, \exp\Big(-{{1\over 2}}B_{el} {\mbox{\boldmath $k$}}^2 \Big) \, ,$$ the absorptive correction $\delta {\mbox{\boldmath $M$}}$ reads [^3] $$\begin{aligned} \delta {\mbox{\boldmath $M$}}({\mbox{\boldmath $p$}}_1,{\mbox{\boldmath $p$}}_2) = \int {d^2{\mbox{\boldmath $k$}}\over 2 (2\pi)^2} \, T({\mbox{\boldmath $k$}}) \, {\mbox{\boldmath $M$}}^{(0)}({\mbox{\boldmath $p$}}_1-{\mbox{\boldmath $k$}},{\mbox{\boldmath $p$}}_2+{\mbox{\boldmath $k$}}) \, . 
\label{absorptive_corr}\end{aligned}$$ A number of improvements on this result can be expected to be relevant. Firstly, a more consistent microscopic treatment of spectator interactions along the lines of [@NS_AGK] would be desirable. Experience from hadronic phenomenology [@KNP] suggests that at Tevatron energies the purely elastic rescattering taken into account by eq.(\[absorptive\_corr\]) is insufficient, and inelastic screening corrections will, in a crude estimate, lead to an enhancement of absorptive corrections by a factor $\lambda \sim (\sigma_{el} + \sigma_{D})/\sigma_{el}$ [@TerM]. Here $\sigma_D = 2 \sigma_{SD} + \sigma_{DD}$, and $\sigma_{SD} = \sigma(pp \to pX) \, , \, \sigma_{DD} = \sigma(pp \to XY)$ are the cross sections for single– and double–diffractive processes, respectively. Secondly, the $\gamma p \to J/\Psi p$ production amplitude itself will also be affected by unitarity corrections. For example, with increasing rapidity gap $\Delta y$ between the $J/\Psi$ and the target, one should account for additional $s$–channel gluons, and for sufficiently dense multiparton systems the two-gluon exchange approximation for the $\gamma \to J/\Psi$ transition used above ultimately becomes inadequate. For relevant discussions of unitarity/saturation effects in diffractive $J/\Psi$–production, see [@Saturation]; the scaling properties of vector meson production in the presence of a large saturation scale are found in [@INS_Saturation]. In our present approach, where the production amplitude is taken essentially from experiment, one must content oneself with the fact that (some) saturation effects are effectively contained in our parametrization, and any extrapolation beyond the energy domain covered by data must be taken with great caution. ![\[fig:diagram\_rescattering\] A sketch of the elastic rescattering amplitudes effectively taken into account by eq.(\[rescattering\_term\]). ](Diagram_2.eps){width="\textwidth"} Results ======= In this section we shall present results for differential cross sections for $J/\Psi$ production. We shall concentrate on the Tevatron energy $W=1960 \textrm{GeV}$, where such a measurement might be possible even at present. While in this paper we concentrate on the fully exclusive processes $pp \to pp J/\Psi$ and $p\bar p \to p \bar p J/\Psi$, it is important to realize that, from an experimental point of view, there are additional contributions related to the exclusive production of $\chi_c$ mesons and their subsequent radiative decays to $J/\Psi \gamma$. It may be difficult to measure/resolve the soft decay photons, and therefore experimentally this contribution may be seen as exclusive production of $J/\psi$. We note in this context that besides the scalar $\chi_c(0^{++})$ meson, whose exclusive production has been discussed in the literature (e.g. [@KMRS04] and references therein), the axial–vector and tensor states $\chi_c(1^{++})$ and $\chi_c(2^{++})$ have larger branching fractions into the relevant $J/\Psi \gamma$ channel. Although their exclusive production cross sections can be expected to be suppressed at low transverse momenta [@KMRS04; @Yuan], a more detailed numerical analysis does not exist in the literature and would clearly go beyond the scope of the present paper. Distributions of $J/\psi$ ------------------------- ![\[fig:dsig\_dy\] $d \sigma / dy$ as a function of the $J/\Psi$ rapidity ($y$) for W = 1960 GeV.
For a better understanding of the results we also show (dashed lines) the subsystem energies $W_{1V}=\sqrt{s_1}$ and $W_{2V}=\sqrt{s_2}$ in GeV. ](dsig_dy_w1960.eps){width="50.00000%"} Let us start with the rapidity distribution of the $J/\Psi$ shown in Fig.\[fig:dsig\_dy\]. In the figure we also present the subsystem energies $\sqrt{s_1},\sqrt{s_2}$. At $|y| > 3$ the energies of the $\gamma p \to J/\psi p$ or $\gamma \bar p \to J/\psi \bar p$ subprocesses exceed the energy range explored at HERA. This may open the possibility of studying $J/\Psi$ photoproduction at the Tevatron. This is interesting by itself and requires further detailed studies. In turn, this means that our estimate of the cross section far from the midrapidity region requires extrapolations above the measured energy domain. In Fig.\[fig:dsig\_dy\_energy\] we collect rapidity distributions for different energies relevant for RHIC, the Tevatron and the LHC. We observe the occurrence of a small dip in the distribution at midrapidity at the LHC energy. The shape of the rapidity distribution at LHC energies, however, relies precisely on the above-mentioned extrapolation of the parametrization of HERA data to higher energies. Clearly, a real experiment at the Tevatron and the LHC would help to constrain the cross section for the $\gamma p \to J/\psi p$ process. ![ \[fig:dsig\_dy\_energy\] $d \sigma / dy$ for exclusive $J/\psi$ production as a function of $y$ for RHIC, Tevatron and LHC energies. No absorption corrections were included here.](dsig_dy_energy.eps){width="50.00000%"} In order to understand the origin of the small dip at midrapidity at the LHC energy, in Fig.\[fig:dsig\_dy\_deco\] we show separately the contributions of the two components ($\gamma \Pom$, $\Pom \gamma$ exchange) for the Tevatron (left) and the LHC (right). We see that at the LHC energy the two components become better separated in rapidity. This reflects the strong rise of the $J/\Psi$ photoproduction cross section with energy, which can be expected to slow down with increasing energy. Notice that the beam hadron $h_1$ moves along positive rapidities, so that, for example, for the mechanism $\gamma\Pom$ it is the pomeron exchange which ’propagates’ over the larger distance in rapidity space. It would be interesting to confront our present simple predictions with the predictions of the approach which uses unintegrated gluon distributions – objects which are/were tested in other high-energy processes. This will be a subject of our forthcoming studies. ![ \[fig:dsig\_dy\_deco\] $d \sigma / dy$ as a function of $y$ for the Tevatron and LHC energies. Individual processes are shown separately. Notice that the beam hadron $h_1$ of Fig. \[fig:diagram\_photon\_pomeron\] moves at positive rapidities. No absorption corrections were included here.](dsig_dy_w1960_deco.eps "fig:"){width="40.00000%"} ![ \[fig:dsig\_dy\_deco\] $d \sigma / dy$ as a function of $y$ for the Tevatron and LHC energies. Individual processes are shown separately. Notice that the beam hadron $h_1$ of Fig. \[fig:diagram\_photon\_pomeron\] moves at positive rapidities. No absorption corrections were included here.](dsig_dy_w14000_deco.eps "fig:"){width="40.00000%"} Up to now we have not taken into account any restrictions on $t_1$ and/or $t_2$. In practice, it can be necessary to impose upper cuts on the transferred momenta squared. It is also interesting, in the context of searches for the odderon, to see how quickly the cross section for the “background” drops with $t_1$ and $t_2$.
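Before turning to the effect of such cuts, a brief numerical aside makes the subsystem energies quoted above concrete. The relation below, $W_{\gamma p}^2 \simeq m_{J/\psi}\sqrt{s}\,e^{\pm y}$, is the standard approximate photoproduction kinematics; it is our paraphrase rather than a formula written in the text, but it reproduces the energy ranges quoted later in the Conclusions:

```python
import numpy as np

MJPSI = 3.097   # GeV

def W_gamma_p(y, sqrt_s):
    """Approximate photon-proton c.m. energy for a J/psi at rapidity y ('+' branch)."""
    return np.sqrt(MJPSI * sqrt_s * np.exp(y))

for sqrt_s in (1960.0, 14000.0):   # Tevatron, LHC
    print(sqrt_s, [round(float(W_gamma_p(y, sqrt_s))) for y in (0.0, 3.0, 6.0)])
```

For $\sqrt{s} = 1960$ GeV this gives roughly 78, 350 and 1570 GeV at $y = 0$, 3 and 6, so beyond $|y| \approx 3$ the subprocess energy indeed leaves the range explored at HERA.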
In Fig.\[fig:dsig\_dy\_tcut\] we show the distribution in $J/\psi$ rapidity for different cuts on $t_1$ and $t_2$. Clearly, imposing a cut on $t_1$ and $t_2$ removes the photon-pole contribution dominant at small momentum transfers. Even a relatively small cut lowers the cross section considerably, and the drop of the cross section is much faster than for the pomeron-odderon exchanges [@BMSC07]. Imposing upper cuts on $t_1$ and $t_2$ will therefore help considerably to obtain a possible “odderon-enriched” sample. ![ \[fig:dsig\_dy\_tcut\] $d \sigma / dy$ as a function of $y$ for the Tevatron energy and different upper cuts on $t_1$ and $t_2$: $t_{cut}$ = 0.0 GeV$^2$ (solid), $t_{cut} = -0.05$ GeV$^2$ (thin solid), $t_{cut} = -0.1$ GeV$^2$ (dash-dotted), and $t_{cut} = -0.2$ GeV$^2$ (dashed).](dsig_dy_cutont.eps){width=".5\textwidth"} We wish to repeat here that without absorption effects the rapidity distributions of $J/\psi$ in proton-proton and proton-antiproton collisions are identical, $$\frac{d\sigma (pp \to pp J/\Psi, W)}{dy} = \frac{d\sigma (p \bar p \to p \bar p J/\Psi, W)}{dy} \label{dsig_dy_pp_vs_ppbar}$$ It is interesting to stress in this context that this is not the case for the transverse momentum distribution of $J/\psi$, where $$\frac{d\sigma(pp \to pp J/\Psi,W)}{d^2{\mbox{\boldmath $p$}}_{V}} \ne \frac{d\sigma (p\bar p \to p \bar p J/\Psi,W)}{d^2{\mbox{\boldmath $p$}}_{V}} \; .$$ This is demonstrated in Fig.\[fig:dsig\_dpvt2\], where we see that, at small transverse momenta of the vector meson, the interference enhances the cross section in $pp$ collisions and depletes it in $p\bar p$ collisions. It is a distinctive feature of the mechanism discussed here that vector mesons are produced with very small transverse momenta. The difference between proton-antiproton and proton-proton collisions survives even at large rapidities of $J/\psi$. When integrated over the $J/\psi$ transverse momentum, and in the absence of absorptive corrections, the cross sections will again be identical in the $pp$ and $p \bar p$ cases. ![ The distribution $d\sigma/d {\mbox{\boldmath $p$}}_{V}^2$ of $J/\psi$ as a function of $J/\psi$ transverse momentum for different intervals of rapidity: $-0.5 < y < 0.5$ (left panel) and $3.5 < y < 4.5$ (right panel) at W = 1960 GeV. The result for $p \bar p$ collisions is shown by the solid line and the result for $p p$ collisions by the dashed line. No absorption corrections were included here. \[fig:dsig\_dpvt2\]](dsig_dpvt2_y_m05_p05.eps "fig:"){width="40.00000%"} ![ The distribution $d\sigma/d {\mbox{\boldmath $p$}}_{V}^2$ of $J/\psi$ as a function of $J/\psi$ transverse momentum for different intervals of rapidity: $-0.5 < y < 0.5$ (left panel) and $3.5 < y < 4.5$ (right panel) at W = 1960 GeV. The result for $p \bar p$ collisions is shown by the solid line and the result for $p p$ collisions by the dashed line. No absorption corrections were included here. \[fig:dsig\_dpvt2\]](dsig_dpvt2_y_p35_p45.eps "fig:"){width="40.00000%"} Distributions of (anti–)protons ------------------------------- Now we shall proceed to distributions related to (anti–)protons. In Fig.\[fig:dsig\_dt12\] we show distributions in the transferred momenta squared (identical for $t_1$ and $t_2$). We show separately the contributions of $\gamma\Pom$ and $\Pom\gamma$ exchanges. The figures clearly display the strong photon–pole enhancement at very small $t$. ![ \[fig:dsig\_dt12\] $d \sigma / dt_{1/2}$ as a function of $t_{1/2}$ for W = 1960 GeV. The photon-exchange (dashed) and pomeron-exchange (dotted) contributions are shown in addition. No absorption corrections were included.](dsig_dt_w1960.eps){width="40.00000%"} In order to better understand the distributions in $t_1$ or $t_2$, in Fig.\[fig:map\_t1t2\] we show how $t_1$ and $t_2$ are correlated. Here we do not make any restrictions on the rapidity range. The significant enhancements of the cross section in the form of ridges along $t_1 \sim$ 0 and $t_2 \sim$ 0 are again due to the massless photon–exchange, and most of the integrated cross section comes from these regions. The pomeron-odderon and odderon-pomeron exchange contributions considered in Ref.[@BMSC07] would not exhibit such significant local enhancements and would be smeared over a broader range in the $(t_1,t_2)$ space. Therefore, in dedicated searches for the odderon exchange, upper cuts on $t_1$ and $t_2$ should be imposed, and $t_{upper}= -0.2$ GeV$^2$ seems to be a good choice. ![ Two-dimensional distribution in $t_1$ and $t_2$ for the Tevatron energy W = 1960 GeV. In this calculation a full range of the $J/\psi$ rapidities was included. No absorption corrections were included here.[]{data-label="fig:map_t1t2"}](map_t1t2_w1960_fully.eps){width="45.00000%"} We repeat that the reaction considered in this paper leads to azimuthal correlations between the outgoing proton and antiproton. In Fig.\[fig:dsig\_dphi\_w1960\] we show the corresponding angular distribution for proton-antiproton collisions (solid line). For reference we also show, by the dotted line, the incoherent sum of the $\gamma \Pom$ and $\Pom \gamma$ mechanisms. The distribution for proton-proton collisions (dashed line) is shown for comparison, also at the Tevatron energy. Clearly the interference terms in both reactions are in opposite phase due to the different electric charges of the proton and antiproton. In the absence of absorptive corrections, we have $$\frac{d \sigma}{d \phi} = A \pm B \cos\phi$$ for $p p$ (+) and $p \bar p$ (-) collisions respectively. The interference effect (B/A) here is at the level of $\sim$ 40–50 %. ![ \[fig:dsig\_dphi\_w1960\] $d \sigma / d\phi$ as a function of $\phi$ for W = 1960 GeV. The solid line corresponds to a coherent sum of amplitudes whereas the dashed line to the incoherent sum of both processes. No absorption corrections were included here.](dsig_dphi_w1960.eps){width="50.00000%"} In Fig.\[fig:map\_yphi.eps\] we show the two-dimensional distributions differentially in both rapidity and azimuthal angle. Interestingly, the interference effect is significant over a broad range of $J/\psi$ rapidity, which is reflected in the fact that even at large $J/\psi$ rapidities one observes anisotropic distributions in the azimuthal angle. ![ \[fig:map\_yphi.eps\] $d \sigma / dy d \phi$ for W = 1960 GeV and for $p \bar p$ (left panel) and $p p$ (right panel) collisions. No absorption corrections were included here.](map_yphi_ppbar.eps "fig:"){width="45.00000%"} ![ \[fig:map\_yphi.eps\] $d \sigma / dy d \phi$ for W = 1960 GeV and for $p \bar p$ (left panel) and $p p$ (right panel) collisions. No absorption corrections were included here.](map_yphi_pp.eps "fig:"){width="45.00000%"} Up to now we have considered only spin-preserving contributions. Now we wish to show the effect of the electromagnetic spin-flip discussed in the previous section. In Fig.\[fig:sftosp\] we show the ratio of the helicity-flip to the helicity-preserving contribution. The ratio is a rather flat function of $t_1$ and $t_2$.
At $t_1 = -1$ GeV$^2$ and $t_2 = -1$ GeV$^2$ the ratio reaches about 0.4. ![ \[fig:sftosp\] The ratio of helicity-flip to helicity-preserving contribution as a function of $t_1$ and $t_2$.](map_t1t2_sftosp.eps){width="50.00000%"} Absorption effects ------------------ Now we will show the effect of absorptive corrections discussed in section \[Absorption\] on various differential distributions. ![ \[fig:M2\_phi\_w1960\] Fully differential cross section $d \sigma/dy dt_1 dt_2 d \phi$ as a function of $\phi$ for $y$=0 and different combinations of $t_1 = t_2 \, (-0.1,-0.3,-0.5$ GeV$^2$ (from top to bottom)) for $p \bar p$ (left panel) and $p p$ (right panel) reactions. The solid lines include rescattering, while the dashed lines correspond the Born–level mechanism only. ](mat2_phi_ppbar_diagonal.eps "fig:"){width="45.00000%"} ![ \[fig:M2\_phi\_w1960\] Fully differential cross section $d \sigma/dy dt_1 dt_2 d \phi$ as a function of $\phi$ for $y$=0 and different combinations of $t_1 = t_2 \, (-0.1,-0.3,-0.5$ GeV$^2$ (from top to bottom)) for $p \bar p$ (left panel) and $p p$ (right panel) reactions. The solid lines include rescattering, while the dashed lines correspond the Born–level mechanism only. ](mat2_phi_pp_diagonal.eps "fig:"){width="45.00000%"} ![ \[fig:M2\_t1t2\_w1960\] The fully differential cross section $d \sigma/dy dt_1 dt_2 d \phi$ as a function of $t_1$ and $t_2$ for $y$=0 and $\phi = \pi/2$ for $p \bar p$ (left panel) and $p p$ (right panel) reactions. ](diff_t1t2_ppbar_pi2.eps "fig:"){width="40.00000%"} ![ \[fig:M2\_t1t2\_w1960\] The fully differential cross section $d \sigma/dy dt_1 dt_2 d \phi$ as a function of $t_1$ and $t_2$ for $y$=0 and $\phi = \pi/2$ for $p \bar p$ (left panel) and $p p$ (right panel) reactions. ](diff_t1t2_pp_pi2.eps "fig:"){width="40.00000%"} Let us start from the presentation of the effects of absorption for selected points in phase space. In Fig.\[fig:M2\_phi\_w1960\] we show the fully differential cross section $d \sigma/dy dt_1 dt_2 d \phi$ as a function of $\phi$ for selected (fixed) values of $t_1$, $t_2$ and for $y$ = 0. We show results for $p \bar p$ (left panel) and $pp$ (right panel) collisions for the same center-of-mass energy $W$ = 1960 GeV. While at smaller $t_{1,2}$ we observe a smooth reduction of the Born–level result, absorptive corrections induce a strong $\phi$–dependence at larger $t_{1,2}$. The positions of the diffractive minima which appear as a consequence of cancellations of the Born and rescattering amplitudes move with the value of $t \equiv t_1 = t_2$. In Fig.\[fig:M2\_t1t2\_w1960\] we present fully differential cross section as a function of $t_1$ and $t_2$ for $y$=0 and $\phi= \pi/2$. For proton-proton scattering we observe clearly a diffractive minimum for $t_1 = t_2 \approx$ 0.4 GeV$^2$. In Fig. \[fig:dsig\_scan\] we show the fully differential cross section as a function of the transverse momentum squared ${\mbox{\boldmath $p$}}_1^2$ at a fixed value ${\mbox{\boldmath $p$}}_2^2 = 1 $GeV$^2$ at rapidity $y = 0$. The rich structure as a function of transverse momenta and azimuthal angles is also revealed by this plot. The plots in Fig.\[fig:dsig\_scan\_absorption\] give an idea, to which extent the diffractive dip–bump structure depends on the details of our treatment of absorption. Here we show, by the dotted line, the cross section calculated for the Born–level amplitude. 
The solid line shows the result with elastic scattering included, and for the dashed and dash–dotted lines we enhanced the rescattering amplitude $T$ of section \[Absorption\] by a factor $\lambda = 1.2$ and $\lambda = 1.5$ respectively. The region of very small ${\mbox{\boldmath $p$}}_1^2$ is entirely insensitive to rescattering, reflecting the ultraperipheral nature of photon exchange. The diffractive dip–bump structure, situated at larger transverse momenta reveals a dependence on the strength of absorptive corrections. This concerns the position of dips as well as the strength of the cross section in various windows of phase space. The sensitivity to rescattering is however washed out in integrated observables – clearly the contribution from low transverse momenta, which is not strongly affected by rescattering, is large. This becomes apparent in Figs. \[fig:ratio\_absorption\_t1t2\], \[fig:dsig\_dt\_absorption\] and \[fig:ratio\_pv\_abs\]. ![ \[fig:ratio\_absorption\_t1t2\] The ratio of the cross sections with absorption to that without absorption for $p \bar p$ (left panel) and $p p$ (right panel) scattering. Here the integration over $y$ and $\phi$ was performed. ](map_t1t2_ratio_ppbar.eps "fig:"){width="40.00000%"} ![ \[fig:ratio\_absorption\_t1t2\] The ratio of the cross sections with absorption to that without absorption for $p \bar p$ (left panel) and $p p$ (right panel) scattering. Here the integration over $y$ and $\phi$ was performed. ](map_t1t2_ratio_pp.eps "fig:"){width="40.00000%"} ![ \[fig:dsig\_dt\_absorption\] Differential cross section $d \sigma/dt$, integrated over $-1<y<1$ for $p\bar p$ collisions. Dashed line: no absorptive corrections. Solid lines show the cross section with absorptive corrections included. Upper solid line: purely elastic rescattering. Lower solid line: rescattering enhanced by a factor $\lambda = 1.5$ in the amplitude. The dotted lines show the respective ratios $d \sigma^{Born + Rescatt.}/ d\sigma^{Born}$. ](dsig_dt_absorption.eps){width="40.00000%"} ![ \[fig:ratio\_pv\_abs\] The suppression factor $<S^2({\mbox{\boldmath $p$}}_V^2)>$ at $y=0$ as a function of the $J/\Psi$ transverse momentum, for $p\bar p$ (left panel) and $pp$ (right panel) collisions at $W = 1960$ GeV. ](ratio_pv_abs.eps){width="\textwidth"} In Fig.\[fig:ratio\_absorption\_t1t2\] we show the ratio of the cross section with absorption to that without absorption as a function of $t_1$ and $t_2$, again for $p \bar p$ (left) and $p p$ (right). As a consequence of averaging over different phase-space configurations the diffractive minima disappeared. Again the plot reflects, that due to the ultraperipheral nature of the photon–exchange mechanism, absorption is negligible at very small $t_{1,2}$, and rises with $t_1$ and/or $t_2$. On average, absorption for the $p \bar p$ reaction is smaller than for the $p p$ case. It is important to stress again, that the absorptive corrections in differential cross sections cannot be accounted for by simply a constant suppression factor, but show a lively dependence over phase space. In Fig. \[fig:dsig\_dt\_absorption\] we show the differential cross section $d\sigma/dt$. The dashed line shows the result without absorption, the solid lines include absorptive corrections. They differ by a factor $1.5$, by which rescattering had been enhanced in the lower curve. This enhancement of rescattering shows a modest effect, quite in agreement with the expectation mentioned above. 
The dependence of absorption on $t$ is quantified by the ratio of full and Born–level cross sections shown by the dotted lines. In Fig. \[fig:ratio\_pv\_abs\] we show the suppression factor $$<S^2({\mbox{\boldmath $p$}}_V^2)> = {d\sigma^{{\mathrm{Born + Rescatt.}}}/d{\mbox{\boldmath $p$}}_V^2dy \over d\sigma^{{\mathrm{Born}}}/d{\mbox{\boldmath $p$}}_V^2dy} \, ,$$ as a function of the $J/\Psi$ transverse momentum at $y=0$. It is important to emphasize once more the strong functional dependence on ${\mbox{\boldmath $p$}}_V$, which is different for $p\bar p$ and $pp$ collisions. Again, playing with the strength of absorptions shows a modest effect. The different behaviour of absorptive corrections in $pp$ and $p\bar p$ collisions is an interesting observation. It derives from the fact that rescattering corrections lift the cancellation of the interference term after azimuthal integration. Finally, let us comment on the expected reduction of rapidity distributions from absorptive corrections. These are, finally, rather flat functions of $y$. For the ratio $$<S^2(y)> = {d\sigma^{{\mathrm{Born + Rescatt.}}}/dy \over d\sigma^{{\mathrm{Born}}}/dy} \, ,$$ we obtain, in $p\bar p$ collisions $$<S^2(y = 0)>\Big|_{p\bar p} \approx 0.9 \, , \, <S^2(y = 3)>\Big|_{p\bar p} \approx 0.8 \, , \,$$ and for $pp$ collisions $$<S^2(y = 0)>\Big|_{p p} \approx 0.85 \, , \, <S^2(y = 3)>\Big|_{p p} \approx 0.75 \, . \,$$ We note that this results in a small charge asymmetry $${ d\sigma(p \bar p) / dy - d\sigma (pp) / dy \over d\sigma(p \bar p) / dy + d\sigma (pp) / dy} \approx 2 \div 3 \, \% \, ,$$ which derives entirely from absorptive corrections. Conclusions =========== In this paper we have calculated differential cross sections for exclusive $J/\psi$ production via photon-pomeron($\gamma \Pom$) and pomeron-photon ($\Pom\gamma$) exchanges at RHIC, Tevatron and LHC energies. Measurable cross sections were obtained in all cases. We have obtained an interesting azimuthal-angle correlation pattern due to the interference of the $\gamma\Pom$ and $\Pom\gamma$ mechanisms. The interference effect survives over almost the whole range of $J/\psi$ rapidities. At the Tevatron energy one can potentially study the exclusive production of $J/\psi$ at the photon-proton center-of-mass energies 70 GeV $ < W_{\gamma p} < $ 1500 GeV, i.e. in the unmeasured region of energies, much larger than at HERA. At LHC this would be correspondingly 200 GeV $ < W_{\gamma p} < $ 8000 GeV. At very forward rapidities this is an order of magnitude more than possible with presently available machines. Due to the photon–pole, the differential cross section is concentrated in the region of very small $t_1$ or/and $t_2$. Imposing cuts on $t_1$ and $t_2$ lowers the cross section considerably. Electromagnetic helicity-flip processes play some role only when both $|t_1|$ and $|t_2|$ are large, that is in a region where also the hypothetical hadronic, Odderon exchange, contribution can be present. It is a distinctive feature of the production mechanism, that mesons are produced at very small transverse momenta, where the interference of $\gamma\Pom$ and $\Pom \gamma$ mechanisms induces a strongly different shape of vector–meson ${\mbox{\boldmath $p$}}_V^2$– distributions in $pp$ vs. $p\bar p$ collisions. We also estimated absorption effects on various distributions. In some selected configurations the absorptive corrections lead to the occurence of diffractive minima. 
Naturally, the exact place of diffractive minima depends on the values of the model parameters, but they are washed out when integrated over the phase space or even a part of it. Absorptive corrections for differential distributions are lively functions of transverse momenta, and cannot be accounted for simply by constant suppression factors. We have found that absorptive corrections induce a small charge–asymmetry in rapidity distributions and total production cross sections. In the present paper we have concentrated on exclusive production of $J/\psi$ at energies $\sqrt{s} > 200$ GeV. The formalism used here can be equally well applied to exclusive production of other vector mesons, such as $\phi$ and $\Upsilon$, as well as to the lower energies of, e.g., FAIR, J-PARC and RHIC. For $J/\psi$–production, especially recent parametrizations [@FJMPP07] of the photoproduction cross section from threshold to the highest energies may prove useful. We leave such detailed analyses for separate studies. The processes considered here are also interesting in the context of recently proposed searches for identifying the Odderon. We find that the region of midrapidities ($ -1 < y < 1 $) and $t_1, t_2 < -0.2$ GeV$^2$ seems best suited for searches for the odderon exchange. Should data reveal deviations from the conservative predictions given by us, a detailed differential analysis of both photon- and odderon–exchange processes, including their interference, in $y, t_1, t_2, \phi$ will be called for. Acknowledgments =============== We thank Mike Albrow for useful comments regarding the experimental possibilities of exclusive $J/\Psi$–production at the Tevatron. We are indebted to Tomasz Pietrycki for help in preparing some figures. This work was partially supported by the grant of the Polish Ministry of Scientific Research and Information Technology number 1 P03B 028 28. Appendix ======== In Ref.[@H1_JPsi] the differential cross section $\frac{d\sigma}{dt}$ for the reaction $\gamma^* p \to J/\psi p$ was parametrized as $$\frac{d\sigma}{dt}(W,t,Q^2) = \frac{d\sigma}{dt}\Big|_{t=0,W=W_0} \left( \frac{W}{W_0} \right)^{4 (\alpha(t)-1)} \exp( B_0 t ) \left( \frac{m_{J/\psi}^2}{m_{J/\psi}^2+Q^2} \right)^n \; , \label{H1_cross_section}$$ where $\alpha(t) = \alpha_0 + \alpha' t$. The values of the parameters found from the fit to the data are: $\frac{d\sigma}{dt}|_{t=0,W=W_0}$ = 326 nb/GeV$^2$, $W_0$ = 95 GeV, $B_0$ = 4.63 GeV$^{-2}$, $\alpha_0$ = 1.224, $\alpha'$ = 0.164 GeV$^{-2}$, $n$ = 2.486. Assuming the dominance of the helicity-conserving transitions, and neglecting the real part, one can write $${\cal M}(s,t,Q^2) = \delta_{\lambda_{\gamma} \lambda_V} \delta_{\lambda_{p} \lambda_{p'}} i s \sqrt{ 16 \pi \frac{d\sigma}{dt}\Big|_{t=0,W=W_0} } \left( \frac{s}{W_0^2} \right)^{\alpha(t)-1} \exp( B_0 t / 2 ) \left( \frac{m_{J/\psi}^2}{m_{J/\psi}^2+Q^2} \right)^{n/2} \; , \label{H1_amplitude}$$ identical for each combination of particle helicities. In our case of hadroproduction the amplitude is a function of either $(s_1,t_1,Q_2^2)$ or $(s_2,t_2,Q_1^2)$. [100]{} S. Chekanov [*et al.*]{} \[ZEUS Collaboration\], Eur. Phys. J. C [**24**]{}, 345 (2002). A. Aktas [*et al.*]{} \[H1 Collaboration\], Eur. Phys. J. C [**46**]{}, 585 (2006). I. Ivanov, N.N. Nikolaev and A.A. Savin, Phys. Part. Nucl. [**37**]{} 1 (2006). D. Barberis [*et al.*]{} \[WA102 Collaboration\], Phys. Lett. B [**440**]{} 225 (1998); Phys. Lett. B [**432**]{}, 436 (1998); Phys. Lett. B [**427**]{}, 398 (1998); Phys. Lett. B [**397**]{}, 339 (1997); Phys. Lett.
B [**488**]{}, 225 (2000). F.E. Close, G.R. Farrar and Z. Li, Phys. Rev [**D55**]{} 5749 (1997); N.I. Kochelev, T. Morii and A.V. Vinnikov, Phys. Lett. [**B457**]{} 202 (1999); F.E. Close and G.A. Schuler, Phys. Lett. [**B464**]{} 279 (1999); A. B. Kaidalov, V. A. Khoze, A. D. Martin and M. G. Ryskin, Eur. Phys. J.  C [**31**]{}, 387 (2003). V.A. Khoze, A.D. Martin, M.G. Ryskin and W.J. Stirling, Eur. Phys. J. [**C35**]{} 211 (2004). R. Pasechnik, A. Szczurek and O. Teryaev, a paper in preparation. A. Szczurek, R. Pasechnik and O. Teryaev, Phys. Rev. [**D75**]{} 054021 (2007). V.A. Khoze, A.D. Martin and M.G. Ryskin, Phys. Lett. [**B401**]{} 330 (1997); M. Boonekamp, A. De Roeck, R. Peschanski and C. Royon, Phys. Lett.  B [**550**]{}, 93 (2002); C. Royon, Acta Phys. Polon.  B [**37**]{}, 3571 (2006). A. Schäfer, L. Mankiewicz and O. Nachtmann, Phys. Lett. [**B272**]{} 419 (1991). A. Bzdak, L. Motyka, L. Szymanowski and J.-R. Cudell, Phys. Rev.  D [**75**]{}, 094023 (2007). V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C23**]{} 311 (2002). S.R. Klein and J. Nystrand, Phys. Rev. Lett. [**92**]{} 142003 (2004); V. P. Goncalves and M. V. T. Machado, Eur. Phys. J.  C [**40**]{}, 519 (2005). N. N. Nikolaev, Comments Nucl. Part. Phys.  [**21**]{}, 41 (1992). J. D. Bjorken, Phys. Rev.  D [**47**]{}, 101 (1993). B. Z. Kopeliovich, J. Nemchick, N. N. Nikolaev and B. G. Zakharov, Phys. Lett.  B [**309**]{}, 179 (1993); B. Z. Kopeliovich, J. Nemchick, N. N. Nikolaev and B. G. Zakharov, Phys. Lett.  B [**324**]{}, 469 (1994). N. N. Nikolaev and B. G. Zakharov, Z. Phys.  C [**49**]{}, 607 (1991). J. Nemchik, N. N. Nikolaev, E. Predazzi, B. G. Zakharov and V. R. Zoller, J. Exp. Theor. Phys.  [**86**]{}, 1054 (1998) \[Zh. Eksp. Teor. Fiz.  [**113**]{}, 1930 (1998)\]. I. P. Ivanov, N. N. Nikolaev and W. Schäfer, Phys. Part. Nucl.  [**35**]{}, S30 (2004). V. A. Abramovsky, V. N. Gribov and O. V. Kancheli, Yad. Fiz.  [**18**]{} (1973) 595 \[Sov. J. Nucl. Phys.  [**18**]{} 308 (1974)\]; A. Capella and A. Kaidalov, Nucl. Phys.  B [**111**]{} 477 (1976); L. Bertocchi and D. Treleani, J. Phys. G [**3**]{}, 147 (1977). N. N. Nikolaev and W. Schäfer, Phys. Rev.  D [**74**]{}, 074021 (2006). V. N. Gribov, Sov. Phys. JETP [**29**]{}, 483 (1969) \[Zh. Eksp. Teor. Fiz.  [**56**]{}, 892 (1969)\]. F. Abe [*et al.*]{} \[CDF Collaboration\], Phys. Rev.  D [**50**]{}, 5518 (1994). B. Z. Kopeliovich, N. N. Nikolaev and I. K. Potashnikova, Phys. Rev.  D [**39**]{}, 769 (1989). K. A. Ter-Martirosyan, Sov. J. Nucl. Phys.  [**10**]{}, 600 (1970) \[Yad. Fiz.  [**10**]{}, 1047 (1969)\]. H. Kowalski and D. Teaney, Phys. Rev.  D [**68**]{}, 114005 (2003); H. Kowalski, L. Motyka and G. Watt, Phys. Rev.  D [**74**]{}, 074016 (2006); M. Kuroda and D. Schildknecht, Phys. Lett.  B [**638**]{}, 473 (2006). F. Yuan, Phys. Lett.  B [**510**]{}, 155 (2001). R. Fiore, L.L. Jenkovszky, V.K. Magas, F. Paccanoni and A. Prokudin, Phys. Rev.  D [**75**]{}, 116005 (2007). [^1]: Of course here the notation $Q_i^2 = t_i$ applies only to the photon lines. [^2]: While a trend towards s-channel helicity violating effects may be visible in the H1 data [@H1_JPsi], they are surely negligible for our purpose, and within error bars, consistent with [@ZEUS_JPsi] and s-channel helicity conservation. [^3]: In the practical calculations below, for Tevatron energies, we take $\sigma^{p\bar p}_{tot} = 76$ mb, $B_{el} = 17 \, \mathrm{GeV}^{-2}$ [@Tevatron].
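As a closing illustration, a minimal numerical sketch of the elastic rescattering profile entering eqs.(\[rescattering\_term\])–(\[absorptive\_corr\]), using the Tevatron values quoted in the footnote above. The impact-parameter form $S_{el}(b)$ is our own Fourier transform of $T({\mbox{\boldmath $k$}})$, shown only for orientation and not as part of the original calculation:

```python
import numpy as np

MB_TO_GEV2 = 1.0 / 0.3894          # 1 mb = 1/0.3894 GeV^-2
SIGMA_TOT = 76.0 * MB_TO_GEV2      # sigma_tot(p pbar) at the Tevatron, footnote value
B_EL = 17.0                        # elastic slope [GeV^-2], footnote value

def T(k):
    """Momentum-space rescattering amplitude, T(k) = sigma_tot * exp(-B_el k^2 / 2)."""
    return SIGMA_TOT * np.exp(-0.5 * B_EL * k**2)

def S_el(b):
    """Impact-parameter profile, S_el(b) = 1 - sigma_tot/(4 pi B_el) * exp(-b^2/(2 B_el))."""
    return 1.0 - SIGMA_TOT / (4.0 * np.pi * B_EL) * np.exp(-b**2 / (2.0 * B_EL))

# nearly black at b = 0 and transparent at large b: the ultraperipheral (large-b) photon
# exchange is barely absorbed, which is why the small-|t| region is insensitive to rescattering
for b in (0.0, 5.0, 10.0):          # b in GeV^-1 (1 GeV^-1 ~ 0.197 fm)
    print(b, round(float(S_el(b)), 3))
```

Enhancing $T$ by the factor $\lambda$ discussed in section \[Absorption\] simply rescales the second term of $S_{el}(b)$.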
--- abstract: 'Using a newly-developed modeling technique, we present orbit-based dynamical models of the Carina, Draco, Fornax, Sculptor, and Sextans dwarf spheroidal (dSph) galaxies. These models calculate the dark matter profiles non-parametrically, without requiring any assumptions to be made about their profile shapes. By lifting this restriction we discover a host of dark matter profiles in the dSphs that are different from the typical profiles suggested by both theorists and observers. However, when we scale these profiles appropriately and plot them on a common axis, they appear to follow an approximate $r^{-1}$ power law with considerable scatter.' author: - 'John R. Jardel and Karl Gebhardt' title: Variations in a Universal Dark Matter Profile for Dwarf Spheroidals --- Introduction ============ It is a well-known fact that cosmological simulations containing only collisionless dark matter produce halos that share a universal density profile $\rho_{\mathrm{DM}}(r)$ [@nav96; @spr08; @nav10]. At first, this universal profile was characterized by the double power-law Navarro-Frenk-White (NFW) profile [@nav96] with inner logarithmic slope $\alpha \equiv - d\log \rho_{\mathrm{DM}}/d\log r = 1$. Modern dark matter-only simulations with increasingly better resolution seem to produce profiles that, in analogy to the Sérsic function [@ser68], transition smoothly from $\alpha=3$ in the outer regions to $\alpha \sim 1$ near the center [@mer05; @gao08; @nav10]. The exact form of $\rho_{\mathrm{DM}}(r)$ is still debated by theorists, but most agree that the inner slope is nonzero. Such profiles are called “cuspy” since $\rho_{\mathrm{DM}}$ increases as $r \to 0$. In contrast, observers modeling low-mass galaxies with stellar and gas dynamics often find $\alpha=0$ “cores” in the inner profiles [@bur95; @per96; @bor01; @deb01; @sim05]. This disagreement between theory and observations has become known as the core/cusp debate. We must remember, however, that real galaxies are the products of their unique formation histories, and complex baryonic processes can re-shape dark matter profiles in different ways. Whether originating from adiabatic compression [@blu86], supernovae winds [@nav96b], or ram-pressure stripping [@arr12], baryonic feedback has been shown to affect the dark matter profiles of galaxies by perturbing their baryons in a highly non-linear way. Since these processes differ on a galaxy-by-galaxy basis, one should not expect to observe a universal dark matter profile at $z=0$. Furthermore, given the number of different ways baryonic feedback can occur, we should not expect it to produce only cored or NFW-like profiles. Unfortunately, it is difficult to explore the possible range of profile shapes since, to construct a dynamical model, one generally needs to adopt a parameterization for $\rho_{\mathrm{DM}}(r)$. This is not ideal, as one is forced to assume the very thing one is hoping to measure. Clearly methods that can measure $\rho_{\mathrm{DM}}(r)$ non-parametrically are advantageous. Non-parametric determination of the dark matter profile avoids biasing the results by assuming an incorrect parameterization, and it also allows more general profile types to be discovered. To test the universal profile assumption, we apply the technique of non-parametric Schwarzschild modeling to determine $\rho_{\mathrm{DM}}(r)$ in five of the brightest dwarf spheroidal (dSph) galaxies that orbit the Milky Way as satellites.
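As a point of reference for the slope convention $\alpha \equiv -d\log \rho_{\mathrm{DM}}/d\log r$ used above, the short sketch below evaluates it for the standard two-parameter NFW form (written out here only for illustration, since the text cites the profile without giving its formula):

```python
import numpy as np

def rho_nfw(r, rho_s=1.0, r_s=1.0):
    """Standard NFW profile, rho = rho_s / [(r/r_s) * (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def log_slope(r, rho, eps=1e-4):
    """alpha = -dlog(rho)/dlog(r), via a centered finite difference in log r."""
    return -(np.log(rho(r * (1 + eps))) - np.log(rho(r * (1 - eps)))) / (2 * eps)

for r in (0.01, 1.0, 100.0):
    print(r, round(float(log_slope(r, rho_nfw)), 2))   # -> ~1 inside, 2 at r_s, ~3 outside
```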
These galaxies have excellent kinematics available [@wal09] and have been demonstrated to be good targets for this type of modeling [@jar13]. The dSphs as a population are some of the most dark matter-dominated galaxies ever observed [@mat98; @sim07] and as such are unique test sites for theories of galaxy formation at low mass scales. Past studies using Jeans models have had difficulty robustly measuring $\rho_{\mathrm{DM}}(r)$ in the dSphs [@wal09b], largely due to the degeneracy between mass and velocity anisotropy inherent to these models. In addition to being fully non-parametric, our models break the degeneracy between mass and velocity anisotropy in the same way that traditional Schwarzschild models do [@geb00; @rix97; @vdm98; @val04; @vdb08]. In this letter we apply the most general models to a widely-studied group of galaxies in order to measure their dark matter density profiles and test the universal profile hypothesis. Data ==== Our models use the publicly available kinematics data from @wal09 for Carina, Fornax, Sculptor and Sextans. These data are individual radial velocities for member stars, with repeat observations weighted and averaged. @wal09 assign each star a membership probability $P$ based on its position, velocity, and a proxy for its metallicity. Our analysis only includes stars for which $P > 0.95$. Whenever a galaxy has high-quality *Hubble Space Telescope* measurements of its proper motion available (Carina and Fornax; @pia03 [@pia07]), we correct for the effects of perspective rotation following Appendix A of @wal08. As described in @jar12, stars are placed on a meridional grid according to their positions and folded over the major and minor axes. To preserve any possible rotation, we switch the sign of the velocity whenever a star is flipped about the minor axis. We then group the stars into spatial bins by dividing the grid into a series of annular bins containing roughly 50-70 stars per bin. Fornax and Sculptor have a larger number of stars with measured velocities, and to exploit this we subdivide the annular bins into two to three angular bins, in analogy to spokes on a wheel. Table \[sum\] presents a summary of the data we use for the dSphs. For each spatial bin of stars, we reconstruct the full line-of-sight velocity distribution (LOSVD) from the discrete radial velocities observed. This procedure uses an adaptive kernel density estimator [@sil86] and is described in more detail in @jar13. Uncertainties in the LOSVDs are determined through bootstrap resamplings of the data. We divide each LOSVD into 15 velocity bins, which serve as the observational constraint for our models. Also necessary for the models is the galaxy’s three-dimensional luminosity density profile $\nu(r)$. To obtain this, we start with the projected number density profile of stars $\Sigma_*(R)$. For Carina and Sculptor we take $\Sigma_*(R)$ from @wal03, opting to use their fitted King profile for Carina and the actual profile for Sculptor with no fit performed. We also use a King profile to describe $\Sigma_*(R)$ in Sextans, with the parameters taken from @irw95. In Fornax we use the full profile reported in @col05. We then convert $\Sigma_*(R)$ to a surface brightness profile $\mu(R)$ by adding an arbitrary zero-point shift, in log space, and adjusting the shift until the integrated $\mu(R)$ returns a luminosity consistent with the value listed in @mat98. Next we deproject $\mu(R)$ via Abel inversion in the manner described in @geb96.
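In the spherical limit, this deprojection reduces to the classical Abel integral $\nu(r) = -\frac{1}{\pi}\int_r^{\infty} \frac{d\Sigma_L/dR}{\sqrt{R^2-r^2}}\,dR$, with $\Sigma_L$ the projected luminosity density obtained from $\mu(R)$. The models in this paper are axisymmetric, so the sketch below is only a simplified, spherical illustration of the step, not the actual procedure of @geb96:

```python
import numpy as np
from scipy.integrate import quad

def deproject_abel(Sigma, r, dR=1e-3):
    """Spherical Abel deprojection of a projected luminosity density Sigma(R) (illustrative)."""
    dSigma = lambda R: (Sigma(R + dR) - Sigma(R - dR)) / (2.0 * dR)
    integrand = lambda R: dSigma(R) / np.sqrt(R**2 - r**2)
    val, _ = quad(integrand, r * (1.0 + 1e-6), np.inf, limit=200)
    return -val / np.pi

# quick check against a Plummer profile, whose deprojection is known analytically
a = 1.0
Sigma_plummer = lambda R: 1.0 / (np.pi * a**2) / (1.0 + (R / a) ** 2) ** 2
nu_analytic = lambda r: 3.0 / (4.0 * np.pi * a**3) / (1.0 + (r / a) ** 2) ** 2.5
for r in (0.5, 1.0, 2.0):
    print(r, round(deproject_abel(Sigma_plummer, r), 5), round(nu_analytic(r), 5))
```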
For simplicity in the deprojection and subsequent modeling, we assume that each galaxy is viewed edge-on. For a thorough discussion on how uncertainties in viewing angle and geometry propagate through our models we refer the reader to @tho07b. Our models are axisymmetric, so we use the stellar ellipticity to determine $\nu$ away from the major axis. [llllll]{} Carina & $104^a$ & $702^f$ & 14 & $0.33^e$ & 4.2\ Draco & $71^b$ & $170^{gh}$ & 8 & $0.29^e$ & 3.1\ Fornax & $136^a$ & $2409^f$ & 36 & $0.30^e$ & 13.5\ Sculptor & $85^c$ & $1266^f$ & 24 & $0.32^e$ & 5.1\ Sextans & $85^d$ & $388^f$ & 8 & $0.35^e$ & 5.1 \[sum\] Models ====== The non-parametric modeling technique we use is described in full detail in @jar13. It is based on the Schwarzschild modeling code of @geb00 updated by @tho04 [@tho05] and described in @sio09. We have tested our models by using kinematics generated from a Draco-sized mock dSph embedded in a larger dark matter halo with either a cored or NFW-like cuspy profile. In both cases we are able to accurately recover the density profile from which the mock kinematics were drawn. The fundamental principle behind Schwarzschild modeling, that of orbit superposition, was first introduced by @sch79. The Schwarzschild code that is the backbone of our non-parametric technique has been thoroughly tested using artificial data. It has been shown to accurately recover the mass profile and orbit structure of simple isotropic rotators [@tho05], $N$-body merger remnants [@tho07b], and a mock galaxy containing a supermassive black hole [@sio09]. The general Schwarzschild technique has also been tested with artificial data representing the binned individual velocities typically used as input for studying the dSphs [@bre12]. This method works by assuming a trial potential for the galaxy under study and determining all stellar orbits that are possible in that potential. Our orbit sampling scheme is described in detail in @tho04. The orbits are then assigned weights according to how well they match the LOSVDs and a $\chi^2$ value is determined, subject to a constraint of maximum entropy [@sio09]. If $\chi^2$ is low, the orbits are a good fit to the kinematics and the trial potential is considered to be a good estimate for the real potential. If $\chi^2$ is large, the trial potential does not support orbits that can match the kinematics and a new potential is generated. Each model is required to match $\nu(r)$ as well to machine precision. We construct the many trial potentials by solving Poisson’s equation for a specified total density profile $\rho(r)$ along the major axis. We assume the total mass distribution has the same ellipticity as the stellar component and use this adopted ellipticity to define $\rho(r,\theta)$ away from the major axis. Rather than paramaterizing $\rho(r)$ with an unknown function and sampling its parameters, we take an altogether different approach (detailed in @jar13). To describe $\rho(r)$ we divide the profile into 5 radial points $r_i$, equally spaced in $\log r$. A trial $\rho(r)$ is then represented by the density $\rho_i$ at each point. In this way, the $\rho_i$ themselves are the parameters that we adjust when picking trial potentials. To sample this parameter space, we employ a similar iterative refinement scheme as discussed in @jar13. We also impose the same constraint that each profile must be non-increasing as a function of radius. 
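To make this parameterization concrete, the sketch below (purely illustrative; the radial range, node values, and power-law interpolation are assumptions rather than the exact choices used in the models) builds a trial $\rho(r)$ from a handful of node densities, enforces the non-increasing constraint, and interpolates between nodes in log-log space:

```python
import numpy as np

def make_trial_profile(log_rho_nodes, r_min=0.05, r_max=2.0):
    """Return node radii and an interpolator rho(r) defined by the node densities.

    log_rho_nodes : log10 of the density at each node; these are the free
    parameters varied from model to model.  The profile is forced to be
    non-increasing with radius, as required in the text.
    """
    r_nodes = np.logspace(np.log10(r_min), np.log10(r_max), len(log_rho_nodes))
    log_rho = np.minimum.accumulate(np.asarray(log_rho_nodes, dtype=float))

    def rho(r):
        # Piecewise power laws: linear interpolation in log-log space.
        return 10.0 ** np.interp(np.log10(r), np.log10(r_nodes), log_rho)

    return r_nodes, rho

# Example: a roughly r^-1 trial profile evaluated at a few radii (in kpc).
r_nodes, rho = make_trial_profile([8.0, 7.5, 7.0, 6.5, 6.0])
print(rho(np.array([0.1, 0.5, 1.0])))
```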
Since the dSphs orbit within the Milky Way’s halo, the possibility exists that they are being, or have been, tidally stripped. In constructing our trial potential, we account for this by leaving the slope of $\rho(r)$ outside of our model grid a free parameter $\alpha_{\infty}$. Each model profile $\rho(r)$ is run with $\alpha_{\infty} \in \{2, 3, 4\}$. In this way, we treat $\alpha_{\infty}$ as a nuisance parameter and marginalize over it for the rest of our discussion. We also truncate the dSphs at the radius $R_{\mathrm{trunc}}$ defined by the Jacobi radius given the mass of the dSph, its Galactocentric distance, and the mass of the Milky Way (assumed to be $M_{\mathrm{MW}}=3 \times 10^{12} \, M_{\odot}$ and represented by an isothermal sphere). We list values for $R_{\mathrm{trunc}}$ in Table \[sum\]. ![image](all.eps){width="15cm"} Stellar Density --------------- After running a large number of models for each galaxy, we have a non-parametric measurement of the *total* density profile $\rho(r)$. In order to obtain the dark matter density profile we must subtract the stellar density $\rho_*(r)$. This requires knowledge of the stellar mass-to-light ratio $M_*/L_V$ since $\rho_*(r) = M_*/L_V \times \nu(r)$, assuming that variations in with radius are unimportant. To estimate we use photometrically derived determinations of each galaxy’s stellar age $t_{\mathrm{age}}$ and metallicity \[Fe/H\] [@lia11]. The simple stellar population models of @mar05 then yield an estimate on given these two quantities and the assumption of either a Salpeter or Kroupa initial mass function (IMF). We characterize the uncertainty in $\rho_*(r)$ by the spread in values of that result from a choice in IMF and the uncertainties in $t_{\mathrm{age}}$ and \[Fe/H\]. For our analysis of $\rho_{\mathrm{DM}}(r)$, we add in quadrature the uncertainties on $\rho(r)$ from the models with those on $\rho_*$ due to . In all but one of the dSphs (Fornax), $\rho(r) \gg \rho_*(r)$ making the determination of $\rho_*(r)$ relatively unimportant. In Fornax, however, the relatively large uncertainties on $\rho_*(r)$ make $\rho(r) - \rho_*(r)$ a negative quantity in some cases. This is clearly unphysical as it represents a negative dark matter density. To better study Fornax and other relatively baryon-dominated galaxies, a more accurate determination of is required. $\chi^2$ Analysis ----------------- We evaluate the goodness of fit of each model with $\chi^2$ as calculated by $$\chi^2 = \sum_{i=1}^{N_{\mathrm{LOSVD}}} \sum_{j=1}^{N_{\mathrm{vel}}=15} \left ( \frac{\ell_{ij}^{\mathrm{obs}} - \ell_{ij}^{\mathrm{mod}}} {\sigma_{ij}} \right )^2 , \label{chi2eq}$$ where the sums are computed over the $N_{\mathrm{vel}}=15$ velocity bins for all of the LOSVDs in each galaxy. The $\ell_{ij}$ correspond to the value in the jth velocity bin of the ith LOSVD. The uncertainty in $\ell^{\mathrm{obs}}_{ij}$ is $\sigma_{ij}$. We identify the best-fitting model as that which has the lowest value of the (unreduced) $\chi^2=\chi^2_{\mathrm{min}}$. A naïve calculation of the reduced $\chi^2_{\nu}= \chi^2_{\mathrm{min}}/(N_{\mathrm{vel}} \times N_{\mathrm{LOSVD}})$ often yields values much less than unity due to correlation between velocity bins caused by our kernel density estimator. We instead test for the overall goodness of fit of our best model by computing $\chi^2_{\nu,GH}$: the reduced $\chi^2$ with respect to a Gauss-Hermite parameterization of our best-fitting LOSVDs. 
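In code, the statistic of Equation (\[chi2eq\]) amounts to the following (an illustrative sketch only; the array shapes are hypothetical and the Gauss-Hermite comparison described above is not reproduced here):

```python
import numpy as np

def losvd_chi2(l_obs, l_mod, sigma):
    """Unreduced chi^2 summed over all LOSVDs (i) and velocity bins (j)."""
    l_obs, l_mod, sigma = (np.asarray(a, dtype=float) for a in (l_obs, l_mod, sigma))
    return np.sum(((l_obs - l_mod) / sigma) ** 2)

# Hypothetical shapes: 24 LOSVDs x 15 velocity bins (as for Sculptor).
rng = np.random.default_rng(0)
l_obs = rng.random((24, 15))
sigma = 0.1 * np.ones_like(l_obs)
l_mod = l_obs + 0.05 * rng.standard_normal(l_obs.shape)
chi2 = losvd_chi2(l_obs, l_mod, sigma)
chi2_nu_naive = chi2 / l_obs.size   # the naive reduced chi^2 discussed in the text
print(chi2, chi2_nu_naive)
```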
We find $\chi^2_{\nu,GH}$ ranges from $0.3-0.9$ for the four dSphs modeled here. These values are consistent with past results [@geb03; @geb09; @jar13] and have been demonstrated to lead to accurate recovery of the mass profiles of mock galaxies [@tho05]. We therefore scale our model-computed unredoced $\chi^2$ values by a factor equal to $\chi^2_{\nu} / \chi^2_{\nu,GH}$ in order to bring our reduced $\chi^2_{\nu}$ nearer to $\chi^2_{\nu,GH}$. We present our dark matter profiles at two different levels of confidence. When specifying the dark matter density at a single point, we marginalize over all other parameters using the sliding boxcar technique described in @jar13 to interpolate $\chi^2$. The 1$\sigma$ confidence interval thus corresponds to a limit of $\Delta \chi^2=1$ (for one degree of freedom) above $\chi^2_{\mathrm{min}}$. When referring to the joint 1$\sigma$ confidence interval of the entire profile, we instead include limits derived from all models within $\Delta \chi^2=5.84$ of $\chi^2_{\mathrm{min}}$ (for 5 degrees of freedom). Results ======= ![image](all_in_one.eps){width="12cm"} We present the non-parametrically determined dark matter profiles in Figure \[all\]. In addition to the new results for Carina, Fornax, Sculptor, and Sextans, we include the result from @jar13 for Draco. Each panel in Figure \[all\] contains a dashed line with $\rho_{\mathrm{DM}} \propto r^{-1}$ to show the generic shape of the NFW profile. The points with error bars in Figure \[all\] are the marginalized dark matter density determined from $\Delta \chi^2=1$ at the $r_i$ where the total density is being varied from model to model. The gray points labeled with X’s are located interior to the radial range over which stellar kinematics are available. We denote this range for each galaxy with vertical tick marks on the x-axis. The joint confidence band (shaded region) interpolates between the $r_i$ by taking the maximum and minimum value for $\rho_{\mathrm{DM}}$ at each radius for every model within $\Delta \chi^2=5.84$ of $\chi^2_{\mathrm{min}}$. Given the freedom to choose a dark matter profile of any shape, it is immediately apparent that our models have chosen a variety of shapes for the dSphs. Draco appears the most similar to the NFW profile while Sculptor most closely resembles a broken powerlaw that becomes shallower towards its center. The other galaxies host profiles that resemble neither cores nor cusps: Carina’s profile appears flat where we have kinematics but then displays a possible up-bending inside of this region. Sextans has a steeper slope than the NFW profile until its outermost point where it suddenly becomes flat. These sharp differences among dSph dark matter profiles demonstrate the variety of profile shapes in the Local Group. Unfortunately, due to a lack of central stellar velocities in the @wal09 data, the central profiles of the dSphs we model become increasingly uncertain there. This is evidenced by the larger error bars on our gray points in Figure \[all\] where we have no kinematics coverage. However, we do have some constraint from projection effects and radial orbits in our models that have apocenters at radii where we do have data. Fornax ------ Fornax is an especially difficult case for non-parametric modeling because, compared to the other dSphs, it is relatively baryon-dominated. Our imprecise determination of in Fornax causes $\rho_*(r)$ to be greater than the total modeled density at some radii, making $\rho_{\mathrm{DM}}(r)$ negative. 
In our analysis of Fornax, we do not plot the radial range over which this occurs as it is unphysical. Instead in Figure \[all\] we over-plot the stellar density in red to illustrate why the subtraction is difficult in Fornax. In all other panels $\rho_*(r) \ll \rho_{\mathrm{DM}}(r)$ and is not plotted. There is strong evidence from multiple studies using independent methods that suggests Fornax has a dark matter profile that is not cuspy like the NFW profile. [@goe06; @wal11; @jar12]. Each of these studies only contrasts between cored and cuspy profiles or uses a single slope to characterize the profile. It is therefore interesting to explore the non-parametric result we obtain. Even though we cannot determine $\rho_{DM}$ where the stellar density is greater than the total density, we can still place an upper limit on $\rho_{DM}$ such that it must not be greater than $\rho_*$ or the red band in Figure \[all\]. Given this constraint, we can see that the outer profile of Fornax is flat, while the inner portion rises more steeply than $r^{-1}$. Past dynamical studies of Fornax only compared generic cored and NFW profiles and did not test this up-bending profile, therefore it is difficult to compare to their results. A Common Halo? -------------- Despite the differences in the individual profiles of the dSphs, when we plot them on the same axes they appear to follow a combined $r^{-1}$ profile with scatter. We plot this combined profile in Figure \[comb\] with each galaxy’s profile as a separate color. The uncertainties on the points are the $\Delta \chi^2=1$ uncertainties from Figure \[all\]. We have scaled each galaxy’s profile relative to an arbitrary $r^{-1}$ profile. In this way the shape of each profile is preserved and only the height has been adjusted to reduce the scatter. We fit a line to the $\log \rho_{DM}$ profiles and determine that the slope $\alpha = 1.2 \pm 0.5$. We also restrict our fit to only points in the profile where we have kinematics (dotted line in Figure \[comb\]) and find a similar slope of $\alpha = 0.9 \pm 0.5$. We conclude from Figure \[comb\] that the *average* dark matter profile in the dSphs is similar to an $r^{-1}$ profile. However, when we model each galaxy individually, we find a variety of profiles that are different from the mean $r^{-1}$ profile. Our interpretation of this observation is that variations in their individual formation histories cause galaxies to scatter from the average profile. Only when multiple galaxies are averaged together does it become clear they follow a combined $r^{-1}$ profile. This single power-law profile compares well with the predicted NFW profile in the inner portion of the plot. However, at larger radii ($\gsim 1~$kpc in dwarf galaxies) the NFW profile becomes steeper than $r^{-1}$ [@spr08]. More data are needed at both large and small radii to further explore this. K.G. acknowledges support from NSF-0908639. This work would not be possible without the state-of-the-art supercomputing facilities at the Texas Advanced Computing Center (TACC). We also thank Matt Walker and the MMFS Survey team for making their radial velocities publicly available. [50]{} natexlab\#1[\#1]{} , K. S., [Klypin]{}, A., [More]{}, S., & [Trujillo-Gomez]{}, S. 2012, ArXiv e-prints, astro-ph/1212.6651 , G. R., [Faber]{}, S. M., [Flores]{}, R., & [Primack]{}, J. R. 1986, , 301, 27 , A., & [Salucci]{}, P. 2001, , 323, 285 , M. A., [Helmi]{}, A., [van den Bosch]{}, R. C. E., [van de Ven]{}, G., & [Battaglia]{}, G. 
2012, ArXiv e-prints, astro-ph/1205.4712 , A. 1995, , 447, L25+ , M. G., [Da Costa]{}, G. S., [Bland-Hawthorn]{}, J., & [Freeman]{}, K. C. 2005, , 129, 1443 , W. J. G., [McGaugh]{}, S. S., [Bosma]{}, A., & [Rubin]{}, V. C. 2001, , 552, L23 , L., [Navarro]{}, J. F., [Cole]{}, S., [et al.]{} 2008, , 387, 536 , K., & [Thomas]{}, J. 2009, , 700, 1690 , K., [Richstone]{}, D., [Ajhar]{}, E. A., [et al.]{} 1996, , 112, 105 , K., [Bender]{}, R., [Bower]{}, G., [et al.]{} 2000, , 539, L13 , K., [Richstone]{}, D., [Tremaine]{}, S., [et al.]{} 2003, , 583, 92 , T., [Moore]{}, B., [Read]{}, J. I., [Stadel]{}, J., & [Zemp]{}, M. 2006, , 368, 1073 , M., & [Hatzidimitriou]{}, D. 1995, , 277, 1354 Jardel, J. R., & Gebhardt, K. 2012, , 746, 89 , J. R., [Gebhardt]{}, K., [Fabricius]{}, M. H., [Drory]{}, N., & [Williams]{}, M. J. 2013, , 763, 91 , J., [Wilkinson]{}, M. I., [Evans]{}, N. W., [Gilmore]{}, G., & [Frayn]{}, C. 2002, , 330, 792 , M. G., [Yuk]{}, I.-S., [Park]{}, H. S., [Harris]{}, J., & [Zaritsky]{}, D. 2009, , 703, 692 , S., [Grebel]{}, E. K., & [Koch]{}, A. 2011, , 531, A152 , C. 2005, , 362, 799 , M. L. 1998, , 36, 435 , D., [Navarro]{}, J. F., [Ludlow]{}, A., & [Jenkins]{}, A. 2005, , 624, L85 , J. F., [Eke]{}, V. R., & [Frenk]{}, C. S. 1996, , 283, L72 , J. F., [Frenk]{}, C. S., & [White]{}, S. D. M. 1996, , 462, 563 , J. F., [Ludlow]{}, A., [Springel]{}, V., [et al.]{} 2010, , 402, 21 , M., [Grebel]{}, E. K., [Harbeck]{}, D., [et al.]{} 2001, , 122, 2538 , M., [Salucci]{}, P., & [Stel]{}, F. 1996, , 281, 27 , S., [Pryor]{}, C., [Bristow]{}, P., [et al.]{} 2007, , 133, 818 , S., [Pryor]{}, C., [Olszewski]{}, E. W., [et al.]{} 2003, , 126, 2346 , G., [Gieren]{}, W., [Szewczyk]{}, O., [et al.]{} 2008, , 135, 1993 , H., [de Zeeuw]{}, P. T., [Cretton]{}, N., [van der Marel]{}, R. P., & [Carollo]{}, C. M. 1997, , 488, 702 , M. 1979, , 232, 236 , J. L. 1968, [Atlas de Galaxias Australes (Cordoba, Argentina: Observatorio Astronomico, Univ. Cordoba)]{} , B. W. 1986, [Density estimation for statistics and data analysis]{}, ed. [Silverman, B. W.]{} , J. D., [Bolatto]{}, A. D., [Leroy]{}, A., [Blitz]{}, L., & [Gates]{}, E. L. 2005, , 621, 757 , J. D., & [Geha]{}, M. 2007, , 670, 313 , C., [Gebhardt]{}, K., [Lauer]{}, T. R., [et al.]{} 2009, , 693, 946 , V., [Wang]{}, J., [Vogelsberger]{}, M., [et al.]{} 2008, , 391, 1685 , G. A., [Sandage]{}, A., & [Reindl]{}, B. 2008, , 679, 52 , J., [Jesseit]{}, R., [Naab]{}, T., [et al.]{} 2007, , 381, 1672 , J., [Saglia]{}, R. P., [Bender]{}, R., [et al.]{} 2005, , 360, 1355 —. 2004, , 353, 391 , M., [Merritt]{}, D., & [Emsellem]{}, E. 2004, , 602, 66 , R. C. E., [van de Ven]{}, G., [Verolme]{}, E. K., [Cappellari]{}, M., & [de Zeeuw]{}, P. T. 2008, , 385, 647 , R. P., [Cretton]{}, N., [de Zeeuw]{}, P. T., & [Rix]{}, H. 1998, , 493, 613 , C. J., [Fried]{}, J. W., [Burkert]{}, A., & [Klessen]{}, R. S. 2003, , 406, 847 , M. G., [Mateo]{}, M., & [Olszewski]{}, E. W. 2008, , 688, L75 —. 2009, , 137, 3100 , M. G., [Mateo]{}, M., [Olszewski]{}, E. W., [et al.]{} 2009, , 704, 1274 , M. G., & [Pe[ñ]{}arrubia]{}, J. 2011, , 742, 20
--- abstract: 'Single-view depth estimation using CNNs trained from unlabelled videos has shown significant promise. However, the excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly indoor videos taken by handheld devices, in which case the ego-motion is often degenerate, , the rotation dominates the translation. In this work, we establish that the degenerate camera motions exhibited in handheld settings are a critical obstacle for unsupervised depth learning. A main contribution of our work is fundamental analysis which shows that the rotation behaves as noise during training, as opposed to the translation (baseline) which provides supervision signals. To capitalise on our findings, we propose a novel data pre-processing method for effective training, , we search for image pairs with modest translation and remove their rotation via the proposed weak image rectification. With our pre-processing, existing unsupervised models can be trained well in challenging scenarios (, NYUv2 dataset), and the results outperform the unsupervised SOTA by a large margin (0.147   0.189 in the AbsRel error).' author: - | Jia-Wang Bian$^{1,2}$, Huangying Zhan$^{1,2}$, Naiyan Wang$^{3}$, Tat-Jun Chin$^{1,2}$, Chunhua Shen$^{1,2}$, Ian Reid$^{1,2}$\ \ $^{1}$University of Adelaide $^{2}$Australian Centre for Robotic Vision $^{3}$TuSimple\ bibliography: - 'reference.bib' title: 'Unsupervised Depth Learning in Challenging Indoor Video: Weak Rectification to Rescue' --- Introduction {#sec:intro} ============ Inferring 3D geometry from 2D images is a long-standing problem in robotics and computer vision. Depending on the specific use case, it is usually solved by Structure-from-Motion [@schonberger2016structure] or Visual SLAM [@davison2007monoslam; @newcombe2011dtam; @mur2015orb]. Underpinning these traditional pipelines is searching for correspondences [@lowe2004distinctive; @Bian2019Gms] across multiple images and triangulating them via epipolar geometry [@Zhang1998; @hartley2003multiple; @bian2019bench] to obtain 3D points. Following the growth of deep learning-based approaches, Eigen  [@eigen2014depth] show that the depth map can be inferred from a single color image by a CNN, which is trained with the ground-truth depth supervisions captured by range sensors. Subsequently a series of supervised methods [@liu2016learning; @eigen2015predicting; @chakrabarti2016depth; @laina2016deeper; @li2017two; @fu2018deep; @Yin2019enforcing] have been proposed and the accuracy of estimated depth is progressively improved. Based on epipolar geometry [@Zhang1998; @hartley2003multiple; @bian2019bench], learning depth without requiring the ground-truth supervision has been explored. Garg  [@garg2016unsupervised] show that the single-view depth CNN can be trained from stereo image pairs with known baseline via photometric loss. Zhou  [@zhou2017unsupervised] further explored the unsupervised framework and proposed to train the depth CNN from unlabelled videos. They additionally introduced a Pose CNN to estimate the relative camera pose between consecutive frames, and they still use photometric loss for supervision. 
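To make this mechanism explicit, a minimal sketch of such a depth-and-pose based photometric loss is given below (for illustration only; it omits the masking, multi-scale, and auxiliary smoothness terms used by the cited methods, and the tensor shapes and function name are assumptions, not any particular released implementation):

```python
import torch
import torch.nn.functional as F

def photometric_loss(img_a, img_b, depth_a, pose_ab, K):
    """Warp img_b into view a using predicted depth and pose, then compare colors.

    img_* : (B,3,H,W), depth_a : (B,1,H,W), pose_ab : (B,4,4) transform from
    camera a to camera b, K : (B,3,3) intrinsics.
    """
    B, _, H, W = img_a.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=img_a.device),
                            torch.arange(W, device=img_a.device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().view(1, 3, -1)
    cam = torch.linalg.inv(K) @ pix * depth_a.view(B, 1, -1)     # back-project to 3D
    cam_b = pose_ab[:, :3, :3] @ cam + pose_ab[:, :3, 3:]        # move into frame b
    proj = K @ cam_b
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)               # perspective divide
    # Normalise pixel coordinates to [-1, 1] for differentiable bilinear sampling.
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(B, H, W, 2)
    img_b_warped = F.grid_sample(img_b, grid, align_corners=True)
    return (img_a - img_b_warped).abs().mean()
```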
Following that, a number of of unsupervised methods have been proposed, which can be categorised into stereo-based [@godard2017unsupervised; @zhan2018unsupervised; @zhan2019self; @watson2019self] and video-based [@Wang2018CVPR; @mahjourian2018unsupervised; @yin2018geonet; @zou2018df; @ranjan2019cc; @monodepth2; @gordon2019depth; @chen2019self; @zhou2019unsupervised; @bian2019depth], according to the type of training data. Our work follows the latter paradigm, since unlabelled videos are easier to obtain in real-world scenarios. Unsupervised methods have shown promising results in driving scenes, , KITTI [@Geiger2013IJRR] and Cityscapes [@Cordts2016Cityscapes]. However, as reported in [@Zhou_2019_ICCV], they usually fail in generic scenarios such as the indoor scenes in NYUv2 dataset [@silberman2012indoor]. For example, GeoNet [@yin2018geonet], which achieves state-of-the-art performance in KITTI, is unable to obtain reasonable results in NYUv2. To this end, [@Zhou_2019_ICCV] proposes to use optical flow as the supervision signal to train the depth CNN, and very recent [@zhao2020towards] uses optical flow for estimating ego-motion to replace the Pose CNN. However, the reported depth accuracy [@zhao2020towards] is still limited, , 0.189 in terms of *AbsRel*—see also qualitative results in [Fig. \[fig:show\]]{}. Our work investigates the fundamental reasons behind poor results of unsupervised depth learning in indoor scenes. In addition to the usual challenges such as non-Lambertian surfaces and low-texture scenes, we identify the camera motion profile in the training videos as a critical factor that affects the training process. To develop this insight, we conduct an in-depth analysis of the effects of camera pose to current unsupervised depth learning framework. Our analysis shows that (i) fundamentally the camera rotation behaves as noise to training, while the translation contributes effective gradients; (ii) the rotation component dominates the translation component in indoor videos captured using handheld cameras, while the opposite is true in autonomous driving scenarios. To capitalise on our findings, we propose a novel data pre-processing method for unsupervised depth learning. Our analysis (described in [Sec. \[sec:pose-effects\]]{}) indicates that image pairs with small relative camera rotation and moderate translation should be favoured. Therefore, we search for image pairs that fall into our defined translation range, and we weakly rectify the selected pairs to remove their relative rotation. Note that the processing requires no ground truth depth and camera pose. With our proposed data pre-processing, we demonstrate that existing state-of-the-art (SOTA) unsupervised methods can be trained well in the challenging indoor NYUv2 dataset [@silberman2012indoor]. The results outperform the unsupervised SOTA [@zhao2020towards] by a large margin (0.147  0.189 in the AbsRel error). To summarize, our main contributions are three-fold: - We theoretically analyze the effects of camera motion on current unsupervised frameworks for depth learning, and reveal that the camera rotation behaves as noise for training depth CNNs, while the translation contributes effective supervisions. - We calculate the distribution of camera motions in different scenarios, which, along with the analysis above, helps to answer the question why it is challenging to train unsupervised depth CNNs from indoor videos captured using handheld cameras. 
- We propose a novel method to select and weakly rectify image pairs for better training. It enables existing unsupervised methods to show competitive results with many supervised methods in the challenging NYUv2 dataset. Analysis ======== We first overview the unsupervised framework for depth learning. Then, we revisit the depth and camera pose based image warping and demonstrate the relationship between camera motion and depth network training. Finally, we compare the statistics of camera motions in different datasets to verify the impact of camera motion on depth learning. Overview of video-based unsupervised depth learning framework {#sec:sc-overview} ------------------------------------------------------------- Following SfMLearner [@zhou2017unsupervised], plenty of video-based unsupervised frameworks for depth estimation have been proposed. SC-SfMLearner [@bian2019depth], which is the current SOTA framework, additionally constrains the geometry consistency over [@zhou2017unsupervised], leading to more accurate and scale-consistent results. In this paper, we use SC-SfMLearner as our framework, and overview its pipeline in [Fig. \[fig:sc-overview\]]{}. #### Forward. A training image pair ($I_a$, $I_b$) is first passed into a weight-shared depth CNN to obtain the depth maps ($D_a$, $D_b$), respectively. Then, the pose CNN takes the concatenation of two images as input and predicts their 6D relative camera pose $P_{ab}$. With the predicted depth $D_a$ and pose $P_{ab}$, the warping flow between two images is generated according to [Sec. \[sec:proof\]]{}. #### Loss. First, the main supervision signal is the photometric loss $L_P$. It calculates the color difference in each pixel between $I_a$ with its warped position on $I_b$ using a *differentiable bilinear interpolation* [@jaderberg2015stn]. Second, depth maps are regularized by the geometric inconsistency loss $L_{GC}$, where it enforces the consistency of predicted depths between different frames. Besides, a weighting mask $M$ is derived from $L_{GC}$ to handle dynamics and occlusions, which is applied on $L_P$ to obtain the weighted $L_{P}^M$. Third, depth maps are also regularized by a smoothness loss $L_S$, which ensures that depth smoothness is guided by the edge of images. Overall, the objective function is: $$\label{eqn:loss} L = \alpha L_{P}^M + \beta L_{S} + \gamma L_{GC},$$ where $\alpha$, $\beta$, and $\gamma$ are hyper-parameters to balance different losses. ![Overview of SC-SfMLearner [@bian2019depth]. Firstly, in the forward pass, training images ($I_a$, $I_b$) are passed into the network to predict depth maps ($D_a$, $D_b$) and relative camera pose $P_{ab}$. With $D_a$ and $P_{ab}$, we obtain the warping flow between two views according to [Eqn. \[eqn:full\_tansformation\]]{}. Secondly, given the warping flow, the photometric loss $L_P$ and the geometry consistency loss $L_{GC}$ are computed. Also, the weighting mask $M$ is derived from $L_{GC}$ and applied over $L_P$ to handle dynamics and occlusions. Moreover, an edge-aware smoothness loss $L_S$ is used to regularize the predicted depth map. See [@bian2019depth] for more details. 
[]{data-label="fig:sc-overview"}](imgs/sc-figure.pdf){width="0.96\linewidth"} Depth and camera pose based image warping {#sec:proof} ----------------------------------------- The image warping builds the link between networks and losses during training, , the warping flow is generated by network predictions (depth and camera motion) in forward pass, and the gradients are back-propagated from the losses via the warping flow to networks. Therefore, we investigate the warping to analyze the camera pose effects on the depth learning, which avoids involving image content factors, such as illumination changes and low-texture scenes. #### Full transformation. The camera pose is composed of rotation and translation components. For one point ($u1$, $v1$) in the first image that is warped to ($u2$, $v2$) in the second image. It satisfies: $$\label{eqn:full_tansformation} {\bf K}^{-1} \left(d_2 \begin{bmatrix} u_2 \\ v_2 \\ 1\end{bmatrix}\right) = {\bf R} {\bf K}^{-1} \left(d_1 \begin{bmatrix} u_1 \\ v_1 \\ 1\end{bmatrix}\right) + {\bf t},$$ where $d_{i}$ is the depth of this point in two images and ${\bf K}$ is the 3x3 camera intrinsic matrix. ${\bf R}$ is a 3x3 rotation matrix and ${\bf t}$ is a 3x1 translation vector. We decompose the full warping flow and discuss each component below. #### Pure-rotation transformation. If two images are related by a pure-rotation transformation (, ${\bf t} = {\bf 0}$), based on [Eqn. \[eqn:full\_tansformation\]]{}, the warping satisfies: $$d_2 \begin{bmatrix} u_{2} \\ v_{2}\\ 1 \end{bmatrix} = {\bf K} {\bf R} {\bf K}^{-1} \left(d_1 \begin{bmatrix} u_{1} \\ v_{1}\\ 1 \end{bmatrix}\right),$$ where $[{\bf K}{\bf R}{\bf K}^{-1}]$ is as known as the *homography matrix* ${\bf H}$ [@hartley2003multiple], and we have $$\begin{bmatrix} u_{2} \\ v_{2}\\ 1 \end{bmatrix} = \frac{d_1}{d_2} H \begin{bmatrix} u_{1} \\ v_{1}\\ 1 \end{bmatrix} = c \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} u_{1} \\ v_{1}\\ 1 \end{bmatrix},$$ where $c=\frac{d1}{d2}$, standing for the depth relation between two views, is determined by the third row of the above equation, , $c = 1 / (h_{31} u_1 + h_{32} v_1 + h_{33})$. It indicates that we can obtain ($u_2$, $v_2$) without $d_1$. Specifically, solving the above equation, we have $$\label{eqn:homography} \begin{cases} u_2 = (h_{11} u_1 + h_{12} v_1 + h_{13}) / (h_{31} u_1 + h_{32} v_1 + h_{33}) \\ v_2 = (h_{21} u_1 + h_{22} v_1 + h_{23}) / (h_{31} u_1 + h_{32} v_1 + h_{33}) . \end{cases}$$ This demonstrates that the rotational flow in image warping is independent to the depth, and it is only determined by ${\bf K}$ and ${\bf R}$. Consequently, the rotational motion in image pairs cannot contribute effective gradients to supervise the depth CNN during training, even when it is correctly estimated. More importantly, if the estimated rotation is inaccurate[^1], noisy gradients will arise and harm the depth CNN in backpropagation. Therefore, we conclude that the rotational motion behaves as the noise to unsupervised depth learning. #### Pure-translation transformation. A pure-translation transformation means that ${\bf R}$ is an identity matrix in [Eqn. \[eqn:full\_tansformation\]]{}. 
Then we have $$d_2 \begin{bmatrix} u_{2} \\ v_{2}\\ 1 \end{bmatrix} = d_1 \begin{bmatrix} u_{1} \\ v_{1} \\ 1 \end{bmatrix} + {\bf K} {\bf t} = d_1 \begin{bmatrix} u_{1} \\ v_{1} \\ 1 \end{bmatrix} + \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix},$$ where ($f_x$ $f_y$) are camera focal lengths, and ($c_x$, $c_y$) are principal point offsets. Solving the above equation, we have $$\label{eqn:translation} \begin{cases} d_2 u_2 = d_1 u_1 + f_x t_1 + c_x t_3 \\ d_2 v_2 = d_1 v_1 + f_y t_2 + c_y t_3 \\ d_2 = d_1 + t_3 \end{cases} \begin{cases} u_2 = (d_1 u_1 + f_x t_1 + c_x t_3) / (d_1 + t_3) \\ v_2 = (d_1 v_1 + f_y t_2 + c_y t_3) / (d_1 + t_3). \\ \end{cases}$$ It shows that the translation vector ${\bf t}$ is coupled with the depth $d_1$ during the warping from ($u_1$, $v_1$) to ($u_2$, $v_2$). This builds the link between the depth CNN and the warping, so that gradients from the photometric loss can flow to the depth CNN via the warping. Therefore, we conclude that the translational motion provides effective supervision signals to depth learning. Distribution of decomposed camera motions in different scenarios {#sec:pose-effects} ---------------------------------------------------------------- #### Inter-frame camera motions and warping flows. [Fig. \[fig:pose-statistics\]]{}(a) shows the camera motion statistics on KITTI [@Geiger2013IJRR] and NYUv2 [@silberman2012indoor] datasets. KITTI is pre-processed by removing static images, as in done [@zhou2017unsupervised; @bian2019depth]. We pick one image of every 10 frames in NYUv2, which is denoted as *Original NYUv2*. Then we apply the proposed pre-processing ([Sec. \[sec:method\]]{}) to obtain *Rectified NYUv2*. For all datasets, we compare the decomposed camera pose of their training image pairs the absolute magnitude and inter-frame warping flow[^2]. Specifically, we compute the averaged warping flow of randomly sample points in the first image using the ground-truth depth and pose. For each point ($u_1$, $v_1$) that is warped to ($u_2$, $v_2$), the flow magnitude is $\sqrt{(u_2-u_1)^2 + (v_2-v_1)^2}$. [Fig. \[fig:pose-statistics\]]{}(a) shows that the rotational flow dominates the translational flow in *Original NYUv2* but it is opposite in KITTI. Along with the conclusion in [Sec. \[sec:proof\]]{} that the depth is supervised by the translation while the rotation behaves as the noise, this answers the question why unsupervised depth learning methods that obtain state-of-the-art results in driving scenes often fail in indoor videos. Besides, the results on *Rectified NYUv2* demonstrate that our proposed data pre-processing can address this issue. #### Warping error sensitivity to depth error. Besides the above statistics, we investigate the relation between warping error and depth error. As the network is supervised via the warping, we expect the warping error (px) to be sensitive to depth errors. For investigation, we manually generate wrong depths for randomly sampled points and then analyze their warping errors in all datasets. [Fig. \[fig:pose-statistics\]]{}(b) shows the results, which shows that the warping error in *Original NYUv2* is about $5$ times smaller than that in KITTI when the sampled points have the same relative error. This indicates another challenge in indoor videos against driving scenes. Indeed, the issue is due to the fact that the sensitivity will be significantly decreased when the camera translation is small. 
Formally, when ${\bf t}$ is close to ${\bf 0}$, based on [Eqn. \[eqn:translation\]]{}, we have: $$\label{eqn:translation2} \begin{cases} u_2 = (f_x t_1 + c_x t_3 + d_1 u_1) / (t_3 + d_1) \approx d_1 u_1 / d_1 = u_1 \\ v_2 = (f_y t_2 + c_y t_3 + d_1 v_1) / (t_3 + d_1) \approx d_1 v_1 / d_1 = v_1. \end{cases}$$ This makes it hard for the warping error to separate accurate from inaccurate depth estimates, confusing the depth CNN. We address this issue by translation-based image pairing (see [Sec. \[sec:select\]]{}). The results on *Rectified NYUv2* demonstrate the efficacy of our proposed method.

![Camera motion statistics (a) and warping error sensitivity investigation (b). “Rectified” stands for the proposed pre-processing described in [Sec. \[sec:method\]]{}. In (a), the first row shows the averaged magnitude of camera poses, , **R** for rotation and **T** for translation. The plot shows the distribution of decomposed warping flow magnitudes (px) over randomly sampled points. In (b), we manually generate wrong depths for randomly sampled points using the ground-truth depths for investigating the warping errors. Note different scale in vertical axis. []{data-label="fig:pose-statistics"}](imgs/kitti_rt_flows.pdf "fig:"){width="0.3\linewidth"}

(Figure header: KITTI [@Geiger2013IJRR]: R=$0.25^{\circ}$, T=$0.99m$; Original NYUv2 [@silberman2012indoor]: R=$2.28^{\circ}$, T=$0.05m$; Rectified NYUv2: R=$0.68^{\circ}$, T=$0.24m$. Remaining panels: imgs/nyu_10_rt_flows.pdf and imgs/nyu_rect_rt_flows.pdf in row (a); imgs/kitti_depth_flow_k1.pdf, imgs/nyu_10_depth_flow.pdf, and imgs/nyu_rect_depth_flow.pdf in row (b). The caption above applies to the figure as a whole.)

Proposed data processing {#sec:method}
========================

The above analysis suggests that unsupervised depth learning frameworks favour image pairs that have small rotational and sufficient translational motions for training. However, unlike driving sequences, videos captured by handheld cameras tend to have larger rotational and smaller translational motions, as shown in [Fig. \[fig:pose-statistics\]]{}. In this section, we describe the proposed method to select image pairs with appropriate translation in [Sec. \[sec:select\]]{}, and to reduce the rotation of selected pairs in [Sec. \[sec:rectify\]]{}.

Translation-based image pairing {#sec:select}
-------------------------------

For high frame rate videos (, $30$fps in NYUv2 [@silberman2012indoor]), we first downsample the raw videos temporally to remove redundant images, , extract one key frame from every $m$ frames. Here, $m=10$ is used in NYUv2. The resulting data is denoted as the *Original NYUv2* in all experiments. Then, instead of only considering adjacent frames as a pair, we pair up each image with its following $k$ frames. We also let $k=10$ in NYUv2 [@silberman2012indoor]. For each image pair candidate, we compute the relative camera pose by searching for feature correspondences and using the epipolar geometry [@hartley2003multiple; @bian2019bench]. As the estimated pose is up-to-scale [@hartley2003multiple], we use the *translational flow* (, the same as in [Fig. \[fig:pose-statistics\]]{}(a)) instead of the absolute translation distance for pairing. No ground-truth data is required in the proposed method. First, we generate correspondences by using SIFT [@lowe2004distinctive] features. Then we apply the *ratio test* [@lowe2004distinctive] and GMS [@bian2019depth] to find good ones. Second, with the selected correspondences, we estimate the *essential matrix* using the five-point algorithm [@nister2004efficient] within a RANSAC [@fischler1981random] framework, and then we recover the relative camera pose. Third, for each image pair candidate, we compute the averaged magnitude of translational flows over all inlier correspondences, which is the same as in [Fig. \[fig:pose-statistics\]]{}(a).
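A minimal sketch of this pairing computation is given below (illustrative only; it uses OpenCV's SIFT, ratio test, and RANSAC-based five-point solver, omits the GMS filtering step, and the ratio and inlier thresholds are assumptions):

```python
import cv2
import numpy as np

def mean_translational_flow(img1, img2, K, ratio=0.8):
    """Relative pose and mean translational flow (px) for one candidate pair.

    K is a 3x3 numpy intrinsics matrix.  Only SIFT + ratio test + RANSAC
    five-point estimation is shown here.
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    inl1, inl2 = pts1[mask.ravel() > 0], pts2[mask.ravel() > 0]
    # Rotational flow: warp inl1 by the pure-rotation homography H = K R K^{-1}.
    H = K @ R @ np.linalg.inv(K)
    p1 = cv2.convertPointsToHomogeneous(inl1).reshape(-1, 3).T
    rot = H @ p1
    rot = (rot[:2] / rot[2]).T
    # Translational flow = overall correspondence flow minus rotational flow.
    return np.mean(np.linalg.norm(inl2 - rot, axis=1)), R, t

# Pairs whose mean translational flow is outside a chosen pixel window are dropped.
```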
Based on the distribution of warping flows on KITTI, which serves as a good reference, we empirically set the expected range as $(10,50)$ pixels. The out-of-range pairs are removed. Although running Structure-from-Motion (, COLMAP [@schonberger2016structure]) or VSLAM (, ORB-SLAM [@mur2015orb]) to compute relative camera poses is also possible, we argue that it is overkill for our problem. More importantly, these pipelines are often brittle, especially when processing videos with pure rotational motions and low-texture contents [@parra2019visual]. Compared with them, our method does not require a 3D map, and hence avoids issues such as incomplete reconstruction and tracking loss.

3-DoF weak rectification {#sec:rectify}
------------------------

In order to remove the rotational motion of selected pairs, we propose a weak rectification method. It warps two images to a common plane using the pre-computed rotation matrix $\bf R$. Specifically, (i) we first convert $\bf R$ to the rotation vector $\bf r$ using the Rodrigues formula [@trucco1998introductory] to obtain half rotation vectors for the two images (, $\frac{\bf r}{2}$ and $-\frac{\bf r}{2}$), and then we convert them back to rotation matrices $\bf R_1$ and $\bf R_2$. (ii) Given $\bf R_1$, $\bf R_2$, and the camera intrinsic $\bf K$, we warp the images to a new common image plane according to [Eqn. \[eqn:homography\]]{}. Then in the common plane, we crop their overlapped rectangular regions to obtain the weakly rectified pairs. See the Matlab pseudo code in the supplementary material. Compared with the standard rectification [@fusiello2000compact], our method only uses the rotation $\bf R$ for image warping and deliberately ignores the translation $\bf T$, so our weakly rectified pairs have 3-DoF translational motions, while the rigorously rectified pairs have 1-DoF translational motions, , corresponding points have identical vertical coordinates. The reason is that we have different input settings (, temporal frames from arbitrary-motion videos  left and right images from two horizontal cameras) and different purposes (, depth learning  stereo matching) from the latter. On the one hand, due to the rigorous 1-DoF requirement in stereo matching, the standard rectification [@fusiello2000compact] suffers in forward-motion pairs, where the epipoles lie inside the image and cause heavy deformation, , resulting in extremely large images [@fusiello2000compact]. Although polar rectification [@pollefeys1999simple] can mitigate the issue to some extent, the results are still deformed. However, this issue is avoided in our 3-DoF weak rectification, as we do not constrain the translational motion. On the other hand, the rigorous 1-DoF rectification is indeed unnecessary for depth learning. For example, related methods [@zhou2017unsupervised; @yin2018geonet; @ranjan2019cc; @monodepth2; @bian2019depth] work well in KITTI videos, where image pairs have 3-DoF translational motions, and the results are comparable to methods trained on KITTI stereo pairs [@garg2016unsupervised; @godard2017unsupervised; @zhan2018unsupervised]. Moreover, these methods show that the 3-DoF translation predicted by the Pose CNN is quite accurate, and even outperforms ORB-SLAM [@mur2015orb] on short sequences (, 5-frame segments). Due to the above reasons, we propose the 3-DoF weak rectification, which relaxes the rectification requirement and better suits the unsupervised depth learning problem.
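For concreteness, the weak rectification can be sketched in a few lines of Python/OpenCV (mirroring the Matlab pseudo code in the supplementary material; the rotation-direction convention and the omitted overlap cropping are simplifications on our part):

```python
import cv2
import numpy as np

def weak_rectify(img1, img2, K, R):
    """3-DoF weak rectification: remove the relative rotation, keep the translation.

    K : 3x3 intrinsics, R : 3x3 relative rotation (assumed to map view 1 to view 2).
    The cropping of the common overlap region is omitted for brevity.
    """
    r, _ = cv2.Rodrigues(R)              # rotation matrix -> axis-angle vector
    R2, _ = cv2.Rodrigues(-r / 2.0)      # half rotation applied to the second view
    R1 = R2.T                            # and its inverse applied to the first view
    H1 = K @ R1 @ np.linalg.inv(K)       # pure-rotation homographies
    H2 = K @ R2 @ np.linalg.inv(K)
    h, w = img1.shape[:2]
    rect1 = cv2.warpPerspective(img1, H1, (w, h))
    rect2 = cv2.warpPerspective(img2, H2, (w, h))
    return rect1, rect2
```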
In practice, we still let the Pose CNN to predict 6-DoF motions as all related works [@zhou2017unsupervised; @yin2018geonet; @ranjan2019cc; @monodepth2; @bian2019depth], where we use the predicted 3-DoF rotational motion to compensate the rotation residuals (see [Fig. \[fig:pose-statistics\]]{}) caused by the imperfect rectification, and use the predicted 3-DoF translational motion to help train the depth CNN. Experiments {#sec:experiment} =========== Method, dataset, and metrics ---------------------------- #### Method. We use the updated SC-SfMLearner [@bian2019depth], publicly available on GitHub, as our unsupervised learning framework. Compared with the original version, it replaces the encoder of depth and pose CNNs with a ResNet-18 [@he2016deep] backbone to enable training from the Imagenet [@imagenet_cvpr09] pre-trained model. Besides, to demonstrate that our proposed pre-processing is universal to different methods, we also experiment with Monodepth2 [@monodepth2] (ResNet-18 backbone) in ablation studies. For all methods, we use the default hyper-parameters, and train models for $50$ epochs. #### NYUv2 depth dataset. The NYUv2 depth dataset [@silberman2012indoor] is composed of indoor video sequences recorded by a handheld Kinect RGB-D camera at $640 \times 480$ resolution. The dataset contains 464 scenes taken from three cities. We use the officially provided 654 densely labeled images for testing, and use the rest $335$ sequences (no overlap with testing scenes) for training ($302$) and validation ($33$). The raw training sequences contain $268K$ images. It is first downsampled $10$ times to remove redundant frames, and then processed by using our proposed method, resulting in total $67K$ rectified image pairs. The images are resized to $320 \times 256$ resolution for training. #### RGB-D 7 Scenes dataset. The dataset [@shotton2013scene] contains 7 scenes, and each scene contains several video sequences ($500$-$1000$ frames per sequence), which are captured by a Kinect camera at $640 \times 480$ resolution. We follow the official train/test split for each scene. For training, we use the proposed pre-processing, and for testing, we simply extract one image from every $30$ frames. We first pre-train the model on NYUv2 dataset, and then fine-tune the model on this dataset to demonstrate the universality of the proposed method. #### Evaluation metrics. We follow previous methods [@liu2016learning; @laina2016deeper; @fu2018deep; @Yin2019enforcing] to evaluate depth estimators. Specifically, we use the mean absolute relative error (AbsRel), mean log10 error (Log10), root mean squared error (RMS), and the accuracy under threshold ($\delta_i$ &lt; $1.25^i$ , $i = 1, 2, 3$). As unsupervised methods cannot recover the absolute scale, we multiply the predicted depth maps by a scalar that matches the median with the ground truth, as done in [@zhou2017unsupervised; @bian2019depth; @monodepth2]. ![Qualitative comparison of single-view depth estimation on NYUv2 [@silberman2012indoor]. More results are attached in the supplementary material.[]{data-label="fig:show"}](imgs/vis1.pdf){width="0.98\linewidth"} Results {#sec:results} ------- #### Comparing with the state-of-the-art (SOTA) methods. [Tab. \[tab:nyu\]]{} shows the results on NYUv2 [@silberman2012indoor]. It shows that our method outperforms previous unsupervised SOTA method [@zhao2020towards] by a large margin. [Fig. \[fig:show\]]{} shows the qualitative depth results. 
Note that NYUv2 dataset is so challenging that previous unsupervised methods such as GeoNet [@yin2018geonet] is unable to get reasonable results, as reported in [@Zhou_2019_ICCV]. Besides, our method also outperforms a series of fully supervised methods [@liu2016learning; @saxena2006learning; @karsch2014depth; @liu2014discrete; @ladicky2014pulling; @li2015depth; @roy2016monocular; @wang2015towards; @eigen2015predicting; @chakrabarti2016depth]. However, it still has a gap between the SOTA supervised approach [@Yin2019enforcing]. #### Ablation studies. [Tab. \[tab:ablation\]]{} summarizes the results. First, for both SC-SfMLearner [@bian2019depth] and Monodepth2 [@monodepth2], training on our rectified data leads to significantly better results than on original data. It also demonstrates that the proposed pre-processing is independent to method chosen. Besides, note that the training is easy to collapse in original data, especially when starting from scratch. We here report the results for their successful case. #### Generalization. [Tab. \[tab:7-scene\]]{} shows the depth estimation results on 7 Scenes dataset [@shotton2013scene]. It shows that our model can generalize to previously unseen data, and fine-tuning on a few new data can boost the performance significantly. This has huge potentials to real-world applications, , we can quickly adapt our pre-trained model to a new scene. #### Timing. It takes $25$ ($28$) hours to train SC-SfMLearner [@bian2019depth] models for $50$ epochs on rectified (original) data, measured in a single 16GB NVIDIA V100 GPU. Learning curves are provided in the supplementary material, which show that our pre-processing enables faster convergence. The inference speed of models is about $210$fps on $320 \times 256$ images in a NVIDIA RTX 2080 GPU. Conclusion ========== In this paper, we investigate the degenerate motion in indoor videos, and theoretically analyze its impact on the unsupervised monocular depth learning. We conclude that (i) rotational motion dominates translational motion in videos taken by handheld devices, and (ii) rotation behaves as noises while translation contributes effective signals to learning. Moreover, we propose a novel data pre-processing method, which searches for modestly translational pairs and remove their relative rotation for effective training. Comprehensive results in different datasets and learning frameworks demonstrate the efficacy of proposed method, and we establish a new unsupervised SOTA performance in challenging indoor NYUv2 dataset. Additional details ================== #### Experimental details in Fig. 2. First, we follow [@zhou2017unsupervised; @bian2019depth] to pre-process KITTI [@Geiger2013IJRR] dataset, where static frames that are manually labelled by Eigen  [@eigen2014depth] are removed from the raw video. The images are resized to $832 \times 256$. The accurate ground truth depth and camera poses are provided by a Velodyne laser scanner and a GPS localization system. Second, as the NYUv2 [@silberman2012indoor] dataset does not provide the ground truth camera pose, we use the ORB-SLAM2 [@mur2015orb] (RGB-D mode with the ground truth depth) to compute the camera trajectory. The image resolution is $640 \times 480$. We down-sample the raw videos by picking first image of every 10 times. Third, we randomly select one long sequence from the dataset for analysis. Given a sequence, we randomly sample $1,000$ valid points (with a good depth range) per image and compute their projection magnitudes (Fig. 
2(a)) and projection errors (Fig. 2(b)) using the ground truth. For box visualization, we randomly sample $1,000$ points that are collected from the entire sequence. #### Implementation details in Sec. 3. First, for computing the feature correspondence, we use the SIFT [@lowe2004distinctive] implementation by VLFeat library. The default parameters are used. Second, we use the built-in function in Matlab library to compute the *essential matrix* and relative camera pose. The maximum RANSAC [@fischler1981random] iterations are $10K$, and the inlier threshold is $1px$. we use the following pseudo Matlab code to compute the weakly rectified images. function [ImRect1, ImRect2] = WeakRectify(Im1, Im2, K, R) % Function takes two images and their camera parameters as input, % and it returns the rectified images. % Im1, Im2: two images % K: camera intrinsic % R: relative rotation matrix % % ImRect1, ImRect2: two rectified images % Make the two image planes coplanar, by rotating each half way [R1, R2] = computeHalfRotations(R); H1 = projective2d(K * R1 / K); H2 = projective2d(K * R2 / K); % Compute the common rectangular area of the transformed images imageSize = size(Im1); [xBounds, yBounds] = computeOutputBounds(imageSize, H1, H2); % Rectify images ImRect1 = transformImage(Im1, H1, xBounds, yBounds); ImRect2 = transformImage(Im2, H2, xBounds, yBounds); end function [Rl, Rr] = computeHalfRotations(R) % Conver rotation matrix to vector representation r = rotationMatrixToVector(R); % Compute right half-rotation Rr = rotationVectorToMatrix(r / -2); % Compute left half-rotation Rl = Rr'; end Additional results ================== #### Learning curves. The following figure shows the validation loss when training on NYUv2 [@silberman2012indoor]. “Rectified” stands for the proposed pre-processing, and “pt” stands for pre-training on ImageNet [@imagenet_cvpr09]. It demonstrates that training on our rectified data leads to better results and faster convergence, compared with the original dataset. ![image](imgs/validation-loss.pdf){width="0.7\linewidth"} #### Visualization of single-view depth estimation. ![More qualitative comparison of single-view depth estimation on NYUv2 [@silberman2012indoor].[]{data-label="fig:show_supp"}](imgs/vis2.pdf "fig:"){width="0.95\linewidth"}\ [Fig. \[fig:show\_supp\]]{} shows more results on NYUv2 [@silberman2012indoor]. #### Visualization of rectification and fine-tuning effects. [Fig. \[fig:vis\]]{} shows results. In NYUv2 [@silberman2012indoor], we train models on both original data and our rectified data. In 7 Scenes [@shotton2013scene], we fine-tune the model that is pre-trained on NYUv2. The qualitative evaluation results demonstrate the efficacy and universality of our proposed pre-processing, and it demonstrates the generalization ability of pre-trained depth CNN in previously unseen scenes. -- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Qualitative results for ablation studies. In NYUv2 [@silberman2012indoor], we train models on both original data and our rectified data. In 7 Scenes [@shotton2013scene], we fine-tune the model that is pre-trained on NYUv2.[]{data-label="fig:vis"}](imgs/show1.pdf "fig:"){width="0.9\linewidth"} ![Qualitative results for ablation studies. 
In NYUv2 [@silberman2012indoor], we train models on both original data and our rectified data. In 7 Scenes [@shotton2013scene], we fine-tune the model that is pre-trained on NYUv2.[]{data-label="fig:vis"}](imgs/show2.pdf "fig:"){width="0.9\linewidth"} -- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- #### Visualization of depth and converted point cloud. [Fig. \[fig:demo\]]{} shows the video screenshot. We predict depth using our trained model on one sequence (, office) from 7 Scenes [@shotton2013scene]. The top shows the textured point cloud generated by the predicted depth map (bottom right) and source image (bottom left). The full video is attached along with this manuscript. ![Depth and point cloud visualization on 7 Scenes [@shotton2013scene]. The top shows the textured point cloud generated by the predicted depth map (bottom right) with the source image (bottom left). The full video is also attached.[]{data-label="fig:demo"}](imgs/office_02.pdf "fig:"){width="0.9\linewidth"}\ [^1]: Related work [@zhou2017unsupervised; @yin2018geonet; @ranjan2019cc; @monodepth2; @bian2019depth] shows that the Pose CNN enables more accurate translation estimation than ORB-SALM [@mur2015orb], but its predicted rotation is much worse than the latter, as demonstrated in [@bian2019depth; @zhan2019dfvo]. [^2]: We first compute the rotational flow using [Eqn. \[eqn:homography\]]{}, and then we obtain the translational flow by subtracting the rotational flow from the overall warping flow. Here, the translational flow is also called *residual parallax* in [@Li2020MannequinChallengeLT], where it is used to compute depth from correspondences and relative camera poses.
--- abstract: 'For any positive real number $p$, the $p$-frame potential of $N$ unit vectors $X:=\{{{\mathbf x}}_1,\ldots,{{\mathbf x}}_N\}\subset {{\mathbb R}}^d$ is defined as ${{\rm FP}}_{p,N,d}(X)=\sum_{i\neq j}{\vert\langle {{{\mathbf x}}_i,{{\mathbf x}}_j} \rangle\rvert}^p$. In this paper, we focus on the special case $N=d+1$ and establish the unique minimizer of ${{\rm FP}}_{p,d+1,d}$ for $p\in (0,2)$. Our results completely solve the minimization problem of $p$-frame potential when $N=d+1$, which confirms a conjecture posed by Chen, Gonzales, Goodman, Kang and Okoudjou [@Chen].' address: - 'LSEC, Inst. Comp. Math., Academy of Mathematics and System Science, Chinese Academy of Sciences, Beijing, 100091, China School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China' - ' LSEC, Inst. Comp. Math., Academy of Mathematics and System Science, Chinese Academy of Sciences, Beijing, 100091, China' author: - Zhiqiang Xu - Zili Xu title: 'The Minimizers of the $p$-frame potential' --- [^1] Introduction ============ The $p$-frame potential ----------------------- The minimal potential energy problem has been actively discussed over the last decades since its applications in physics, signal analysis and numerical integration. It aims to find the optimal distribution of $N$ points over the unit sphere in ${{\mathbb R}}^d$ with the minimal potential energy [@y; @Cohn; @Saff]. Assume that $X:=\{{{\mathbf x}}_1,\ldots,{{\mathbf x}}_N\}$ where ${{\mathbf x}}_j\in {{\mathbb R}}^d$ with $\|{{\mathbf x}}_j\|_2=1, j=1,\ldots,N$. For $p>0$, the $$\label{e2} {{\rm FP}}_{p,N,d}(X):=\sum\limits_{i=1}^{N}\sum\limits_{j\neq i}{\vert\langle {{{\mathbf x}}_i,{{\mathbf x}}_j} \rangle\rvert}^p,$$ is called [*$p$-frame potential* ]{} (see [@Ehler; @Chen]), which depicts the redundancy of these vectors to some extent and has a lot of application in signal analysis. Throughout this paper, we use $ S(N,d) $ to denote all sets of $N$ unit-norm vectors in ${{\mathbb R}}^d$. The minimization problem of the $p$-frame potential is to solve $$\label{eq:e2} {\mathop{\rm argmin}\limits_{X\in S(N,d)}} {{\rm FP}}_{p,N,d}(X).$$ This problem actually has a long history and attracted much attention. For $N\leq d$, the set of $N$ orthogonal vectors in ${{\mathbb R}}^d$ is always the minimizer of for any positive $p$ and hence we only consider the case where $N\geq d+1$. We also note that the value of ${{\rm FP}}_{p,N,d}(X)$ does not change if we replace ${{\mathbf x}}_i$ by $c_iU{{\mathbf x}}_i$ for each $i\in\{1,2,\cdots,N\}$, where $U$ is an orthogonal matrix and $c_i\in\{1,-1\}$. Thus, to state conveniently we say the minimizer of (\[eq:e2\]) is unique if the solution to (\[eq:e2\]) is unique up to a common orthogonal transformation and a real unimodular constant for each vector. Related work ------------ There are many results which presented a lower bound of ${{\rm FP}}_{p,N,d}(X)$ when $p$ is an even number. In [@Welch], Welch presented a lower bound, i.e., $$\label{e3} {{\rm FP}}_{2t,N,d}(X)\geq \frac{N^2}{\binom{d+t-1}{t}}-N,\ t=1,2,\ldots.$$ Venkov showed in [@Venkov] that the above lower bound can be sharpened when $t>1$: $$\label{e4} {{\rm FP}}_{2t,N,d}(X)\geq N^2\frac{1\cdot 3\cdot 5\dots (2t-1)}{d(d+2)\dots (d+2t-2)}-N, \ t=2,3,\ldots.$$ In [@Fickus], Benedetto and Fickus showed that any finite unit-norm tight frame (FUNTF) can achieve the lower bound in (\[e3\]) for $t=1$. The equality in (\[e4\]) holds when $X$ is spherical designs, see [@Ehler; @ROY]. 
However, when $t$ is large, the existence of spherical designs requires $N$ to be large enough, which implies that the lower bound in (\[e4\]) is not tight for small $N$. For any $p>2$, Ehler and Okoudjou provided another bound in [@Ehler]: $$\label{e5} {{\rm FP}}_{p,N,d}(X) \geq N(N-1)\left(\frac{N-d}{d(N-1)}\right)^{\frac{p}{2}},$$ where the equality holds if and only if $\{{{\mathbf x}}_1,\ldots,{{\mathbf x}}_N\}$ is an equiangular tight frame (ETF) in ${{\mathbb R}}^d$ [@Renes; @Holmes]. We take $N=d+1$ as an example. Since there always exist $d+1$ unit vectors in ${{\mathbb R}}^d$ forming an ETF [@ETF], the set of these $d+1$ vectors is the minimizer of the $p$-frame potential for $p> 2$. However, when $p\in (0,2)$, not much is known except for a few special cases. In [@Ehler], Ehler and Okoudjou solved the simplest case where $d=2$ and $N=3$ and also proved that the minimizer of the $p$-frame potential is exactly $n$ copies of an orthonormal basis if $N=nd$ where $n$ is a positive integer. In [@Glazyrin1], Glazyrin provided a lower bound for any $1\leq p\leq2$: $$\label{e7} {{\rm FP}}_{p,N,d}(X)\geq 2(N-d)\frac{1}{p^{\frac{p}{2}}(2-p)^{\frac{2-p}{2}}},$$ but the condition under which the equality holds is very restrictive. In [@Chen], Chen, Gonzales, Goodman, Kang and Okoudjou considered the special case where $N=d+1$. In particular, numerical experiments in [@Chen] show that the set $L_k^d$, which is called a lifted ETF, seems to be the minimizer of the $p$-frame potential, where $k$ is an integer depending on $p$. Here, $L_k^d=\{{{\mathbf x}}_1,\ldots,{{\mathbf x}}_{d+1}\}\subset {{\mathbb R}}^d$ is defined as a set of $d+1$ unit vectors in ${{\mathbb R}}^d$ satisfying $$\label{gram} {\vert\langle {{{\mathbf x}}_i,{{\mathbf x}}_j} \rangle\rvert}:=\left\{ \begin{array}{cl} \frac{1}{k} & i,j\in \{1,\ldots,k+1\}, i\neq j\\ 1 & i=j\\ 0 & else \end{array} \right. .$$ Note that $\{{{\mathbf x}}_i\}_{i=1}^{k+1}\subset L_k^d$ actually forms an ETF in some subspace $W\subset {{\mathbb R}}^d$ of dimension $k$, and the remaining $d-k$ vectors form an orthonormal basis in the orthogonal complement space of $W$. More precisely, the following conjecture is proposed in [@Chen]: \[conj\] Suppose $d\geq2$. Set $p_0:=0$, $p_d:=2$ and $p_k:=\frac{\ln(k+2)-\ln(k)}{\ln(k+1)-\ln(k)}$ for each $k\in\{1,2,\ldots,d-1\}$. Then, when $p\in(p_{k-1},p_k]$, $k=1,2,\ldots,d$, the set $L_k^d$ minimizes the $p$-frame potential. The cases $d=2$ and $p=2$ for Conjecture \[conj\] are already solved in [@Ehler] and [@Fickus], respectively. The first new result for Conjecture \[conj\] was obtained by Glazyrin in [@Glazyrin2], who showed that an orthonormal basis in ${{\mathbb R}}^d$ plus a repeated vector minimizes ${{\rm FP}}_{p,d+1,d}(X)$ for any $p\in[1,2(\frac{\ln{3}}{\ln{2}}-1)]$. Combining Glazyrin’s result with the previous ones, the minimizer of ${{\rm FP}}_{p,d+1,d}(X)$ is only known for $p\in[1,2(\frac{\ln{3}}{\ln{2}}-1)]\cup[2,\infty)$. Recently, Park extended Glazyrin’s result to the case $N=d+m$ where $1\leq m<d$, and showed that an orthonormal basis plus $m$ repeated vectors is the minimizer for any $p\in[1,2\frac{\ln{(2m+1)}-\ln{(2m)}}{\ln{(m+1)}-\ln{(m)}}]$ (see [@Park]). But the minimal $p$-frame potential problem remains open for the case $N=d+1$ when $d>2$. Our contributions ----------------- The aim of this paper is to confirm Conjecture \[conj\]; we also show that the minimizer is unique provided $p\neq p_k$.
Our main result is the following theorem which completely solves the minimal $p$-frame potential problem for the case where $N=d+1$. \[T1\] Let $d\geq 2$ be an integer. Set $p_0:=0$, $p_d:=2$ and $p_k:=\frac{\ln(k+2)-\ln(k)}{\ln(k+1)-\ln(k)}$ for each $k\in\{1,2,\ldots,d-1\}$. Assume that $p\in(0,2)$ is a real number. Let $X=\{{{\mathbf x}}_1,\ldots,{{\mathbf x}}_N\}$ be a set of $N$ unit vectors in ${{\mathbb R}}^d$, where $N=d+1$. 1. For $p\in (p_{k-1},p_k), k=1,2,\ldots,d$, then for any $X\in S(d+1,d) $ we have $ {{\rm FP}}_{p,d+1,d}(X)\geq (k+1)k^{1-p} $ and equality holds if and only if $X=L_k^d$. 2. For $p=p_k, \, k=1,\ldots, d-1$, then for any $X\in S(d+1,d) $ we have $ {{\rm FP}}_{p,d+1,d}(X)\geq (k+1)k^{1-p_k} $ and equality holds if and only if $X=L_k^d$ or $X=L_{k+1}^d$. Based on the previous results and Theorem \[T1\] in this paper, in Table \[tab1\], we list the related results of the minimal $p$-frame potential problem when $N=d+1$. Note that $2(\frac{\ln{3}}{\ln{2}}-1)\approx 1.16993$ and $\frac{\ln{3}}{\ln{2}}\approx 1.58496$. Hence, $[1,2(\frac{\ln{3}}{\ln{2}}-1)]$ is a subinterval in $(0,\frac{\ln{3}}{\ln{2}})$. In Table \[tab1\], we also use the fact that $L_1^d$ is essentially an orthonormal basis plus a repeated vector and $L_d^d$ forms an ETF in ${{\mathbb R}}^d$. captype[table]{} [|l|l|l]{} $p$ & Minimizers &\ $p\in[1,2(\frac{\ln{3}}{\ln{2}}-1)]$ & $L_1^d$ [@Glazyrin2] &\ $p=2$ & $L_d^d$ [@Fickus] &\ $p\in(2,\infty)$ & $L_d^d$ [@Ehler] &\ $p\in(0,\frac{\ln{3}}{\ln{2}})$ & $L_1^d$ (Theorem \[T1\]) &\ $p\in\left(\frac{\ln({(k+1)}/{(k-1)})}{\ln({k}/{(k-1)})},\frac{\ln({(k+2)}/{k})}{\ln({(k+1)}/{k})}\right),k=2,3,\ldots,d-1$ & $L_k^d$ (Theorem \[T1\]) &\ $p\in(\frac{\ln({(d+1)}/{(d-1)})}{\ln({d}/{(d-1)})},2)$ & $L_d^d$ (Theorem \[T1\]) &\ $p=\frac{\ln((k+2)/k)}{\ln((k+1)/k)},\ k=1,2,\ldots,d-1$ & $L_k^d$ and $L_{k+1}^d$(Theorem \[T1\]) &\ Organization ------------ The paper is organized as follows. In Section 2, we prove Theorem \[T1\] based on Lemma \[M\]. The proof of Lemma \[M\] is presented in Section 3. Proof of Theorem $\ref{T1}$ =========================== In this section, we present the proof of Theorem \[T1\]. The following lemma plays a key role in our proof of Theorem \[T1\]. We postpone its proof to Section 3. To this end, we set $$M_{\alpha, d+1}(z_1,\ldots,z_{d+1})\,\,:=\,\,\sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}z_i^\alpha z_j^\alpha$$ where $\alpha>1$. We consider $$\label{con0} {\mathop{\rm argmax}\limits_{(z_1,\ldots,z_{d+1})}} M_{\alpha,d+1}(z_1,\ldots,z_{d+1}),\quad {\rm s.t.}\quad z_1+\cdots+z_{d+1}=1, z_1\geq0,\ldots,z_{d+1}\geq 0,$$ where $\alpha> 1$. Noting that $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ is a symmetric function on $d+1$ variables $z_1,\ldots,z_{d+1}$, we view any permutation of a solution to (\[con0\]) as the same one. \[M\] Suppose that $d\geq 1$ is an integer. Set $$\label{eq:ak} a_k:=\left\{ \begin{array}{cl} \infty & k=0\\ \frac{1}{2}\cdot\frac{\ln(k+2)-\ln(k)}{\ln(k+2)-\ln(k+1)} & k\in\{1,2,\ldots,d-1\}\\ 1 & k=d \end{array} \right. .$$ 1. If $ \alpha\in (a_{k},a_{k-1})$ then the unique solution to (\[con0\]) is $\left(\underbrace{\frac{1}{k+1},\ldots,\frac{1}{k+1}}_{k+1},\underbrace{0,\ldots,0}_{d-k}\right)$ where $k=1,2,\ldots,d$. 2. Assume that $\alpha=a_k$ where $k=1,\ldots,d-1$. The (\[con0\]) has exactly two solutions: $\left(\underbrace{\frac{1}{k+1},\ldots,\frac{1}{k+1}}_{k+1},\underbrace{0,\ldots,0}_{d-k}\right)$ and $\left(\underbrace{\frac{1}{k+2},\ldots,\frac{1}{k+2}}_{k+2},\underbrace{0,\ldots,0}_{d-k-1}\right)$. 
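Before turning to the proof, Theorem \[T1\] can be sanity-checked numerically. Since ${{\rm FP}}_{p,d+1,d}$ depends on $X$ only through its Gram matrix, (\[gram\]) immediately gives ${{\rm FP}}_{p,d+1,d}(L_k^d)=(k+1)k^{1-p}$. The sketch below (ours, for illustration only; the random search of course proves nothing) compares these candidate values across $k$ and checks that no randomly sampled configuration in $S(d+1,d)$ does better.

```python
# Illustrative numerical check of Theorem 1 (not part of the proof).
# Candidate values FP_p(L_k^d) = (k+1) k^{1-p} follow from the Gram matrix;
# a crude random search over S(d+1, d) should never beat the best of them.
import numpy as np

def frame_potential(X, p):
    """X: (N, d) array with unit-norm rows; returns sum_{i != j} |<x_i, x_j>|^p."""
    G = X @ X.T
    off = G[~np.eye(len(G), dtype=bool)]
    return float(np.sum(np.abs(off) ** p))

def candidate_value(k, p):
    return (k + 1) * k ** (1 - p)        # FP_p(L_k^d)

d, p = 5, 1.3                            # p < p_1 = ln 3 / ln 2, so L_1^d should win
k_best = min(range(1, d + 1), key=lambda k: candidate_value(k, p))
best = candidate_value(k_best, p)
print(f"predicted minimizer: L_{k_best}^{d} with FP = {best:.6f}")

rng = np.random.default_rng(0)
found = np.inf
for _ in range(20000):                   # crude random search over S(d+1, d)
    X = rng.standard_normal((d + 1, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    found = min(found, frame_potential(X, p))
print(f"best random configuration: {found:.6f} (should not beat {best:.6f})")
```

For $d=5$ and $p=1.3<p_1=\ln 3/\ln 2$ the predicted minimizer is $L_1^5$ with potential $2$, consistent with part (i) of the theorem.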
We next state the proof of Theorem \[T1\]. We would like to mention that the method of estimating the $p$-frame potential in the proof is motivated by the work of Bukh and Cox [@Bukh] (see also [@Dels]). \(i) Note that $ {{\rm FP}}_{p,d+1,d}(L_k^d)= (k+1)k^{1-p} $. To this end, it is enough to show that $ {{\rm FP}}_{p,d+1,d}(X)\geq (k+1)k^{1-p} $ when $p\in(p_{k-1},p_k)$ and $L_k^d$ is the unique minimizer for each $k\in\{1,2,\ldots,d\}$. Recall that $X=\{{{\mathbf x}}_i\}_{i=1}^{d+1}\subset {{\mathbb R}}^d$ is a set of $d+1$ unit-norm vectors. Set $$G=(\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle)\,\,\in\,\, {{\mathbb R}}^{(d+1)\times(d+1)}.$$ Note that ${{\rm rank}}( G)\leq d$. Thus, there exists a unit vector ${{\mathbf y}}=(y_1,\ldots,y_{d+1})^T\in{{\mathbb R}}^{d+1}$ such that $G{{\mathbf y}}=0$. We compute the value of $(i,i)$-entry of the matrix $G{{\mathbf y}}{{\mathbf y}}^T$ and obtain that $$0=(G{{\mathbf y}}{{\mathbf y}}^T)_{i,i}=\sum\limits_{j=1}^{d+1}\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle\cdot y_iy_j=y_i^2+\sum\limits_{j\neq i}\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle\cdot y_iy_j,$$ which implies that $$y_i^2=|\sum\limits_{j\neq i}\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle\cdot y_iy_j|\leq \sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|\cdot |y_i||y_j|.$$ Summing up the above inequality from $1$ to $d+1$, we obtain that $$1\,\,=\,\,\sum\limits_{i=1}^{d+1}y_i^2\,\,\leq\,\, \sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|\cdot |y_i||y_j|.$$ We next present the proof of (i) with dividing the proof into two cases: [*Case 1: $p\in (0,1]$.*]{} Note that $(0,1]\subset (p_0,p_1)$. It is enough to prove that the unique solution to $ {\mathop{\rm argmin}\limits_{X\in S(d+1,d)}}{{\rm FP}}_{p,d+1,d}(X) $ is $X=L_1^d$ for any $p\in(0,1]$. We first consider the case where $p=1$. Since $$|y_i||y_j |\,\,\leq\,\, \frac{y_i^2+y_j^2}{2}\,\,\leq\,\,\frac{1}{2},\,\, \text{ for all } i\neq j$$ we obtain that $$1\leq \sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|\cdot |y_i||y_j|\leq \frac{1}{2}\cdot\sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|,$$ which implies $$\label{p1} \sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|\,\,\geq\,\, 2.$$ The equality in (\[p1\]) holds if and only if there exist $i_1,i_2\in \{1,2,\ldots,d+1\}$ with $i_1\neq i_2$ such that $|\langle {{\mathbf x}}_{i_1},{{\mathbf x}}_{i_2} \rangle|=1$ and the rest terms in the sum $\sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|$ are all zero. We arrive at the conclusion. We next turn to the case $p\in(0,1)$. Noting ${\vert\langle { {{\mathbf x}}_i,{{\mathbf x}}_j } \rangle\rvert}\leq 1$, we have $$|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|^p\geq |\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|, \text{ for all } i\neq j$$ for any $p\in(0,1)$. Thus, we have $$\label{11} \sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|^p\geq \sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|\geq 2.$$ The equality holds if and only if $|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|=0$ or 1 for any distinct $i,j$. Thus, the minimizer of $1$-frame potential is also the unique minimizer of $p$-frame potential for any $p\in(0,1)$. 
[*Case 2: $1<p<2$.*]{} For $1<p< 2$, we use H$\ddot{\text{o}}$lder’s inequality to obtain that $$\label{Holder} 1\leq \sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|\cdot |y_i||y_j|\leq (\sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|^p)^{\frac{1}{p}}(\sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|y_i|^q|y_j |^q)^{\frac{1}{q}}$$ where $q>2$ satisfies $\frac{1}{p}+\frac{1}{q}=1$. The second equality in (\[Holder\]) holds if and only if there exists a constant $c\in{{\mathbb R}}$ such that $$\label{Holder2} c\cdot |\langle {{\mathbf x}}_i,{{\mathbf x}}_j \rangle|^{p-1}\,\, =\,\,|y_i||y_j|, \text{ for all } i\neq j.$$ The (\[Holder\]) implies $$\label{13} {{\rm FP}}_{p,d+1,d}(X)\geq \frac{1}{(\sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}|y_i|^q|y_j |^q)^{\frac{p}{q}}}.$$ Let $\alpha=\frac{q}{2}$ and $z_i=|y_i|^2$ for $i=1,2,\ldots,d+1$. Then we can rewrite the inequality (\[13\]) as $$\label{eq:eq1} {{\rm FP}}_{p,d+1,d}(X)\,\,\geq\,\, \frac{1}{(M_{\alpha,d+1}(z_1,\ldots,z_{d+1}))^{\frac{p}{q}}},$$ where $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})=\sum\limits_{i=1}^{d+1}\sum\limits_{j\neq i}z_i^{\alpha}z_j^{\alpha}$, $z_1+\cdots+z_{d+1}=1, z_i\geq 0, i=1,\ldots,d+1$. Note that $\alpha=\frac{q}{2}=\frac{1}{2}+\frac{1}{2}\frac{1}{p-1}$. If $p\in(p_{k-1},p_k)\cap(1,2)$ where $k\in\{1,\ldots,d\}$, then $\alpha\in (a_{k-1},a_k)$. Here, $a_k$ is defined in (\[eq:ak\]). According to Lemma \[M\], $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ arrives at its maximum, which is $k(k+1)^{1-2\alpha}$, only when $z_i=\frac{1}{k+1}$ for $i=1,\ldots,k+1$ and $z_i=0$ for $i\geq k+2$. Thus, we obtain that $$\label{14} {{\rm FP}}_{p,d+1,d}(X) \,\,\geq\,\, \frac{1}{(k(k+1)^{1-2\alpha})^{\frac{p}{q}}}=(k+1)k^{1-p}$$ when $p\in(p_{k-1},p_k)\cap(1,2)$, $k=1,\ldots,d$. Combining the equation (\[Holder2\]), the equality in (\[14\]) holds if and only if for $i\neq j$ $$\label{eq1:ak} {\vert\langle {{{\mathbf x}}_i,{{\mathbf x}}_j} \rangle\rvert}=\left\{ \begin{array}{cl} \frac{1}{k}, & i,j\in \{1,\ldots,k+1\}\\ 0, & else \end{array} \right. ,$$ which implies that $X=L_k^d$. Combining the result of [*Case 1*]{}, we arrive at the conclusion (i). (ii). Note that $ {{\rm FP}}_{p,d+1,d}(L_k^d)={{\rm FP}}_{p,d+1,d}(L_{k+1}^d)= (k+1)k^{1-p} $ when $p=p_k$, $k=1,2,\ldots, d-1$. To this end, it is enough to prove that ${{\rm FP}}_{p_k,d+1,d}(X) \,\,\geq\,\, (k+1)k^{1-p_k}$ and the minimizers are $L_{k}^d$ and $L_{k+1}^d$. Since $p_k\in(1,2)$ for each $k\in\{1,2,\ldots,d-1\}$, we follow our analysis in (i). If $p=p_k$ where $k\in\{1,\ldots,d-1\}$, then $\alpha$ in (\[eq:eq1\]) is equal to $a_k$. According to Lemma \[M\], $M_{a_k,d+1}(z_1,\ldots,z_{d+1})$ arrives at its maximum, which is $k(k+1)^{1-2a_k}$, at exactly two points: $\left(\underbrace{\frac{1}{k+1},\ldots,\frac{1}{k+1}}_{k+1},\underbrace{0,\ldots,0}_{d-k}\right)$ and $\left(\underbrace{\frac{1}{k+2},\ldots,\frac{1}{k+2}}_{k+2},\underbrace{0,\ldots,0}_{d-k-1}\right)$. Thus, we obtain $$\label{144} {{\rm FP}}_{p_k,d+1,d}(X) \,\,\geq\,\, \frac{1}{(k(k+1)^{1-2a_k})^{\frac{p_k}{2\cdot a_k}}}=(k+1)k^{1-p_k}$$ for $k=1,2,\ldots,d-1$. According to (\[Holder2\]), the equality in (\[144\]) holds if and only if $X=L_k^d$ or $L_{k+1}^d$, which implies the conclusion (ii). To state conveniently, we state Theorem \[T1\] and its proof for the real case. In fact, it is easy to extend the result in Theorem \[T1\] to complex case. 
Moreover, the method which is employed to prove Theorem \[T1\] can be used to estimate the matrix potential, i.e. $\sum\limits_{i\neq j}|A_{i,j}|^p$, where $A_{i,j}$ is the $(i,j)$-entry of any matrix $A\in {{\mathbb C}}^{(d+1)\times (d+1)}$ whose rank is $d$ and diagonal elements are equal to 1. Proof of Lemma $\ref{M}$ ======================== In this section, we present the proof of Lemma \[M\]. We begin with introducing the following lemma, which portrays the feature of the local extreme point of (\[con0\]). To state conveniently, we set $$\label{fm} f_{m_1,\alpha,d+1}(t):=M_{\alpha,d+1}\left(\underbrace{t,\ldots,t}_{m_1},\underbrace{s,\ldots,s}_{d+1-m_1} \right)=(m_1\cdot t^{\alpha}+(d+1-m_1)\cdot s^{\alpha})^2-(m_1\cdot t^{2\alpha}+(d+1-m_1)\cdot s^{2\alpha}),$$ where $s:=\frac{1-m_1t}{d+1-m_1}, m_1\in [1,\frac{d+1}{2}]\cap {{\mathbb Z}}$. \[L1\] Assume that $(w_1,\ldots,w_{d+1})$ is a local maximum point of $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ with the constrains in (\[con0\]) and $w_i>0$ for each $i\in\{1,2,\dots,d+1\}$. Then 1. The $(w_1,\ldots,w_{d+1})$ is in the form of $\left(\underbrace{t_0,\ldots,t_0}_{m_1},\underbrace{s_0,\ldots,s_0}_{d+1-m_1}\right)$ up to a permutation where $m_1\in [1,\frac{d+1}{2}]\cap {{\mathbb Z}}, t_0\in (0,\frac{1}{m_1})$ and $s_0=\frac{1-m_1t_0}{d+1-m_1}$. 2. The $t_0$ is a local maximum point of $f_{m_1,\alpha,d+1}(t)$. \(i) We claim that $w_1,\ldots,w_{d+1}$ can only take at most two different values. Note that $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ is a symmetric function on $z_1,\ldots,z_{d+1}$. Hence, up to a permutation, we can write $(w_1,\ldots,w_{d+1})$ as $\left(\underbrace{t_0,\ldots,t_0}_{m_1},\underbrace{s_0,\ldots,s_0}_{d+1-m_1}\right)$ for some $t_0\in(0,\frac{1}{m_1})$ and $s_0=\frac{1-m_1t_0}{d+1-m_1}$. We remain to prove the claim. Set $r_0(z_1,\ldots,z_{d+1}):=z_1+\cdots+z_{d+1}-1$ and $$r_i(z_1,\ldots,z_{d+1}):=-z_i,\quad i=1,2,\ldots,d+1.$$ Since $(w_1,\ldots,w_{d+1})$ is a local extreme point, according to KKT conditions, there exist constants $\lambda$ and $\mu_i, i=1,2,\ldots,d+1$, which are called KKT multipliers, such that the followings hold: $$\begin{aligned} \nabla M_{\alpha,d+1}(w_1,\ldots,w_{d+1})&=\lambda\nabla r_0(w_1,\ldots,w_{d+1})+\sum\limits_{i=1}^{d+1}\mu_i\nabla r_i(w_1,\ldots,w_{d+1})\label{kkt1}\\ r_0(w_1,\ldots,w_{d+1})&=0\label{kkt2}\\ r_i(w_1,\ldots,w_{d+1})&\leq 0, i=1,2,\ldots,d+1\label{kkt3}\\ \mu_ir_i(w_1,\ldots,w_{d+1})&=0, i=1,2,\ldots,d+1\label{kkt4}\\ \mu_i&\geq 0, i=1,2,\ldots,d+1.\label{kkt5}\end{aligned}$$ Combining $w_i>0$ and (\[kkt4\]), we can obtain that $\mu_i=0, i=1,2,\ldots,d+1$. Substituting $\mu_i=0$ into (\[kkt1\]), we obtain that $$\label{23} 2\alpha\cdot w_i^{\alpha-1}((w_1^\alpha+\cdots+w_{d+1}^\alpha)-w_i^\alpha)=\lambda, \quad i=1,\ldots,d+1,$$ which implies that $\lambda>0$ and $$\frac{\lambda}{2\alpha w_i^{\alpha-1}}+w_i^\alpha=w_1^\alpha+\cdots+w_{d+1}^\alpha,\quad i=1,\ldots,d+1.$$ Hence, we obtain that $$\label{eq:fw} f(w_1)\,=\,f(w_2)\,=\,\cdots\,=\,f(w_{d+1})\,>\,0$$ where $f(x):=x^\alpha+\frac{\lambda}{2\alpha}\cdot \frac{1}{x^{\alpha-1}}$. Set $w_0:=(\frac{\alpha-1}{2\alpha^2}\cdot \lambda)^{\frac{1}{2\alpha-1}}$. Noting that $f'(x)=\alpha x^{\alpha-1}-\frac{\lambda (\alpha-1)}{2\alpha}x^{-\alpha}$, we obtain that $f'(x)<0, x\in (0,w_0)$, $f'(w_0)=0$ and $f'(x)>0, x\in (w_0,\infty)$, which implies that, for any $c\in {{\mathbb R}}$, the cardinality of the set $\{x : f(x)=c, x>0\}$ is less than or equal to $2$ . 
Hence, the (\[eq:fw\]) implies that $w_1,\ldots,w_{d+1}$ can take at most two different values. \(ii) Combining $$f_{m_1,\alpha,d+1}(t)=M_{\alpha,d+1}\left(\underbrace{t,\ldots,t}_{m_1},\underbrace{s,\ldots,s}_{d+1-m_1} \right)$$ with $\left(\underbrace{t_0,\ldots,t_0}_{m_1},\underbrace{s_0,\ldots,s_0}_{d+1-m_1}\right)$ being a local maximum point of $M_{\alpha,d+1}\left(\underbrace{t,\ldots,t}_{m_1},\underbrace{s,\ldots,s}_{d+1-m_1}\right)$, we obtain the conclusion immediately. \[L4\] Let $m_1\in[1,\frac{d+1}{2}]\cap {{\mathbb Z}}$ and $m_2=d+1-m_1$ where $d\geq 2$ is an integer. Set $$h(x):=(m_2-1)x^{4\alpha-2}-m_2\cdot x^{2\alpha}+m_1\cdot x^{2\alpha-2}-(m_1-1)$$ where $\alpha>1$. Then 1. The $h'(x)$ has at most two zeros on $(0,\infty)$, and hence $h(x)$ has at most two extreme points on $(0,\infty)$; 2. If $\alpha< 1+\frac{1}{d-1}$, then there exist $\hat{x}_1\in(0,1)$, $\hat{x}_2\in(1,\infty)$ such that $h'(x)>0$ for $x\in (0,\hat{x}_1)\cup(\hat{x}_2,\infty)$ and $h'(x)<0$ for $x\in (\hat{x}_1,\hat{x}_2)$; 3. If $\alpha\geq 1+\frac{1}{d-1}$, then $h(x)$ is positive and monotonically increasing on $(1,\infty)$; 4. If $\alpha=1+\frac{1}{d-1}$ and $m_1=\frac{d+1}{2}$, then $h(x)$ is monotonically increasing on $(0,\infty)$; 5. If $\alpha=1+\frac{1}{d-1}$ and $m_1<\frac{d+1}{2}$, then there exists $\hat{x}_3\in(0,1)$ such that $h'(x)>0$ for $x\in (0,\hat{x}_3)\cup(1,\infty)$ and $h'(x)<0$ for $x\in (\hat{x}_3,1)$. (i). By computation, we have $$\label{26} h'(x)=h_1(x)\cdot x^{2\alpha-3},$$ where $h_1(x)=(4\alpha-2)\cdot (m_2-1)x^{2\alpha}-2\alpha\cdot m_2\cdot x^2+(2\alpha-2)\cdot m_1$. Set $$\label{x0} x_0:=\left(\frac{m_2}{(2\alpha-1)\cdot (m_2-1)}\right)^{\frac{1}{2\alpha-2}}.$$ Noting that $h_1'(x)<0, x\in (0,x_0)$, $h_1'(x)>0, x\in (x_0,\infty)$ and $h_1'(x_0)=0$, which implies that $h_1(x)=0$ has at most two distinct solutions on $(0,\infty)$. According to (\[26\]), $h'(x)=0$ also has at most two distinct solutions on $(0,\infty)$, which implies the conclusion. (ii). When $\alpha< 1+\frac{1}{d-1}$, we obtain that $h_1(1)=2\alpha(d-1)-2d<0$. Then we have $$\label{hh1} \inf\limits_{x>0} h_1(x)=h_1(x_0)\leq h_1(1)<0.$$ Observing that $m_2>1$ and $\alpha>1$, we obtain that $$\label{hh2} h_1(0)=(2\alpha-2)\cdot m_1>0$$ $$\label{hh3} \lim\limits_{x\to+\infty} h_1(x)=+\infty$$ Thus, combining (\[hh1\]), (\[hh2\]) and (\[hh3\]), we obtain that $h_1(x)=0$ has exactly two solutions $\hat{x}_1$, $\hat{x}_2$, where $\hat{x}_1\in(0,1)$, $\hat{x}_2\in(1,\infty)$. By the monotonicity of $h_1(x)$, we also know that $h_1(x)<0$, $x\in(\hat{x}_1,\hat{x}_2)$ and $h_1(x)>0$, $x\in(0,\hat{x}_1)\cup(\hat{x}_2,\infty)$. According to (\[26\]), we obtain that $h'(x)<0$, $x\in(\hat{x}_1,\hat{x}_2)$ and $h'(x)>0$, $x\in(0,\hat{x}_1)\cup(\hat{x}_2,\infty)$. (iii). Note that $$x_0\,=\,\left(\frac{1}{(2\alpha-1)\cdot (1-\frac{1}{m_2})}\right)^{\frac{1}{2\alpha-2}}\,\leq\,\left(\frac{1}{(1+\frac{2}{d-1})\cdot (1-\frac{2}{d+1})}\right)^{\frac{1}{2\alpha-2}} \,=\,1$$ where we use $m_2=d+1-m_1\geq\frac{d+1}{2}$ and $\alpha\geq 1+\frac{1}{d-1}$. So the function $h_1(x)$ is monotonically increasing when $x>1$. Noting that $h_1(1)=2\alpha(d-1)-2d\geq 0$, we have $h_1(x)>0$ on $(1,\infty)$, which implies that $h(x)$ is monotonically increasing on $(1,\infty)$. Since $h(1)=0$, we conclude that $h(x)>0$ when $x>1$. (iv). When $\alpha=1+\frac{1}{d-1}$ and $m_1=\frac{d+1}{2}$, we have $h_1(1)=0$ and $x_0=1$ from (\[x0\]), which implies that $h_1(x_0)=0$. 
Since $x_0$ is the minimum point of $h_1(x)$, we obtain $h_1(x)\geq 0$ on $(0,\infty)$. Then from (\[26\]) we see that $h'(x)\geq 0$ on $(0,\infty)$, which implies the conclusion. (v). Noting that $x_0\neq 1$ provided $\alpha=1+\frac{1}{d-1}$ and $m_1<\frac{d+1}{2}$, we have $$\label{hhh1} \inf\limits_{x>0} h_1(x)=h_1(x_0)< h_1(1)=2\alpha(d-1)-2d=0.$$ Since $\alpha=1+\frac{1}{d-1}$, from (iii) we have that $h(x)$ is monotonically increasing on $(1,\infty)$. Noting that (\[hh2\]) and (\[hh3\]) also hold for $\alpha=1+\frac{1}{d-1}$, we conclude that $h_1(x)=0$ has exactly two solutions $\hat{x}_3$ and 1, where $\hat{x}_3\in(0,1)$. From (\[26\]), we obtain that $h'(x)<0$, for $x\in(\hat{x}_3,1)$ and $h'(x)>0$, for $x\in(0,\hat{x}_3)\cup(1,\infty)$. We next study the local maximum point of $f_{m_1,\alpha,d+1}(t)$ for each $m_1\in[1,\frac{d+1}{2}]\cap {{\mathbb Z}}$ and $\alpha\in (1,1+\frac{1}{d-1}]$. The following lemma shows that if $1<\alpha\leq1+\frac{1}{d-1}$, then $f_{m_1,\alpha,d+1}(t)$ arrives at its local maximum at $t_0$ only if $t_0\in\{0,\frac{1}{d+1},\frac{1}{m_1}\}$. \[L2\] Assume $d\geq 2$ is an integer and $m_1\in[1,\frac{d+1}{2}]\cap {{\mathbb Z}}$. 1. Assume that $1<\alpha<1+\frac{1}{d-1}$. Assume that $t_0\in [0,\frac{1}{m_1}] $ and $f_{m_1,\alpha,d+1}(t)$ has a local maximum at $t_0$. Then $ t_0\in \left\{ 0, \frac{1}{d+1}, \frac{1}{m_1}\right\}. $ 2. Assume that $\alpha=1+\frac{1}{d-1}$. Assume that $t_0\in [0,\frac{1}{m_1}] $ and $f_{m_1,\alpha,d+1}(t)$ has a local maximum at $t_0$. Then $ t_0\in \left\{ 0, \frac{1}{m_1}\right\}. $ To state conveniently, let $m_2:=d+1-m_1>1$. Recall that $$f_{m_1,\alpha,d+1}(t)=M_{\alpha,d+1}\left(\underbrace{t,\ldots,t}_{m_1},\underbrace{s,\ldots,s}_{m_2} \right)=(m_1\cdot t^{\alpha}+m_2\cdot s^{\alpha})^2-(m_1\cdot t^{2\alpha}+m_2\cdot s^{2\alpha}),$$ where $s=\frac{1-m_1\cdot t}{m_2}$. Noting that $t,s\geq 0$ and $m_1\cdot t+m_2\cdot s=1$, we can set $t=\frac{\cos^2\theta}{m_1}$, $s=\frac{\sin^2\theta}{m_2}$, where $\theta\in[0,\frac{\pi}{2}]$. We use the substitution $t=\frac{\cos^2\theta}{m_1}, s=\frac{\sin^2\theta}{m_2}$ to transform the function from $f_{m_1,\alpha,d+1}(t)$ to $$g(\theta):=f_{m_1,\alpha,d+1}\left(\frac{\cos^2\theta}{m_1}\right)=\frac{m_1(m_1-1)}{m_1^{2\alpha}}(\cos\theta)^{4\alpha}+\frac{m_2(m_2-1)}{m_2^{2\alpha}}(\sin\theta)^{4\alpha}+\frac{2m_1m_2}{m_1^{\alpha}m_2^{\alpha}}(\cos\theta\sin\theta)^{2\alpha}.$$ To this end, it is enough to study the local maximum points of $g$ on $[0,\frac{\pi}{2}]$. A simple calculation shows that $$\label{g1} \begin{aligned} g'(\theta)=&-4\alpha\cdot\frac{m_1(m_1-1)}{m_1^{2\alpha}}(\cos\theta)^{4\alpha-1}\sin\theta+4\alpha\cdot\frac{m_2(m_2-1)}{m_2^{2\alpha}}(\sin\theta)^{4\alpha-1}\cos\theta\\ &+2\alpha\cdot\frac{2m_1m_2}{m_1^{\alpha}m_2^{\alpha}}(\cos\theta\sin\theta)^{2\alpha-1}(\cos^2\theta-\sin^2\theta). \end{aligned}$$ We can rewrite $g'(\theta)$ as $$\label{g2} g'(\theta)=4\alpha\cdot \frac{m_1}{m_1^{2\alpha}}\cdot (\cos\theta)^{4\alpha-1}\sin\theta\cdot h(v),$$ where $v:=\sqrt{\frac{s}{t}}=\sqrt{\frac{m_1}{m_2}}\cdot \frac{\sin\theta}{\cos\theta}$ and $h(v):=(m_2-1)v^{4\alpha-2}-m_2\cdot v^{2\alpha}+m_1\cdot v^{2\alpha-2}-(m_1-1)$. Particularly, when $\theta=\theta_*:=\arctan(\sqrt{\frac{m_2}{m_1}})$, we have $v=\sqrt{\frac{m_1}{m_2}}\cdot \frac{\sin\theta_*}{\cos\theta_*}=1$. 
Noting that $\alpha>1$, $m_1\geq 1$ and $m_2>1$, we obtain that $$h(0)=-(m_1-1)\leq0\label{h1},\,\, h(1)=0,\,\, \lim\limits_{v\to+\infty}h(v)=+\infty.$$ Since $4\alpha\cdot \frac{m_1}{m_1^{2\alpha}}\cdot (\cos\theta)^{4\alpha-1}\sin\theta$ is positive for any $\theta\in(0,\frac{\pi}{2})$, to study the monotonicity of $g(\theta)$, it is enough to consider the sign of $h(v)$ with $v>0$. \(i) First we consider the case $1<\alpha<1+\frac{1}{d-1}$. Lemma $\ref{L4}$ shows that there exist $\hat{v}_1\in(0,1)$ and $\hat{v}_2\in(1,\infty)$ such that $h'(v)>0$ for $v\in (0,\hat{v}_1)\cup(\hat{v}_2,\infty)$ and $h'(v)<0$ for $v\in (\hat{v}_1,\hat{v}_2)$. Noting that $h(1)=0$ and $\hat{v}_1<1<\hat{v}_2$, we obtain that $h(\hat{v}_1)>0$ and $h(\hat{v}_2)<0$. Combining Lemma \[L4\] and the results above, we obtain that $h(v)=0$ has exactly one solution on $[0,\hat{v}_1)$ , say $v_1$. Similarly, $h(v)=0$ also has exactly one solution on $(\hat{v}_2,\infty)$, say $v_2$. Let $\theta_1:=\arctan(v_1\sqrt{\frac{m_2}{m_1}})$ and $\theta_2:=\arctan(v_2\sqrt{\frac{m_2}{m_1}})$. If $m_1=1$, then we have $h(0)=0$ and hence $v_1=0$. From the monotonicity of $h(v)$, we obtain that $h(v)<0$, $v\in(1,v_2)$, $h(v)>0$, $v\in(0,1)\cup({v}_2,\infty)$ and $h(v)=0$, $v\in\{0,1,v_2\}$. Then from (\[g2\]) it is easy to check that $g'(\theta)<0$, $\theta\in(\theta_*,\theta_2)$, $g'(\theta)>0$, $\theta\in(0,\theta_*)\cup(\theta_2,\frac{\pi}{2})$ and $g'(\theta)=0$, $\theta\in\{0,\theta_*,\theta_2,\frac{\pi}{2}\}$, which implies $g(\theta)$ has only two local maximum points: $\theta_*$ and $\frac{\pi}{2}$. If $m_1>1$, then $h(0)<0$, which means $v_1\in(0,\hat{v}_1)$. Thus, by the monotonicity of $h(v)$ we conclude that $h(v)<0$, $v\in(0,v_1)\cup(1,v_2)$, $h(v)>0$, $v\in({v}_1,1)\cup({v}_2,\infty)$ and $h(v)=0$, $v\in\{v_1,1,v_2\}$. We can use (\[g2\]) to transform these results to $g'(\theta)$. Hence, we obtain that $g'(\theta)<0$, $\theta\in(0,\theta_1)\cup(\theta_*,\theta_2)$, $g'(\theta)>0$, $\theta\in(\theta_1,\theta_*)\cup(\theta_2,\frac{\pi}{2})$ and $g'(\theta)=0$, $\theta\in\{0,\theta_1,\theta_*,\theta_2,\frac{\pi}{2}\}$, which implies $g(\theta)$ has only three local maximum points: 0, $\theta_*$ and $\frac{\pi}{2}$. \(ii) We next consider the case where $\alpha=1+\frac{1}{d-1}$. We divided the proof into two cases. [*Case 1: $m_1=\frac{d+1}{2}$ .*]{} Lemma $\ref{L4}$ implies that $h(v)$ is monotonically increasing on $(0,\infty)$. Noting that $h(0)=-(m_1-1)<0$ and $h(1)=0$, we have $h(v)<0$, $v\in(0,1)$ and $h(v)>0$, $v\in(1,\infty)$. We use (\[g2\]) to transform the result to $g'(\theta)$ and obtain that $g'(\theta)<0$, $\theta\in(0,\theta_*)$, $g'(\theta)>0$, $\theta\in(\theta_*,\frac{\pi}{2})$ and $g'(\theta)=0$, $\theta\in\{0,\theta_*,\frac{\pi}{2}\}$, which implies $g(\theta)$ has only two local maximum points: 0 and $\frac{\pi}{2}$. [*Case 2: $m_1<\frac{d+1}{2}$ .*]{} According to Lemma $\ref{L4}$, there exists $\hat{v}_3\in(0,1)$ such that $h'(v)>0$ for $v\in (0,\hat{v}_3)\cup(1,\infty)$ and $h'(v)<0$ for $v\in (\hat{v}_3,1)$. If $m_1=1$, then $h(0)=h(1)=0$. According to the sign of $h'(v)$, we obtain that $h(v)\geq 0$, $v\in[0,\infty)$. The (\[g2\]) implies that $g'(\theta)$ is always non-negative on $[0,\frac{\pi}{2}]$, which means $\frac{\pi}{2}$ is the only local maximum point of $g(\theta)$. If $1<m_1<\frac{d+1}{2}$, then $h(0)<0$. So there exists $v_3\in(0,\hat{v}_3)$ such that $h(v)<0$, $v\in(0,v_3)$ and $h(v)\geq0$, $v\in[v_3,\infty)$. Set $\theta_3:=\arctan(v_3\sqrt{\frac{m_2}{m_1}})$. 
According to (\[g2\]), we have $g'(\theta)<0$, $\theta\in(0,\theta_3)$, $g'(\theta)>0$, $\theta\in(\theta_3, \frac{\pi}{2})$ and $g'(\theta)=0$, $\theta\in\{0,\theta_3,\frac{\pi}{2}\}$, which implies $g(\theta)$ has only two local maximum points: 0 and $\frac{\pi}{2}$. When $1<\alpha\leq1+\frac{1}{d-1}$, combining Lemma \[L1\] and Lemma \[L2\], we obtain that $(\frac{1}{d+1},\ldots,\frac{1}{d+1})$ is the only possible local maximum point of $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ with the constrains $z_1+\cdots+z_{d+1}=1$ and $z_i> 0$, $i=1,2,\ldots,d+1$. We deal with the case $\alpha>1+\frac{1}{d-1}$ in the next lemma. \[L3\] Assume that $\alpha>1+\frac{1}{d-1}$ and $d\geq 2$. Assume that $(w_1,w_2,\ldots,w_{d+1})$ is a local maximum point of $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ with the constrains in (\[con0\]). Then there exists $k_0\in \{1,\ldots,d+1\}$ such that $w_{k_0}=0$. The proof is by contradiction. For the aim of contradiction, we assume that $w_i>0$ for $i\in \{1,\ldots,d+1\}$. According to Lemma \[L1\], the $(w_1,\ldots,w_{d+1})$ is in the form of $\left(\underbrace{t_0,\ldots,t_0}_{m_1},\underbrace{s_0,\ldots,s_0}_{d+1-m_1}\right)$ up to a permutation where $m_1\in [1,\frac{d+1}{2}]\cap {{\mathbb Z}}$, $t_0\in (0,\frac{1}{m_1})$ and $s_0=\frac{1-m_1t_0}{d+1-m_1}$. Lemma \[L1\] also implies that $t_0$ is a local maximum point of $f_{m_1,\alpha,d+1}(t)$. To this end, it is enough to show the following conclusion: [*Claim 1:*]{} When $\alpha>1+\frac{1}{d-1}$, if $t_0\in (0,\frac{1}{m_1})$ is a local maximum point of $f_{m_1,\alpha,d+1}(t)$, then $\left(\underbrace{t_0,\ldots,t_0}_{m_1},\underbrace{s_0,\ldots,s_0}_{d+1-m_1}\right)$ is not a local maximum point of $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ with the constrains in (\[con0\]). Claim 1 contradicts with $\left(\underbrace{t_0,\ldots,t_0}_{m_1},\underbrace{s_0,\ldots,s_0}_{d+1-m_1}\right)$ being a local maximum point of $M_{\alpha,d+1}(z_1,\ldots,z_{d+1})$ with the constrains in (\[con0\]). Hence, there exists $k_0\in \{1,\ldots,d+1\}$ such that $w_{k_0}=0$. We remain to prove Claim 1. To state conveniently, we set $m_2:=d+1-m_1$. Since $m_1\leq \frac{d+1}{2}$ and $d\geq 2$, we have $m_2\geq2$. Set $$F(\varepsilon):=M_{\alpha,d+1}\left(\underbrace{t_0,\ldots,t_0}_{m_1},s_0+l\varepsilon,\underbrace{s_0-\varepsilon,\ldots,s_0-\varepsilon}_{m_2-1}\right),$$ where $l=m_2-1$ and $\varepsilon\in (-\frac{s_0}{l},s_0)$. To this end, it is enough to show that $\varepsilon=0$ is not a local maximum point of $F(\varepsilon)$. In fact, we can prove that with showing that $\varepsilon=0$ is a local minimum point of $F(\varepsilon)$. A simple calculation shows that $$\begin{aligned} F(\varepsilon)=&(m_1\cdot t_0^\alpha+(s_0+l\varepsilon)^\alpha+(m_2-1)(s_0-\varepsilon)^\alpha)^2-(m_1\cdot t_0^{2\alpha}+(s_0+l\varepsilon)^{2\alpha}+(m_2-1)(s_0-\varepsilon)^{2\alpha}),\\ F'(\varepsilon)=&2\alpha\cdot (m_1\cdot t_0^\alpha+(s_0+l\varepsilon)^\alpha+(m_2-1)(s_0-\varepsilon)^\alpha)\cdot (l(s_0+l\varepsilon)^{\alpha-1}-(m_2-1)(s_0-\varepsilon)^{\alpha-1})\\ &-2\alpha\cdot (l(s_0+l\varepsilon)^{2\alpha-1}-(m_2-1)(s_0-\varepsilon)^{2\alpha-1}),\\ F''(\varepsilon)=&2\alpha^2\cdot (l(s_0+l\varepsilon)^{\alpha-1}-(m_2-1)(s_0-\varepsilon)^{\alpha-1})^2\\ &+2\alpha(\alpha-1)\cdot (m_1\cdot t_0^\alpha+(s_0+l\varepsilon)^\alpha+(m_2-1)(s_0-\varepsilon)^\alpha)\cdot(l^2(s_0+l\varepsilon)^{\alpha-2}+(m_2-1)(s_0-\varepsilon)^{\alpha-2}) \\ &-2\alpha(2\alpha-1)\cdot (l^2(s_0+l\varepsilon)^{2\alpha-2}+(m_2-1)(s_0-\varepsilon)^{2\alpha-2}). 
\end{aligned}$$ Noting $l=m_2-1$, we can check that $$\label{f'} F'(0)=0.$$ We claim $F''(0)>0$ and hence $\varepsilon=0$ is a local minimum point of $F(\varepsilon)$. We arrive at the conclusion. We remain to prove $F''(0)>0$. Note that $$\label{t0} \begin{aligned} F''(0)&=2\alpha\cdot (l^2+m_2-1)\cdot s_0^{\alpha-2}((\alpha-1)(m_1t_0^\alpha+m_2s_0^\alpha)-(2\alpha-1)s_0^\alpha). \end{aligned}$$ Since $t_0\notin \{ 0,\frac{1}{m_1}\}$ is a local maximum point of $f_{m_1,\alpha,d+1}(t)$, then from the equation (\[g2\]) we know that $\sqrt{\frac{s_0}{t_0}}$ is a root of $h(v)=0$, where $h(v)=(m_2-1)v^{4\alpha-2}-m_2\cdot v^{2\alpha}+m_1\cdot v^{2\alpha-2}-(m_1-1)$. According to Lemma \[L4\], $h(v)>0$ for $v>1$ provided $\alpha\geq 1+\frac{1}{d-1}$, which implies that $\sqrt{\frac{s_0}{t_0}}\leq 1$ and hence $s_0\leq t_0$. Combining $s_0>0$ and $l^2+m_2-1\geq 2$, we have $$\begin{aligned} F''(0)&\geq 2\alpha\cdot (l^2+m_2-1)\cdot s_0^{\alpha-2}((\alpha-1)(m_1s_0^\alpha+m_2s_0^\alpha)-(2\alpha-1)s_0^\alpha)\\ &=2\alpha\cdot (l^2+m_2-1)\cdot s_0^{2\alpha-2}((\alpha-1)(m_1+m_2)-(2\alpha-1))\\ &=2\alpha\cdot (l^2+m_2-1)\cdot s_0^{2\alpha-2}((d-1)\alpha-d). \end{aligned}$$ Noting that $\alpha>1+\frac{1}{d-1}$, we obtain that $$\label{f''} F''(0)>0.$$ We next present the proof of Lemma \[M\]. We prove Lemma \[M\] by induction on $d$. First, we consider the case $d=1$. For $d=1$, we have only two non-negative variables $z_1,z_2$ which satisfy $z_1+z_2=1$. For any $\alpha> 1$ we have $$M_{\alpha,2}=2z_1^{\alpha} z_2^{\alpha}\leq 2\cdot \left(\frac{z_1+z_2}{2}\right)^{\alpha}=2^{1-\alpha},$$ where the equality holds if and only if $z_1=z_2=\frac{1}{2}$. Hence, the solution to (\[con0\]) is $(\frac{1}{2},\frac{1}{2})$ which implies Lemma \[M\] holds for $d=1$. We assume that Lemma \[M\] holds for $d= d_0-1$ and hence we know the solution to (\[con0\]) for $d=d_0-1$. We next consider the case where $d=d_0$. Assume that $(w_1,\ldots,w_{d_0+1})$ is a solution to (\[con0\]) with $d=d_0$. Recall that $a_0=\infty$, $a_{d_0}=1$, $a_k=\frac{1}{2}\cdot\frac{\ln(k+2)-\ln(k)}{\ln(k+2)-\ln(k+1)}$, $k=1,2,\ldots,d_0-1$. To state conveniently, we set ${{\mathbf e}}_{k+1}:=(\frac{1}{k+1},\ldots,\frac{1}{k+1})\in {{\mathbb R}}^{k+1}$ and ${{\mathbf 0}}_{d_0-k}:=(0,\ldots,0)\in {{\mathbb R}}^{d_0-k}$. We set $$({{\mathbf e}}_{k+1},{{\mathbf 0}}_{d_0-k}):=\left(\underbrace{\frac{1}{k+1},\ldots,\frac{1}{k+1}}_{k+1}, \underbrace{0,\ldots,0}_{d_0-k}\right).$$ We first show that $$\label{eq:jihe} (w_1,\ldots,w_{d_0+1})\in \left\{\left({{\mathbf e}}_{k+1},{{\mathbf 0}}_{d_0-k}\right)\in {{\mathbb R}}^{d_0+1}:k=1,\ldots,d_0\right\}.$$ We divided the proof into two cases. [*Case 1: $\alpha\in (1+\frac{1}{d_0-1},\infty)$ .*]{} According to Lemma \[L3\], at least one of entries in $(w_1,\ldots,w_{d_0+1})$ is 0. Without loss of generality, we assume $w_{d_0+1}=0$. Since $M_{\alpha,d_0+1}(w_1,\ldots,w_{d_0},0)=M_{\alpha,d_0}(w_1,\ldots,w_{d_0})$, $(w_1,\ldots,w_{d_0})$ is the solution to (\[con0\]) with $d=d_0-1$. Hence, by induction we conclude that (\[eq:jihe\]) holds. [*Case 2: $\alpha\in (1,1+\frac{1}{d_0-1}]$.*]{} If one of entries in $(w_1,\ldots,w_{d_0+1})$ is 0, we can show that (\[eq:jihe\]) holds using the similar argument above. We next consider the case where $w_i>0$ for each $i\in\{1,\ldots,d_0+1\}$. 
Lemma \[L1\] shows that $ (w_1,\ldots,w_{d_0+1})$ is in the form of $\left(\underbrace{t_0,\ldots,t_0}_{m_1},\underbrace{s_0,\ldots,s_0}_{d_0+1-m_1}\right) $ up to a permutation where $m_1\in [1,\frac{d_0+1}{2}]\cap {{\mathbb Z}}$, $t_0\in (0,\frac{1}{m_1})$ and $s_0=\frac{1-m_1t_0}{d_0+1-m_1}$ . Lemma \[L1\] also implies that $t_0$ is a local maximum point of the function $f_{m_1,\alpha,d_0+1}(t)$, where $f_{m_1,\alpha,d_0+1}(t)$ is defined in (\[fm\]). According to Lemma \[L2\], we obtain that $t_0=\frac{1}{d_0+1}$. Hence $(w_1,\ldots,w_{d_0+1})=(\frac{1}{d_0+1},\ldots,\frac{1}{d_0+1})$ , which implies (\[eq:jihe\]). To this end, it is enough to compare the values among $ M_{\alpha,d_0+1}\left({{\mathbf e}}_{k+1},{{\mathbf 0}}_{d_0-k}\right), k=1,\ldots,d_0. $ Setting $H(x):=x^{1-2\alpha}(x-1)$, we obtain that $M_{\alpha,d_0+1}\left({{\mathbf e}}_{k+1},{{\mathbf 0}}_{d_0-k}\right)=H(k+1)$ for each $k\in\{1,2,\ldots,d_0\}$. A simple calculation shows that $H(x)$ is monotonically increasing on $(0,1+\frac{1}{2\alpha-2})$ and monotonically decreasing on $(1+\frac{1}{2\alpha-2},\infty)$. Hence, the sequence $H(k+1),k=1,\ldots,d_0$, is unimodal. \(i) We consider the case where $\alpha\in(a_k,a_{k-1})$, $k=1,2,\ldots,d_0$. Noting that $H(k)<H(k+1)$ and $H(k+1)>H(k+2)$, we obtain that $$\max\limits_{x\in\{1,2,\dots,d_0\}} H(x+1)=H(k+1), \text{ for all } \alpha\in(a_k,a_{k-1}),$$ where the equality holds if and only if $x=k$. Thus, $ \left({{\mathbf e}}_{k+1},{{\mathbf 0}}_{d_0-k}\right) $ is the unique solution to (\[con0\]) with $d=d_0$ when $\alpha\in(a_k,a_{k-1})$, $k=1,2,\ldots,d_0$. \(ii) We remain to consider the case where $\alpha=a_k$, $k=1,2,\ldots,d_0-1$. Noting $H(k+1)=H(k+2)$, $H(k)<H(k+1)$ and $H(k+2)>H(k+3)$ provided $\alpha=a_k$, we obtain that (\[con0\]) has two solutions which are $\left({{\mathbf e}}_{k+2},{{\mathbf 0}}_{d_0-k-1}\right)$ and $\left({{\mathbf e}}_{k+1},{{\mathbf 0}}_{d_0-k}\right)$ with $d=d_0$ . Hence, the conclusion also holds for $d=d_0$ and we arrive at the conclusion. [10]{} J. J. Benedetto and M. Fickus, 18 (2003), 357-385. S. V. Borodachov, D. P. Hardin, A. Reznikov and E. B. Saff In [*Trans. Amer. Math. Soc.*]{} , 370 (2018), 6973-6993 . B. Bukh and C. Cox. X. Chen, V. Gonzales, E. Goodman, S. Kang and K. Okoudjou. H. Cohn, A. Kumar In [*J. Amer. Math. Soc.*]{} 20 (2007), no. 1, 99-148. M. Ehler and K. A. Okoudjou. . 142 (2012), no. 3, 645-659. M. Fickus, J. Jasper, E. J.King and Dustin G.Mixon, 555(2018), 98-138. A. Glazyrin A. Glazyrin R.B. Holmes, V.I. Paulsen 377 (2004) 31-51. Mark Magsino, Dustin G. Mixon and Hans Parshall, . J. Park J.M. Renes 5 (2005) 81-92. A. Roy, A. J. Scott 48 (2007), 1-24. B. Venkov. , 37 (2001), 10-86. L. Welch. 20(3): 397-399, 1974. V. A. Yudin. In [*Diskret. Mat.*]{} 4 (1992), 115-121 [^1]: The project was supported by NSFC grant (91630203, 11688101), Beijing Natural Science Foundation (Z180002).
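A purely numerical footnote to the threshold values used above (our check, not part of the paper): the crossover points $a_k$ in (\[eq:ak\]) are exactly where $H(k+1)=H(k+2)$ for $H(x)=x^{1-2\alpha}(x-1)$, and correspondingly $p_k$ is where the candidate potentials of $L_k^d$ and $L_{k+1}^d$ tie.

```python
# Illustrative check: at alpha = a_k the values H(k+1) and H(k+2) coincide,
# and at p = p_k the candidate potentials (k+1) k^{1-p} and (k+2)(k+1)^{1-p}
# coincide, matching the tie cases in the lemma and theorem above.
import math

def H(x, alpha):
    return x ** (1 - 2 * alpha) * (x - 1)

for k in range(1, 6):
    a_k = 0.5 * math.log((k + 2) / k) / math.log((k + 2) / (k + 1))
    p_k = math.log((k + 2) / k) / math.log((k + 1) / k)
    assert abs(H(k + 1, a_k) - H(k + 2, a_k)) < 1e-12
    assert abs((k + 1) * k ** (1 - p_k) - (k + 2) * (k + 1) ** (1 - p_k)) < 1e-12
    print(f"k = {k}:  a_k = {a_k:.6f},  p_k = {p_k:.6f}  (both ties verified)")
```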
--- abstract: 'Data-collapse is a way of establishing scaling and extracting associated exponents in problems showing self-similar or self-affine characteristics as e.g. in equilibrium or non-equilibrium phase transitions, in critical phases, in dynamics of complex systems and many others. We propose a measure to quantify the nature of data collapse. Via a minimization of this measure, the exponents and their error-bars can be obtained. The procedure is illustrated by considering finite-size-scaling near phase transitions and quite strikingly recovering the exact exponents.' address: - | Institute of Physics, Bhubaneswar 751 005, India\ Dipartimento di Fisica, Università di Padova, Via Marzolo 8, 35131 Padova, Italy - 'INFM and Dipartimento di Fisica, Università di Padova, Via Marzolo 8, 35131 Padova, Italy' author: - 'Somendra M. Bhattacharjee[@eml2]' - 'Flavio Seno[@eml1]' title: ' A Measure of data-collapse for scaling' --- [2]{} Scaling, especially finite size scaling (FSS), has emerged as an important framework for understanding and analyzing problems involving diverging length scales. Such problems abound in condensed matter, high energy and nuclear physics, equilibrium and non-equilibrium situations, thermal and non-thermal problems, and many more. The operational definition of scaling is this: A quantity $m(t,L)$ depending on two variables, $t$ and $L$, is considered to have scaling if it can be expressed as $$\label{eq:mt} m(t,L)=L^d f(t/L^c).$$ Depending on the nature of the problem of interest, $m$ may refer to magnetization, specific heat, size or some other characteristic of a polymer, width of a growing or fluctuating surface etc. Eq. \[eq:mt\] is the FSS form if $L$ is a linear dimension of the system and $t$ is any other variable, could even be time in dynamics. In the thermodynamic limit of infinite-sized systems, such a scaling would have $t$ and $L$ representing two thermodynamic parameters like magnetic field, pressure, chemical potential etc or one could be time. If $L$ is a length scale, then $d$ would look like the dimension of this quantity $m$, and $c$ of variable $t$. In fluctuation-dominated cases, it is generally a rule, rather than an exception, that $d$ and $c$ assume nontrivial values, different from what one expects from a dimensional analysis. The exponents and the scaling function $f(x)$ then characterize the behavior of the system. The fact that two completely independent variables (both conceptually and as controlled in experiments) combine in a nontrivial way to form a single one leads to an enormous simplification in the description of the phenomenon. This underlies the importance of scaling. A quantitative way of showing scaling is data-collapse (also called scaling plot) that goes back to the original observation of Rushbrooke that the coexistence curves for many simple systems could be made to fall on a single curve[@stanley]. For example, the values of $m(t,L)$ (Eq. \[eq:mt\]) for various $t$ and $L$ can be made to collapse on a single curve if $m L^{-d}$ is plotted against $t L^{-c}$. The method of data-collapse therefore comes as a powerful means of establishing scaling. It is in fact now used extensively to analyze and extract exponents especially from numerical simulations. Given the importance of scaling in wide varieties of problems, it is imperative to have an appropriate measure to determine the “goodness of collapse” - not to be left to the eyes of the beholder. 
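A minimal synthetic illustration of this point (ours; the choice of $f$ and all names are arbitrary): data generated exactly from Eq. (\[eq:mt\]) coincide after rescaling with the true exponents, and visibly fail to do so otherwise.

```python
# Illustrative sketch: synthetic data obeying m(t, L) = L^d f(t / L^c)
# collapse onto a single curve only when rescaled with the true exponents.
import numpy as np

def f(z):                                  # an arbitrary smooth scaling function
    return 1.0 / (1.0 + z ** 2)

d_true, c_true = 0.5, 1.5
t = np.linspace(-1.0, 1.0, 201)
data = {L: L ** d_true * f(t / L ** c_true) for L in (8, 16, 32)}

def max_mismatch(d, c):
    """Scale every curve and compare it, by interpolation, with the L = 8 curve."""
    x0, y0 = t / 8 ** c, data[8] / 8 ** d
    worst = 0.0
    for L in (16, 32):
        x, y = t / L ** c, data[L] / L ** d
        inside = (x > x0.min()) & (x < x0.max())     # overlapping abscissae only
        worst = max(worst, np.max(np.abs(y[inside] - np.interp(x[inside], x0, y0))))
    return worst

print("true exponents :", max_mismatch(d_true, c_true))   # essentially zero
print("wrong exponents:", max_mismatch(0.8, 1.0))          # clearly nonzero
```

The number printed here is only a crude mismatch between rescaled curves; the measure $P_b$ introduced below makes this comparison systematic and symmetric over all pairs of data sets.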
In this paper, we propose [*a measure*]{} that can be used to quantify “collapse”. This measure can be used, via a minimization principle, for an automatic search for the exponents thereby removing the subjectiveness of the approach. To show the power of the method and the measure, we use it for two exactly known cases, namely the finite-size-scaling of the specific heat for (1) the one-dimensional ferro-electric six vertex model[@lieb] showing a first order transition [@smb_mri], and (2) the Kasteleyn dimer model[@kast] exhibiting the continuous anisotropic Pokrovsky-Talapov transition[@pok; @dimer]. In addition, to show the usefulness of the method in case of noisy data as expected in any numerical simulation, we consider the one-dimensional case with extra Gaussian noise added (by hand). It is worth emphasizing that the proposed procedure, without any bias, recovered the exactly known exponents from the specific heat data for finite systems. If the scaling function $f(x)$ of Eq. \[eq:mt\] is known, then the sum of residuals $$\label{eq:res1} R = \frac{1}{N} \ \sum \mid L^{-d}\ m -f(t/L^c)\mid,$$ where the sum is over all the data points, is minimum for the right choice of $(d,c)$. In absence of any statistical or systematic error, the minimum value is zero. However in most situations the function itself is not known but is generally an analytic function. In case of a perfect collapse, any one of the sets (say set $p$) can be used for $f(x)$. An interpolation scheme can then be used to estimate the values for other sets in the [*overlapping regions* ]{}. The residuals are then calculated. Since this can be done for any set as the basis, we repeat the procedure for all sets. Let the tabulated values of $m$ and $t$ be denoted by $m_{ij}, t_{ij}$ ( $i$th value of $t$ for the $j$th set of $L$ (i.e.$L=L_j$ for set $j$ )). We now define a quantity $P_b$, $$\label{eq:pb} P_b=\left[\frac{1}{{\cal N}_{\rm over}}\ \sum_{p}\ \sum_{j\neq p} \ \sum_{i,{\rm over}} \mid L_j^{-d}\ m_{i,j} - {\cal E}_p(L_j^{-c}\, t_{ij}) \mid^{q}\right]^{^{1/q}},$$ where ${\cal E}_p(x)$ is the interpolating function based on the values of set $p$ bracketing the argument in question (of set $j$). The innermost sum over $i$ is done only for overlapping points (denoted by the “$i$, over”), ${\cal N}_{\rm over}$ being the number of pairs. Though defined with a general $q$, we use $q=1$. For ${\cal E}_p(x)$, a 4-point polynomial interpolation can be used and if any complex singularity is suspected a rational approximation may be used. Extrapolations are avoided. The minimum of this function $P_b$ is zero[@comm] and is achievable in the ideal case of perfect collapse with correct values of $(d,c)$, i.e., $$\label{eq:min} P_b \geq P_b|_{\rm abs\ min} =0$$ This inequality can then be exploited and a minimization of $P_b$ over $(d,c)$ can be used to extract the optimal values of the parameters. In addition to the values of the exponents, estimates of errors can be obtained from the width of the minimum. A simple approach is to take the quadratic part in the individual directions along the (d,c) plane. From an expansion of $\ln P_b$ around the minimum at $(d_0,c_0)$, the width is estimated as $$\label{eq:width} \Delta d= \eta d_0 \left[2 \ln \frac{P_b(d_0\pm \eta d_0,c_0)}{P_b(d_0,c_0)}\right]^{^{-1/2}} ,$$ and $$\label{eq:width2} \Delta c= \eta c_0 \left[2 \ln \frac{P_b(d_0,c_0\pm \eta c_0)}{P_b(d_0,c_0)}\right ]^{^{-1/2}},$$ for a given $\eta$. 
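A compact implementation of the procedure just described is sketched below (ours, not the authors' program): it uses $q=1$, simple linear interpolation for ${\cal E}_p$ in place of 4-point polynomial interpolation, and SciPy's Nelder-Mead simplex for the minimization over $(d,c)$. On synthetic data drawn exactly from the scaling form it recovers the input exponents with a very small residue.

```python
# Sketch of the collapse measure P_b (q = 1) and its minimization over (d, c).
# Linear interpolation and scipy's Nelder-Mead stand in for the interpolation
# and simplex routines of the paper; all function names here are ours.
import numpy as np
from scipy.optimize import minimize

def p_b(params, datasets):
    """datasets: list of (L, t, m) tuples of arrays; params = (d, c)."""
    d, c = params
    scaled = [(t * L ** (-c), m * L ** (-d)) for (L, t, m) in datasets]
    total, n_over = 0.0, 0
    for p_idx, (xp, yp) in enumerate(scaled):            # set p defines E_p
        order = np.argsort(xp)
        xp, yp = xp[order], yp[order]
        for j_idx, (xj, yj) in enumerate(scaled):
            if j_idx == p_idx:
                continue
            inside = (xj >= xp[0]) & (xj <= xp[-1])      # overlapping points only
            if not np.any(inside):
                continue
            total += np.sum(np.abs(yj[inside] - np.interp(xj[inside], xp, yp)))
            n_over += int(np.sum(inside))
    return total / n_over if n_over else np.inf

def fit_exponents(datasets, d0, c0):
    # Note: P_b also tends to zero trivially as d -> +infinity (all curves shrink
    # to zero), so the simplex should be started from a sensible initial guess.
    res = minimize(p_b, x0=[d0, c0], args=(datasets,), method="Nelder-Mead")
    return res.x, res.fun

if __name__ == "__main__":
    # Synthetic test: data generated exactly from m = L^d f(t / L^c) with d = 1, c = -1.
    f = lambda z: np.exp(z) / (2.0 + np.exp(z)) ** 2
    t = np.linspace(-0.1, 0.1, 81)
    datasets = [(L, t, L * f(t * L)) for L in (10, 30, 50, 70, 90)]
    (d_fit, c_fit), residue = fit_exponents(datasets, d0=0.8, c0=-0.8)
    print(f"d = {d_fit:.3f}, c = {c_fit:.3f}, P_b = {residue:.2e}")   # expect d ~ 1, c ~ -1
```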
Choosing $\eta=1\%$, the final estimate for the exponents would be $d_0\pm \Delta d,c_0\pm \Delta c$ with the error bar reflecting the width of the minimum at $1\%$ level. We now use the proposed method for different test cases. In order to implement the program[@program], we have used the routines of numerical recipes[@recip]. To calculate $P_b$, [POLINT]{} or [ RATINT]{} has been used for interpolation with [HUNT]{} to place a point in the table. For minimization, [AMOEBA]{} has been used thrice to locate the minimum, each time using the current estimates to generate a new triangle enclosing the minimum. In the examples given below, there was no need for more sophisticated minimization routines, which could be needed in case of subtle crossover behaviors or with nearby minima. Let us first consider the one-dimensional six-vertex model which shows a first-order transition[@smb_mri]. With the partition function $Z=2+(2x)^N$ for $N$ sites with $x=\exp(-\epsilon/k_B T)$ as the Boltzmann factor, $\epsilon$ being the energy of the high-energy vertices, $T$ the temperature and $k_B$ the Boltzmann factor, the specific heat can be computed exactly. The first-order transition is at $x=1/2$, for $N\rightarrow\infty$, with a $\delta$-function jump in specific heat. The $N$-dependent specific heat (per site),$c_N$, is given by[@smb_mri] $$\label{eq:vert1} c_N= k_B (\ln x)^2 2 N \frac{(2x)^N}{[2+(2x)^N]^2},$$ which for large $N$ and small $t=1-2x$ has the scaling form of Eq. \[eq:mt\] with $d=1$, $c=-1$ and $$\label{eq:vert2} f(z)= k_B (\ln 2)^2 2 \frac{e^z}{(2+e^z)^2}.$$ From the exact formula, Eq. \[eq:vert1\], data were generated for $N=10, 30,50, 70$ and $90$, for various values of temperatures. A minimization of $P_b$ gave us the estimate $d=.997\pm 0.04,c=.98\pm.06$, with $P_b=0.56881E-01$. The exponents are very close to the exact ones. The error-bars or the width of the minimum is to be interpreted as an indication of the presence of non-scaling corrections. To test this, we have generated data from the exact scaling function of Eq. \[eq:vert2\]. An unbiased minimization of $P_b$ then gave $d=1\pm 0.004,c=-1\pm 0.004$ with $P_b=0.34876E-03$. The smallness of the residue and of the errors (or the width of the minimum) represents a good data collapse. The nature of the data-collapse for both the cases is shown in Fig. 1. A similar minimization of $P_b$ was carried out for the two-dimensional Kasteleyn dimer model ( also isomorphic to a two-dimensional 5 vertex model). This is an exactly solvable lattice model of the continuous anisotropic Pokrovsky-Talapov transition for surfaces, and shows a square-root singularity for specific heat with different correlations lengths in the two directions[@dimer; @comm2]. The specific heat for lattices of size $M$ along the direction of the “walls” and infinite in the transverse direction is known exactly and its finite size scaling form has been discussed in Ref [@dimer]. Using the following formula for the specific heat per site $c_M$ $$\label{eq:kast} \frac{ c_M}{k_B a}= M \int_0^{2\pi} \ \frac{(2x \cos\phi)^M}{[1+(2x \cos\phi)^M]^2}\ d\phi,$$ specific heats data were generated for $M=10,30,50,70$ and $90$. In this formula, a few unimportant factors are put under $a$ and not explicitly shown. The critical point is at $x=1/2$. A minimization of $P_b$ gave $d=.5\pm 0.03,c=-.945\pm .02$ to be compared with the exact values $d=.5,c=-1$. The residue factor is $P_b=0.12424E-01$. The importance of correction terms are clear from Fig. 5 of Ref. 
[@dimer], and in our approach it gets reflected in the not too small value of $P_b$. The function $P_b$ for $q=1$ for the above two-dimensional problem is shown as a surface plot over the $(d,c)$ plane in Fig. 1. The sharpness of the minimum is noteworthy. In both the examples considered, the performance of the method is remarkable. The last example we consider is the set of noisy data[@series] where $c$ is calculated from Eq. \[eq:vert2\] and Gaussian noise was added to it so that $ c_{N,\eta}=| c_N\ (1+ A\eta)|$ where $\eta$ is a Gaussian deviate and $A$ is the amplitude of the noise added. The absolute value is taken to keep $c_{N,\eta}$ positive. The values of the exponents are found to be insensitive to $A$ for $A<0.1$ and start changing for higher values of $A$. In Fig. \[fig:noise\] we show $ P_b$ against $A$. The larger values of $P_b$ for larger $A$ are a sign of poor collapse, as one finds by direct plotting with the estimated values or exact values. To summarize, we have proposed a measure to quantify the nature of data collapse in any scaling analysis of the form given by Eq. (\[eq:mt\]). This measure can be used for an automated search for the exponents. The method is quite general and even though we formulated it in terms of power-laws as in Eq. 1, it can very easily be adapted to other forms of scaling[@log]. We conclude that the subjectiveness of data-collapse can be removed and $P_b$ could be used as a quantitative measure to test or compare “goodness of collapse” in any scaling analysis. We acknowledge financial support from MURST (COFIN-99). email:[email protected] email:[email protected] H. E. Stanley, Rev. Mod. Phys. [**71**]{}, S358 (1999). E. Lieb and F. Y. Wu in [*Phase transitions and critical phenomena*]{}, Vol. 1, ed. by C. Domb and M. Green (Academic, 1971). S. M. Bhattacharjee, cond-mat/0011011. P. W. Kasteleyn, J. Math. Phys. 4, 287 (1963). V. L. Pokrovsky and A. L. Talapov, Phys. Rev. Lett. [**42**]{}, 65 (1979). J. F. Nagle, Phys. Rev. Lett. [**34**]{}, 1150 (1975). S. M. Bhattacharjee and J. F. Nagle, Phys. Rev. [**A31**]{}, 3199 (1985). Pathological cases of no overlap are avoided. This program is available on request from the authors. It is flexible enough that the Numerical Recipes[@recip] routines could be replaced by any other suitable package. W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, [*Numerical recipes in FORTRAN 77*]{}, Cambridge University Press 1992. Along the direction of the “walls” the bulk length-scale exponent is $\nu_y=1$ while in the transverse direction $\nu_x=1/2$. For analysis of noisy series, see, e.g., R. Dekeyser, F. Iglói, F. Mallezie, and F. Seno, Phys. Rev. A 42, 1923 (1990). For example, when tested on a log-type singularity as one finds for the specific heat in the two-dimensional Ising model, the method does not give a good collapse with power laws, but yields the correct results when the power-laws are changed to logarithms.
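For completeness, a small self-contained sketch (ours, not the authors' program) of the noise test just described: data are drawn from the scaling function of Eq. \[eq:vert2\], multiplied by $|1+A\eta|$ with Gaussian $\eta$, and the collapse residual is evaluated at the exact exponents $d=1$, $c=-1$. It stays near the interpolation floor for $A=0$ and grows with the noise amplitude, which is the qualitative behavior summarized in Fig. \[fig:noise\].

```python
# Illustrative sketch of the noise test: scaling-function data times |1 + A*eta|,
# with the q = 1 collapse residual evaluated at the exact exponents d = 1, c = -1.
import numpy as np

def f(z, kB=1.0):                          # scaling function of the 1D six-vertex model
    return kB * np.log(2.0) ** 2 * 2.0 * np.exp(z) / (2.0 + np.exp(z)) ** 2

def residual(datasets, d, c):
    """Mean absolute mismatch between pairwise-interpolated scaled curves."""
    scaled = [(t * L ** (-c), m * L ** (-d)) for (L, t, m) in datasets]
    tot, n = 0.0, 0
    for i, (xi, yi) in enumerate(scaled):
        for j, (xj, yj) in enumerate(scaled):
            if i == j:
                continue
            inside = (xj >= xi.min()) & (xj <= xi.max())
            tot += np.sum(np.abs(yj[inside] - np.interp(xj[inside], xi, yi)))
            n += int(np.sum(inside))
    return tot / n

rng = np.random.default_rng(0)
t = np.linspace(-0.1, 0.1, 81)             # t = 1 - 2x
for A in (0.0, 0.05, 0.1, 0.3):
    datasets = [(N, t, np.abs(N * f(t * N) * (1 + A * rng.standard_normal(t.shape))))
                for N in (10, 30, 50, 70, 90)]
    print(f"A = {A:4.2f}:  residual = {residual(datasets, d=1.0, c=-1.0):.2e}")
```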
--- abstract: 'An abstract should not contain concrete mathematics, but rather should be discrete. Be brief and avoid using mathematical notation except where absolutely necessary, since this brief synopsis will be used by search engines to identify your article!' author: - Woodrow Wilson and Herbert Hoover title: Sample Article Title --- The *American Mathematical Monthly* style incorporates the following LaTeX packages. These styles should *not* be included in the document header. - times - pifont - graphicx - color - AMS styles: amsmath, amsthm, amsfonts, amssymb - url Use of other LaTeX packages should be minimized as much as possible. Math notation, like $c = \sqrt{a^2 +b^2}$, can be left in TeX’s default Computer Modern typefaces for manuscript preparation; or, if you have the appropriate fonts installed, the `mathtime` or `mtpro` packages may be used, which will better approximate the finished article. Web links can be embedded using the `\url{...}` command, which will result in something like <http://www.maa.org>. These links will be active and stylized in the online publication. First-level section heading. ============================ Section headings use an initial capital letter on the first word, with subsequent words lowercase. In general, the style of the journal is to leave all section headings unnumbered. Consult the journal editor if you wish to depart from this and other conventions. Second-level heading. --------------------- The same goes for second-level headings. It is not necessary to add font commands to make the math within heads bold and sans serif; this change will occur automatically when the production style is applied. Graphics. ========= Figures for the <span style="font-variant:small-caps;">Monthly</span> can be submitted as either color or black & white graphics. Generally, color graphics will be used for the online publication, and converted to black & white images for the print journal. We recommend using whatever graphics program you are most comfortable with, so long as the submitted graphic is provided as a separate file using a standard file format. For best results, please follow the following guidelines: 1. Bitmapped file formats—preferably TIFF or JPEG, but not BMP—are appropriate for photographs, using a resolution of at least 300 dpi at the final scaled size of the image. 2. Line art will reproduce best if provided in vector form, preferably EPS. 3. Alternatively, both photographs and line art can be provided as PDF files. Note that creating a PDF does not affect whether the graphic is a bitmap or vector; saving a scanned piece of line art as PDF does not convert it to scalable line art. 4. If you generating graphics using a TeX package, please be sure to provide a PDF of the manuscript. In the production process, TeX-generated graphics will eventually be converted to more conventional graphics so the <span style="font-variant:small-caps;">Monthly</span> can be delivered in e-reader formats. 5. For photos of contributing authors, we prefer photos that are not cropped tight to the author’s profile, so that production staff can crop the head shot to an equal height and width. If possible, avoid photographs that have excess shadows or glare. Theorems, definitions, proofs, and all that. ============================================ Following the defaults of the `amsthm` package, styling is provided for `theorem`, `definition`, and `remark` styles, although the latter two use the same styling. 
Theorems, lemmas, axioms, and the like are stylized using italicized text. These environments can be numbered or unnumbered, at the author’s discretion. Proofs set in roman (upright) text, and conclude with an “end of proof” (q.e.d.) symbol that is set automatically when you end the proof environment. When the proof ends with an equation or other non-text element, you need to add `\qedhere` to the element to set the end of proof symbol; see the `amsthm` package documentation for more details. Definitions, remarks, and notation are stylized as roman text. They are typically unnumbered, but there are no hard-and-fast rules about numbering. Remarks stylize the same as definitions. [Acknowledgment.]{} The authors wish to thank the Greek polymath Anonymous, whose prolific works are an endless source of inspiration. [1]{} Adam Parker, Who solved the Bernoulli equation and how did they do it? *Coll. Math. J.* **44** (2013) 89–97. Brian Hopkins, ed., *Resources for Teaching Discrete Mathematics*, Mathematical Association of America, Washington DC, 2009. received his Ph.D. in history and political science from Johns Hopkins University. He held visiting positions at Cornell and Wesleyan before joining the faculty at Princeton, where he was eventually appointed president of the university. Among his proudest accomplishments was the abolition of eating clubs at Princeton on the grounds that they were elitist. Office of the President, Princeton University, Princeton NJ 08544\ [email protected] entered Stanford University in 1891, after failing all of the entrance exams except mathematics. He received his B.S. degree in geology in 1895, spent time as a mining engineer, then was appointed by his co-author to the U.S. Food Administration and the Supreme Economic Council, where he orchestrated the greatest famine relief efforts of all time. Hoover Institution, Stanford University, Stanford CA 94305\ [email protected]
--- abstract: 'We give an overview of basic methods that can be used for obtaining asymptotic expansions of integrals: Watson’s lemma, Laplace’s method, the saddle point method, and the method of stationary phase. Certain developments in the field of asymptotic analysis will be compared with De Bruijn’s book [*Asymptotic Methods in Analysis*]{}. The classical methods can be modified for obtaining expansions that hold uniformly with respect to additional parameters. We give an overview of examples in which special functions, such as the complementary error function, Airy functions, and Bessel functions, are used as approximations in uniform asymptotic expansions.' author: - | Nico M. Temme\ IAA, 1391 VD 18, Abcoude, The Netherlands[^1]\ [ ]{}\ [ e-mail: [[email protected]]{}]{} title: Uniform Asymptotic Methods for Integrals --- 0.8cm 2000 Mathematics Subject Classification: 41A60, 30E15, 33BXX, 33CXX. Keywords & Phrases: asymptotic analysis, Watson’s lemma, Laplace’s method, method of stationary phase, saddle point method, uniform asymptotic expansions, special functions. Introduction {#sec:intro} ============ Large parameter problems occur in all branches of pure and applied mathematics, in physics and engineering, in statistics and probability theory. On many occasions the problems are presented in the form of integrals or differential equations, or both, but we also encounter finite sums, infinite series, difference equations, and implicit algebraic equations. Asymptotic methods for handling these problems have a long history with many prominent contributors. In 1863, Riemann used the method of steepest descent for hypergeometric functions, and in 1909 Debye [@Debye:1909:NFZ] used this method to obtain approximations for Bessel functions. In other unpublished notes, Riemann also gave the first steps for approximating the zeta function and in 1932 Siegel used this method to derive the Riemann-Siegel formula for the Riemann zeta function. The method of stationary phase was essential in Kelvin’s work to describe the wave pattern behind a moving ship. In the period 1940–1950, a systematic study of asymptotic methods started in the Netherlands, driven by J. G.  van der Corput, who considered these methods important when studying problems of number theory in his earlier years in Groningen. Van der Corput was one of the founders of the Mathematisch Centrum in Amsterdam, and in 1946 he became the first director. He organized working groups and colloquia on asymptotic methods and he had much influence on workers in this area. During 1950–1960 he wrote many papers on asymptotic methods and during his years in the USA he published lecture notes and technical reports. Nowadays his work on asymptotic analysis for integrals is still of interest, in particular his work on the method of stationary phase. He introduced for this topic the so-called neutralizer in order to handle integrals with several points that contribute to the asymptotic behavior of the complete integral. In later sections we give more details. Although Van der Corput wanted[^2] to publish a general compendium to asymptotic methods, the MC lecture notes and his many published papers, notes and reports were never combined into the standard work he would have liked. Actually, there were no books on asymptotic methods before 1956. In certain books and published papers these methods were considered in great detail. 
For example, in Watson’s book on Bessel functions [@Watson:1944:TTB] (first edition in 1922) and in Szeg[ő]{}’s book on orthogonal polynomials [@Szego:1975:OP] (first edition 1939) many results on asymptotic expansions can be found. Erdélyi’s book [*Asymptotic Expansions*]{} [@Erdelyi:1956:ASE] came in 1956, and the first edition of De Bruijn’s book [*Asymptotic Methods in Analysis*]{} [@Bruijn:1981:AMA] was published in 1958. This remarkable book was based on a course of lectures at the MC in 1954/1955 and at the Technical University in Eindhoven in 1956/1957. De Bruijn’s book was well received. It was so interesting because of the many special examples and topics, and because the book was sparklingly written. The sometimes entertaining style made Erdélyi remark in his comments in the Mathematical Reviews that the book was somewhat conversational. In addition, he missed the work of Van der Corput on the method of stationary phase. These minor criticisms notwithstanding, Erdélyi was, in his review, very positive about this book . De Bruijn motivated his selection of topics and way of treating them by observing that asymptotic methods are very flexible, and in such cases it is not possible to formulate a single theorem that covers all applications. It seems that he liked the saddle point method very much. He spent three chapters on explaining the method and provided difficult examples. For example, he considered the sum $$\label{eq:intro01} S(s,n)=\sum_{k=0}^{2n}(-1)^{k+n}{2n \choose k}^s$$ for large values of $n$ with $s$ a real number. Furthermore, he discussed the generalization of Euler’s gamma function in the form $$\label{eq:intro02} G(s)=\int_0^\infty e^{-P(u)}u^{s-1}\,du, \quad P(u)=u^N+a_{N-1}u^{N-1}+\ldots +a_0$$ with $\Re s\to\infty$. For many readers De Bruijn’s treatment of the saddle point method serves as the best introduction to this topic and many publications refer to this part of the book. Other topics are iterative methods, implicit functions, and differential equations, all with apparently simple examples, but also with sometimes complicated forms of asymptotic analysis. In this paper we give a short description of the classical methods that were available for one-dimensional integrals when De Bruijn’s book appeared in 1958: integrating by parts, the method of stationary phase, the saddle point method, and the related method of steepest descent. Shortly after this period, more powerful methods were developed in which extra parameters (and not only the large parameter) were taken into account. In these so-called uniform methods the asymptotic approximations usually contain certain special functions, such as error functions, Airy functions, Bessel functions, and parabolic cylinder functions. Some simple results were already available in the literature. For example, Watson [@Watson:1944:TTB §8.43] mentions results from 1910 of Nicholson in which Airy functions have been used. In a later section we give an overview of integrals with large and additional parameters for which we need uniform expansions. 
For a few cases we show details of uniform expansions of some well-known special functions, for which these uniform expansions are important Notation and symbols used in asymptotic estimates {#sec:symbols} -------------------------------------------------- For information on the special functions we use in this paper, we refer to [@Olver:2010:HMF], the [*NIST Handbook of Mathematical Functions*]{}.[^3] We use Pochhammer’s symbol $(\lambda)_n$, or shifted factorial, defined by $(\lambda)_0=1$ and $$\label{eq:not01} (\lambda)_n=\frac{\Gamma(\lambda+n)}{\Gamma(\lambda)}=\lambda(\lambda+1)\cdots(\lambda+n-1),\quad n\ge1.$$ In asymptotic estimates we use the big $O$-symbol, denoted by $\bigO$, and the little $o$-symbol. For estimating a function $f$ with respect to $g$, both functions defined in domain $\calD\in\CC$, we assume that $g(z)\ne0, z\in\calD$ and $z_0$ is a limit point of $\calD$. Possibly $g(z)\to0$ as $z\to z_0$. We use the $\bigO$-symbol in the form $$\label{eq:not02} f(z) =\bigO\left(g(z)\right), \quad z\in\calD,$$ which means that there is a constant $M$ such that $\vert f(z)\vert\le M \vert g(z)\vert$ for all $z\in\calD$. Usually, for our asymptotic problems, $z_0$ is the point of infinity, $\calD$ is an unbounded part of a sector, for example $$\label{eq:not03} \calD=\left\{z\, \colon \vert z\vert \ge r , \ \alpha \le\phase\,z\le\beta\right\},$$ where $\phase\,z$ denotes the phase of the complex number $z$, $r$ is a nonnegative number, and the real numbers $\alpha, \beta$ satisfy $\alpha\le\beta$. When we write $f(z) =\bigO(1), z\in\calD$, we mean that $\vert f(z)\vert$ is bounded for all $z\in\calD$. The little $o$-symbol will also be used: $$\label{eq:not04} f(z) =o\left(g(z)\right), \quad z\to z_0, \quad z\in\calD,$$ which means that $$\label{eq:not05} \lim_{z\to z_0} f(z)/g(z)=0,$$ where the limiting point $z_0$ is approached inside $\calD$. When we write $f(z) =o(1), z\to z_0, z\in\calD$, we mean that $f(z)$ tends to zero when $z\to z_0$, $z\in\calD$. We write $$\label{eq:not06} f(z)\sim g(z), \quad z\to z_0, \quad z\in\calD,$$ when $\dsp{\lim_{z\to z_0} f(z)/g(z)=1}$, where the limiting point $z_0$ is approached inside $\calD$. In that case we say that the functions $f$ and $g$ are asymptotically equal at $z_0$. Asymptotic expansions {#sec:defexpas} --------------------- Assume that, for $z\ne0$, a function $F$ has the representation $$\label{ex:not07} F(z)=a_0+\frac{a_1}z+\frac{a_2}{z^2}+\cdots+\frac{a_{n- 1}}{z^{n-1}}+R_n(z),\quad n=0,1,2,\ldots,$$ with $F(z)=R_0(z)$, such that for each $n = 0,1,2,\ldots$ the following relation holds $$\label{ex:not08} R_n(z)=\bigO\left(z^{-n}\right),\quad {\rm as}\quad z\to\infty$$ in some unbounded domain $\calD$. Then $\dsp{\sn a_nz^{-n}}$ is called an asymptotic expansion of the function $F(z)$, and we denote this by $$\label{ex:not09} F(z)\sim\sn a_nz^{-n},\quad z\to\infty, \quad z\in\calD.$$ Poincaré introduced this form in 1886, and analogous forms can be given for $z\to 0$, and so on. Observe that we do not assume that the infinite series in converges for certain $z-$values. This is not relevant in asymptotics; in the relation in only a property of $R_n(z)$ is requested, with $n$ fixed. \[ex:exponint\] The classical example is the so-called exponential integral, that is, $$\label{eq:expint01} F(z)=z\intp \frac{e^{-zt}}{t+1}\,dt=z\int_z^\infty t^{-1}e^{z-t}\,dt=z\,e^zE_1(z),$$ which we consider for $z>0$. 
Repeatedly using integration by parts in the second integral, we obtain $$\label{eq:expint02} F(z)=1-\frac1z+\frac{2!}{z^2}-\cdots+\frac{(-1)^{n-1}(n- 1)!}{z^{n-1}}+ (-1)^nn!\,z\int_z^\infty\frac{e^{z-t}}{t^{n+1}}\,dt.$$ In this case we have, since $t\ge z$, $$\label{eq:expint03} (-1)^nR_n(z)=n!\,z\int_z^\infty\frac{e^{z-t}}{t^{n+1}}\,dt\le \frac{n!}{z^n}\int_z^\infty e^{z-t}\,dt=\frac{n!}{z^n}.$$ Indeed, $R_n(z)=\bigO(z^{-n})$ as $z\to\infty$. Hence $$\label{eq:expint04} z\int_z^\infty t^{-1}e^{z-t}\,dt\sim\sn(-1)^n\frac{n!}{z^n},\quad z\to\infty.$$ For extending this result to complex $z$, see Remark \[rem:watlem\]. The classical methods for integrals {#sec:class} =================================== We give a short overview of the classical methods. In each method the contributions to the asymptotic behavior of the integral can be obtained from one or a few [*decisive points*]{} of the interval of integration[^4]. Watson’s lemma for Laplace-type integrals {#sec:aslap} ----------------------------------------- In this section we consider the large $z$ asymptotic expansions of Laplace-type integrals of the form $$\label{eq:watson01} F_\lambda(z)=\intp t^{\lambda-1} f(t)e^{-zt}\,dt,\quad \Re z>0, \quad \Re\lambda>0.$$ For simplicity we assume that $f$ is analytic in a disc $\vert t\vert\le r$, $r>0$ and inside a sector $\calD\colon \alpha<\phase\,t<\beta$, where $\alpha<0$ and $\beta>0$. Also, we assume that there is a real number $\sigma$ such that $f(t)=\bigO(e^{\sigma|t|})$ as $t \to\infty$ in $\calD$. Then the substitution of $\dsp{f(t)= \sn a_n t^n}$ gives the asymptotic expansion $$\label{eq:watson02} F_\lambda(z)\sim\sn a_n\frac{\Gamma(n+\lambda)}{z^{n+\lambda}},\quad z\to\infty,$$ which is valid inside the sector $$\label{eq:watson03} -\beta-\tfrac12\pi+\delta\le\phase\,z\le-\alpha+\tfrac12\pi-\delta,$$ where $\delta$ is a small positive number. This useful result is known as Watson’s lemma. By using a finite expansion of $f$ with remainder we can prove the asymptotic property needed for the Poincaré-type expansion. For the proof (also for more general conditions on $f$) we refer to [@Olver:1997:ASF p. 114]. To explain how the bounds in arise, we allow the path of integration in to turn over an angle $\tau$, and write $\phase\,t=\tau$, $\phase\,z=\theta$, where $\alpha<\tau<\beta$. The condition for convergence in is $\cos(\tau+\theta)>0$, that is, $- \frac12\pi<\tau+\theta<\frac12\pi$. Combining this with the bounds for $\tau$ we obtain the bounds for $\theta$ in . Several modifications of Watson’s lemma are considered in the literature, for example of Laplace transforms with logarithmic singularities at the origin. \[rem:watlem\] In Example \[ex:exponint\] for the exponential integral we have $f(t)=(1+t)^{-1}$. In that case $f$ is analytic in the sector $|\phase\,t|<\pi$, and $\alpha=-\pi,\, \beta=\pi$. Hence, the asymptotic expansion given in holds in the sector $|\phase\,z|\le\frac32\pi-\delta$. This range is much larger than the usual domain of definition for the exponential integral, which reads: $|\phase\,z|<\pi$. 
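For real $z$ the expansion is easy to check numerically. The following minimal sketch (Python, assuming NumPy and SciPy are available; `scipy.special.exp1` evaluates $E_1$) compares the partial sums $S_N(z)=\sum_{n=0}^{N-1}(-1)^n n!/z^n$ with $F(z)=z e^z E_1(z)$ at $z=10$.

```python
# Minimal sketch (assumes NumPy and SciPy): partial sums of the divergent
# asymptotic series versus F(z) = z e^z E_1(z).
import numpy as np
from math import factorial
from scipy.special import exp1          # the exponential integral E_1

def S(z, N):
    """Partial sum S_N(z) = sum_{n=0}^{N-1} (-1)^n n! / z^n."""
    return sum((-1) ** n * factorial(n) / z ** n for n in range(N))

z = 10.0
F = z * np.exp(z) * exp1(z)             # the function being expanded
print(f"F({z}) = {F:.10f}")
for N in (2, 5, 10, 15, 20):
    print(f"N = {N:2d}   |S_N - F| = {abs(S(z, N) - F):.3e}"
          f"   first omitted term = {factorial(N) / z ** N:.3e}")
```

In agreement with the bound on $(-1)^nR_n(z)$ derived above, the error never exceeds the first omitted term $N!/z^N$; the smallest error occurs near $N\approx z$, after which the growth of the factorials takes over.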
\[ex:modbes\] The modified Bessel function $K_\nu(z)$ has the integral representation $$\label{eq:modb01} \mathop{K_{{\nu}}\/}\nolimits\!\left(z\right)= \frac{\pi^{{\frac{1}{2}}}(2z)^{\nu}e^{-z}}{\mathop{\Gamma\/}\nolimits\!\left(\nu+\frac{1}{2}\right)}\int _{0}^{\infty}t^{\nu-\frac12}e^{{-2zt}}f(t) \,dt, \quad \Re\nu>-\tfrac12,$$ where $$\label{eq:modb02} f(t)=(t+1)^{{\nu-\frac{1}{2}}}=\sn \binom{\nu-\frac12}{n}t^n,\quad \vert t\vert <1.$$ Watson’s lemma can be used by substituting this expansion into , and we obtain $$\label{eq:modb03} K_\nu(z)\sim\sqrt{{\frac\pi{2z}}}e^{-z}\sn\frac{c_n(\nu)}{n!\,(8z)^{n}},\quad |\phase\, z |\le\tfrac32\pi-\delta,$$ where $\delta$ is a small positive number. The coefficients are given by $c_0(\nu)=1$ and $$\label{eq:modb04} c_n(\nu)=\left(4\nu^2-1\right)\left(4\nu^2-3^2\right)\cdots\left(4\nu^2-(2n-1)^2\right),\quad n\ge1.$$ The expansion in is valid for bounded values of $\nu$. Large values of $\nu$ destroy the asymptotic nature and the expansion in is valid for large $z$ in the shown sector, uniformly with respect to bounded values of $\nu$. The expansion terminates when $\nu=\pm\frac12, \pm\frac32,\ldots$, and we have a finite exact representation. By using the many relations between the Bessel functions, expansions for large argument of all other Bessel functions can be derived from this example. For all these expansions sharp estimates are available of remainders in the expansions; see Olver’s work [@Olver:1997:ASF Chapter 7], which is based on using differential equations. There are many integral representations for Bessel functions. For $K_\nu(z)$ we have, for example, $$\label{eq:modb05} K_\nu(z)=\intp e^{-z\cosh t}\cosh(\nu t)\,dt,\quad \Re z>0.$$ It is not difficult to transform this integral into the standard form given in , for example by substituting $\cosh t=1+s$, or to consider it as an example for Laplace’s method considered in §\[sec:laplacemethod\]. However, then the method for obtaining the explicit form of the coefficients as shown in will be rather complicated. The method of Laplace {#sec:laplacemethod} --------------------- In Laplace’s method we deal with integrals of the form $$\label{eq:Laplace00} F(z)=\int_a^b e^{-z p(t)} q(t)\,dt,$$ (see [@Olver:1997:ASF §3.7]), where it is assumed that these integrals can be transformed into integrals of the form $$\label{eq:Laplace01} F(z)=\intr e^{-z t^2} f(t)\,dt,\quad \Re z>0.$$ The function $f$ is assumed to be analytic inside a domain $\calD$ of the complex plane that contains the real axis in its interior. By splitting the contour of integration in two parts, for positive and negative $t$, two integrals arise that can be expanded by applying Watson’s lemma considered in §\[sec:aslap\]. On the other hand we can substitute a Maclaurin expansion $\dsp{f(t)=\sk c_k t^k}$ to obtain $$\label{eq:Laplace02} F(z)\sim \sqrt{\frac{\pi}{z}}\sk \left(\tfrac12\right)_k\frac{c_{2k}}{z^k}, \quad z\to\infty.$$ The domain of validity depends on the location of the singularities of the function $f$ in the complex plane. We can suppose that this function is even. As in Watson’s lemma we assume that $\calD$ contains a disk $\vert t\vert\le r$, $r>0$, and sectors $\alpha<\phase(\pm t)<\beta$, where $\alpha<0$ and $\beta>0$. In addition we need convergence conditions, say by requiring that there is a real number $\sigma$ such that $f(t)=\bigO(e^{\sigma|t^2|})$ as $t \to\infty$ in $\calD$. Write $z=r e^{i\theta}$ and $t=\sigma e^{i\tau}$. 
Then, when rotating the path, the condition for convergence at infinity is $\cos(\theta+2\tau)>0$, that is, $-\frac12\pi<\theta+2\tau<\frac12\pi$, which should be combined with $\alpha<\tau<\beta$ for staying inside the sector. Then the expansion in holds uniformly inside the sector $$\label{eq:Laplace03} -2\beta-\tfrac12\pi+\delta\le\phase\,z\le\tfrac12\pi-2\alpha-\delta,$$ for any small positive number $\delta$. For a detailed analysis of Laplace’s method we refer to [@Olver:1997:ASF pp. 121–127]. The saddle point method and paths of steepest descent {#sec:sad} ----------------------------------------------------- In this case the integrals are presented as contour integrals in the complex plane: $$\label{eq:saddle01} F(z)=\int_\calC e^{-z\phi(w)}\psi(w)\,dw,$$ where $z$ is a large real or complex parameter. The functions $\phi, \psi$ are analytic in a domain $\calD$ of the complex plane. The integral is taken along a path $\calC$ in $\calD$, and avoids the singularities and branches of the integrand. Integrals of this type arise naturally in the context of linear wave propagation and in other physical problems; many special functions can be represented by such integrals. Saddle points are zeros of $\phi^\prime(w)$ and for the integral we select modifications of the contour $\calC$, usually through a saddle point and on the path we require that $\Im(z\phi(w))$ is constant, a steepest descent path. After several steps we may obtain representations as in , after which we can apply Laplace’s method. For an extensive discussion of the saddle point method we refer to De Bruijn’s book [@Bruijn:1981:AMA]. Generating functions; Darboux’s method {#sec:darb} -------------------------------------- The classical orthogonal polynomials, and many other special functions, have generating functions of the form$$\label{eq:gfin01} G(z,w)=\sn F_n(z) w^n.$$ The radius of convergence may be finite or infinite, and may depend on the variable $z$. For example, the Laguerre polynomials satisfy the relation $$\label{eq:gfin02} (1-w)^{-\alpha-1}e^{-wz/(1-w)}=\sn L_n^\alpha(z)w^n,\quad \vert w\vert <1;$$ $\alpha$ and $z$ may assume any finite complex value. From the generating function a representation in the form a Cauchy integral follows: $$\label{eq:gfin03} F_n(z)=\frac{1}{2\pi i}\int_{\calC} G(z,w)\,\frac{dw}{w^{n+1}},$$ where $\calC$ is a circle around the origin inside the domain where $G(z,w)$ is analytic as a function of $w$. When the function $G(z,w)$ has simple algebraic singularities, an asymptotic expansion of $F_n(z)$ can usually be obtained by deforming the contour $\calC$ around the branch points or other singularities of $G(z,w)$ in the $w-$plane. \[ex:legenpol\] The Legendre polynomials have the generating function $$\label{eq:gfin04} \frac{1}{\sqrt{1-2xw+w^2}}=\sn P_n(x)w^n,\quad -1\le x\le 1,\quad |w|<1.$$ We write $x=\cos\theta$, $0\le \theta\le \pi$, and obtain the Cauchy integral representation $$\label{eq:gfin05} P_n(\cos\theta)=\frac{1}{2\pi i}\int_\calC \frac{1}{\sqrt{1-2\cos\theta \,w+w^2}}\frac{dw}{w^{n+1}},$$ where $\calC$ is a circle around the origin with radius less than $1$. There are two singular points on the unit circle: $w_\pm= e^{\pm i\theta}$. When $\theta\in(0,\pi)$ we can deform the contour $\calC$ into two loops $\calC_\pm$ around the two branch cuts. Because $P_n(-x)=(-1)^nP_n(x)$, we take $0\le x<1$, $0<\theta\le\frac12\pi$. 
We assume that the square root in is positive for real values of $w$ and that branch cuts run from each $w_\pm$ parallel to the real axis, with $\Re w\to +\infty$. For $\calC_+$ we substitute $w=w_+e^s$, and obtain a similar contour $\calC_+$ around the origin in the $s-$plane. We start the integration along $\calC_+$ at $+\infty$, with $\phase\,s=0$, turn around the origin in the clock-wise direction, and return to $+\infty$ with $\phase\,s=-2\pi$. The contribution from $\calC_+$ becomes $$\label{eq:gfin06} P_n^+(\cos\theta)=\frac{e^{-\left(n+\frac12\right)i\theta+\frac14\pi i}}{\pi\sqrt{2\sin\theta}}\intp e^{-ns}f^+(s)\,\frac{ds}{\sqrt{s}},$$ where $$\label{eq:gfin07} f^+(s)=\sqrt{\frac{s}{e^s-1}\,\frac{1-e^{-2i\theta}}{e^{s}-e^{-2i\theta}}}.$$ Expanding $f^+$ in powers of $s$, we can use Watson’s lemma (see §\[sec:aslap\]) to obtain the large $n$ expansion of $P_n^+(\cos\theta)$. If $\theta\in[\theta_0,\frac12\pi]$, where $\theta_0$ is a small positive number, then the conditions are satisfied to apply Watson’s lemma. The contribution from the singularity at $w_-$ can be obtained in the same way. It is the complex conjugate of the contribution from $w_+$, and we have $P_n(\cos\theta)=2\Re P_n^+(\cos\theta)$. For an application to large degree asymptotic expansions of generalized Bernoulli and Euler polynomials, see [@Lopez:2010:LDA]; see also [@Wong:2001:AAI Chapter 2]. The way of handling coefficients of power series is related to [*Darboux’s method*]{}, in which again the asymptotic behavior is considered of the coefficients of a power series $f(z)=\sum a_n z^n$. A comparison function $g$, say, is needed with the same relevant singular point(s) as $f$. When $g$ has an expansion $g(z)=\sum b_n z^n$, in which the coefficients $b_n$ have known asymptotic behavior, then, under certain conditions on $f(z)-g(z)$ near the singularity, it is possible to find asymptotic forms for the coefficients $a_n$. For an introduction to Darboux’s method and examples for orthogonal polynomials, we refer to [@Szego:1975:OP §8.4]. For the Laguerre polynomials the method as explained in Example \[ex:legenpol\] does not work. The essential singularity at $w=1$ in the left-hand side of requires a different approach. See [@Perron:1921:UDV], where the more general case of Kummer functions is considered and, more recently, [@Borwein:2008:ELA] for Laguerre polynomials $L_n^\alpha(z)$ for large values of $n$, with complex $z\notin[0,\infty)$. When relevant singularities are in close proximity, or even coalescing, we need uniform methods and for a uniform treatment we refer to [@Wong:2005:OAU]. In our example of the Legendre polynomials, uniform methods are needed to deal with small values of $\theta$; in that case $J-$Bessel functions are needed. Mellin-type integrals {#sec:mellin} --------------------- Mellin convolution integrals are of the form $$\label{eq:Mell01} F_{\lambda}(x)=\intp t^{\lambda-1} h(xt) f(t)\,dt,$$ and they reduce to the standard form of Watson’s lemma when $h(t)=e^{-t}$. For the general case powerful methods have been developed, in which asymptotic expansions are obtained for $x\to0$ and for $x\to\infty$. For more details we refer to [@Lopez:2008:AMC], [@Bleistein:1975:AEI Chapters 4,6] and [@Wong:2001:AAI Chapter VI]. A main step in the method to obtain asymptotic expansions of the integral in is the use of Mellin transforms and their inverses. These inverses can be viewed as Mellin-Barnes integrals. 
An example is the Meijer $G-$function $$\label{eq:Mell02} \begin{array}{ll} \dsp{\mathop{{G^{{m,n}}_{{p,q}}}\/}\nolimits\!\left(z;{a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}}\right)}=\\[8pt] \quad\quad \dsp{\frac{1}{2\pi i}\mathlarger{\int}_{{\calL}}{\frac{\prod\limits _{{\ell=1}}^{m}\mathop{\Gamma\/}\nolimits\!\left(b_{{\ell}}-s\right)\prod\limits _{{\ell=1}}^{n}\mathop{\Gamma\/}\nolimits\!\left(1-a_{{\ell}}+s\right)}{\prod\limits _{{\ell=m}}^{{q-1}}\mathop{\Gamma\/}\nolimits\!\left(1-b_{{\ell+1}}+s\right)\prod\limits _{{\ell=n}}^{{p-1}}\mathop{\Gamma\/}\nolimits\!\left(a_{{\ell+1}}-s\right)}} z^{s}\, ds.} \end{array}$$ It can be viewed as the inversion of a Mellin transform. Many special functions can be written in terms of this function. The integration path $\calL$ separates the poles of the factors $\Gamma(b_\ell-s)$ from those of the factors $\Gamma(1-a_\ell+s)$. By shifting the contour and picking up the residues, expansions for $z\to0$ (when shifting to the right) and $z\to\infty$ (when shifting to the left) may be obtained. For an extensive treatment we refer to [@Paris:2001:AMB]. The method of stationary phase {#sec:stat} ============================== We consider this method in more detail, and we give also new elements which will give uniform expansions. The integrals are of the type $$\label{eq:mstph01} F(\omega)=\int_a^b e^{i\omega\phi(t)}\psi(t)\,dt,$$ where $\omega$ is a real large parameter, $a,b$ and $\phi$ are real; $a=-\infty$ or/and $b=+\infty$ are allowed. The idea of the method of stationary phase is originally developed by Stokes and Kelvin. The asymptotic character of the integral in is completely determined if the behavior of the functions $\phi$ and $\psi$ is known in the vicinity of the decisive points. These are - stationary points: zeros of $\phi^{\prime}$ in $[a,b]$; - the finite endpoints $a$ and $b$; - values in or near $[a,b]$ for which $\phi$ or $\psi$ are singular. The following integral shows all these types of decisive points: $$\label{eq:mstph02} F(\omega)=\int_{-1}^1 e^{i\omega t^2}\sqrt{\left\vert t-c\right\vert}\,dt,\quad -1<c<1.$$ In Figure \[fig:StPhase\] we see why a stationary point may give a contribution: less oscillations occur at the stationary point compared with other points in the interval, where the oscillations neutralize each other. Integrating by parts: no stationary points {#sec:stphinpa} ------------------------------------------ If in $\phi$ has no stationary point in the finite interval $[a,b]$, and $\phi$, $\psi$ are regular on $[a,b]$, then contributions from the endpoints follow from integrating by parts. We have $$\label{eq:mstph03} \begin{array}{@{}r@{\;}c@{\;}l@{}} F(\omega)&=& \dsp{\int_a^b e^{i\omega\phi(t)}\psi(t)\,dt =\frac{1}{i\omega} \int_{a}^b \psi(t)\frac{de^{i\omega\phi(t)}}{\phi^\prime(t)} =}\\[8pt] &=& \dsp{\frac{e^{i\omega\phi(b)}}{i\omega\phi^\prime(b)}\psi(b)-\frac{e^{i\omega\phi(a)}}{i\omega\phi^\prime(a)}\psi(a)+\frac{1}{i\omega}\int_{a}^b e^{i\omega\phi(t)}\psi_1(t)\,dt,} \end{array}$$ where $$\label{eq:mstph04} \psi_1(t)=-\frac{d}{dt}\,\frac{\psi(t)}{\phi^\prime(t)}.$$ The integral in this result has the same form as the one in , and when $\phi$ and $\psi$ are sufficiently smooth, we can continue this procedure. 
In this way we obtain for $N=0,1,2,\ldots$ the compound expansion $$\label{eq:mstph05} \begin{array}{@{}r@{\;}c@{\;}l@{}} F(\omega)&=& \dsp{\frac{e^{i\omega\phi(b)}}{\phi^\prime(b)}\sum_{n=0}^{N-1}\frac{\psi_n(b)}{(i\omega)^{n+1}}-\frac{e^{i\omega\phi(a)}}{\phi^\prime(a)}\sum_{n=0}^{N-1}\frac{\psi_n(a)}{(i\omega)^{n+1}}}\ +\\[8pt] &&\dsp{\frac{1}{(i\omega)^N}\int_{a}^b e^{i\omega\phi(t)}\psi_N(t)\,dt,} \end{array}$$ where for $N=0$ the sums in first line vanish. The integral can be viewed as a remainder of the expansion. We have $\psi_0=\psi$ and $$\label{eq:mstph06} \psi_{n+1}(t)=-\frac{d}{dt}\,\frac{\psi_n(t)}{\phi^\prime(t)},\quad n=0,1,2,\ldots.$$ This expansion can be obtained when $\phi,\psi\in C^N[a,b]$. When we assume that we can find positive numbers $M_N$ such that $\left\vert \psi_N(t)\right\vert\le M_N$ for $t\in[a,b]$, we can find an upper bound of the remainder in , and this estimate will be of order $\bigO\left(\omega^{-N}\right)$. Observe that the final term in each of the sums in are also of order $\bigO\left(\omega^{-N}\right)$. Stationary points and the use of neutralizers {#sec:stphneut} --------------------------------------------- When $\phi$ in has exactly one simple internal stationary point, say at $t=0$, that is, $\phi^\prime(0)=0$ and $\phi^{\prime\prime}(0)\ne0$, we can transform the integral into the form $$\label{eq:mstph07} F(\omega)=\int_{a}^b e^{i\omega t^2}f(t)\,dt,\quad \omega>0,$$ possibly with different $a$ and $b$ as in , and again we assume finite $a<0$ and $b>0$. Because of the stationary point at the origin, the straightforward integration by parts method of the previous section cannot be used. We assume that $f\in C^N[a,b]$ for some positive $N$. There are three decisive points: $a, 0, b$, and we can split up the interval into $[a,0]$ and $[0,b]$, each subinterval having two decisive points. To handle this, Van der Corput [@Corput:1948:MCP] introduced neutralizers in order to get intervals in which only one decisive point exists. A [*neutralizer*]{} $N_c$ at a point $c$ is a $C^\infty(\RR)$ function such that: 1. $N_c(c)=1$, and all its derivatives vanish at $c$. 2. There is a positive number $d$ such that $N_c(x)=0$ outside $(c-d,c+d)$. It is not difficult to give explicit forms of such a neutralizer, but in the analysis this is not needed. With a neutralizer at the origin we can write the integral in in the form $$\label{eq:mstph08} F(\omega)= \intr e^{i\omega t^2}f(t) N_0(t)\,dt+ \int_{a}^b e^{i\omega t^2}f(t)(1-N_0(t))\,dt.$$ We write in the first integral $$\label{eq:mstph09} f(t)N_0(t)=\sum_{n=0}^{N-1} c_nt^n +R_N(t)$$ where the coefficients $c_n=f^{(n)}(0)/n!$ do not depend on the neutralizer, because all derivatives of $N_0(t)$ at $t=0$ vanish. We would like to evaluate the integrals by extending the interval $[a,b]$ to $\RR$. That is, we evaluate $$\label{eq:mstph10} \intr e^{i\omega t^2} t^{n}\,dt$$ of which the ones with odd $n$ vanish. The other ones with $n\ge2$ diverge. For a way out we refer to §\[sec:stphavneut\]. Other methods are based on introducing a converging factor, such as $e^{-\eps t^2}$, $\eps>0$, in the integral in . For details on this method, see [@Olver:1974:EBS]. In [@Wong:2001:AAI §II.3] a method of Erdélyi [@Erdelyi:1955:ARF] is used. In the second integral in , the stationary point at $t=0$ is harmless (it is neutralized), and we can integrate by parts to obtain the contributions from $a$ and $b$, as we have done in §\[sec:stphinpa\]. The expansion of the second integral is the same as in , now with $\phi(t)=t^2$. 
Again, the asymptotic terms do not depend on the neutralizer $N_0$. How to avoid neutralizers? {#sec:stphavneut} -------------------------- Neutralizers can be used in more complicated problems, and Van der Corput’s work on the method of stationary phase was pioneering. After 1960 the interest in uniform expansions came up and he once admitted (private communications, 1967) that he didn’t see how to use his neutralizers when two or more decisive points are nearby or even coalescing. This became a major topic in uniform methods, as we will see in later sections. Bleistein [@Bleistein:1966:UAE] introduced in 1966 a new method on integrating by parts in which two decisive points were allowed to coalesce.[^5] The method was not used for the method of stationary phase, but as we will see we can use a simple form of Bleistein’s method also for this case. Again we consider with finite $a<0$ and $b>0$ and $f\in C^N[a,b]$. We write $f(t)=f(0)+(f(t)-f(0))$ and obtain $$\label{eq:mstph12} F(\omega)=f(0)\Phi_{a,b}(\omega)+\int_{a}^b e^{i\omega t^2}\left(f(t)-f(0)\right)\,dt,$$ where $$\label{eq:mstph13} \Phi_{a,b}(\omega)=\int_{a}^b e^{i\omega t^2}\,dt,$$ a Fresnel-type integral [@Temme:2010:ERF]. Because the integrand vanishes at the origin we can integrate by parts in , and write $$\label{eq:mstph14} F(\omega)=f(0)\Phi_{a,b}(\omega)+\frac{1}{2i\omega}\int_{a}^b \frac{f(t)-f(0)}{t}\,de^{i\omega t^2}.$$ This gives $$\label{eq:mstph15} \begin{array}{@{}r@{\;}c@{\;}l@{}} F(\omega)&=&\dsp{\frac{e^{i\omega b^2}}{2i\omega}\frac{f(b)-f(0)}{b}- \frac{e^{i\omega a^2}}{2i\omega}\frac{f(a)-f(0)}{a}}\ +\\[8pt] &&\dsp{f(0)\Phi_{a,b}(\omega)+\frac{1}{2i\omega}\int_{a}^b f_1(t) e^{i\omega t^2}\,dt,} \end{array}$$ where $$\label{eq:mstph16} f_1(t)=-\frac{d}{dt}\frac{f(t)-f(0)}{t}.$$ The integral in can be expanded in the same way. We obtain $$\label{eq:mstph17} \begin{array}{@{}r@{\;}c@{\;}l@{}} F(\omega)&=&\dsp{ {e^{i\omega b^2}}{}\sum_{n=0}^{N-1}\frac{C_n(b)}{(2i\omega)^{n+1}}- {e^{i\omega a^2}}{}\sum_{n=0}^{N-1}\frac{C_n(a)}{(2i\omega)^{n+1}}}\ +\\[8pt] &&\dsp{\Phi_{a,b}(\omega)\sum_{n=0}^{N-1}\frac{f_n(0)}{(2i\omega)^n}+\frac{1}{(2i\omega)^N}\int_{a}^b f_N(t) e^{i\omega t^2}\,dt,} \end{array}$$ where $N\ge0$ and for $n=0,1,2,\ldots,N$ we define $$\label{eq:mstph18} C_n(t)=\frac{f_n(t)-f_n(0)}{t}, \quad f_{n+1}(t)=-\frac{d}{dt}\frac{f_n(t)-f_n(0)}{t}.$$ For $N=0$ the terms with the sums in all vanish. The integral in can be viewed as a remainder of the expansion. When we assume that we can find positive numbers $M_N$ such that $\left\vert f_N(t)\right\vert\le M_N$ for $t\in[a,b]$, we can find an upper bound of this remainder, which will be of order $\bigO\left(\omega^{-N}\right)$. The final term in each of the sums in are also of order $\bigO\left(\omega^{-N}\right)$. If we wish we can keep the Fresnel-type integral in the expansion in as it is, but we can also proceed by writing $$\label{eq:mstph19} \begin{array}{@{}r@{\;}c@{\;}l@{}} \Phi_{a,b}(\omega) &=&\dsp{\intr e^{i\omega t^2}\,dt - \int_{-\infty}^a e^{i\omega t^2}\,dt - \int_{b}^\infty e^{i\omega t^2}\,dt}\\[8pt] &=&\dsp{\sqrt{\frac{\pi}{\omega}}e^{\frac14\pi i}- \int_{-a}^\infty e^{i\omega t^2}\,dt - \int_{b}^\infty e^{i\omega t^2}\,dt}. \end{array}$$ The integrals in the second line can be expressed in terms of the complementary error function (see ) and can be expanded for large values of $\omega$ by using (when $a$ and $b$ are bounded away from zero); see [@Temme:2010:ERF §7.12(ii)]. 
In this way we can obtain, after some re-arrangements, the same expansions as with the neutralizer method of §\[sec:stphneut\]. However, we observe the following points. - When we do not expand the function $\Phi_{a,b}(\omega)$ for large $\omega$, the expansion in remains valid when $a$ and/or $b$ tend to zero. This means, we can allow the decisive points to coalesce (Van der Corput’s frustration), and in fact we have an expansion that is valid uniformly with respect to small values of $a$ and $b$. - There is no need to handle the divergent integrals in . As we explained after , a rigorous approach is possible by using converging factors, or other methods, but the present representation is very convenient. - The remainders in the neutralizer method depend on the chosen neutralizers; the remainder in only depends on $f$ and its derivatives. Algebraic singularities at both endpoints {#sec:singend} ----------------------------------------- We assume that $f$ is $N$ times continuously differentiable in the finite interval $[\alpha,\beta]$. Consider the following integral $$\label{eq:mstph20} F(\omega)=\int_{\alpha}^{\beta}e^{i\omega t}(t-\alpha)^{\lambda-1}(\beta-t)^{\mu-1} f(t)\,dt,$$ where $\Re \lambda>0$, $\Re \mu >0$. A straightforward approach using integrating by parts is not possible. However, see §\[sec:singendpart\]. For this class of integrals we have the following result $$\label{eq:mstph21} F(\omega)=A_N(\omega)+B_N(\omega)+\bigO\left(1/\omega^N\right),\quad \omega\to\infty,$$ where $$\label{eq:mstph22} \begin{array}{ll} \dsp{A_N(\omega)=\sum_{n=0}^{N-1}\frac{\Gamma(n+\lambda)}{n!\,\omega^{n+\lambda}}e^{i(\frac12\pi(n+\lambda)+\alpha\omega)}\left.\frac{d^n}{dt^n}\left((\beta-t)^{\mu-1} f(t)\right)\right\vert_{t=\alpha},}\\[8pt] \dsp{B_N(\omega)=\sum_{n=0}^{N-1}\frac{\Gamma(n+\mu)}{n!\,\omega^{n+\mu}}e^{i(\frac12\pi(n-\mu)+\beta\omega)}\left.\frac{d^n}{dt^n}\left((t-\alpha)^{\lambda-1} f(t)\right)\right\vert_{t=\beta}.} \end{array}$$ Erdélyi’s proof in [@Erdelyi:1955:ARF] is based on the use of neutralizers and the result can be applied to, for example, the Kummer function written in the form $$\label{eq:mstph24} \CHF{\lambda}{\lambda+\mu}{i\omega}=\frac{\Gamma(\lambda+\mu)}{\Gamma(\lambda)\Gamma(\mu)} \int_{0}^{1}e^{i\omega t}t^{\lambda-1}(1-t)^{\mu-1} \,dt, \quad \Re\lambda, \Re \mu >0.$$ By using a standard result for this function with $\omega\to+\infty$ follows, see [@Olde:2010:CHF §13.7(i)], but there are more elegant ways to obtain a result valid for general complex argument. ### Integrating by parts {#sec:singendpart} Also in this case we can avoid the use of neutralizers by using integration by parts. This cannot be done in a straightforward way because of the singularities of the integrand in at the endpoints. Observe that Erdélyi’s expansion gives two expansions, one from each decisive endpoint. By a modification of Bleistein’s procedure we obtain expansions which take contributions from both decisive points in each step. We write (see also the method used in §\[sec:stphavneut\]) $$\label{eq:mstph25} f(t)=a_0+b_0(t-\alpha)+(t-\alpha)(\beta-t)g_0(t),$$ where $a_0, b_0$ follow from substituting $t=\alpha$ and $t=\beta$. 
This gives $$\label{eq:mstph26} a_0=f(\alpha),\quad b_0=\frac{f(\beta)-f(\alpha)}{\beta-\alpha},$$ and for we obtain $$\label{eq:mstph27} F(\omega)=a_0\Phi+b_0\Psi+\int_{\alpha}^{\beta}e^{i\omega t}(t-\alpha)^{\lambda}(\beta-t)^{\mu} g_0(t)\,dt,$$ where (see ) $$\label{eq:mstph28} \begin{array}{ll} \Phi=(\beta-\alpha)^{\lambda+\mu-1}e^{i\omega\alpha}\dsp{\frac{\Gamma(\lambda)\Gamma(\mu)}{\Gamma(\lambda+\mu)}}\CHF{\lambda}{\lambda+\mu}{i(\beta-\alpha)\omega},\\[8pt] \Psi=(\beta-\alpha)^{\lambda+\mu}e^{i\omega\alpha}\dsp{\frac{\Gamma(\lambda+1)\Gamma(\mu)}{\Gamma(\lambda+\mu+1)}}\CHF{\lambda+1}{\lambda+\mu+1}{i(\beta-\alpha)\omega}. \end{array}$$ Now we integrate by parts and obtain, observing that the integrated terms will vanish, $$\label{eq:mstph29} F(\omega)=a_0\Phi+b_0\Psi+\frac{1}{i\omega}\int_{\alpha}^{\beta}e^{i\omega t}(t-\alpha)^{\lambda-1}(\beta-t)^{\mu-1} f_1(t)\,dt,$$ where $$\label{eq:mstph30} f_1(t)=-(t-\alpha)^{1-\lambda}(\beta-t)^{1-\mu}\frac{d}{dt}\left((t-\alpha)^{\lambda}(\beta-t)^{\mu} g_0(t)\right).$$ We can continue with this integral in the same manner, and obtain $$\label{eq:mstph31} F(\omega)= \Phi\sum_{n=0}^{N-1}\frac{a_n}{(i\omega)^n}+ \Psi\sum_{n=0}^{N-1}\frac{b_n}{(i\omega)^n}+R_N(\omega),\quad N=0,1,2,\ldots,$$ where $$\label{eq:mstph32} R_N(\omega)= \frac{1}{(i\omega)^N}\int_{\alpha}^{\beta}e^{i\omega t}(t-\alpha)^{\lambda-1}(\beta-t)^{\mu-1} f_N(t)\,dt.$$ The coefficients are defined by $$\label{eq:mstph33} a_n=f_n(\alpha),\quad b_n=\frac{f_n(\beta)-f_n(\alpha)}{\beta-\alpha},$$ and the functions $f_n$ follow from the recursive scheme $$\label{eq:mstph34} \begin{array}{@{}r@{\;}c@{\;}l@{}} f_n(t)&=&a_n+b_n(t-\alpha)+(t-\alpha)(\beta-t)g_n(t),\\[8pt] f_{n+1}(t)&=&\dsp{-(t-\alpha)^{1-\lambda}(\beta-t)^{1-\mu}\frac{d}{dt}\left((t-\alpha)^{\lambda}(\beta-t)^{\mu} g_n(t)\right)}, \end{array}$$ with $f_0=f$. The expansion in contains confluent hypergeometric functions, and Erdélyi’s expansion in is in terms of elementary functions. Erdélyi’s expansion follows by using the asymptotic expansion of the confluent hypergeometric function. Estimates of $R_N(\omega)$ follow when we know more about the function $f$, in particular when we have bounds on the derivatives of $f$. Again, the present approach avoids neutralizers, and the expansion remains valid when the decisive points $\alpha$ and $\beta$ coalesce. Of course, when $\omega(\beta-\alpha)$ is not large, the ${}_1F_1-$functions in should not be expanded. Some examples where the method fails {#sec:stphaseex} ------------------------------------ In some cases it is not possible to obtain quantitative information in the method of stationary phase. For instance, when $\phi\in C^\infty[a,b]$ does not have a stationary point in $[a,b]$, and $\psi\in C^\infty[a,b]$ vanishes with all its derivatives at $a$ and $b$. In that case we have $$\label{eq:mstph35} F(\omega)=\bigO\left(\omega^{-n}\right)$$ as $\omega\to\infty$, for any $n$, and we say the function $F(\omega)$ is exponentially small. An example is $$\label{eq:mstph36} F(\omega)=\intr e^{i\omega t}\frac{dt}{1+t^2}= \pi e^{-\omega}.$$ The integral $$\label{eq:mstph37} G(\omega)=\int_{-\frac12\pi}^{\frac12\pi} e^{i\omega t}\ \psi(t)\,dt,\quad \psi(t)=e^{-1/\cos t},$$ can be expanded as in , and all terms will vanish. We conclude that $G(\omega)$ is exponentially small, but it is not clear if we have a simple estimate $\bigO\left(e^{-\omega}\right)$. 
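Both examples can be explored with a minimal numerical sketch (Python, assuming NumPy and SciPy). For the first integral it suffices to compute $2\int_0^\infty \cos(\omega t)/(1+t^2)\,dt$ with a Fourier-type quadrature rule and to compare with $\pi e^{-\omega}$; for $G(\omega)$, whose integrand vanishes with all its derivatives at the endpoints, ordinary quadrature of the real part is adequate at moderate $\omega$.

```python
# Minimal sketch (assumes NumPy and SciPy): both integrals are exponentially small.
import numpy as np
from scipy.integrate import quad

def F(omega):
    # Re F = 2 * int_0^inf cos(omega t) / (1 + t^2) dt; Im F = 0 by symmetry.
    # weight='cos' with an infinite upper limit selects a Fourier-integral rule.
    val, _ = quad(lambda t: 1.0 / (1.0 + t * t), 0.0, np.inf, weight='cos', wvar=omega)
    return 2.0 * val

def G(omega):
    # Re G = int_{-pi/2}^{pi/2} cos(omega t) exp(-1/cos t) dt; Im G = 0 by symmetry.
    val, _ = quad(lambda t: np.cos(omega * t) * np.exp(-1.0 / np.cos(t)),
                  -0.5 * np.pi, 0.5 * np.pi, limit=200)
    return val

for omega in (2.0, 5.0, 10.0):
    print(f"omega = {omega:4.1f}   F = {F(omega):.4e}"
          f"   pi*exp(-omega) = {np.pi * np.exp(-omega):.4e}   G = {G(omega):.4e}")
```

The computed values of the first integral reproduce $\pi e^{-\omega}$, and $G(\omega)$ is seen to decay very rapidly, although, as remarked above, the method of stationary phase by itself gives no estimate of the rate.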
Another example is $$\label{eq:mahl1} I(a,b)=\int_a^b\left(\frac{x+a}{x-a}\right)^{2ai} \left(\frac{b-x}{b+x}\right)^{2bi}\,\frac{dx}x;$$ $a$ and $b$ are large positive parameters with $b/a=c$, a constant greater than 1. $I(a,b)$ is used in [@Lekner:1987:TRE Eq. (6.64)] for describing the Rayleigh approximation for a reflection amplitude in the theory of electromagnetic and particle waves. Because of the nature of the singular endpoints this integral is of a different type considered so far, but we may try to use the method of stationary phase. This is done in [@Mahler:1986:OSI], where it was shown that the integral is exponentially small. As Mahler admitted, his results do not imply estimates of the form $I(a,b)=\bigO(e^{-a})$. In this example the integrand can be considered for complex values of $x$, and by modifying the contour, leaving the real interval and entering the complex plane, we can show [@Temme:1989:AES] that $I(a,b)=\bigO(e^{-2\pi a})$. \[rem:trouble\] In fact, this is one of the troublesome aspects of the method of stationary phase (the use of neutralizers is another issue). The integral in is defined on the real interval $[a,b]$ with functions $\phi,\psi\in C^n[a,b]$, possibly with $n=\infty$, and $\phi$ and $\omega$ real. When the integral is exponentially small, as $G(\omega)$ given in , the method cannot give more information, unless we can extend the functions $\phi$ and $\psi$ to analytic functions defined in part of the complex plane. In that case more powerful methods of complex analysis become available, including modifying the interval into a contour and using the saddle point method. Uniform expansions {#sec:unimeth} ================== The integrals considered so far are in the form $$\label{eq:stfm01} \int_\calC e^{-\omega\phi(z)}\psi(z)\,dz$$ along a real or complex path $\calC$ with one large parameter $\omega$. In many problems in physics, engineering, probability theory, and so on, additional parameters are present in $\phi$ or $\psi$, and these parameters may have influence on the validity of the asymptotic analysis, the rate of asymptotic convergence of the expansions, or on choosing a suitable method or contour. For example, complications will arise when $\phi$ or $\psi$ have singular points that move to a saddle point under the influence of extra parameters. But it is also possible that two decisive points, say, two saddle points are proximate or coalescing. We will explain some aspects of uniform asymptotic expansions, usually with cases that are relevant in the asymptotic behavior of special functions. Van der Waerden’s method {#sec:waerden} ------------------------ In 1951 B. L. van der Waerden[^6] wrote a paper [@Waerden:1951:OMS] in which he demonstrated what to do for integrals of the type (his notation was different) $$\label{eq:waerden01} F_\alpha(\omega)=\frac{1}{2\pi i}\intr e^{-\omega t^2}\frac{f(t)}{t- i\alpha}\,dt, \quad \Re\omega>0,$$ where we assume that $f$ is analytic in a strip $\calD_a=\{t\in\CC \colon \vert\Im t\vert \le a, \Re t\in\RR\}$ for some positive $a$. Initially we take $ \Re\alpha >0$ but to let the pole cross the real axis we can modify the contour to avoid the pole, or use residue calculus. Because $\vert\alpha\vert$ may be small we should not expand $f(t)/(t-i\alpha)$ in powers of $t$ as in Laplace’s method (see §\[sec:laplacemethod\]). 
To describe Van der Waerden’s method we write $$\label{eq:waerden05} f(t)=\left(f(t)-f(i\alpha)\right)+f(i\alpha).$$ Then the integral in can be written in the form $$\label{eq:waerden06} F_\alpha(\omega)=\tfrac12f(i\alpha) e^{\omega\alpha^2}\erfc\left(\alpha\sqrt{{\omega}}\right)+ \frac{1}{2\pi i}\intr e^{-\omega t^2}g(t)\,dt,$$ where $$\label{eq:waerden07} g(t)=\frac{f(t)-f(i\alpha)}{t-i\alpha}.$$ We have used one of the error functions $$\label{eq:waerden02} \erf\,z =\frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,dt,\quad \erfc\,z =\frac{2}{\sqrt{\pi}}\int_z^\infty e^{-t^2}\,dt,$$ with the properties $$\label{eq:waerden03} \erf\,z+\erfc\,z=1,\quad \erf(-z)=-\erf\,z,\quad \erfc(-z)=2-\erfc\,z.$$ These functions are in fact the normal distribution functions from probability theory. But we also have the representation $$\label{eq:waerden04} \frac{e^{-\omega\alpha^2}}{2\pi i}\intr e^{-\omega t^2}\frac{dt}{t-i\alpha}=\tfrac12\erfc\left(\alpha\sqrt{{\omega}}\right),\quad \Re\alpha >0,$$ and for other values of $\alpha$ we can use analytic continuation. To prove this representation, differentiate the left-hand side with respect to $\omega$. The functions $f$ and $g$ of are analytic in the same domain $\calD_a$ and $g$ can be expanded in powers of $t$. When we substitute $\dsp{g(t)=\sn c_n(\alpha)t^n}$ into , we obtain the large $\omega$ asymptotic representation of $F_\alpha(\omega)$ by using Laplace’s method. This gives $$\label{eq:waerden08} F_\alpha(\omega)\sim\tfrac12f(i\alpha) e^{\omega\alpha^2}\erfc\left(\alpha\sqrt{{\omega}}\right)+ \frac{1}{2 i\sqrt{\pi\omega}}\sn \left(\tfrac12\right)_n \frac{c_{2n}(\alpha)}{\omega^n}.$$ A special feature is that this expansion also holds when $\alpha=0$ and when $\Re\alpha<0$. All coefficients $c_n(\alpha)$ are well-defined and analytic for $i\alpha\in\calD_a$. In fact the expansion in holds uniformly with respect to $i\alpha$ in a domain $\calD^*$ properly inside $\calD_a$. Van der Waerden used this method in a problem of Sommerfeld concerning the propagation of radio waves over a plane earth. He obtained a simpler uniform expansion than the one given by Ott [@Ott:1943:DSU], who obtained a uniform expansion in which each term is an incomplete gamma function. ### An example from De Bruijn’s book {#sec:bruijn} In De Bruijn’s book [@Bruijn:1981:AMA §5.12] the influence of poles near the saddle point is considered by studying the example $$\label{eq:bruijn01} F_\alpha(z)=\beta^2\intr e^{-\omega t^2}\frac{f(t)}{\beta^2+ t^2}\,dt,\quad \beta=\omega^{-\frac12\alpha}, \quad\alpha>0,$$ where $\omega$ is a positive large parameter. Observe that for all $\alpha$ the parameter $\beta $ is small. Three separate cases are distinguished: $0<\alpha<1$, $\alpha=1$, and $\alpha>1$, for the special choice $f(t)=e^t$. For each case an asymptotic expansion is given. These expansions are really different in the sense that they do not pass into each other when $\alpha$ passes unity. Splitting off the pole is not considered. We can use partial fraction decomposition to get two integrals with a single pole, but we can also write $$\label{eq:bruijn02} f(t)=a_0+b_0t+\left(\beta^2+t^2\right)g(t),$$ where we assume that $g$ is regular at the points $\pm i\beta$. This gives $$\label{eq:bruijn03} a_0=\frac{f(i\beta)+f(-i\beta)}{2},\quad b_0=\frac{f(i\beta)-f(-i\beta)}{2i},$$ and where $g(t)$ follows with these values from . 
Hence, $$\label{eq:bruijn04} F_\alpha(z)=a_0\beta^2\intr \frac{e^{-\omega t^2}}{\beta^2+ t^2}\,dt+\beta^2\intr e^{-\omega t^2} g(t)\,dt,$$ where the first integral can be written in terms of the complementary error function (see ): $$\label{eq:bruijn05} F_\alpha(z)=a_0\beta\pi \,e^{\omega^{1-\alpha}}\erfc\left(\omega^{\frac12(1-\alpha)}\right)+\beta^2\intr e^{-\omega t^2}g(t)\,dt.$$ After expanding $\dsp{g(t)=\sk g_k(\beta)t^k}$ the asymptotic representation follows: $$\label{eq:bruijn06} F_\alpha(z)\sim a_0\beta\pi \,e^{\omega^{1-\alpha}}\erfc\left(\omega^{\frac12(1-\alpha)}\right)+ \beta^2\sqrt{\frac{\pi}{\omega}}\sk c_k(\beta)\frac{1}{\omega^k},$$ where $\beta$ is defined in and $$\label{eq:bruijn07} c_k(\beta)= g_{2k}(\beta)\left(\tfrac12\right)_k.$$ These and all other coefficients are regular when $\beta\to0$. The expansion in is valid for all $\alpha\ge0$. We give a few coefficients for $f(t)=e^t$, which is used in [@Bruijn:1981:AMA]. We have $$\label{eq:bruijn08} \begin{array}{ll} \dsp{c_0(\beta)= \frac{1-\cos\beta}{\beta^2}},\\[8pt] \dsp{c_1(\beta)= \frac{\beta^2-2+2\cos\beta}{4\beta^4}},\\[8pt] \dsp{c_2(\beta)= \frac{\beta^4-12\beta^2+24-24\cos\beta}{32\beta^6}}. \end{array}$$ From the representation in we see at once the special value $\alpha=1$: when $0<\alpha<1$ the complementary error function can be expanded by using the asymptotic expansion $$\label{eq:bruijn09} \erfc\,z\sim\frac{e^{-z^2}}{z\sqrt{\pi}}\left(1-\frac{1}{2z^2}+\frac{3}{4z^4}-\frac{15}{8z^6}+\cdots\right),\quad z \to \infty.$$ When $\alpha>1$ it can be expanded by using the convergent power series expansion of $\erf\,z=1-\erfc\,z$. When $\alpha=1$, De Bruijn gives an expansion in terms of functions related to the complementary error function. His first term is $\beta\,\pi\, e\,\erfc(1)$, which corresponds to our term $\beta\cos\beta\,\pi\, e\,\erfc(1)$. It should be noted that De Bruijn was not aiming to obtain a uniform expansion with respect to $\alpha$. His discussion was about the role of an extra parameter which causes poles in the neighborhood of the saddle point. But it is remarkable that he has not referred to Van der Waerden’s method, which gives a very short treatment of what De Bruijn needs lengthy discussions to explain. \[rem:bruijn01\] The coefficients in are well defined as $\beta\to0$. We see that, and this often happens in uniform expansions, close to some special value of a parameter (in this case $\beta=0$), the numerical evaluation of the coefficients is not straightforward, although they are defined properly in an analytic sense. In the present case, with the special example $f(t)=e^t$ in , it is rather easy to compute the coefficients by using power series expansions in $\beta$, but for more general cases special numerical methods are needed. The Airy-type expansion {#sec:airy} ----------------------- In 1957, Chester, Friedman and Ursell published a pioneering paper [@Chester:1957:ESD] in which Airy functions were used as leading terms in expansions. They extended the method of saddle points for the case that two saddle points, both relevant for describing the asymptotic behavior of the integral, are nearby or even coalescing. We describe this phenomenon for an integral representation of the Bessel function $J_\nu(z)$. We have $$\label{eq:airy02} J_\nu(\nu z)=\frac{1}{2\pi i}\int_\calC e^{\nu\phi(s)}\,ds, \quad \phi(s)=z\sinh s-s,$$ where the contour $\calC$ starts at $\infty-\pi i$ and terminates at $\infty+\pi i$.
We consider positive values of $z$ and $\nu$, with $\nu$ large. From graphs of the Bessel function of high order, see Figure \[fig:AiryBess\], it can be seen that $J_\nu(x)$ starts oscillating when $x$ crosses the value $\nu$. We concentrate on the transition area $x\sim \nu$. The Airy function $\Ai(-x)$ shows a similar behavior, as also follows from Figure \[fig:AiryBess\]. There are two saddle points defined by the equation $\cosh s=1/z$. For $0<z\le1$ the saddle points are real, and in that case we have $$\label{eq:airy03} s_\pm= \pm\theta,\quad {\rm where}\quad z=1/\cosh \theta,\quad \theta\ge0.$$ When $0<z\le z_0<1$, with $z_0$ a fixed number, the positive saddle point $s_+$ is the relevant one. We can use the transformation $\phi(s)-\phi(s_+)=-\frac12t^2$ and apply Laplace’s method of §\[sec:laplacemethod\]. This gives Debye’s expansion for $J_\nu(z)$, see [@Olver:2010:BFS §10.19(ii)]. If $z$ is close to 1, both saddle points are relevant. As shown for the first time in [@Chester:1957:ESD], we can use a cubic polynomial by writing $$\label{eq:airy07} \phi(s)=\tfrac13 t^3-\zeta t+A,$$ where $\zeta$ and $A$ follow from the condition that the saddle points $s_\pm$ given in correspond to $\pm\sqrt\zeta$ (the saddle points in the $t-$plane). It is not difficult to verify that $A=0$ (take $s=t=0$) and that $$\label{eq:airy08} \tfrac23\zeta^{\frac32}=\ln\frac{1+\sqrt{1-z^2}}{z}-\sqrt{1-z^2},\quad 0<z\le 1.$$ This relation can be analytically continued for $z>1$ and for complex values of $z$, $z\ne-1$. For small values of $\vert \zeta\vert$ we have $$\label{eq:airy09} z(\zeta)= \sn z_n\eta^n=1-\eta+\tfrac3{10}\eta^2+\tfrac1{350}\eta^3- \tfrac{479}{63000}\eta^4+\ldots,\quad \eta= 2^{-\frac13}\zeta,$$ with convergence for $\vert\zeta\vert < (3\pi/2)^{2/3}$. By using conditions on the mapping in it can be proved that can be written as $$\label{eq:airy10} J_\nu(\nu z)=\frac{1}{2\pi i}\int_{\infty e^{-\frac13\pi i}}^{\infty e^{\frac13\pi i}}e^{\nu(\frac13t^3-\zeta t)}g(t)\,dt, \quad g(t)=\frac{ds}{dt}=\frac{t^2-\zeta}{\phi^\prime(s)}.$$ It is not difficult to verify that $g$ can be defined at both saddle points $t=\pm\sqrt\zeta$. We have $$\label{eq:airy11} g\left(\pm\sqrt\zeta\right)=\left(\frac{4\zeta}{1-z^2}\right)^{\frac{1}{4}}.$$ A detailed analysis is needed to show that the transformation in maps a relevant part of the $s-$plane one-to-one onto the $t-$plane. One of the three solutions of the cubic equation has to be chosen, and the proper one should be such that we can write the original integral in along a chosen contour in the form of , with the contour at infinity as indicated. The proofs in [@Chester:1957:ESD] are quite complicated and later more accessible methods became available, in particular when differential equations are used as the starting point of the asymptotic analysis. For details of the form of the expansion we refer to [@Olver:2010:BFS §10.20], where also references for proofs are given. In [@Wong:2001:AAI §VII.5] a detailed explanation is given of the various aspects of the conformal mapping for such transformations, for the case of an Airy-type expansion of the Laguerre polynomials, starting with an integral representation.
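Both the mapping and the quoted coefficients of the series for $z(\zeta)$ can be checked with a minimal numerical sketch (Python, assuming NumPy is available). For a few values of $z$ below $1$ the sketch computes $\zeta$ directly from the relation for $\tfrac23\zeta^{\frac32}$ above and re-inserts it into the truncated series for $z(\zeta)$.

```python
# Minimal sketch (assumes NumPy): consistency of zeta(z) with the small-zeta series z(zeta).
import numpy as np

def zeta_from_z(z):
    """zeta for 0 < z < 1, from (2/3) zeta^(3/2) = ln((1+sqrt(1-z^2))/z) - sqrt(1-z^2)."""
    s = np.sqrt(1.0 - z * z)
    return (1.5 * (np.log((1.0 + s) / z) - s)) ** (2.0 / 3.0)

def z_from_series(zeta):
    """Truncated series z = 1 - eta + (3/10) eta^2 + (1/350) eta^3 - (479/63000) eta^4."""
    eta = 2.0 ** (-1.0 / 3.0) * zeta
    return 1.0 - eta + 0.3 * eta ** 2 + eta ** 3 / 350.0 - 479.0 / 63000.0 * eta ** 4

for z in (0.9, 0.99, 0.999):
    zeta = zeta_from_z(z)
    print(f"z = {z:6.3f}   zeta = {zeta:.8f}   |z_series - z| = {abs(z_from_series(zeta) - z):.2e}")
```

The residual decreases like $\eta^5$ as $z\to1$ (that is, as $\zeta\to0$), consistent with truncation of the series after the $\eta^4$ term.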
Using the construction explained [@Wong:2001:AAI §VII.4] we can find for the integral in the representation $$\label{eq:twocoal08} J_\nu(\nu z)= g\left(\sqrt{\zeta}\right) \left( \frac{\Ai\left(\zeta\nu^{\frac23}\right)}{\nu^{\frac13}}A(\nu,\zeta)+ \frac{\Ai^{\prime}\left(\zeta\nu^{\frac23}\right)}{\nu^{\frac53}}B(\nu,\zeta)\right),$$ where $\zeta$ is defined in and $g\left(\sqrt{\zeta}\right)$ is given in , and $$\label{eq:twocoal09} A(\nu,\zeta)\sim\sn\frac{A_n(\zeta)}{\nu^{2n}},\quad B(\nu,\zeta)\sim \sn \frac{B_n(\zeta)}{\nu^{2n}}.$$ The asymptotic expansions are valid for large $\nu$ and uniformly for $z\in(0,\infty)$. See [@Olver:2010:BFS §10.20(i)] for more details on the coefficients. ### A few remarks on Airy-type expansions {#sec:airyrem} Airy functions are special cases of Bessel functions of order $\pm\frac13$ and are named after G.B. Airy (1838), a British astronomer, who used them in the study of rainbow phenomena. They occur in many other problems from physics, for example as solutions to boundary value problems in quantum mechanics and electromagnetic theory. The Airy function $\Ai(z)$ can be defined by $$\label{eq:twocoal02} \Ai(z)= \frac1{2\pi i}\int_{\infty e^{-\frac13\pi i}}^{\infty e^{\frac13\pi i}}e^{\frac13t^3-z t}\,dt,$$ and we see that the integral in becomes an Airy function when the function $g$ is replaced with a constant. Airy functions are solutions of Airy’s differential equation $\dsp{\frac{d^2w}{dz^2}=zw}$. It is not difficult to verify that the function in satisfies this equation. Two independent solutions are denoted by $\Ai(z)$ and $\Bi(z)$, which are entire functions and real when $z$ is real. They are oscillatory for $z<0$ and decrease ($\Ai$) or increase ($\Bi$) exponentially fast for $z>0$; see Figure \[fig:AiryBess\]. Airy’s equation is the simplest second-order linear differential equation showing a turning point (at $z=0$). A turning point in a differential equation (see [@Olver:1997:ASF Chapter 11]) corresponds to integrals that have two coalescing saddle points, as we have seen for the Bessel function. Many special functions have Airy-type expansions. For example, the orthogonal polynomials oscillate inside the domain of orthogonality and become non-oscillating outside this domain. The large degree behavior of these polynomials can often be described by Airy functions, in particular to describe the change of behavior near finite endpoints of the orthogonality domain. In the example of the Bessel function $J_\nu(z)$, a function of two variables, we use the Airy functions $\Ai(z)$, a function of one variable. This is a general principle when constructing uniform expansions in which special functions are used as approximations: reduce the number of parameters. The results on Airy-type expansions for integrals of Chester [*et all.*]{} in 1957 were preceded by fundamental results in 1954 of Olver, who used differential equations for obtaining these expansions, see [@Olver:1954:AEB; @Olver:1954:ASL]. Olver extended earlier results for turning point equations and in his work realistic and computable error bounds are constructed for the remainders in the expansions. The form given in follows from a method of Bleistein [@Bleistein:1966:UAE], who used it for a different type of integral. 
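As a small numerical illustration, the following sketch (Python, assuming SciPy is available) evaluates only the leading term, $J_\nu(\nu z)\approx\left(4\zeta/(1-z^2)\right)^{1/4}\Ai\!\left(\nu^{2/3}\zeta\right)/\nu^{1/3}$, for a few values $z\in(0,1)$, with $\zeta$ computed from the mapping of the previous subsection; for $z>1$ the analytic continuation mentioned there would be needed.

```python
# Minimal sketch (assumes NumPy and SciPy): leading term of the Airy-type approximation.
import numpy as np
from scipy.special import jv, airy

def zeta_from_z(z):
    # (2/3) zeta^(3/2) = ln((1+sqrt(1-z^2))/z) - sqrt(1-z^2),  0 < z < 1
    s = np.sqrt(1.0 - z * z)
    return (1.5 * (np.log((1.0 + s) / z) - s)) ** (2.0 / 3.0)

def airy_leading(nu, z):
    zeta = zeta_from_z(z)
    g = (4.0 * zeta / (1.0 - z * z)) ** 0.25       # g(+-sqrt(zeta)) of the previous subsection
    Ai = airy(nu ** (2.0 / 3.0) * zeta)[0]         # airy() returns (Ai, Ai', Bi, Bi')
    return g * Ai / nu ** (1.0 / 3.0)

nu = 100.0
for z in (0.6, 0.9, 0.99):
    exact, approx = jv(nu, nu * z), airy_leading(nu, z)
    print(f"z = {z:5.2f}   J_nu(nu z) = {exact:.6e}   leading Airy term = {approx:.6e}"
          f"   rel. error = {abs(approx / exact - 1.0):.1e}")
```

Already the leading term gives small relative errors for all three values of $z$; including the terms with $A_1(\zeta)$ and $B_0(\zeta)$ would reduce them further.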
Chester [*et all.*]{} have not given this form; they expanded the function $g$ in in a two-point Taylor series (see [@Lopez:2002:TPT]) of the form $$\label{eq:remairy01} g(t)=\sn a_n\left(t^2-\zeta\right)^n+t\sn b_n\left(t^2-\zeta\right)^n,$$ and the expansion obtained in this way can be rearranged to get the expansion given in . The paper by Chester [*et all.*]{} [@Chester:1957:ESD] appeared in 1957 (one year before De Bruijn published his book), and together with Van der Waerden’s paper [@Waerden:1951:OMS] it was the start of a new period of asymptotic methods for integrals. As mentioned earlier, Airy-type expansions were considered in detail by Olver (see also [@Olver:1997:ASF Chapter 11]), who provided realistic error bounds for remainders by using differential equations. In [@Olde:1994:UAE] it was shown how to derive estimates of remainders in Airy-type expansions obtained from integrals. It took some time to see how to use three-term recurrence relations or linear second order difference equations as starting point for obtaining Airy-type expansions or other uniform expansions. The real breakthrough came in a paper by Wang and Wong [@Wang:2002:UAE], with an application for the Bessel function $J_\nu(z)$. But the method can also be applied to cases when no differential equation or integral representation is available, such as for polynomials associated with the Freud weight $e^{-x^4}$ on $\RR$; see [@Wang:2003:AEF]. In [@Deift:1993:ASD] a completely new method was introduced. This method is based on Riemann-Hilbert type of techniques. All the methods mentioned above can be applied to obtain Airy-type expansions but also to a number of other cases. In the next section we give more examples of such standard forms. A table of standard forms {#sec:standard} ========================= -------------------------------------------------------------------------------------------------------------------- -- -- -- **No. &**Standard Form& **Approximant& **Decisive points\ 1&$\dsp{\int_{0}^{\infty}e^{-zt}\frac{f(t)}{t+\alpha}\,dt}$&Exponential integral &$0,-\alpha$\ 2&$\dsp{\int_{-\infty}^{\infty}e^{-zt^2}\frac{f(t)}{t-i\alpha}\,dt}$&Error function&$0,i\alpha$\ 3&$\dsp{\int_{-\infty}^{\alpha}e^{- zt^2}f(t)\,dt}$&Error function&$0,\alpha$\ 4&$\dsp{\int_0^\infty t^{\beta-1}e^{-z(\frac12t^2-\alpha t)}f(t)\,dt}$& Par. cylinder function&$0,\alpha$\ 5&$\dsp{\int_{-\infty}^{\infty}e^{-zt^2}\frac{f(t)}{(t- i\alpha)^\mu}\,dt}$& Par. cylinder function&$0,i\alpha$\ 6&$\dsp{\int_{\calL}e^{z(\frac13t^3-\alpha t)}f(t)\,dt}$& Airy function&$\pm\sqrt{{\alpha}}$\ 7&$\dsp{\int_0^\infty t^{\lambda-1}e^{-zt}f(t)\,dt}$ &Gamma function&$0,\lambda/z$\ 8&$\dsp{\int_\alpha^\infty t^{\beta-1}e^{-zt}f(t)\,dt}$ &Inc. 
gamma function&$0,\alpha,\beta/z$\ 9&$\dsp{\int_{-\infty}^{(0+)} t^{-\beta-1}e^{z(t+\alpha/t)}f(t)\,dt}$& Bessel $I$ function&$0,\pm\sqrt{{\alpha}}$\ 10&$\dsp{\int_0^\infty t^{\beta-1}e^{-z(t+\alpha/t)}f(t)\,dt}$& Bessel $K$ function&$0,\pm\sqrt{{\alpha}}$\ 11&$\dsp{\int_0^\infty t^{\lambda-1}(t+\alpha)^{-\mu}e^{-zt}f(t)\,dt}$& Kummer $U$ function&$0,-\alpha$\ 12&$\dsp{\int_{-\alpha}^\alpha e^{-zt}(\alpha^2-t^2)^\mu f(t)\,dt}$& Bessel $I$ function&$0,\pm\alpha$\ 13&$\dsp{\int_\alpha^\infty e^{-zt}(t^2-\alpha^2)^\mu f(t)\,dt}$& Bessel $K$ function&$0,\pm\alpha$\ 14&$\dsp{\int_0^\infty\frac{\sin z(t-\alpha)}{t-\alpha}f(t)\,dt}$& Sine integral&$0,\alpha$\ ******** -------------------------------------------------------------------------------------------------------------------- -- -- -- : An overview of standard forms\[tab:overstand\] In Table \[tab:overstand\] we give an overview of standard forms considered in the literature. In almost all cases of the table, these forms arise in the asymptotic analysis of special functions, and in all cases special functions are used as leading terms in the approximations. The decisive points mentioned in the table are the points in the interval of integration (or close to this interval) where the main contributions to the approximation can be obtained. When these points are coalescing, uniform methods are needed. The cases given in the table can be used to solve asymptotic problems for special functions, for problems in science and engineering, in probability theory, in number theory, and so on. We almost never see the forms as shown in the table ready-made in these problems, but we need insight, technical steps, and nontrivial transformations to put the original integrals, sums, solutions of equations, and so on, into one of these standard forms. We give a few comments on the cases in Table \[tab:overstand\]. Case 1 : We can use Van der Waerden’s method, see §\[sec:waerden\]. When $f=1$ the integral reduces to the exponential integral. This simple case has not been considered in the literature and we give an example. Let $\dsp{S_n(z)=\sum_{k=1}^n \frac{z^k}{k}}$. Determine the behavior as $n\to\infty$, that holds uniformly for values of $z$ close to 1. Observe that as $n\to\infty$, $$\label{eq:cases01} S_n(1)\sim \ln n, \quad S_n(z)\sim-\ln(1-z), \quad |z|<1,$$ and that $S_n(z)$ has the representation $$\label{eq:cases02} S_n(z)=-\ln(1-z)-z^{n+1}\intp\frac{e^{-ns}}{e^s-z}\,ds, \quad z\ne 1.$$ There is a pole at $s=\ln z$, at the negative axis when $0<z<1$, which can be split off, giving the exponential integral, and this special function will deal with the term $-\ln(1-z)$ in as $z\to1$. Case 2 : This has been considered in §\[sec:waerden\]. Case 3 : Again we can use the complementary error function. This standard form has important applications for cumulative distribution functions of probability theory. As explained in [@Temme:1982:UAE], the well-known gamma and beta distributions, and several other ones, can be transformed into this standard form. Case 4 : When $f=1$, the integral becomes the parabolic cylinder function $U(a,z)$ and when $\beta=1$ this case is the same as Case 3. See [@Bleistein:1966:UAE], where an integration-by-parts method was introduced that can be used in slightly different versions to obtain many other uniform expansions. See also [@Wong:2001:AAI Chapter 7]. Case 5 : Again we use a parabolic cylinder function with $\alpha$ in a domain containing the origin. 
If needed the path of integration should be taken around the branch cut. For an application to Kummer functions, see [@Temme:1978:UAE]. Case 6 : This case has been considered in §\[sec:airy\]. Case 7 : We assume that $z$ is large and that $\lambda$ may be large as well. This is different from Watson’s lemma considered in §\[sec:aslap\], where we have assumed that $\lambda$ is fixed. There is a saddle point at $\mu=\lambda/z$. For details we refer to [@Temme:1983:LAP; @Temme:1985:LTI]. Case 8 : This case extends the previous one with an extra parameter $\alpha$, and we assume $\lambda\ge0$, $\alpha\ge0$ and $z$ large. As in Case 7, $\lambda$ may also be large; $\alpha$ may be large as well, even larger than $\lambda$. When $f=1$ this integral becomes an incomplete gamma function. In [@Temme:1987:ILI] we have given more details with an application to the incomplete beta functions. Case 9 : When $f=1$ the contour integral reduces to a modified $I-$Bessel function when $\alpha>0$, and to a $J-$Bessel function when $\alpha<0$. For an application to Laguerre polynomials, we refer to [@Wong:2001:AAI Chapter 7]. Case 10 : This real integral with the essential singularity at the origin is related with the previous case, and reduces to a modified $K-$Bessel function when $f$ is a constant. We can give an asymptotic expansion of this integral for large values of $z$, which is uniformly valid for $\lambda\ge0$, $\alpha\ge0$. For more details we refer to [@Temme:1990:UAE], where an application is given to confluent hypergeometric functions. Case 11 : This is more general than Case 1. See [@Oberhettinger:1959:OAM] for a more details. Case 12 and Case 13 : When $f(t)=1$ the integrals reduce to modified Bessel functions, see [@Ursell:1984:IWL] (with applications to Legendre functions). In [@Ursell:2007:IWN] a contour integral is considered, with an application to Gegenbauer polynomials. In that case the expansion is in terms of the $J-$Bessel function. Case 14 : This is considered in [@Zilbergleit:1977:UAE], where a complete asymptotic expansion is derived in which the sine integral is used for a smooth transition of $x=0$ to $x>0$. Concluding remarks {#sec:concl} ================== We have given an overview of the classical methods for obtaining asymptotic expansions for integrals, occasionally by giving comments on De Bruijn’s book [@Bruijn:1981:AMA]. We have explained how uniform expansions for integrals were introduced by Van der Waerden [@Waerden:1951:OMS] and Chester [*et all.*]{} [@Chester:1957:ESD], and we have given an overview of uniform methods that were introduced since the latter pioneering paper. For Airy-type expansions we have mentioned several other approaches, which are also used for other expansions in which special functions are used as leading terms in the approximations. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks the referee for advice and comments; he acknowledges research facilities from CWI, Amsterdam, and support from [*Ministerio de Ciencia e Innovación*, Spain]{}, project MTM2009-11686. [10]{} N. Bleistein. Uniform asymptotic expansions of integrals with stationary point near algebraic singularity. , 19:353–370, 1966. N. Bleistein and R. A. Handelsman. . Holt, Rinehart, and Winston, New York, 1975. Reprinted with corrections by Dover Publications Inc., New York, 1986. D. Borwein, J. M. Borwein, and R. E. Crandall. Effective [L]{}aguerre asymptotics. , 46(6):3285–3312, 2008. C. Chester, B. Friedman, and F. Ursell. 
An extension of the method of steepest descents. , 53:599–611, 1957. N. G. de Bruijn. . Bibliotheca Mathematica. Vol. 4. North-Holland Publishing Co., Amsterdam, 1958. Third edition by Dover (1981). P. Debye. Näherungsformeln für die [Z]{}ylinderfunktionen für große [W]{}erte des [A]{}rguments und unbeschränkt veränderliche [W]{}erte des [I]{}ndex. , 67(4):535–558, 1909. P. Deift and X. Zhou. A steepest descent method for oscillatory [R]{}iemann-[H]{}ilbert problems. [A]{}symptotics for the [MK]{}d[V]{} equation. , 137(2):295–368, 1993. A. Erd[é]{}lyi. Asymptotic representations of [F]{}ourier integrals and the method of stationary phase. , 3:17–27, 1955. A. Erd[é]{}lyi. . Dover Publications Inc., New York, 1956. J. Lekner. , volume 3 of [*Developments in Electromagnetic Theory and Applications*]{}. Martinus Nijhoff Publishers, Dordrecht, 1987. J. L. L[ó]{}pez. Asymptotic expansions of [M]{}ellin convolution integrals. , 50(2):275–293, 2008. J. L. L[ó]{}pez and N. M. Temme. Two-point [T]{}aylor expansions of analytic functions. , 109(4):297–311, 2002. J. L. L[ó]{}pez and N. M. Temme. Large degree asymptotics of generalized [B]{}ernoulli and [E]{}uler polynomials. , 363(1):197–208, 2010. K. Mahler. On a special integral, [I]{}, [II]{}. Research reports 37, 38, Department of Mathematics, The Australian National University, 1986. F. Oberhettinger. On a modification of [W]{}atson’s lemma. , 63B:15–17, 1959. A. B. [Olde Daalhuis]{}. Chapter 13, [C]{}onfluent hypergeometric functions. In [*NIST Handbook of Mathematical Functions*]{}, pages 321–349. Cambridge University Press, Cambridge, 2010. http://dlmf.nist.gov/13. A. B. Olde Daalhuis and N. M. Temme. Uniform [A]{}iry-type expansions of integrals. , 25(2):304–321, 1994. F. W. J. Olver. The asymptotic expansion of [B]{}essel functions of large order. , 247:328–368, 1954. F. W. J. Olver. The asymptotic solution of linear differential equations of the second order for large values of a parameter. , 247:307–327, 1954. F. W. J. Olver. Error bounds for stationary phase approximations. , 5:19–29, 1974. F. W. J. Olver. . AKP Classics. A K Peters Ltd., Wellesley, MA, 1997. Reprint of the 1974 original \[Academic Press, New York\]. F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and Ch. W. Clark, editors. . U.S. Department of Commerce National Institute of Standards and Technology, Washington, DC, 2010. With 1 CD-ROM (Windows, Macintosh and UNIX). F. W. J. Olver and L. C. Maximon. Chapter 10, [B]{}essel functions. In [*NIST Handbook of Mathematical Functions*]{}, pages 215–286. Cambridge University Press, Cambridge, 2010. http://dlmf.nist.gov/10. H. Ott. Die [S]{}attelpunktsmethode in der [U]{}mgebung eines [P]{}ols mit [A]{}nwendungen auf die [W]{}ellenoptik und [A]{}kustik. , 43:393–403, 1943. R. B. Paris and D. Kaminski. , volume 85 of [ *Encyclopedia of Mathematics and its Applications*]{}. Cambridge University Press, Cambridge, 2001. O. Perron. Über das [V]{}erhalten einer ausgearteten hypergeometrischen [R]{}eihe bei unbegrenztem [W]{}achstum eines [P]{}arameters. , 151:63–78, 1921. G. Szeg[ő]{}. . American Mathematical Society, Providence, RI, 1975. N. M. Temme. Uniform asymptotic expansions of confluent hypergeometric functions. , 22(2):215–223, 1978. N. M. Temme. The uniform asymptotic expansion of a class of integrals related to cumulative distribution functions. , 13(2):239–253, 1982. N. M. Temme. Uniform asymptotic expansions of [L]{}aplace integrals. , 3:221–249, 1983. N. M. Temme. 
Laplace type integrals: transformation to standard form and uniform asymptotic expansions. , 43(1):103–123, 1985. N. M. Temme. Incomplete [L]{}aplace integrals: uniform asymptotic expansion with application to the incomplete beta function. , 18(6):1638–1663, 1987. N. M. Temme. Asymptotic expansion of a special integral. , 2(1):67–72, 1989. N. M. Temme. Uniform asymptotic expansions of a class of integrals in terms of modified [B]{}essel functions, with application to confluent hypergeometric functions. , 21(1):241–261, 1990. N. M. Temme. Chapter 7, [E]{}rror functions, [D]{}awson’s and [F]{}resnel integrals. In [*N[IST]{} [H]{}andbook of [M]{}athematical [F]{}unctions*]{}, pages 159–171. U.S. Dept. Commerce, Washington, DC, 2010. http://dlmf.nist.gov/7. F. Ursell. Integrals with a large parameter: [L]{}egendre functions of large degree and fixed order. , 95(2):367–380, 1984. F. Ursell. Integrals with nearly coincident branch points: [G]{}egenbauer polynomials of large degree. , 463(2079):697–710, 2007. J. G. Van der Corput. On the method of critical points. [I]{}. , 51:650–658 = Indagationes Math. 10, 201–209 (1948), 1948. B. L. Van der Waerden. On the method of saddle points. , 2:33–45, 1951. Z. Wang and R. Wong. Uniform asymptotic expansion of [$J_\nu(\nu a)$]{} via a difference equation. , 91(1):147–193, 2002. Z. Wang and R. Wong. Asymptotic expansions for second-order linear difference equations with a turning point. , 94(1):147–194, 2003. G. N. Watson. . Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1944. Second edition. R. Wong. , volume 34 of [ *Classics in Applied Mathematics*]{}. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001. Corrected reprint of the 1989 original. R. Wong and Yu-Qiu Zhao. On a uniform treatment of [D]{}arboux’s method. , 21(2):225–255, 2005. A. S. Zilbergle[ĭ]{}t. Uniform asymptotic expansion of the [D]{}irichlet integral. , 17(6):1588–1592, 1630, 1977. English translation: U.S.S.R. Computational Math. and Math. Phys. 17 (1977), no. 6, 237–242 (1978). [ ]{} [^1]: Former address: Centrum Wiskunde & Informatica (CWI), Science Park 123, 1098 XG Amsterdam, The Netherlands [^2]: He concluded a Rouse Ball lecture at Cambridge (England) in 1948 with a request [@Corput:1948:MCP] to all workers in asymptotics to send information to the MC, and a study group would include these contributions in a general survey of the whole field. [^3]: A web version is available at http://dlmf.nist.gov/ [^4]: Van der Corput used also the term [*critical point*]{}. [^5]: Van der Corput wrote a review of this paper in Mathematical Reviews. [^6]: Van der Waerden was appointed professor in the University of Amsterdam from 1948–1951, and stayed also at the Mathematisch Centrum during that period.
--- abstract: 'We use a new panoramic imaging survey, conducted with the Dark Energy Camera, to map the stellar fringes of the Large and Small Magellanic Clouds to extremely low surface brightness $V\ga32$ mag arcsec$^{-2}$. Our results starkly illustrate the closely interacting nature of the LMC-SMC pair. We show that the outer LMC disk is strongly distorted, exhibiting an irregular shape, evidence for warping, and significant truncation on the side facing the SMC. Large diffuse stellar substructures are present both to the north and south of the LMC, and in the inter-Cloud region. At least one of these features appears co-spatial with the bridge of RR Lyrae stars that connects the Clouds. The SMC is highly disturbed – we confirm the presence of tidal tails, as well as a large line-of-sight depth on the side closest to the LMC. Young, intermediate-age, and ancient stellar populations in the SMC exhibit strikingly different spatial distributions. In particular, those with ages $\sim1.5-4$ Gyr exhibit a spheroidal distribution with a centroid offset from that of the oldest stars by several degrees towards the LMC. We speculate that the gravitational influence of the LMC may already have been perturbing the gaseous component of the SMC several Gyr ago. With careful modeling, the variety of substructures and tidal distortions evident in the Magellanic periphery should tightly constrain the interaction history of the Clouds.' author: - Dougal Mackey - 'Sergey E. Koposov' - Gary Da Costa - Vasily Belokurov - Denis Erkal - Pete Kuzma title: Substructures and tidal distortions in the Magellanic stellar periphery --- Introduction ============ As the two largest satellites of the Milky Way, and amongst the closest, the Large and Small Magellanic Clouds (LMC/SMC) are important to a host of contemporary problems in near-field astrophysics. Not only is their presence statistically unusual [e.g., @tollerud:11; @robotham:12], it has significant implications for measurements of the Milky Way’s dark halo [@veraciro:13; @gomez:15] and for interpreting the number and distribution of faint Galactic satellites [@jethwa:16; @sales:17]. Moreover, the Clouds are responsible for depositing $\ga5\times10^8\,{\rm M_\odot}$ of neutral hydrogen gas into the Milky Way’s outskirts, forming the Magellanic Bridge and the $\approx200\degr$ long Magellanic Stream [@bruns:05; @nidever:10]. The orbits and interaction histories of the LMC and SMC are central to our understanding of these issues. Precise proper motion measurements have revealed that the Clouds must either be on their first pericentric passage about the Milky Way or on a very long period ($\ga4$ Gyr) orbit, where the uncertaintly is largely due to the poorly-known masses of the LMC and our Galaxy [@kallivayalil:13]. In either scenario the stripping of gas to form the Magellanic Bridge and Stream is best explained by repeated close interactions between the Clouds [@besla:10; @besla:12; @diaz:12]. The number, timing, and severity of these events is not well constrained, but they are likely also responsible for many other peculiar characteristics of the system, including the Clouds’ bursty, coupled star-formation histories [@harris:09], the irregular morphology and internal kinematics of the SMC [@nidever:13; @belokurov:17; @zivick:18], the off-centre stellar bar in the LMC [@vdm:01b], and the presence of SMC stars in the inter-Cloud region [@noel:15; @carrera:17; @belokurov:17] and LMC outskirts [@olsen:11; @deason:17]. 
Mapping stellar structures in the periphery of the Clouds is a potentially powerful, but only recently feasible, means of unraveling the Magellanic interaction history. Stars in the outskirts of the system are weakly bound and hence strongly susceptible to external perturbations, the signatures of which may persist for long periods. Pioneering studies showed that the LMC and SMC extend to surprisingly large radii $\sim15\degr-20\degr$, but yielded conflicting results regarding their overall shape and dimensions [e.g., @munoz:06; @majewski:09; @saha:10]. More recently, @mackey:16 used deep contiguous imaging from the Dark Energy Survey (DES) to uncover a striking arc-like substructure in the northern LMC outskirts, which @besla:16 showed could plausibly arise from repeated close encouters between the Clouds, while @belokurov:17 used Gaia DR1 photometry to reveal a bridge of stellar debris connecting the LMC and SMC – apparently yet another imprint of the Magellanic interaction history. In this Letter we present initial results from a new panoramic survey of the Magellanic stellar periphery, designed to obtain the first global view of the system at extremely low surface brightness and explore the overall extent and morphology of structural distortions in the outer regions of the Clouds. Observations and data analysis ============================== We utilise a photometric catalog derived from two sources: the first DES public data release (DES-DR1), and our own survey observations. All imaging was obtained using the Dark Energy Camera [DECam; @flaugher:15], which is a 520 megapixel imager with a $\sim3$ deg$^2$ field-of-view mounted on the 4m Blanco Telescope at the Cerro Tololo Inter-American Observatory in Chile. The DES-DR1 source catalog was produced as described in papers accompanying the data release [@abbott:18; @morganson:18]. Our observations were conducted over two separate runs: four first-half nights on 2016 February 25-28, and two full nights on 2017 October 7-8 (programs 2016A-0618 and 2017B-0906, PI: Mackey). This imaging uses the $g$ and $r$ filters, and covers $\approx440$ deg$^2$ spanning the western and southern outskirts of the LMC, the entire inter-Cloud region, most of the SMC periphery, and an area adjoining the DES footprint north-east of the LMC. Our raw data were processed by the DECam community pipeline [@valdes:14]. We carried out source detection and photometric measurements using the [SExtractor]{} and [PSFEx]{} software [@bertin:96; @bertin:11], and then constructed a point-source catalogue by merging individual detection lists and excluding galaxies using the [spread\_model]{} and [spreaderr\_model]{} parameters as in @koposov:15. To calibrate our photometry we employed DR9 of the APASS survey after removing the presence of small-scale systematics [see @koposov:18] and transforming into the DES-DR1 AB system. Small regions of overlap between our imaging and DES-DR1 exhibit offsets $\la0.03$ mag. We corrected all photometry for foreground reddening using the prescription of @schlafly:11. Results ======= In the left-hand panel of Figure \[f:cmdmap\] we show a Hess diagram for all stars lying between $10\degr-12\degr$ from the LMC and within position angles $\pm30\degr$ of north. The stellar population of the outer LMC disk is prominent and is well described by a MIST isochrone [@choi:16; @dotter:16] of age $11$ Gyr and $[$Fe$/$H$]=-1.3$, shifted to a distance modulus $\mu=18.45$. A distinct red clump is also visible. 
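As a concrete illustration of the catalogue-construction step described in the previous section, the selection of point sources and the subsequent de-reddening might look schematically as follows. This is only a sketch: the exact morphology cut and extinction coefficients are not quoted in the text, so a commonly used DECam-style cut (cf. Koposov et al. 2015) and nominal coefficients are assumed here for concreteness.

```python
import numpy as np

# Illustrative values only -- not quoted in the text.
SPREAD_CUT = 0.003           # assumed threshold on |spread_model|
A_G, A_R = 3.186, 2.140      # assumed A_g/E(B-V), A_r/E(B-V) for DECam g, r


def point_source_photometry(cat):
    """Select PSF-like sources and de-redden their g, r magnitudes.

    `cat` is assumed to be a structured array with fields spread_model,
    spreaderr_model, g, r and ebv (the SFD E(B-V) at each source position).
    """
    # Star/galaxy separation: keep sources consistent with the PSF model,
    # allowing for the measurement uncertainty on spread_model.
    is_star = np.abs(cat["spread_model"]) < SPREAD_CUT + cat["spreaderr_model"]
    stars = cat[is_star]

    # Foreground-reddening correction (Schlafly & Finkbeiner 2011 scaling of SFD).
    g0 = stars["g"] - A_G * stars["ebv"]
    r0 = stars["r"] - A_R * stars["ebv"]
    return stars, g0, r0
```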
![image](fig1a){height="88.5mm"} ![image](fig1b){height="88mm"} We use this color-magnitude diagram (CMD) as a template to define a selection box around the main-sequence turn-off (MSTO) that will allow us to map the spatial density of ancient stars across the Magellanic system with high contrast relative to contaminants. We elected not to employ a formal matched-filtering scheme due to the significant variations in stellar population characteristics and line-of-sight distance in the Clouds; the color bounds of our selection box, and its bright limit, were chosen empirically to minimize sensitivity to such variations. The faint limit is set to ensure robustness against spatial variations in detection completeness (see below). Our spatial density map is displayed in the right-hand panel of Figure \[f:cmdmap\], and reveals striking overdensities and structural distortions in the Magellanic periphery. North of the LMC is the arc-like feature discovered by @mackey:16, which extends behind the bright star Canopus to a radial distance $\approx20\degr$. To the west and south the outer LMC disk appears substantially truncated, and in the far south two large, previously-unknown substructures are visible. The SMC is significantly elongated in the direction of these substructures, and on the anti-LMC side the overdensity discovered by @pieres:17 is evident at radius $\approx8\degr$. An area of high reddening affects the edge of the survey region to the south-east of the LMC. Based on calculations by @mackey:16, our map is sensitive to features with $V$-band surface brightness as faint as $\ga32$ mag arcsec$^{-2}$. Several compact overdensities are also apparent. These include the LMC globular clusters NGC 1841 and Reticulum, the background galaxy NGC 1313, the ultra-faint dwarf galaxy Reticulum II [@koposov:15; @bechtol:15], and a new ultra-faint dwarf projected in the inter-Cloud region, Hydrus I, which is the focus of a companion paper [@koposov:18]. ![image](fig2a){height="64mm"} ![image](fig2b){height="64mm"}\ ![image](fig2c){height="64mm"} ![image](fig2d){height="64mm"} In Figure \[f:ancillary\] we display four ancillary maps to demonstrate that the structures evident in Figure \[f:cmdmap\] are not due to observational artefact. The top-left panel shows that foreground reddening is generally low across the survey area, with $E(B-V)_{\rm SFD}\la0.15$ except in the aforementioned region south-east of the LMC. The top-right panel charts the de-reddened $r$-band magnitude where our stellar catalogue begins to become significantly affected by incompleteness. We define this in $0.6\degr\times0.6\degr$ bins by locating the turn-over in the luminosity function for stars with $-0.2\leq(g-r)_0\leq0.8$ and fitting a curve to determine the level at which the number counts fall to $75\%$ of this peak. We imposed the color cut to capture the behavior of ancient MSTO-like stars. The DES region (covering the upper-right half of the map) has a rather uniform completeness level near $r_0\approx23.8$; in contrast the area covered by our survey is shallower, and, due to variable weather conditions and lunar illumination, somewhat patchier. However, excluding the small region strongly affected by reddening, our incompleteness limit is always fainter than $r_0\sim22.8$. The foreground populations closest to Magellanic MSTO stars on the CMD are due to the Milky Way halo. The lower-left panel of Figure \[f:ancillary\] maps the density of these contaminants selected using the dashed box defined in Figure \[f:cmdmap\]. 
The distribution is quite homogeneous with only a mild increase towards lower Galactic latitudes, likely due to distant thick-disk stars. Minor contamination near the SMC from intermediate-age subgiants is also visible (see below). The lower-right panel of Figure \[f:ancillary\] shows the spatial density of sources in our MSTO box classified as galaxies. While these are not the [*unresolved*]{} contaminants that could affect our primary map, they ought to be distributed in a similar fashion. This distribution exhibits only a weakly fluctuating filamentary structure. ![image](fig3a){height="68mm"} ![image](fig3b){height="68mm"} Having established the robustness of the features visible in our map of the Magellanic system, we turn to a discussion of their properties. Using Gaia DR1 photometry, @belokurov:17 found that (i) the outer density contours of the SMC appear to indicate significant tidal distortion, and (ii) there is a bridge of RR Lyrae stars connecting the two Clouds. In Figure \[f:gaia\] we overplot these authors’ density contours on our map. While the Gaia SMC counts do not probe as faint as does our survey, their outer levels closely match the DECam density distribution and they provide a clearer view of the SMC’s central regions. The twisting of the Gaia isophotes into S-shaped tidal tails noted by @belokurov:17 is evident. Our map shows that the overdensity discovered by @pieres:17 is clearly an extension of the Belokurov et al. leading tail. On the opposite side the RR Lyrae bridge joins the SMC at the base of the trailing tail. The overall elongation of the SMC is aligned with both this bridge and the proper motion of the SMC relative to the LMC. The prominent inter-Cloud overdensity near $(\xi,\,\eta)\sim(-6,\,-11)$ in our map appears co-spatial with the RR Lyrae bridge, and may be associated with this larger structure. Notably, however, the southern overdensity near $(2,\,-12)$ falls outside the RR Lyrae contours. Figure \[f:cmds\] shows CMDs for key regions of interest across the Magellanic system. The fiducial sequence and red clump level marked on each CMD are those derived for the northern LMC disk in Figure \[f:cmdmap\]. We sample the southern LMC disk by selecting stars at position angles $150\degr-210\degr$ east of north, but smaller radii $8\degr-10\degr$ from the LMC centre due to the truncation of the disk in this direction. The southern disk CMD matches the northern fiducial sequence and red clump level well. This confirms the observation by @mackey:16 that the orientation of the line-of-nodes runs almost north-south in the LMC periphery; if the orientation was significantly different, we would expect an offset between north and south of several tenths of a magnitude. Our CMDs for the two main substructures to the south of the LMC do not have high contrast against contaminating populations due to the low surface density of these features. However, there is no indication that either structure possesses a significantly different line-of-sight distance than the northern and southern LMC disk regions. @belokurov:17 found that the RR Lyrae bridge connecting the Clouds exhibits a substantial line-of-sight depth, including a sizeable component at the LMC distance that stretches at least two-thirds of the way to the SMC. This implies that the structure coincident with the RR Lyrae bridge in projection is likely co-spatial in three dimensions.
The structure of the SMC is known to be highly complex [see e.g., @nidever:11; @nidever:13 and references therein], and this is clearly evident in our CMDs. That for the western side of the SMC exhibits a compact red clump sitting $\sim0.4-0.5$ mag fainter than the level observed in the northern and southern LMC disk regions. This offset is also visible at the subgiant branch level, and is consistent with canonical SMC distance estimates [@degrijs:15]. In contrast, the red clump for a comparable region on the eastern side of the SMC exhibits a striking vertical extension, which @nidever:13 showed can be almost completely attributed to a large line-of-sight depth. Taking this conclusion at face value, our CMD indicates the presence of some populations in this region that must sit even closer than the northern and southern LMC disk. It is also notable that the eastern side of the SMC exhibits a much more substantial intermediate-age population than at equivalent radii to the west. No comparable intermediate-age population is visible in the CMDs for either of the southern substructures. ![image](fig4a){height="78mm"} ![image](fig4b){height="75mm"} ![image](fig4c){height="78mm"}\ ![image](fig4d){height="78mm"} ![image](fig4e){height="78mm"} ![image](fig4f){height="78mm"} ![image](fig4g){height="78mm"} The “SMC Bridge” CMD in Figure \[f:cmds\] encompasses the outer SMC wing that extends into the young stellar bridge stretching between the Clouds [e.g., @irwin:90; @battinelli:92; @belokurov:17; @mackey:17]. The young populations in this region are striking, and are not seen in our other SMC CMDs. However, this part of the SMC also clearly possesses a significant intermediate-age population, and exhibits the same line-of-sight extension seen in the CMD for the eastern SMC outskirts. Figure \[f:bridges\] maps the distribution of young and intermediate-age populations across the SMC and inter-Cloud region. The young sequence evident in the SMC Bridge CMD is well fit by a MESA isochrone of age $25$ Myr and $[$Fe$/$H$]=-0.8$, shifted to a distance modulus $\mu=18.75$. This is approximately the mean line-of-sight distance indicated by the extended red clump. We use this CMD to define a selection box around the young main sequence, and show the spatial density of young stars in the upper-right panel. The outer SMC wing and young stellar bridge are prominent. In the inter-Cloud region the young stars are highly clustered and closely trace the H[i]{} [e.g., @mackey:17]. This young stellar bridge is offset from the RR Lyrae bridge and old stellar substructures by $\approx5\degr$, likely because of the ram pressure from the Milky Way’s hot corona [@belokurov:17]. To trace the intermediate-age populations we use our eastern SMC CMD to define a selection box that is sensitive to stars of ages $\sim 1.5-4$ Gyr. The spatial density of stars in this age range is quite different from that for either younger or older populations. The intermediate-age stars do not have a significant presence in the inter-Cloud region and instead exhibit a rather spheroidal distribution confined within the SMC. Even though our map does not fully cover the SMC’s interior, it is clear that the centroid for this population is offset from that for the old stars by several degrees towards the LMC. A simple comparison of our CMDs for the eastern and western sides of the SMC highlights this lop-sided distribution. 
![image](fig5a){width="40mm"}\ ![image](fig5c){width="40mm"} Discussion and Summary ====================== We have used DECam to map a large portion of the Magellanic periphery and inter-Cloud region to extremely low surface brightness, revealing several key characteristics that provide clues to the recent evolution of the system: 1. [The outer LMC disk is significantly truncated in both the west and south compared to its extension in the north. The northern and southern regions have similar line-of-sight distances, implying that the apparent truncation is not a perspective effect and, moreover, that the LMC line-of-nodes at radii beyond $\approx9\degr$ must be aligned nearly north-south. This is different from the orientation inferred for inner regions of the LMC by $\sim20\degr-60\degr$ [e.g., @vdm:01; @subra:10], as expected if the LMC disk is warped.]{} 2. [Large diffuse substructures comprised of ancient stellar populations are present both to the north and south of the LMC, and in the inter-Cloud region. At least one of these features appears co-spatial with the RR Lyrae bridge discovered by @belokurov:17 that connects the Clouds.]{} 3. [The SMC is strongly distorted, exhibiting (i) twisting of isophotes with increasing galactocentric radius; (ii) tidal tails extending into the RR Lyrae bridge towards the LMC and the @pieres:17 overdensity in the opposite direction; and (iii) a very large line-of-sight depth on the side of the system closest to the LMC.]{} 4. [Substantial young populations are present in the outer SMC wing, which morphs into a second stellar bridge joining the Clouds. This bridge closely follows the peak of the H[i]{} distribution in the inter-Cloud region [e.g., @dinescu:12; @skowron:14] and is offset from the ancient substructures by $\approx5\degr$.]{} 5. [Intermediate-age populations are largely confined to the SMC interior, where they exhibit a spheroidal distribution with a centroid offset from that of the old populations by several degrees towards the LMC. This cannot be a perspective effect [cf. @nidever:11] as both the intermediate-age and old populations appear to share a similar extension along the line-of-sight.]{} While some of these features have been known for years, our deep, panoramic survey has laced them into a coherent picture that starkly illustrates the strongly interacting nature of the LMC-SMC pair. In future, careful characterization of the main structural distortions in the Magellanic outskirts, combined with detailed numerical modeling, holds the promise of providing tight constraints on the allowed orbital parameter space – in particular, the masses of the two Clouds, the timescale on which they have been bound, and the frequency and impact parameter of recent close passages. For now, we hypothesize that our two new southern substructures could plausibly represent either (i) disrupted regions of the outer LMC disk, or (ii) strong periodic stripping of SMC stars. Since the substructure CMDs do not obviously exhibit the intermediate-age populations so prevalent in the eastern SMC we tentatively favour the former option; future spectroscopic observations should be decisive given the difference in mean metallicity between the Clouds. The strong offset between the intermediate-age and old stellar populations in the SMC is also instructive. 
Since these stars orbit in the same potential this arrangement must represent formation conditions $\sim1.5-4$ Gyr ago, and we speculate that the gravitational influence of the LMC may already have been affecting the gaseous component of the SMC at this time. If so, our observations imply that the Magellanic Clouds have likely been a bound pair for at least the past several Gyr, as appears probable if the Magellanic system is on its first infall [@kallivayalil:13]. ADM holds an Australian Research Council (ARC) Future Fellowship (FT160100206). ADM and GDC acknowledge support from ARC Discovery Project DP150103294. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 308024. We thank the International Telescopes Support Office at the Australian Astronomical Observatory for providing travel support. This project used data obtained with the Dark Energy Camera (DECam), and public archival data from the Dark Energy Survey (DES). Funding for the DES Projects has been provided by the DOE and NSF (USA), MISE (Spain), STFC (UK), HEFCE (UK), NCSA (UIUC), KICP (U. Chicago), CCAPP (Ohio State), MIFPA (Texas A&M), CNPQ, FAPERJ, FINEP (Brazil), MINECO (Spain), DFG (Germany), and the collaborating institutions in the Dark Energy Survey, which are Argonne Lab, UC Santa Cruz, University of Cambridge, CIEMAT-Madrid, University of Chicago, University College London, DES-Brazil Consortium, University of Edinburgh, ETH Z[ü]{}rich, Fermilab, University of Illinois, ICE (IEEC-CSIC), IFAE Barcelona, Lawrence Berkeley Lab, LMU M[ü]{}nchen and the associated Excellence Cluster Universe, University of Michigan, NOAO, University of Nottingham, Ohio State University, University of Pennsylvania, University of Portsmouth, SLAC National Lab, Stanford University, University of Sussex, and Texas A&M University. Based on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory (programs 2016A-0618 and 2017B-0906, PI: Mackey), which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Abbott, T. M. C., et al. 2018, arXiv:1801.03181 Battinelli, P., & Demers, S. 1992, AJ, 104, 1458 Bechtol, K., et al. 2015, ApJ, 807, 50 Belokurov, V., Erkal, D., Deason, A. J., Koposov, S. E., De Angeli, F., Evans, D. W., Fraternali, F., & Mackey, A. D. 2017, MNRAS, 466, 4711 Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393 Bertin, E. 2011, Astronomical Data Analysis Software and Systems XX, 442, 435 Besla, G., Kallivayalil, N., Hernquist, L., van der Marel, R. P., Cox, T. J., & Keres, D. 2010, ApJ, 721, L97 Besla, G., Kallivayalil, N., Hernquist, L., van der Marel, R. P., Cox, T. J., & Keres, D. 2012, MNRAS, 421, 2109 Besla, G., Martínez-Delgado, D., van der Marel, R. P., Beletsky, Y., Seibert, M., Schlafly, E. F., Grebel, E. K., & Neyer, F. 2016, ApJ, 825, 20 Brüns, C., et al. 2005, A&A, 432, 45 Carrera, R., Conn, B. C., Noël, N. E. D., Read, J. I., & López Sánchez, Á. R. 2017, MNRAS, 471, 4571 Casetti-Dinescu, D. I., Vieira, K., Girard, T. M., & van Altena W. F. 2012, ApJ, 753, 123 Choi, J., Dotter, A., Conroy, C., Cantiello, M., Paxton, B., & Johnson, B. D. 2016, ApJ, 823, 102 de Grijs, R., & Bono, G. 2015, AJ, 149, 179 Deason, A. J., Belokurov, V., Erkal, D., Koposov, S. E., & Mackey, A. D. 2017, MNRAS, 467, 2636 Diaz, J., & Bekki, K. 2012, ApJ, 750, 36 Dotter, A. 
2016, ApJS, 222, 8 Flaugher, B., et al. 2015, AJ, 150, 150 Gómez, F. A., Besla, G., Carpintero, D. D., Villalobos, Á., O’Shea, B. W., & Bell, E. F. 2015, ApJ, 802, 128 Harris, J., & Zaritsky, D. 2009, AJ, 138, 1243 Irwin, M. J., Demers, S., & Kunkel, W. E. 1990, AJ, 99, 191 Jethwa, P., Erkal, D., & Belokurov, V. 2016, MNRAS, 461, 2212 Kallivayalil, N., van der Marel, R. P., Besla, G., Anderson, J., & Alcock, C. 2013, ApJ, 764, 161 Koposov, S. E., Belokurov, V., Torrealba, G., & Evans, N. W. 2015, ApJ, 805, 130 Koposov, S. E., et al. 2018, MNRAS, submitted Mackey, A. D., Koposov, S. E., Erkal, D., Belokurov, V., Da Costa, G. S., & Gómez, F. A. 2016, MNRAS, 459, 329 Mackey, A. D., Koposov, S. E., Da Costa, G. S., Belokurov, V., Erkal, D., Fraternali, F., McClure-Griffiths, N. M., & Fraser M. 2017, MNRAS, 472, 2975 Majewski, S. R., Nidever, D. L., Muñoz, R. R., Patterson, R. J., Kunkel, W. E., & Carlin, J. L. 2009, in IAU Symp. 256, “The Magellanic System: Stars, Gas, and Galaxies”, p.51 Morganson, E., et al. 2018, arXiv:1801.03177 Muñoz, R. R., et al. 2006, ApJ, 649, 201 Nidever, D. L., Majewski, S. R., Burton, W. B., & Nigra, L. 2010, ApJ, 723, 1618 Nidever, D. L., Majewski, S. R., Muñoz, R. R., Beaton, R. L., Patterson, R. J., & Kunkel, W. E. 2011, ApJ, 733, L10 Nidever, D. L., Monachesi, A., Bell, E. F., Majewski, S. R., Muñoz, R. R., & Beaton, R. L. 2013, ApJ, 779, 145 Noël, N. E. D., Conn, B. C., Read, J. I., Carrera, R., Dolphin, A., & Rix, H.-W. 2015, MNRAS, 452, 4222 Olsen, K. A. G., Zaritsky, D., Blum, R. D., Boyer, M. L., & Gordon, K. D. 2011, ApJ, 737, 29 Pieres, A., et al. 2017, MNRAS, 468, 1349 Robotham, A. S. G., et al. 2012, MNRAS, 424, 1448 Saha, A., et al. 2010, AJ, 140, 1719 Sales, L. V., Navarro, J. F., Kallivayalil, N., & Frenk, C. S. 2017, MNRAS, 465, 1879 Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525 Skowron, D. M., et al. 2014, ApJ, 795, 108 Subramanian, S., & Subramaniam, A. 2010, A&A, 520, A24 Tollerud, E. J., Boylan-Kolchin, M., Barton, E. J., Bullock, J. S., & Trinh, C. Q. 2011, ApJ, 738, 102 Valdes, F., et al. 2014, Astronomical Data Analysis Software and Systems XXIII, 485, 379 van der Marel, R. P., & Cioni, M.-R. 2001, AJ, 122, 1807 van der Marel, R. P. 2001, AJ, 122, 1827 Vira-Ciro, C., & Helmi, A. 2013, ApJ, 773, L4 Zivick, P., et al. 2018, ApJ, submitted (arXiv:1804.04110)
--- abstract: 'We introduce *advocacy learning*, a novel supervised training scheme for attention-based classification problems. Advocacy learning relies on a framework consisting of two connected networks: 1) $N$ *Advocates* (one for each class), each of which outputs an argument in the form of an attention map over the input, and 2) a *Judge*, which predicts the class label based on these arguments. Each Advocate produces a class-conditional representation with the goal of convincing the Judge that the input example belongs to their class, even when the input belongs to a different class. Applied to several different classification tasks, we show that advocacy learning can lead to small improvements in classification accuracy over an identical supervised baseline. Though a series of follow-up experiments, we analyze when and how such class-conditional representations improve discriminative performance. Though somewhat counter-intuitive, a framework in which subnetworks are trained to competitively provide evidence in support of their class shows promise, in many cases performing on par with standard learning approaches. This provides a foundation for further exploration into competition and class-conditional representations in supervised learning.' author: - 'Ian Fox Jenna Wiens Department of Computer Science and Engineering, University of Michigan, Ann Arbor, USA {ifox, wiensj}@umich.edu' bibliography: - 'references.bib' title: | Advocacy Learning:\ Learning through Competition and Class-Conditional Representations --- Introduction ============ [0.25]{} ![**a)** A simple single-attention framework. The encoder-decoder produces an attention map $\mathbf{a} \in \mathcal{R}^{n \times n}$, which is combined with the input $\mathbf{x} \in \mathcal{R}^{n \times n}$ via an element-wise product (indicated by $\odot$) to create the input to the decision module, or Judge $J$. **b)** Our advocacy learning framework. Each decoder $Dec^i$ is trained separately to output a class-conditional attention map, or argument $\mathbf{a}^i \in \mathcal{R}^{n \times n}$, which is combined with the input via element-wise product (separately for each attention map) to create evidence $\mathbf{E} = [\mathbf{e}_0, \dots, \mathbf{e}_N]$, where $\mathbf{e}_i = \mathbf{a}^i \odot \mathbf{x}$ is evidence supporting class $i$. Each advocate is shown in a different color, the number of Advocates is equal to the number of classes. **c)** An example of an attention map $\mathbf{a}^i$ used to generate evidence $\mathbf{e}_i$.[]{data-label="fig:framework"}](figures/attn_framework2 "fig:"){width="\textwidth"}   [0.5]{} ![**a)** A simple single-attention framework. The encoder-decoder produces an attention map $\mathbf{a} \in \mathcal{R}^{n \times n}$, which is combined with the input $\mathbf{x} \in \mathcal{R}^{n \times n}$ via an element-wise product (indicated by $\odot$) to create the input to the decision module, or Judge $J$. **b)** Our advocacy learning framework. Each decoder $Dec^i$ is trained separately to output a class-conditional attention map, or argument $\mathbf{a}^i \in \mathcal{R}^{n \times n}$, which is combined with the input via element-wise product (separately for each attention map) to create evidence $\mathbf{E} = [\mathbf{e}_0, \dots, \mathbf{e}_N]$, where $\mathbf{e}_i = \mathbf{a}^i \odot \mathbf{x}$ is evidence supporting class $i$. Each advocate is shown in a different color, the number of Advocates is equal to the number of classes. 
**c)** An example of an attention map $\mathbf{a}^i$ used to generate evidence $\mathbf{e}_i$.[]{data-label="fig:framework"}](figures/advocacy_framework2 "fig:"){width="\textwidth"}   [0.32]{} ![**a)** A simple single-attention framework. The encoder-decoder produces an attention map $\mathbf{a} \in \mathcal{R}^{n \times n}$, which is combined with the input $\mathbf{x} \in \mathcal{R}^{n \times n}$ via an element-wise product (indicated by $\odot$) to create the input to the decision module, or Judge $J$. **b)** Our advocacy learning framework. Each decoder $Dec^i$ is trained separately to output a class-conditional attention map, or argument $\mathbf{a}^i \in \mathcal{R}^{n \times n}$, which is combined with the input via element-wise product (separately for each attention map) to create evidence $\mathbf{E} = [\mathbf{e}_0, \dots, \mathbf{e}_N]$, where $\mathbf{e}_i = \mathbf{a}^i \odot \mathbf{x}$ is evidence supporting class $i$. Each advocate is shown in a different color, the number of Advocates is equal to the number of classes. **c)** An example of an attention map $\mathbf{a}^i$ used to generate evidence $\mathbf{e}_i$.[]{data-label="fig:framework"}](figures/demo_saliency "fig:"){width="\textwidth"} In recent years, researchers have proposed a large number of modifications to the standard supervised learning setting with the goal of improving performance [@parascandolo_learning_2018; @vaswani_attention_2017]. These modifications focus on training different parts of the network (*i.e.*, subnetworks) to *cooperate*. However, in several real-world settings, such as in economics and law, agents who *compete* are critical for identifying good solutions. While recent work in adversarial networks investigates the use of competition for training models, the final model evaluated (*i.e.*, the generator) is cooperative [@goodfellow_generative_2014]. In contrast, we investigate a training scheme in which subnetworks compete during training *and* evaluation. In our model, subnetworks compete to provide evidence in the form of class-conditional attention maps. Here, we use the term ‘attention map’ to refer to a filter that indicates parts of the input that are useful for accurate classification, similar to the idea of saliency [@itti_model_1998]. An example of a standard network architecture with attention is given in **Figure \[fig:attn\]**. We hypothesize that class-conditional attention maps could offer advantages over standard attention maps by emphasizing portions of the input indicative of their class. Our proposed approach consists of two main subnetworks: a single Judge and multiple Advocates (see **Figure \[fig:advocate\]**). Each Advocate produces an attention map that *advocates* for a particular class. A decision is reached by the Judge, which weighs the arguments produced by the Advocates. For this approach to work well, there must be a balance among the Advocates (so that each Advocate can influence the Judge), and the Judge must be able to effectively use the given evidence (so as not to be deceived by the advocates). We achieve this balance via *advocacy learning*, which trains the components jointly, but according to multiple objectives. These different objectives are key to striking the right balance between providing strong but factual evidence. We also explore a variant, honest advocacy learning, where the Advocates are not trained to deceptively compete with one another, but still provide class-conditional attention maps. 
In a series of experiments, we compare advocacy learning to several baselines in which the entire network is trained according to the same standard objective. Across several image datasets, we observe a small but consistent improvement in classification accuracy by using class-conditional attention maps. Methods ======= We propose a novel approach to optimizing networks for supervised classification that encourages class-conditional representations of evidence in the form of attention maps. We hypothesize that, depending on how they are learned, class-conditional attention maps could offer advantages over standard attention maps by encouraging competition among components of the network. At a high level, the Judge learns to solve the classification problem given some evidence, while each Advocate supplies that evidence by arguing in support of their class. This setup disentangles evidence supporting each class, encouraging strong class-conditional representations. Advocacy learning consists of both a specific architecture (*i.e.*, Advocate and Judge subnetworks) and a specific method for training, both are described below. Problem Setting --------------- We consider the task of solving a multi-class classification problem in a supervised learning setting with element-wise attention. We assume access to a labeled training set consisting of labeled examples $\{{\mathbf{x}},y\}$, where ${\mathbf{x}}\in \mathbb{R}^{d}$ (where $d$ may be a product $d_1 \times d_2$, such as in an image) and $y \in \{1, \dots, N\}$, where $N$ is the number of classes. We refer to the one-hot label distribution entailed by $y$ as ${\mathbf{y}}$, so ${\mathbf{y}}[y] = 1$ and ${\mathbf{y}}[j] = 0$ for all $j \ne y$. We use square brackets for indexing into a vector. We focus on deep learning methods due to their suitability for representation learning. We indicate the parameters of deep models as ${\boldsymbol{\theta}}$ and subscript them according to the particular subnetwork being referenced. The parameters of the Judge are ${\boldsymbol{\theta}}_J$ and the parameters for each Advocate $i$ are ${\boldsymbol{\theta}}_i$ where $i \in \{1, \dots, N\}$. Our proposed approach aims to solve the multi-class classification problem through a novel training scheme, designed to produce class-specific evidence. Network Architecture -------------------- As mentioned above, our proposed approach is composed of two sets of modules: multiple Advocates and a single Judge. A high-level overview of our architecture, which we call an Advocacy Net, is given in **Figure \[fig:advocate\]**. Here, for context, we describe a generic framework, providing specific implementation details in a later section. ### Advocate Subnetwork This subnetwork consists of $N$ Advocates $Adv^i$, where $i$ corresponds to the index of the class the Advocate represents. Given the input ${\mathbf{x}}$, each advocate generates an argument in the form of an attention map $Adv^i({\mathbf{x}}; {\boldsymbol{\theta}}_i) \rightarrow {\mathbf{a}}^i \in [0, 1]^{d}$. The Advocate modules produce an attention map with dimensionality equal to the input using a convolutional encoder-decoder, as is standard for producing pixel-level output in images [@badrinarayanan_segnet:_2017]. Note that for complex input, such as medical images, other fully convolutional architectures such as U-Nets may be more appropriate [@ronneberger_u-net:_2015]. 
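As a rough illustration of the Advocate subnetwork just described, a single Advocate can be sketched in PyTorch as follows. The layer widths and the $28\times28$ single-channel input size are placeholders rather than the released architecture, and, unlike the shared-encoder implementation described in the text, each Advocate here is fully self-contained for brevity.

```python
import torch.nn as nn


class Advocate(nn.Module):
    """Minimal encoder-decoder Advocate for 28x28 single-channel input.

    Layer widths are illustrative placeholders; the public code release
    defines the actual architecture (including an encoder shared by Advocates).
    """
    def __init__(self, in_channels=1, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, stride=2, padding=1), nn.ReLU(),  # 28 -> 14
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(),       # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),  # 7 -> 14
            nn.ConvTranspose2d(hidden, in_channels, 4, stride=2, padding=1),        # 14 -> 28
            nn.Sigmoid(),  # pixel-wise attention values constrained to [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # attention map a^i, same shape as x
```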
Based on these attention maps, each Advocate presents an argument ${\mathbf{e}}_i$ (or evidence) to the Judge in the form of an element-wise product between attention maps and the input, ${\mathbf{e}}_i = {\mathbf{a}}^i \odot {\mathbf{x}}$. Each Advocate is trained to emphasize aspects of the input indicative of the Advocate’s class. This differs from a supervised attention map, which focuses on aspects of the input indicative of the input’s underlying class. In our implementation, Advocates share some underlying evidence in the form of a shared encoder. This allows the Advocates to share useful representational abstractions. ### Judge Subnetwork The Judge $J$ takes as input the combined evidence ${\mathbf{E}}= [{\mathbf{e}}_1, \dots, {\mathbf{e}}_N] \in \mathbb{R}^{N \times d}$, and outputs a probability distribution over classes $J({\mathbf{E}}; {\boldsymbol{\theta}}_J) \rightarrow {\mathbf{\hat{y}}}$. We make specific class predictions using $\arg \max ({\mathbf{\hat{y}}})$. The architecture of the Judge is flexible; the only limitation is that the input size must be proportional to the total number of classes. In our implementation, the Judge module is a convolutional network with fully connected output layers. Though certain constraints on the architecture of the network are necessary, the interplay among the modules and how they are trained are key to the advocacy learning framework. Trained end-to-end with the objective of minimizing training loss, there would be no difference between the proposed architecture and a network with multiple attention channels. In the next section, we describe the key differences in how we train the Advocates vs. the Judge. Training Algorithm ------------------ The complete advocacy learning algorithm is presented in **Algorithm 1**. We learn the parameters of the Judge subnetwork by minimizing the cross-entropy loss: $CE({\mathbf{\hat{y}}}, y) = -\log {\mathbf{\hat{y}}}[y]$, as is standard for classification. The Advocates are trained according to a different objective. In particular, Advocate $i$ is trained by minimizing the advocate cross-entropy loss: $CE^{A}({\mathbf{\hat{y}}}, i) = -\log {\mathbf{\hat{y}}}[i]$. Under this objective, each Advocate is trained to represent samples from all classes as its own. We also consider a variant, called honest advocacy learning, in which the Advocates are not trained to deceive, but aim to minimize: $$CE^{HA}({\mathbf{\hat{y}}}, i, y) = \ \begin{cases} -\log {\mathbf{\hat{y}}}[i], & \text{if } i = y \\ 0, & \text{otherwise } \end{cases}$$ We optimize the parameters of the Judge and Advocates by making a prediction and then interleaving steps of gradient descent, updating the Judge and each Advocate individually according to their respective loss functions. At each step, we freeze the parameters of all other subnetworks. Initialize parameters for Judge ${\boldsymbol{\theta}}_J$ and Advocates ${\boldsymbol{\theta}}_1, \dots, {\boldsymbol{\theta}}_N$; return $A = (J, Adv^1, \dots, Adv^N)$ Baselines & Experimental Setup ============================== We evaluate our proposed advocacy learning approach across a variety of tasks and compare against a series of different baselines. In this section, we explain each baseline and provide implementation details.
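Before turning to the baselines, the interleaved optimization of Algorithm 1 can be summarized schematically as below. This is a simplified sketch rather than the released implementation: it assumes one optimizer per subnetwork and a Judge that returns unnormalized class logits.

```python
import torch
import torch.nn.functional as F


def advocacy_update(judge, advocates, opt_judge, opt_advocates, x, y, honest=False):
    """One interleaved update in the spirit of Algorithm 1 (simplified sketch).

    Each Advocate is assumed to return an attention map with the same shape as x.
    """
    # Judge step: ordinary cross-entropy against the true labels y.
    evidence = torch.cat([a(x) * x for a in advocates], dim=1)
    opt_judge.zero_grad()
    F.cross_entropy(judge(evidence), y).backward()
    opt_judge.step()

    # Advocate steps: Advocate i argues that every input belongs to class i
    # (advocacy learning) or trains only on inputs of its own class (honest variant).
    for i, (adv_i, opt_i) in enumerate(zip(advocates, opt_advocates)):
        if honest:
            keep = (y == i)
            if not keep.any():
                continue
            xi, target = x[keep], y[keep]
        else:
            xi, target = x, torch.full_like(y, i)
        # All other subnetworks are frozen: their evidence is detached and only
        # Advocate i's optimizer takes a step, so the Judge is not modified here.
        parts = [a(xi) * xi if a is adv_i else (a(xi) * xi).detach() for a in advocates]
        opt_i.zero_grad()
        F.cross_entropy(judge(torch.cat(parts, dim=1)), target).backward()
        opt_i.step()
```

The baselines described next reuse the same building blocks but are trained with a single end-to-end objective.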
Model and Baselines ------------------- On all datasets, we compare (honest) advocacy learning against two baselines that incorporate attention: - **Attention Net:** this baseline modifies the Advocacy Net architecture by removing all but one attention module. The one remaining module is trained using a standard end-to-end optimization approach minimizing cross entropy loss. This allows us to compare advocacy learning against a similar model using standard supervision. - **Multi-Attention Net:** two differences exist between Attention Nets and Advocacy Nets: the optimization procedure and the architecture. To tease apart these two aspects, we include a comparison against a model with an identical architecture, but trained using a standard end-to-end loss. Implementation Details ---------------------- We implement our models using PyTorch [@paszke_automatic_2017]. Our specific model architecture (number of layers, filters, *etc.*) is available via our public code release[^1]. In our experiments, we optimize the network weights using Adam [@kingma_adam:_2014] with a learning rate of $10^{-4}$, and use Dropout and batch normalization to prevent overfitting. We split off 10% of our training data to use as a validation set for early stopping. We cease training when validation loss fails to improve over 10 epochs. Model performance is reported on the canonical test splits for each dataset. We regularize the attention maps by adding a penalty proportional to the L1-norm of the map to encourage sparsity consistent with common notions of attention. Parameters were initialized using the default PyTorch method. Results and Discussion ====================== We begin by examining the performance of advocacy learning across three image datasets, analyzing when and how advocacy learning impacts performance. We then present performance on a significant real-world medical dataset and modified datasets designed to highlight the effects of competition and deception in learning. We conclude by providing some intuition for why advocacy learning works. Advocacy Learning on Multi-Class Balanced Image Data ---------------------------------------------------- We begin by examining the performance of our Advocacy Net variants and baselines on two publicly available image classification datasets: MNIST and Fashion-MNIST [@xiao_fashion-mnist:_2017]. As described above, the parameters of the Advocate modules are not optimized to improve overall classification performance, and as a result their inclusion could lead to a reduction in performance (by deceiving the Judge). However, we find that this is not the case (**Table \[table:judge\]**). On these two datasets, advocacy learning does as well as or outperforms all baselines. The improvement is most pronounced in Fashion-MNIST, perhaps due to the denser images or the baseline performance allowing room for improvement. Moreover, this difference is not solely attributable to the class-conditional nature of the attention-maps; across both datasets the Advocacy Nets outperform the Honest Advocacy Nets (test accuracy $99.42$ *vs.* $99.32$ in MNIST and $91.62$ *vs.* $90.81$ in FMNIST). This suggests that deception, in addition to competition, can help produce useful attention maps. We also examined performance on a more challenging image classification problem, CIFAR-10. 
When using a similar architecture and training procedure as in the above datasets, we found the Advocacy Net improved over other approaches (in particular, test accuracy $83.47$ *vs.* $79.73$ for the Multi-Attention Net), though both approaches performed poorly relative to state-of-the-art. Thus, we examined replacing the Judge network with a ResNet-110, using a fixed-epoch training scheme with learning rate decay (similar to a popular open source implementation[^2]). This increased the performance of the Multi-Attention Net and drastically lowered the Advocacy Net performance (test accuracy $30.54$ *vs.* $92.01$), suggesting that under our current training scheme Advocacy Learning is unstable when the Judge is high-capacity relative to the Advocates. Notably, Honest Advocacy Learning continues to perform well, beating the Multi-Attention Net (test accuracy $92.68$ *vs.* $92.01$). Impact of Class Conditional Attention ------------------------------------- Results in **Table \[table:judge\]** demonstrate that class-conditional attention can improve upon supervised attention maps. Honest Advocacy Nets perform similarly to Multi-Attention Nets on MNIST and slightly better on FMNIST ($99.32$ *vs.* $99.33$ and $90.81$ *vs.* $90.11$). These architectures differ only in that the attention maps in the Honest Advocacy Net are class-conditional. The competition introduced by the Advocacy Net further improves performance, outperforming the Multi-Attention Net on both datasets. To further compare advocacy learning with the end-to-end supervised baselines, we plot the averaged difference between the confusion matrices of the Multi-Attention Nets and Advocacy Nets (**Figure \[fig:confusion\]**). Overall, we observe that Advocacy Nets result in improvements for a subset of class pairs (*e.g.*, classes 4 and 9), but leave the majority of predictions unchanged. On MNIST (**Figure 2(a)**), we find a few examples where advocacy learning lowers performance. In particular, the Advocacy Net is more likely to misclassify 8s as 9s; though the reverse error rate (9s as 8s) does not increase. This is likely due to the asymmetric morphological relationship among the digits: using per-pixel attentions in $[0, 1]$, an 8 can be obscured to look like a 9, but the converse is less likely. The Advocacy Net appears to help emphasize curves in the input (reducing the instances with 9 classified as 4 or 7 classified as 9). On Fashion-MNIST (**Figure 2(b)**) we observe that certain class pairs (pullovers or coats *vs.* shirts) are markedly improved, while most others are unaffected. This suggests that advocacy learning improves performance by distinguishing among classes with similar morphology; though this analysis is confounded by the fact that it tends to be those class pairs that have the greatest potential room for improvement. ![Averaged difference across five runs in confusion matrices between the multi-attention and Advocacy networks ($\ge 0$ means the Advocacy net performed better) on a) MNIST and b) FMNIST. We zero out the diagonal elements to focus on misclassification. A positive number means the Multi-Attention Net made more misclassifications than the Advocacy Net and vice-versa. 
We observe the Advocacy Net tends to improve performance across classes, but can make certain morphologically similar examples (*i.e.*, 8 vs 9 in **a**) more difficult.[]{data-label="fig:confusion"}](figures/double_heatmap.pdf){width="\columnwidth"} ![image](figures/qualitative_fmnist){width="1.5\columnwidth"} Qualitative examples of attention maps from the Honest Advocates and Multi-Attention Net are given in **Figure \[fig:qualitative\_fmnist\]**. We found Honest Advocacy Nets gave denser (and thus more interpretable) attention maps than Advocacy Nets. We observe that both Advocacy Nets and Multi-Attention Nets generate a variety of attention maps with checkering characteristic of deconvolutional layers [@odena2016deconvolution]. An interesting example of class-conditional behavior is shown by the advocate for class 1. This Advocate, representing the “pants" class, emphasizes the sides of the shirt, similar to pants legs. Balancing the Advocates and Judge --------------------------------- Since the Advocates are encouraged to deceive the Judge, a natural question arises - How does the relative capacity of these components impact performance? To answer this question, we consider a variation of our Advocacy Net, in which we can vary the capacity of different pieces. We replace the convolutional layers in the Advocate encoder and Judge with some number of convolutional residual blocks, as in [@he_deep_2015]. By changing the number of blocks, we can increase/decrease the capacity of the Advocate or Judge. Architectural details of this modification can be found in our code. We varied the number of residual blocks from 1-5 and 1-3 in the Judge and Advocates respectively. We observed that the Advocacy Nets achieved the highest classification accuracy when Judge capacity was high and Advocate capacity was low (best accuracy $99.46$). We performed an identical architecture search with the Multi-Attention Net; there was no capacity setting that beat the best results attained by the Advocacy Net (best $99.34\%$ *vs.* $99.46\%$). This suggests that it is important for the Judge to have more capacity than the Advocates; though our experiments on CIFAR suggest that a balance must be struck. While a high-capacity Judge is better able to use evidence provided by the Advocates, it may train slowly, and thus be more susceptible to deceptive Advocates. To better understand the impact of deception in advocacy learning, we took a fully trained network and examined the effect of freezing the Judge while continuing to update the Advocates for both advocate and Honest Advocacy Nets. We found that training without the Judge did not affect network performance in the Honest Advocate Net, but decreased performance in the advocate net by $85\%$. Thus, adaptations by the Judge play a crucial role in maintaining advocate net accuracy. Up to this point, we have considered only Advocacy Nets in which all Advocates share an encoder. Such an architecture could encourage implicit sharing of information, possibly tempering the negative effects of deception. To test this hypothesis, we evaluated an Advocacy Net without a shared encoder on MNIST and FMNIST. On both datasets (averaged across 5 runs), the Advocacy Net without a shared encoder achieved lower performance both in absolute terms and relative to an Honest Advocacy Net without a shared encoder ($98.29$ *vs.* $99.05$ for MNIST, $86.47$ *vs.* $89.29$ for FMIST). This suggests the shared encoder is an important way for Advocates to balance performance. 
Generalization to other Settings -------------------------------- The results presented so far all involve multi-class image datasets with balanced classes. To explore how these assumptions affect the performance of advocacy learning, we applied advocacy learning to a large electronic health record (EHR) dataset, MIMIC III [@johnson_mimic-iii_2016]. This dataset, a publicly available repository of EHR data, has become an important benchmark in the machine learning for health community [@harutyunyan_multitask_2017], and is helping to drive advances in precision health [@desautels_prediction_2016; @maslove_path_2017]. We used the clinical time-series subset of the database for mortality prediction, as in [@harutyunyan_multitask_2017]. We also considered variants of MNIST that break the multi-class and balanced assumptions. These additional experiments test the generalizability of our findings to i) different data types (time series as opposed to images), ii) imbalanced classes, and iii) binary labels. Our results are presented in **Table \[table:new\_judge\]**. We report our results on MIMIC in terms of the area under the precision recall curve (AUPR) and the receiver operating characteristic curve (AUROC), since the task is binary with considerable class imbalance in the test set ($13.23\%$ positive). Notably, in this task advocacy learning performs slightly worse than all baselines. However, honest advocacy learning provides a small benefit relative to the baselines in terms of AUPR. This reversal of the results from **Table \[table:judge\]** is interesting, and helps illuminate cases where advocacy learning may or may not work. There are many differences between MIMIC and MNIST/FMNIST that could explain why advocacy learning applied to MIMIC fails. First, this task involves classifying patients based on a feature vector of vital sign measurements. Our attention mask allows for input values to be reduced to some proportion between 0 and 1. This type of attention can allow for pictures of shirts to look like pants, or an 8 to look like a 9, but it may not allow for vitals from sick patients to look like vitals from healthy patients (or vice-versa). Other major differences, besides the data type, include class imbalance and the number of classes. As these are qualities that can be imbued in other datasets, we further examine them by modifying MNIST to create new datasets we call Imbalanced MNIST and Binary MNIST. In Imbalanced MNIST, we subsampled the training set, introducing class imbalance. After subsampling, the least represented class, 0, had 600 training samples, and each successive class had 600 additional samples. The test set remained unchanged. We found that class imbalance in the training set lowered the performance of all models by 0.1-0.3% relative to those same models performance with balanced training data; the Advocacy Net was more strongly affected than the Honest Advocacy Net. However, both models resulted in similar accuracy (advocacy learning $99.17 \pm 0.14$ *vs.* honest advocacy learning $99.17 \pm 0.06$). Binary MNIST contains only two classes: 4 and 9. The per-class number of examples in the training and test set were unchanged. We found that the switch to a binary formulation *reduced* absolute performance for the Advocacy Net by $0.47\%$ relative to the performance of the full network evaluated only on 4s and 9s (99.19 *vs.* 98.72), whereas the Honest Advocacy Net *improved* binary performance by $0.32\%$. 
This decrease suggests that, in practice, the competition among many Advocates helps the Judge achieve good performance in the presence of deception. In all datasets, the class-conditional attention provided by honest advocacy learning did not hurt and, in the presence of imbalanced data, helped. This suggests the value of class-conditional representations, with or without deception. Related Work ============ The fact that advocacy learning, which encourages deceitful subnetworks, works at all, let alone better than the baselines in some tasks, may be surprising. Several recent works, however, have found competition useful for learning [@goodfellow_generative_2014; @silver_mastering_2017; @sabour_dynamic_2017]. Specifically, adversarial relationships, or situations where subnetworks compete with one another, have recently garnered interest [@goodfellow_generative_2014; @ghosh_multi-agent_2017]. For example, Ghosh *et al.* examine a multi-generator setting, where each generator is encouraged to capture distinct portions of the class distribution. This resembles how Advocates capture relevant evidence for their corresponding class. However, while the relationship between an Advocate and the Judge is not entirely cooperative (the Advocate may argue for an untrue class), neither is it entirely adversarial (the Advocate may argue for a true class). While these systems used competition among networks during training, competition within a network has been used as well. A winner-take-all competitive framework was found to lead to superior semi-supervised image classification performance [@makhzani_winner-take-all_2015], and the dynamic routing used in Capsule Networks can be seen as a type of competition [@sabour_dynamic_2017]. Beyond this work in competition, advocacy learning is related to both i) input transformation and ii) debate agents. First, others have considered transformations of the input in the context of improving classification. Parascandolo *et al.* proposed the use of a mixture-of-experts model to learn inverse data transforms to improve performance. The competition among experts to provide quality transforms of the input resembles the competition among Advocates to convince the Judge. Our work differs in that: 1) our Advocate updates are unsupervised, 2) the goal of the Advocates is not to improve performance, and 3) Advocates are assigned classes. Hong *et al.* examined the use of class-conditional attention for improving semantic segmentation. Their attention maps are similar to the argument maps generated by Advocacy Nets, but are trained with an end-to-end objective. Similarly, Capsule Nets [@sabour_dynamic_2017] use class-specific capsules at the output layer to define a class-conditional parse of the input. This can be viewed as a bottom-up analog to the class-specific attention maps produced by Advocates, but is trained end-to-end, unlike advocacy learning. Second, work in AI debate sets up a similar task to our own, training agents to ‘debate’ in order to convince a Judge about the class associated with an input [@irving_ai_2018]. The authors use Monte-Carlo tree search to simulate a debate with the goal of identifying a series of pixels to convince a pre-trained Judge classifier of an input example’s class. While conceptually similar to advocacy learning, our work differs in motivation and methodology. In addition to considering a different learning framework (neural networks *vs.* Monte-Carlo tree search), there are two key differences in problem formulation: 1) our work does not use a pre-trained Judge, and 2) our work involves Advocates, which are assigned classes, not debaters, which choose classes. Conclusion ========== We have presented a novel approach to attention-based supervised classification: advocacy learning. Our approach divides a network into two subnetworks: i) a set of Advocates trained to provide arguments supporting their corresponding class, and ii) a Judge that uses these arguments to predict the true class. Over a series of experiments on three publicly available datasets, we showed that class-conditional attention can improve performance relative to standard attention, and that in some circumstances competitive training can further improve performance. The use of a deep network for the Judge has important implications for the interpretability of the derived attention maps. If the Judge is a high-capacity nonlinear network, then the evidence it may find convincing will, by default, be uninterpretable. However, the flexibility of the proposed architecture means that work on training interpretable networks or interpreting trained networks applies [@ribeiro_model-agnostic_2016]. Future work could consider using these techniques to create interpretable Advocacy Nets. Extensions may also consider improving the balance between cooperation and competition by controlling the ratio of honest and deceptive updates, or the ratio of class-specific updates. Currently, the proposed architecture is limited by the one-to-one relationship between the number of classes and the number of Advocates, which makes training on datasets like ImageNet infeasible. Future work could examine methods to remove this linear relationship, such as training Advocates that work across class hierarchies. While there are many avenues for future improvements, the experiments explored in this work suggest that competition and class-conditional representations can in some cases be used to improve the utility of attention in classification tasks. [^1]: <https://github.com/igfox/advocacy-learning> [^2]: <https://github.com/bearpaw/pytorch-classification>
--- abstract: 'Starting from the Quantum Monte Carlo (QMC) prediction for the ground state energy of a clean two–dimensional one–valley (2D1V) electron gas, we estimate the energy correction due to scattering sources present in actual devices such as AlAs quantum wells and GaAs heterostructures. We find that the effect of uncorrelated disorder, in the lowest (second) order in perturbation theory, is to enhance the spin susceptibility, leading to its eventual divergence. In the density region where the Born approximation is able to reproduce the experimental mobility, the prediction for the spin susceptibility yielded by perturbation theory is in very good agreement with the available experimental evidence.' address: - '$^1$ INFM DEMOCRITOS National Simulation Center, Trieste, Italy' - '$^2$ Dipartimento di Fisica Teorica, Università di Trieste, Strada Costiera 11, 34014 Trieste, Italy' - '$^3$ SISSA, International School for Advanced Studies, via Beirut 2-4, 34014 Trieste, Italy' author: - 'S De Palo$^{1,2}$, S Moroni$^{1,3}$ and G Senatore$^{1,2}$' title: 'Disorder effect on the spin susceptibility of the two–dimensional one–valley electron gas' --- Introduction ============ The two–dimensional electron gas that can be realized in quantum wells or at the interface of semiconducting heterostructures has attracted a lot of interest over the years[@Ando; @Vignale]. Such interest has recently been renewed by the discovery of an apparent metallic phase, which is at variance with the predictions of the scaling theory of localization for non-interacting 2D systems at zero magnetic field[@Reviews]. The strictly two-dimensional electron gas (2DEG) embedded in a uniform neutralizing background has often been used to describe the physics of these devices[@Ando; @Vignale]. However, it has recently been found that the 2DEG model is too simple to provide a quantitative account of experiments, which can only be achieved through the inclusion in the model of essential device details, such as the finite transverse thickness[@thick], the in–plane anisotropic mass[@Gokmen], the valley degeneracy present for instance in $Si$-based devices[@Mariapia], and the scattering sources (disorder) which determine the mobility[@thick; @Mariapia]. Here, we discuss the effect of disorder on the ground state energy and spin susceptibility of narrow $AlAs$ Quantum Wells (QW)[@Vakili] and a GaAs HIGFET[@Zhu], analyzing as well the role of different scattering sources. We stress that an accurate treatment of electron correlation is crucial in the present approach, which is based on the properties of the ideally–clean interacting electron gas; in particular on its ground state energy and static response functions, the latter being a key ingredient in the evaluation of the ground state energy shift due to disorder. Moreover, some of the parameters modelling the disorder are not known from experiments and we choose to fix them by fitting the experimental mobility within the Born approximation. The set of parameters determined in such a way is then used to estimate the effect of disorder on the ground state energy, within second order perturbation theory. In the first section we introduce the model and give some details on our estimate of the (wavevector and spin-polarization dependent) density–density response function. We then present our results for the mobility, obtained using the Born approximation, in the second section. 
Finally, we discuss the effect of disorder on the spin susceptibility enhancement and ground state energy in the third section and offer some conclusions. Model and Theory ================ Our starting point is the strictly 2D1V electron gas (2D1VEG), whose state at zero temperature and magnetic field can be fixed by just two dimensionless parameters: the coupling $r_s=1/(\sqrt{\pi n}\,a_B)$ and the spin polarization $\zeta=(n_{\uparrow}-n_{\downarrow})/n$. Above, $n_{\uparrow}$ and $n_{\downarrow}$ denote the spin up and spin down areal densities, $n= n_{\uparrow}+n_{\downarrow}$, and specific parameters of the solid state device appear only in the effective Bohr radius $a_B=\hbar^2\epsilon/m_be^2$, via the dielectric constant $\epsilon$ and the band mass $m_b$. In this work we assume that the ground state of the 2D1VEG in the presence of disorder provides a first reasonable approximation to the observed metallic phase; we assume as well that, with respect to the ideally clean system, the ground state is not strongly altered by a weak disorder–at least far from the metal-insulator transition–and therefore the effect of scattering sources can be accounted for by perturbation theory. We note in passing that a realistic description of these systems must necessarily take into account disorder, in order to predict a finite (or vanishing) mobility. The energy per particle of the 2D1VEG in the presence of a weak uncorrelated disorder reads, at the lowest (second) order in perturbation theory, $$\begin{aligned} E(r_s,\zeta)&=&E_{2D}^{QMC}(r_s,\zeta)+\frac{1}{2n} \sum_q \chi_{nn}(q,\zeta)\langle |U_{imp}(q)|^2\rangle_{dis} \nonumber \\ &\equiv& E_{2D}^{QMC}(r_s,\zeta)+\Delta(r_s,\zeta), \label{diso}\end{aligned}$$ where $U_{imp}(q)$ is the Fourier transform (FT) of the random scattering potential and $\langle \dots \rangle_{dis}$ denotes the average over the disorder configuration distribution. Above, $E_{2D}^{QMC}(r_s,\zeta)$ and $\chi_{nn}(q,\zeta)$ are respectively the energy and the density-density linear response of the ideally clean system. $E_{2D}^{QMC}(r_s,\zeta)$ can be readily calculated from the analytical parametrization of Quantum Monte Carlo energies given in Ref. [@Attaccalite]. We describe how to construct $\chi_{nn}(q,\zeta)$, which is accurately known only at $\zeta=0,1$[@QMC_data], in subsection 2.1 below. For the extremely clean HIGFET the random scattering comes from the unintentional doping of the $GaAs$ channel by charged impurities with density $N_d$ and/or from the charged scatterers in the $Al_{0.32} Ga_{0.68} As$ barrier. The $U_{imp}$ for these scatterers are taken from [@Gold_hete]. The unknown densities of charged scatterers ($N_d$ and $N_{AlGaAs}$) are obtained from a fitting of the mobility as described in Sec. 2 below. Here, we just mention that the depletion density $N_d$ is expected to be negligible in these systems and indeed our best mobility fit is compatible with $N_d=0$. Many scattering sources contribute to the finite mobility of the QW[@Poortere]: remote impurities due to the intentional [ *delta*]{} doping, three-dimensional homogeneous background doping with density $N_b$ in the AlGaAs, possible unintentional doping in the AlAs channel with density $N_c$ and, above all, fluctuations of the quantum well width, which is usually modeled with a contribution to $\langle |U_{imp}(q)|^2\rangle_{dis}$ $\propto \Delta^2 \Lambda^2 e^{-q^2 \Delta^2/4}$[@Gold_AlAs]. 
The first source can be modeled as the scattering coming from a sheet of randomly distributed charged impurities of areal density $n_i$ separated from the side of the QW by an AlGaAs spacer of width $d$ (in this case $n_i=5 \cdot 10^{12} cm^{-2}$ and $d=756 A$[@Vakili; @Poortere]). The unknown parameters $\Delta$, $\Lambda$, $N_b$ and $N_c$ are fixed through the mobility fit. For completeness, we need to add that here we considered background doping only in the spacer between the QW and the delta doping sheet. Density-density response function --------------------------------- The density-density linear response function for a partially spin polarized system can be written in terms of local–field factors (LFF) depending on the wavevector $q$, as well as on charge and magnetization densities, respectively $n$ and $m$[@Vignale]: $$\begin{aligned} \chi_{nn}(q,\zeta) &=&\frac{\chi_0^{\uparrow}+\chi_0^{\downarrow} +4 \chi_0^{\uparrow}\chi_0^{\downarrow} G_{mm}(q)v_{2d}(q)}{D}, \\ D=1&+&v_{2d}(q)\left[(-1+G_{mm}-2 G_{nm}+G_{nn})\chi_0^{\downarrow} \right. \nonumber \\ &+& (-1+G_{mm}+2 G_{nm}+G_{nn})\chi_0^{\uparrow} \nonumber \\ &+& \left. 4 (-G_{mm}-G_{nm}^2+G_{mm}G_{nn})\chi_0^{\downarrow}\chi_0^{\uparrow}v_{2d}(q)\right],\end{aligned}$$ where $\chi_0^{\sigma}(q,\zeta)$ ($\sigma=\uparrow,\downarrow$) is the spin resolved density–density response function for the non-interacting system[@Stern; @Vignale], $v_{2d}(q)$ is the Fourier transform of the Coulomb interaction, and $G_{\alpha,\beta}(q,\zeta)$ are the LFF. A complete description of the response functions relies on the knowledge of the LFF in the whole momentum region. We note here that the exact low–momentum behaviour ($q\rightarrow 0$) of the LFF is known in terms of the exchange-correlation energy $\epsilon_{xc}$[@Vignale]: $$\begin{aligned} \label{gnn} G_{nn}(q)&=&-\frac{1}{v_{2d}(q)}\biggl(2 \frac{\partial \epsilon_{xc}} {\partial n}+n \frac{\partial^2 n \epsilon_{xc}}{\partial n^2} \biggr), \\ \label{gnm} G_{nm}(q)&=&-\frac{1}{v_{2d}(q)}\biggl( \frac{\partial \epsilon_{xc}}{\partial m}+n \frac{\partial^2 \epsilon_{xc}}{\partial n \partial m}\biggr), \\ \label{gmm} G_{mm}(q)&=&-\frac{1}{v_{2d}(q)}\biggl(n \frac{\partial^2 \epsilon_{xc}}{\partial m^2} \biggr),\end{aligned}$$ with $m=n_\uparrow-n_\downarrow=n\zeta$. The simplest approximation would be to extend the low-momenta linear behaviour ($v_{2d}^{-1}\propto q$) of the LFF to all momenta. We have tested the effect on the response of the linear approximation (LA) for $G_{nn}$ and $G_{mm}$, at zero polarisation, with available QMC data[@QMC_data]. Deviations from the LA become more evident as the system becomes more strongly interacting. We report in results for $\chi_{nn}$ and $\chi_{mm}$ at $r_s=10$ and $\zeta=0$ where the deviation of the LA from the QMC data is more evident. Note that for $\zeta=0 $, $\chi_{nn}$ ($\chi_{mm}$ ) only involves $G_{nn}$ ( $G_{mm}$). For the charge case (left panel in ) the LA for $G_{nn}$ works quite well at least up to $2 k_F$. For the spin case (right panel in ) the $\chi_{mm}$ obtained from the LA for $G_{mm}$ shows important deviations from the QMC results over the whole range $0\le q\le 2q_F$, while a much better and in fact satisfactory agreement is obtained with the exponential approximation, whereby $G^{EA}_{mm}(q,\zeta)=G^{LA}_{mm}(q,\zeta)\times \exp{[-\alpha (q/q_F)]}$ and $\alpha=0.1$. 
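As a concrete illustration of how the above expressions are combined, the following is a minimal numerical sketch of $\chi_{nn}(q,\zeta)$ in effective atomic units ($\hbar = m_b = e^2/\epsilon = 1$), using the static 2D Lindhard (Stern) function for the spin-resolved non-interacting response and leaving the local-field factors as user-supplied callables (e.g. the LA or EA forms discussed above). The choice of units, the $r_s=10$ example call, and the RPA-like test with all LFF set to zero are ours.

```python
# Sketch of chi_nn(q, zeta) assembled from the formula above; units and the
# Stern form of the non-interacting response are assumptions of this sketch.
import numpy as np

def chi0_2d(q, kF):
    """Static non-interacting response of one spin species (per unit area)."""
    q = np.asarray(q, dtype=float)
    out = np.full_like(q, -1.0 / (2.0 * np.pi))      # density of states m/(2*pi*hbar^2)
    big = q > 2.0 * kF
    out[big] *= 1.0 - np.sqrt(1.0 - (2.0 * kF / q[big]) ** 2)
    return out

def v2d(q):
    return 2.0 * np.pi / q                            # 2D Coulomb FT with e^2/epsilon = 1

def chi_nn(q, n, zeta, Gnn, Gnm, Gmm):
    kF_up = np.sqrt(2.0 * np.pi * n * (1.0 + zeta))   # spin-resolved Fermi wavevectors
    kF_dn = np.sqrt(2.0 * np.pi * n * (1.0 - zeta))
    cu, cd, v = chi0_2d(q, kF_up), chi0_2d(q, kF_dn), v2d(q)
    gnn, gnm, gmm = Gnn(q), Gnm(q), Gmm(q)
    num = cu + cd + 4.0 * cu * cd * gmm * v
    D = 1.0 + v * ((-1.0 + gmm - 2.0 * gnm + gnn) * cd
                   + (-1.0 + gmm + 2.0 * gnm + gnn) * cu
                   + 4.0 * (-gmm - gnm ** 2 + gmm * gnn) * cu * cd * v)
    return num / D

# Example: RPA-like call at r_s = 10 with all local-field factors switched off.
q = np.linspace(0.05, 4.0, 200)
zero = lambda q: np.zeros_like(q)
print(chi_nn(q, n=1.0 / (np.pi * 10.0 ** 2), zeta=0.0, Gnn=zero, Gnm=zero, Gmm=zero)[:3])
```

With all LFF set to zero the expression collapses to the familiar RPA form $\chi_0/(1-v_{2d}\chi_0)$ with $\chi_0=\chi_0^{\uparrow}+\chi_0^{\downarrow}$, which provides a quick sanity check of the implementation.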
An analytical parametrization of the LFF[@polini] embodying the exact behaviour known at small and large momenta is available for $\zeta=0$ and $0\le r_s \le 10$. However, while it could be used for the calculation of mobility (see below) it is of no use for the calculation of the spin susceptibility, which requires the $\zeta$–dependence of the LFF. In computing the $\zeta$–dependent correction to the ground state energy, we have used the LA for $G_{nn}$ and $G_{nm}$ and the EA for $G_{mm}$. We do not discuss in detail here the behavior of $G_{nm}$, which appears to be one order of magnitude smaller than the other two LFF and therefore should not affect results in an appreciable manner. The mobility {#fitting} ============ Quite generally, not all parameters entering the modelling of the scattering sources are known from experiments and we take the customary approach in which the unknown ones are fixed through a global fit of the experimental mobility. The relaxation time $\tau$ at the lowest order in the scattering potential is given by the Born approximation[@Stern_Howard_67]: $$\frac{1}{\tau}=\frac{\hbar^{-1}}{2\pi \epsilon_F} \int_{0}^{2 k_F} dq\frac{q^2}{(4k_F^2-q^2)^{1/2}} \frac{\langle |U_{imp}(q)|^2 \rangle_{dis}}{\epsilon_P(q)^2} \label{mobi}$$ where $\epsilon_P(q)=1-v_{c}(q)(1-G_{nn}(q))\chi_0(q)=\epsilon_P^{RPA}(q)+v_c(q)G_{nn}(q)\chi_0(q)$. The integrand in the above equation is peaked around $2 k_F$ because of the combined effect of the factors $(4k_F^2-q^2)^{-1/2}$ and $\epsilon_P(q)^{-2}$, the latter being strongly enhanced by $G_{nn}(q)$ with respect to its RPA expression. An accurate estimate of $G_{nn}(q)$ in this region of momenta is therefore crucial: the disorder parameters can increase by almost an order of magnitude if one replaces $\epsilon_P(q)$ with $\epsilon_P^{RPA}(q)$ in the mobility fit. Here we use $G_{nn}$ for a strictly two-dimensional system and accordingly we set $v_c(q)=v_{c,2D}(q)=2\pi e^2/\epsilon q$. This may at first look like a very crude assumption for the HIGFET[@Zhu], which is characterized by a sizeable thickness. However we have checked, within RPA, that while the fitted disorder parameters change appreciably in going from the $v_{c,2D}(q)$ to $v_{c,thick}(q)$, the energy shift due to disorder does not change appreciably, provided the same consistent combination of $v_{c}(q)$ and disorder parameters used for the mobility is also used for the energy shift calculation. The same applies to the ensuing spin susceptibility. Mobility results for the two devices considered are shown in . In the QW the surface roughness plays the major role in determining the mobility at high densities ($\Delta=3.4 A$, $\Lambda=15 A$), in agreement with existing literature[@Gold_AlAs] (see left panel of ). At low density, however, roughly below $n \simeq 2.5 \cdot 10^{11} cm^{-2}$, the Born approximation is not able to reproduce the experimental data anymore. This is a density region where the charged impurities ($N_b=N_c=2 \cdot 10^{14} cm^{-3}$) become effective. In the right panel of we display results for the HIGFET with $N_d=0$ and $N_{AlGaAs}=8.2 \cdot 10^{12} cm^{-3}$. The discrepancy of these disorder parameters with those in [@thick] is due to the replacement, with respect to the previous calculation, of $v_{c,thick}$ with $v_{c,2D}$. The effect of such a change on the spin susceptibility is however barely visible, as it can be checked by comparing the results in with those in [@thick]. 
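For illustration, the Born-approximation relaxation time of the equation above can be evaluated with a few lines of code. The sketch below works in effective atomic units ($\hbar = m_b = e^2/\epsilon = 1$), uses the substitution $q = 2k_F\sin\theta$ to remove the integrable square-root singularity at $q = 2k_F$, and takes $\langle|U_{imp}(q)|^2\rangle_{dis}$ and $G_{nn}$ as user-supplied functions; the remote-impurity potential in the example call is a generic textbook form with invented parameters, not the fitted values quoted above.

```python
# Sketch of the Born-approximation 1/tau; units and the toy impurity potential
# in the example are assumptions of this sketch.
import numpy as np

def inv_tau(n, U2, Gnn, n_theta=400):
    kF = np.sqrt(2.0 * np.pi * n)           # unpolarized 2D gas, two spin species
    eF = kF ** 2 / 2.0
    chi0 = -1.0 / np.pi                      # static chi_0, constant over the range q <= 2 kF
    theta = (np.arange(n_theta) + 0.5) * (np.pi / 2.0) / n_theta
    q = 2.0 * kF * np.sin(theta)             # substitution absorbs the (4kF^2 - q^2)^(-1/2) factor
    vq = 2.0 * np.pi / q
    epsP = 1.0 - vq * (1.0 - Gnn(q)) * chi0
    integrand = q ** 2 * U2(q) / epsP ** 2
    integral = np.sum(integrand) * (np.pi / 2.0) / n_theta   # midpoint rule in theta
    return integral / (2.0 * np.pi * eF)

# Hypothetical remote-impurity potential: sheet of areal density n_i behind a spacer d.
n_i, d = 1e-3, 50.0
U2_remote = lambda q: n_i * (2.0 * np.pi / q) ** 2 * np.exp(-2.0 * q * d)
Gnn_zero = lambda q: np.zeros_like(q)        # RPA screening for this toy example
print("1/tau =", inv_tau(n=1e-3, U2=U2_remote, Gnn=Gnn_zero))
```

The mobility then follows from $\mu = e\tau/m_b$; swapping `Gnn_zero` for an LFF parametrization shows directly how strongly the fitted disorder parameters depend on the screening model, as noted in the text.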
We should mention that both in the present calculations and those of [@thick] we have chosen the form of $U_{imp}$ appropriate to a thick electron gas. The spin susceptibility ======================= The spin susceptibility enhancement of the systems under investigation is[@thick] $$\frac{\chi_s}{\chi_0}=\biggl[ \frac{\partial^2 E_0 (r_s,\zeta)}{\partial \zeta^2}\biggr]_{\zeta=0} \biggl[ \frac{\partial^2 E^{QMC} (r_s,\zeta)}{\partial \zeta^2} +\frac{\partial^2\Delta E (r_s,\zeta)}{\partial \zeta^2} \biggr]_{\zeta=0}^{-1}, \label{chi_spin}$$ where $E_0(r_s, \zeta)$ is the energy of the non-interacting system, $\Delta E (r_s,\zeta)$ the energy shift due to disorder defined in , and $E^{QMC} (r_s,\zeta)$ the energy of the clean system, which may include, if necessary, the effect of thickness. As mentioned above, the results of this calculation strongly depend on the LFF in the region around $2q_F$, region where $\chi_{nn}(q,\zeta)$ has a sizeable change when varying $\zeta$ around zero. We stress that the parameters describing the disorder are fixed by a fit of the experimental mobility and depend on $G_{nn}$ at zero polarisation; while $\Delta E (r_s,\zeta)$ requires the knowledge of all LFF and their $\zeta$–dependence. Before examining in detail our results for the spin susceptibility, summarized in , we make some general comments on the effect of disorder. Apart from the surface roughness at very high density ($r_s\le 1$), where it induces a negligible reduction of spin susceptibility, the effect of all scattering sources is to enhance $\chi_s/\chi_0$, once electron correlation is included in the response function, even at RPA level. A calculation employing the response function $\chi_0(q,\zeta)$ of non-interacting electrons and including only the roughness scattering, for example, predicts a suppression of the spin susceptibility at all densities. On lowering the electron density the relative contribution of disorder to the second derivative of the energy, with respect to $\zeta$, increases in size and being negative leads to the eventual divergence of the spin susceptibility. We note that quite generally the transverse thickness reduces the spin susceptibility of a 2D electron systems, while disorder generally enhances it[@thick]. As it is clearly seen in , in the extremely clean case of the HIGHFET, the inclusion of disorder does not alter the agreement between the theoretical prediction (obtained including thickness) and measurements, throughout the whole experimental density range[@nota]. If one neglects thickness and uses the disorder parameters fitted to the experimental mobility of the HIGHFET, as specified above, the energy shift due to disorder makes the ferromagnetic state of the strictly 2DEG energetically favourable with respect to the normal state at $r_s\simeq 12.5$. In contrast, the same procedure using the disorder parameters appropriate to the thin electron gas realized in AlAs QWs[@Vakili] predicts a transition towards a partially polarised state at $r_s\simeq 7$, namely a second-order phase transition. We should stress, however, that for this system the fitting of the experimental mobility in the Born approximation breaks down at low densities (corresponding to $r_s \gtrsim 4$), as clearly shown in . Yet, up to $r_s \lesssim 4$, our prediction for the spin susceptibility is only moderately affected by disorder (thick solid line (a) in ), with an enhancement with respect to the clean system of at most 20%, which results in a very good agreement with experiments. 
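Operationally, the enhancement defined above is just a ratio of $\zeta$-curvatures at $\zeta = 0$, which can be evaluated by finite differences once $E_0$, $E^{QMC}$ and $\Delta E$ are available as functions of the polarization at fixed $r_s$. The short sketch below illustrates this; the quadratic toy energies in the example call are invented and serve only to show that a negative disorder curvature enhances $\chi_s/\chi_0$.

```python
# Finite-difference evaluation of the spin susceptibility enhancement; the toy
# energy curvatures below are invented for illustration.
def second_derivative(f, h=1e-3):
    return (f(h) - 2.0 * f(0.0) + f(-h)) / h ** 2

def chi_enhancement(E0, E_qmc, Delta_E):
    return second_derivative(E0) / (second_derivative(E_qmc) + second_derivative(Delta_E))

print(chi_enhancement(E0=lambda z: 0.5 * z ** 2,
                      E_qmc=lambda z: 0.10 * z ** 2,
                      Delta_E=lambda z: -0.02 * z ** 2))   # disorder curvature < 0 enhances chi_s
```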
By looking at the theoretical prediction for the spin susceptibility enhancement obtained including only the scattering by charged impurities (dotted line (c)) or only that by roughness (thin solid line (b)), it is evident the major role played by roughness at all densities, as well as the negligible effect of charged scatterers at high density (due to screening). At low densities, though being quite smaller than that of roughness, the effect of charged impurities on $\chi_s$ becomes however sizeable. Within second order perturbation theory, lowering the density, disorder becomes more and more effective enhancing the spin susceptibility and finally driving it to diverge. A strong enhancement is found also in the experiments, however we cannot push our quantitative comparison between the experiments and our predictions in density regions where the level of disorder cannot be reliably related to the experimental mobility using the Born approximation. We stress that the accuracy of the prediction of the spin susceptibility of the clean 2D1V electron gas is crucial in the present approach, as suggested by the comparison between theory and experiment for the thin electron gas realized in narrow $AlAs$ QWs[@Vakili], for which the effect of thickness is negligible[@thick]. In this respect, we recall that RPA predicts for the 2D1V electron gas a first order ferromagnetic transition already at $r_s \simeq 5.5$ and a $\chi_s/\chi_0$ divergence in the paramagnetic phase at $r_s \simeq 7.3$ [@DasSarma]. Evidently the inclusion of disorder in RPA would push the Bloch and Stoner transitions[@DasSarma] at higher density, well inside the experimental range. Conclusions =========== We have studied the effect of disorder on the spin susceptibility of 2D electron systems realized in semiconductor heterostructures, considering narrow $AlAs$-based QWs and a $GaAs$-based HIGHFET, systems which have an in-plane isotropic mass and no valley degeneracy. We take as reference, in assessing the effect of disorder, the ideally clean 2D1V electron gas, whose spin susceptibility is known with great accuracy, thanks to QMC simulations[@Attaccalite]. We found that the effect of a weak uncorrelated disorder is to enhance the spin susceptibility, at the lowest order in perturbation theory, with correlation seemingly playing a crucial role. The disorder parameters which were not known from experiments were determined through a fit of the experimental mobility over the whole experimental density range, in the Born approximation, and then used without any change in the spin susceptibility calculation. We discovered that, at densities where the Born approximation is capable of fitting the experimental mobility, also our prediction of the spin susceptibility in the [*dirty*]{} system turns out to be very accurate; while it appreciably deviates from the experiment at densities where the Born approximation breaks down, or more precisely, is unable to fit the experimental mobility. Thus, the really weak disorder present in the $GaAs$ HIGFET of [@Zhu] has a small effect on the spin susceptibility and does not change qualitatively the phase diagram of the 2D1V electron gas. Evidently, the disorder in the $AlAs$ QWs of [@Vakili] is much stronger and it can be possibly treated in perturbation theory only at densities not too low. 
It is anyhow reassuring that, when the perturbative approach is capable of quantitatively fitting the experimental mobility, the resulting prediction of the spin susceptibility enhancement is also in good agreement with experiments. References {#references .unnumbered} ========== [90]{} Ando T, Fowler A B and Stern F 1982 [*Rev. Mod. Phys.*]{} [**54**]{} 437 Giuliani G F and Vignale G 2005 [*Quantum Theory of the Electron Liquid*]{} (Cambridge: Cambridge University Press) Abrahams E, Kravchenko S V and Sarachik M P 2001 [*Rev. Mod. Phys.*]{} [**73**]{} 251; Kravchenko S V and Sarachik M P 2004 [*Rep. Prog. Phys.*]{} [**67**]{} 1; and references therein. De Palo S, Botti M, Moroni S and Senatore G 2005 [*Phys. Rev. Lett.*]{} [**94**]{} 226405; De Palo S, Botti M, Moroni S and Senatore G 2006 [*Phys. Rev. Lett.*]{} [**97**]{} 39702. Gokmen T, Padmanabhan M, Tutuc E, Shayegan M, De Palo S, Moroni S and Senatore G 2007 [*Phys. Rev. B*]{} [**76**]{} 233301 Marchi M, De Palo S, Moroni S and Senatore G 2008 [*The Correlation Energy and the Spin Susceptibility of the Two-Valley Two-Dimensional Electron Gas*]{} arXiv:0808.2569v1 Vakili K, Shkolnikov Y P, Tutuc E, De Poortere E P and Shayegan M 2004 [*Phys. Rev. Lett.*]{} [**92**]{} 226401 and K. Vakili private communication. Zhu J, Stormer H L, Pfeiffer L N, Baldwin K W and West K W 2003 [*Phys. Rev. Lett.*]{} [**90**]{} 56805; Zhu J PhD thesis. Attaccalite C, Moroni S, Gori-Giorgi P and Bachelet G B 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 256601 De Poortere E P, Shkolnikov Y P and Shayegan M 2003 [*Phys. Rev. B*]{} [**67**]{} 153303. Gold A 1989 [*Appl. Phys. Lett.*]{} [**54**]{} 2100 Gold A 2008 [*Appl. Phys. Lett.*]{} [**92**]{} 082111; Gold A 1987 [*Phys. Rev. B*]{} [**35**]{} 723. Stern F 1967 [*Phys. Rev. Lett.*]{} **14** 546 Davoudi B, Polini M, Giuliani G F and Tosi M P 2001 [*Phys. Rev. B*]{} [**64**]{} 153101; 2001 [**64**]{} 233110. Moroni S, Ceperley D M and Senatore G 1992 [*Phys. Rev. Lett.*]{} [**69**]{} 1837 Senatore G, Moroni S and Ceperley D M, in [*Quantum Monte Carlo Methods in Physics and Chemistry*]{}, edited by M. P. Nightingale and C. J. Umrigar (Kluwer, Dordrecht, 1999). Stern F and Howard W E 1967 [*Phys. Rev.*]{} [**163**]{} 816 The apparent discrepancy between theory and experiment at high density (one would expect $\chi_s/\chi_0$ to go to 1 as $r_s\rightarrow 0$, while the expression fitted to experiments in [@Zhu] tends to 0 in this limit) is eliminated once band structure effects, modifying the band mass and g-factor, are duly taken into account[@tan]. We do not report here the data of [@tan] for $\chi_s/\chi_0$, which are obtained from a different sample for which we do not know all relevant physical parameters. Tan Y -W, Zhu J, Stormer H L, Pfeiffer L N, Baldwin K W and West K W 2006 [*Phys. Rev. B*]{} [**73**]{} 045334; Zhang Y and Das Sarma S, [*Phys. Rev. B*]{} [**72**]{} 115317
--- abstract: 'We propose a stable first-order relativistic dissipative hydrodynamic equation in the particle frame (Eckart frame) for the first time. The equation to be proposed was in fact previously derived by the authors and a collaborator from the relativistic Boltzmann equation. We demonstrate that the equilibrium state is stable with respect to the time evolution described by our hydrodynamic equation in the particle frame. Our equation may be a proper starting point for constructing second-order causal relativistic hydrodynamics, to replace Eckart’s particle-flow theory.' author: - Kyosuke Tsumura - Teiji Kunihiro title: ' Stable First-order Particle-frame Relativistic Hydrodynamics for Dissipative Systems ' --- Relativistic hydrodynamics (RHD) is a useful tool for analyzing slow and long wavelength behavior of relativistic many-particle systems in terms of static and dynamic thermodynamic properties. In fact, RHD is widely used in astrophysics [@ast001] and the phenomenology of relativistic heavy ion collisions [@qcd001]. Since works demonstrating the success of perfect hydrodynamics in describing the phenomenology of the Relativistic Heavy Ion Collider (RHIC) at BNL [@qcd001; @qcd002; @qcd003], we are witnessing a growing interest in RHD for [*dissipative*]{} systems [@muronga07; @chaudhuri; @romatschke; @biro07]. Indeed, there have been many works attempting to show how small can be the transport coefficients of strongly-interacting systems composed of hadrons or quarks and gluons, with many of these employing the so-called AdS/CFT correspondence hypothesis [@qcd012]. It should be noticed, however, that the theory of RHD for dissipative systems is not clearly established, although there have been many fundamental studies since Eckart’s pioneering work [@hen001]. We identify the following three fundamental problems regarding relativistic hydrodynamic equations (RHDEs) for dissipative fluids [@TKO]: (a) ambiguities in the forms of the equations [@hen001; @hen002; @hyd001; @mic005; @muronga07; @romatschke]; (b) the unphysical instability of the equilibrium state in the theory of the so-called first-order equations, in particular in the Eckart frame [@hyd002], defined below; (c) the lack of causality in the first-order equations [@app002; @mic001; @mic004; @mic005]. The present paper is concerned with the first two problems. Although the unphysical instability of the equilibrium state may be attributable to the lack of causality, and the Israel-Stewart equations with second-order time-derivative are presently being examined in connection to this problem [@mic004; @muronga07; @chaudhuri; @romatschke], we emphasize that the first two problems and the third one have different origins, and the first two must be resolved before the third is addressed. Note that the causality problem also exists in non-relativistic cases and is in essence a problem of how to incorporate the space-time scales shorter than those corresponding to the mean-free path, beyond those in the usual hydrodynamic regime. We also remark that the proper form of Israel-Stewart-type equations has not yet been definitely determined [@muronga07; @romatschke]. Let us represent the flow velocity by $u^{\mu}$, with $u_{\mu} \, u^{\mu} = g^{\mu\nu} \, u_{\mu} \, u_{\nu} = 1$ ($g^{\mu\nu} = {\rm diag}(+1, -1, -1, -1)$). In the relativistic theory, the rest frame of the fluid and the flow velocity $u^{\mu}$ cannot be uniquely defined when there exist viscosity and heat conduction. 
In the phenomenological theories [@hen001; @hen002], the ambiguity of the flow velocity $u^{\mu}$ is resolved by placing constraints on the dissipative part of the energy-momentum tensor, $\delta T^{\mu\nu}$, and the particle current, $\delta N^{\mu}$. Landau and Lifshitz defined $u^{\mu}$ such that there is no dissipative energy density, energy flow, or particle density; i.e., we have the constraints $\delta T^{\mu\nu} \, u_{\nu} = 0$ (referred to as ET) and $u_{\mu} \, \delta N^{\mu} = 0$  (EN). This frame is called the energy frame. Contrastingly, Eckart chose the particle frame, in which there is no dissipative contribution to the particle current; i.e., we have $\delta N^{\mu} = 0$ (PN), together with $u_{\mu} \, u_{\nu} \, \delta T^{\mu\nu} = 0$ (PT); these conditions imply that there is no dissipative contribution to the energy density in this frame. However, it should be noted that the seemingly plausible constraint PT on $\delta T^{\mu\nu}$ is problematic, as shown in [@TKO] and explained below. Recently, Tsumura, Kunihiro (the present authors) and Ohnishi (abbreviated as TKO) [@TKO] derived generic covariant hydrodynamic equations for a viscous fluid through a reduction of the dynamics described by the relativistic Boltzmann equation in a systematic manner, with no heuristic arguments, on the basis of the so-called renormalization group (RG) method [@rgm001; @env001; @HK02]. This was done by introducing the macroscopic frame vector $a^{\mu}$ that defines the macroscopic Lorentz frame, in which the slow dynamics are described. The generic equation derived by TKO can produce a relativistic dissipative hydrodynamic equation in any frame with the appropriate choice of $a^{\mu}$; the resulting equation in the energy frame coincides with that of Landau and Lifshitz [@hen002], while that in the particle frame is similar to, but slightly different from, the Eckart equation. Interestingly, the TKO equation in the particle frame does not satisfy the constraints PT on $\delta T^{\mu\nu}$ but, instead, satisfies $\delta T^{\mu}_{\,\,\,\mu} = 0$, which we call PT’, together with PN. It should be noted that the new constraints, PT’, are identical to a matching condition postulated by Marle and Stewart (MS) in the derivation of the RHD from the Boltzmann equation with use of Grad’s moment theory [@mic003]. We call the constraints PT’, together with PN, the Grad-Marle-Stewart (GMS) constraints. In [@TKO], TKO proved that the simultaneous constraints PT and PN cannot be compatible with the underlying Boltzmann equation if the hydrodynamic equation describes the slow, long wavelength limit of the solutions of the Boltzmann equation. This is interesting in connection with problem (b), i.e., the fact that the solutions of the Eckart equation around the thermal equilibrium are unstable [@hyd002], while the Landau theory is stable. An immediate question is whether the solutions of the new equations in the particle frame are stable around the thermal equilibrium. In fact, the hydrodynamic equations of MS and TKO in the particle frame are of different forms, although both satisfy the constraints PT’ and PN. In the present paper, we examine the stability problem for the new equations in the particle frame. Because second-order equations, such as the Israel-Stewart equations, are usually constructed in the particle frame, as an extension of the Eckart equation, finding a stable first-order equation in the particle frame is of fundamental significance. 
As the RG method has been employed to construct the slow dynamics of various systems through the explicit construction of the slow, stable manifold of the dynamics, we conjecture that the hydrodynamic equation obtained as the slow, long wavelength limit of the Boltzmann equation on the basis of the RG method will provide a description in which the thermal equilibrium state is stable. We demonstrate that this is indeed the case by performing a linear stability analysis using the EOS and the transport coefficients for a rarefied gas. By contrast, we find that the MS equation, like the Eckart equation, is unstable. Hence, for the first time, a stable RHDE is obtained in the particle frame. We believe that this will provide a sound starting point for the construction of the proper second-order equations. The energy-momentum tensor for our equation in the particle frame reads $$\begin{aligned} T^{\mu\nu} &=& \epsilon \, u^{\mu} \, u^{\nu} - p \, \Delta^{\mu\nu} + \lambda \, u^{\mu} \, \nabla^{\nu} T + \lambda \, u^{\nu} \, \nabla^{\mu} T\nonumber\\ & & {} + \zeta \, (3 \, u^{\mu} \, u^{\nu} - \Delta^{\mu\nu}) \, [ - ( 3 \, \gamma - 4)^{\mbox{-}2} \, \nabla \cdot u ]\nonumber\\ & & {} + \eta \, ( \nabla^{\mu} u^{\nu} + \nabla^{\nu} u^{\mu} - 2/3 \, \Delta^{\mu\nu} \, \nabla \cdot u ), \end{aligned}$$ while the particle current is given by $N^{\mu} = n \, u^{\mu}$, with $\Delta^{\mu\nu} \equiv g^{\mu\nu} - u^\mu \, u^\nu$ and $\nabla^{\mu} \equiv \Delta^{\mu\nu} \, \partial_{\nu}$. Here $T$, $\mu$, $\epsilon$, $p$, $n$ and $\gamma$ are the temperature, the chemical potential, the internal energy, the pressure, the particle density and the ratio of the specific heats, respectively, and $\zeta$, $\lambda$ and $\eta$ denote the bulk viscosity, the heat conductivity and the shear viscosity, respectively. The MS equations are obtained from the above equations through the replacements $-\zeta ( 3 \, \gamma - 4)^{\mbox{-}2} \, \nabla \cdot u \longrightarrow + \zeta ( 3 \, \gamma - 4)^{\mbox{-}1} \, \nabla \cdot u$ and $\lambda \nabla^{\mu} T \longrightarrow \lambda(\nabla^{\mu} T - T \, Du^{\mu})$, where $D \equiv u^{\nu} \, \partial_{\nu}$. One can easily check that both equations satisfy the GMS constraints. Nevertheless, we find the following differences between them: (A)  the thermal forces in the MS equations contain the time-like derivative of the flow velocity $Du^{\mu}$, while those in our equations involve only the space-like derivative $\nabla^{\mu}$, and (B) the sign of the thermodynamic force owing to the bulk viscosity in our equation is the same as that in the Landau equation and opposite that in the MS equation. We can trace the two characteristic features of our theory back to the simple ansatz that only the spatial inhomogeneity, over distances of the order of the mean free path, is the origin of the dissipation. It should be noted that the same ansatz for the non-relativistic case leads naturally to the Navier-Stokes equation, as shown in [@HK02], and hence our framework can be interpreted as the most natural covariantization of the non-relativistic case. The thermal equilibrium state is given by $u^{\mu}(x) = (1,\,0,\,0,\,0) \equiv u_0^{\mu}$, $T(x) = T_0$ and ${\mu}(x) = \mu_0$, with $T_0$ and $\mu_0$ being constant. This is a trivial solution to the equations. Let us investigate the linear stability of the equilibrium solution. 
Writing $T(x) = T_0 + \delta T(x)$, $\mu(x) = \mu_0 + \delta \mu(x)$ and $u^{\mu}(x) = u_0^{\mu} + \delta u^{\mu}(x)$, we examine the time evolution of the deviations in the linear approximation using the evolution equation given by $\partial_{\mu} T^{\mu\nu} =0$ and $\partial_{\mu} N^{\mu} = 0$. Here we note that the independent variables are the five quantities $\delta T(x)$, $\delta \mu(x)$ and $\delta u^i(x)$ ($i = 1,\,2,\,3$), because $\delta u^0(x) = 0$, due to the constraint $u_{\mu}(x) \, u^{\mu}(x) = 1$. In terms of the Fourier components $\tilde{\Phi}_\alpha(k) \equiv {}^\mathrm{t}(\delta \tilde{u}^1(k),\, \delta \tilde{u}^2(k),\, \delta \tilde{u}^3(k),\, \delta \tilde{T}(k),\, \delta \tilde{\mu}(k))$, defined through $\Phi_{\alpha}(x) = \int \!\! \frac{\mathrm{d}^4k}{(2\pi)^4} \, \tilde{\Phi}_{\alpha}(k) \, \mathrm{e}^{\mbox{-}i k \cdot x}$, the linearized hydrodynamic equation reduces to the algebraic equation $\sum_{\beta=1}^5 \, M_{\alpha\beta} \, \tilde{\Phi}_\beta = 0$, with $$\begin{aligned} \label{eq:2-003} M_{\alpha\beta} \equiv \left( \begin{array}{ccccc} \mathcal{L}_1 & 0 & 0 & 0 & 0\\ 0 & \mathcal{L}_1 & 0 & 0 & 0\\ 0 & 0 & \mathcal{L}_1 - \mathcal{L}_2 \, (k^3)^2 & i \, \mathcal{L}_3 \, k^3 & i \, \mathcal{L}_4 \, k^3\\ 0 & 0 & -i \, \mathcal{L}_5 \, k^3 & \mathcal{L}_6 & \mathcal{L}_7\\ 0 & 0 & -i \, \mathcal{L}_8 \, k^3 & \mathcal{L}_9 & \mathcal{L}_{10} \end{array} \right), \end{aligned}$$ where we have set $k^{\mu} = (k^0,\,0,\,0,\,k^3)$ without loss of generality. The first and second components of $\tilde{\Phi}_{\alpha}$ describe the transverse mode, while the third component the longitudinal one. Here $\mathcal{L}_{i=1 \sim 10}$ are given by $\mathcal{L}_1 \equiv (\epsilon + p) \, (-i \, k^0) + \eta \, |\Vec{k}|^2$, $\mathcal{L}_2 \equiv - \eta /3 - \zeta_P$, $\mathcal{L}_3 \equiv {\partial p}/{\partial T} - \lambda \, (-i \, k^0)$, $\mathcal{L}_4 \equiv {\partial p}/{\partial \mu}$, $\mathcal{L}_5 \equiv - (\epsilon + p) + 3 \, \zeta_P\, (-i \, k^0)$ $\mathcal{L}_6 \equiv {\partial \epsilon}/{\partial T} \, (-i \, k^0) + \lambda \, |\Vec{k}|^2$, $\mathcal{L}_7 \equiv {\partial \epsilon}/{\partial \mu} \, (-i \, k^0)$, $\mathcal{L}_8 \equiv - n$, $\mathcal{L}_9 \equiv {\partial n}/{\partial T} \, (-i \, k^0)$ and $\mathcal{L}_{10} \equiv {\partial n}/{\partial \mu} \, (- i \, k^0)$, with $\zeta_P \equiv \zeta \, (3 \, \gamma - 4)^{\mbox{-}2}$ being the effective bulk viscosity in the particle frame. In the above, the quantities, $\epsilon$, $p$, $n$, $\gamma$, $\zeta$, $\lambda$, $\eta$, ${\partial \epsilon}/{\partial T}$, ${\partial \epsilon}/{\partial \mu}$, ${\partial p}/{\partial T}$, ${\partial p}/{\partial \mu}$, ${\partial n}/{\partial T}$ and ${\partial n}/{\partial \mu}$ take their equilibrium values, with $T = T_0$ and $\mu = \mu_0$. The existence condition of a solution reads $\det M = 0$, which reduces to $$\begin{aligned} \label{eq:2-004} & & \mathcal{L}^2_1 \, \Big[ ( \mathcal{L}_1 - |\Vec{k}|^2 \, \mathcal{L}_2 ) \, ( \mathcal{L}_6 \, \mathcal{L}_{10} - \mathcal{L}_7 \, \mathcal{L}_9 ) - |\Vec{k}|^2 \, \mathcal{L}_5 \, ( \mathcal{L}_3 \, \mathcal{L}_{10}\nonumber\\ & & {}- \mathcal{L}_4 \, \mathcal{L}_9 ) - |\Vec{k}|^2 \, \mathcal{L}_8 \, ( \mathcal{L}_4 \, \mathcal{L}_6 - \mathcal{L}_3 \, \mathcal{L}_7 ) \Big] = 0. \end{aligned}$$ This equation gives the dispersion relation $k^0 = k^0(|\Vec{k}|)$ for the hydrodynamic modes, and the stability condition for the equilibrium state reads $\mathrm{Im}k^0 \le 0$, $\forall \, |\Vec{k}|$. 
We see the dispersion relation for the transverse mode is given by $\mathcal{L}_1 = 0$, whose solution is $k^0 = - i \, \eta\, |\Vec{k}|^2/(\epsilon + p)$. Thus, we find that the transverse mode is stable. Here we again stress that the equation we study does not contain a term proportional to $D u^{\mu}$ in the thermal force for the heat flow. What would happen if such a term were present in the thermal forces, as in the case of the MS and the Eckart theories? In this case, the corresponding equation becomes $\mathcal{L}_1 =(\epsilon + p) \, (-i \, k^0) - T \, \lambda \, (-i \, k^0)^2 + \eta \, |\Vec{k}|^2 = 0$, which possesses a root with a positive imaginary part, and hence an unstable transverse mode appears. We emphasize that this instability is inevitable if the heat flow term contains $D u^{\mu}$ [@biro07]. Next, we examine the dispersion relations of the longitudinal modes. We first consider the simple but interesting case in which the heat conductivity vanishes (i.e., $\lambda = 0$), but the bulk and the shear viscosities may be positive (i.e., $\zeta \neq 0$ and $\eta \neq 0$). This simple case is often studied in the literature. We subsequently carry out a full analysis in which all the transport coefficients, including $\lambda$, may be positive. In the simple case with $\lambda = 0$, the equation has the root $k^0 = 0$ and those satisfying $a_0 \, (-i \, k^0)^2 + b_0 \, (-i \, k^0) + c_0 = 0$, with $a_0 \equiv (\epsilon + p)\, \{ \epsilon \,,\, n \}$, $b_0 \equiv |\Vec{k}|^2 \, [({4\eta}/{3} + \zeta_P) \, \{\epsilon \,,\, n\} - 3 \,\zeta_P \, \{ p \,,\, n \}]$ and $c_0 \equiv |\Vec{k}|^2 \, [(\epsilon + p) \, \{ p \,,\, n \} + n \, \{\epsilon \,,\, p \}]$, where we have written the Jacobian as $\{ F \,,\, G \} \equiv \partial(F\,,\, G)/\partial(T\,,\, \mu)$. Now, a simple analysis of the algebraic equation $a_0 \, \omega^2 + b_0 \, \omega + c_0 = 0$ with $\omega = -i \, k^0$ shows that the necessary and sufficient condition for $\exists \, k^0$ with $\mathrm{Im}k^0 \le 0$ is that $b_0/a_0 \ge 0$ and $c_0/a_0 \ge 0$. Owing to the properties of the Jacobian and the thermodynamic relations, the last inequality generally holds, because the l.h.s can be rewritten as $c_0/a_0 = |\Vec{k}|^2 \, (\partial p/\partial \epsilon)_S = |\Vec{k}|^2 \, c_s^2$, with $c_s$ being the sound velocity. Then, the stability condition reduces to $b_0/a_0 \ge 0$, which is equivalent to $$\begin{aligned} \label{eq:2-008} {4\eta}/{3} + \zeta_P \, [ 1 - 3 \, (\partial p/{\partial \epsilon})_n ] \ge 0. \end{aligned}$$ This is a new condition that involves not only the EOS but also the bulk and shear viscosities. It can be shown analytically [@nex001] that this inequality is satisfied at least for a rarefied gas in the massless limit. To see this, first notice that $\epsilon = 3 \, p$ for a relativistic gas composed of massless particles. Then the inequality reduces to the trivial one $\eta \ge 0$, because the second term with a bracket on the l.h.s vanishes, provided that the effective bulk viscosity, $\zeta_P = \zeta \, (3 \, \gamma -4)^{-2}$, is finite in the massless limit. In fact, it can be shown that this is the case using the microscopic formula for $\zeta$ [@nex001], although $3 \, \gamma - 4 \rightarrow 0$ in the massless limit. We also remark that numerical calculations using the viscosities $\zeta$ and $\eta$ obtained from the Boltzmann equation reveal that the inequality (\[eq:2-008\]) is always satisfied, even for a rarefied gas of massive particles. 
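The transverse-mode argument above can be checked numerically in a few lines: writing the dispersion relation as a polynomial in $\omega = -i k^0$, stability requires every root to have a non-positive real part (since $\mathrm{Im}\,k^0 = \mathrm{Re}\,\omega$). The sketch below contrasts the first-order relation $\mathcal{L}_1 = 0$ with the modified relation that arises when the heat flow contains a $Du^{\mu}$ term; the fluid parameters are invented and serve only to exhibit the sign of the roots.

```python
# Sketch of the transverse-mode stability check; parameter values are invented.
import numpy as np

eps_plus_p, eta, lam, T, k2 = 1.0, 0.1, 0.1, 1.0, 1.0   # (epsilon+p), eta, lambda, T, |k|^2

# Our equation: (epsilon+p) omega + eta |k|^2 = 0  ->  a single purely damped root.
print("first-order root     :", np.roots([eps_plus_p, eta * k2]))

# With a T*lambda*Du^mu term in the heat flow (Eckart / Marle-Stewart form):
# -T lambda omega^2 + (epsilon+p) omega + eta |k|^2 = 0  ->  one root with Re omega > 0.
print("with Du^mu heat term :", np.roots([-T * lam, eps_plus_p, eta * k2]))
```

For any positive $\lambda$, $\eta$ and $\epsilon+p$ the quadratic has one positive real root, so the instability of the transverse mode in the Eckart-type theories is generic, as stated in the text.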
Instead of presenting the numerical results for this limiting case, we present the results for the general case, i.e., that in which $\lambda\not= 0$, $\zeta\not= 0$ and $\eta \not=0$, below. We now demonstrate that the thermal equilibrium state is stable with respect to the dynamics described by our equation, even when the heat conductivity, $\lambda $, is finite. The dispersion equation for the longitudinal modes is obtained from the roots of the cubic equation $a \, \omega^3 + b \, \omega^2 + c \, \omega + d = 0$, with $\omega =-i \, k^0$, where the coefficients are given by $a \equiv a_0 + |\Vec{k}|^2 \, 3 \, \zeta_P \, \lambda \, ({\partial n}/{\partial \mu})_T$, $b \equiv b_0 + n \, \lambda \, |\Vec{k}|^2 \, ({\partial \epsilon}/{\partial \mu})_T$, $c \equiv c_0 + |\Vec{k}|^4 \, ({4\eta}/{3} + \zeta_P) \, \lambda \, ({\partial n}/{\partial \mu})_T$ and $d \equiv |\Vec{k}|^4 \, n \, \lambda \, ({\partial p}/{\partial \mu})_T$. The condition ${\rm Im} k^0 \le 0$ implies that the above equation for $\omega$ has roots only in the left half plane or on the imaginary axis in the complex $\omega$ plane. An elementary analysis shows that this condition is given by $$\begin{aligned} \label{eq:022} a \ge 0, \, b \ge 0, \, d \ge 0 \,\, {\rm and} \,\, b \, c - a \, d \ge 0. \end{aligned}$$ Here, the equality holds in the case that the imaginary part of $k^0$ vanishes. Note that the above equalities imply that $c \ge 0$. Now we demonstrate that these inequalities are satisfied for rarefied systems. For a relativistic free gas, we have $n = (2\pi)^{\mbox{-}3} \, 4 \, \pi \, m^3 \, \mathrm{e}^{\frac{\mu}{T}} \, [ z^{\mbox{-}1} \, K_2(z) ]$, $\epsilon = m \, n \, [ K_3(z)/K_2(z) - z^{\mbox{-}1} ]$, $p = n \, T$ and $\gamma = 1 + [ z^2 + 3 \, \hat{h} - (\hat{h} - 1)^2 ]^{\mbox{-}1}$ with $z = m/T$ and $\hat{h} = (\epsilon + p) / n \, T$ being the reduced enthalpy. Here, $K_2(z)$ and $K_3(z)$ denote the second and third modified Bessel functions, respectively. It is seen that the positivity condition $d > 0$ holds from the formula $p = n \, T$ with $n \propto \mathrm{e}^{\frac{\mu}{T}}$, which implies that $(\partial p/\partial \mu)_T > 0$. It remains to demonstrate the rest of the inequalities, i.e., $a \ge 0$, $b \ge 0$ and $b \, c - a \, d \ge 0$, for which we need explicit forms of the transport coefficients as well. The transport coefficients $\zeta$, $\lambda$ and $\eta$ for a rarefied gas can be obtained from the collision term in the Boltzmann equation. The Galerkin approximation using the Ritz polynomial expansion [@mic001] with a constant cross section $\sigma$ in the collision integral gives $\zeta = \frac{1}{32 \pi} \frac{T}{\sigma} \mathrm{e}^{\mu/T} \big[z^2 K^2_2(z) [ (5 - 3 \gamma) \hat{h} - 3 \gamma ]^2\big]/ [2 K_2(2 z) + z K_3(2 z)]$, $\lambda = \frac{3}{32 \pi} \frac{1}{\sigma} \mathrm{e}^{\mu/T} \big[z^2 K^2_2(z) [ \gamma / (\gamma - 1)]^2\big] /[(z^2 + 2) K_2(2 z) + 5 z K_3(2 z)]$ and $\eta = \frac{15}{32 \pi} \frac{T}{\sigma} \mathrm{e}^{\mu/T} [z^2 K^2_2(z) \hat{h}^2]/ [(5 z^2 + 2) K_2(2 z) + (3 z^3 + 49 z) K_3(2 z)]$. Note that all the transport coefficients are proportional to the inverse of the cross section, $\sigma$. This implies that a strongly (weakly) interacting system has small (large) transport coefficients. The numerical results for $a$, $b$ and $b \, c - a \, d$ are displayed in Fig.[\[fig:002\]]{}, where the $z = m/T$ dependence is shown using $\sigma \, T^2 = 1$ for a wide range of values of the three momentum: $|\Vec{k}|/T=0.1$ - $10$. 
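The positivity check summarized in the figure can be reproduced schematically as follows: build $n$, $\epsilon$, $p$ from the Jüttner expressions quoted above, obtain $\zeta_P$, $\lambda$, $\eta$ from the quoted Galerkin formulas, take the $(T,\mu)$ derivatives by finite differences, and evaluate $a$, $b$, $d$ and $b\,c - a\,d$. In the sketch below the units ($\hbar = c = k_B = 1$), the choices $\mu = 0$, $\sigma T^2 = 1$ and $|\Vec{k}| = m$, and the finite-difference step are our own; it illustrates the kind of check described in the text rather than reproducing the figure.

```python
# Sketch of the positivity check of a, b, d and bc - ad for the rarefied gas;
# units, mu = 0 and |k| = m are assumptions of this sketch.
import numpy as np
from scipy.special import kn   # modified Bessel functions of the second kind

m, mu, sigma_T2 = 1.0, 0.0, 1.0                # mass, chemical potential, sigma * T^2

def thermo(T, mu=mu):
    z = m / T
    n = 4.0 * np.pi * m ** 3 / (2.0 * np.pi) ** 3 * np.exp(mu / T) * kn(2, z) / z
    eps = m * n * (kn(3, z) / kn(2, z) - 1.0 / z)
    return n, eps, n * T                        # n, epsilon, p

def transport(T, mu=mu):
    z = m / T
    sigma = sigma_T2 / T ** 2
    n, eps, p = thermo(T, mu)
    h = (eps + p) / (n * T)                     # reduced enthalpy
    gamma = 1.0 + 1.0 / (z ** 2 + 3.0 * h - (h - 1.0) ** 2)
    pref = np.exp(mu / T) * z ** 2 * kn(2, z) ** 2 / (32.0 * np.pi * sigma)
    zeta = pref * T * ((5.0 - 3.0 * gamma) * h - 3.0 * gamma) ** 2 / (2.0 * kn(2, 2 * z) + z * kn(3, 2 * z))
    lam = pref * 3.0 * (gamma / (gamma - 1.0)) ** 2 / ((z ** 2 + 2.0) * kn(2, 2 * z) + 5.0 * z * kn(3, 2 * z))
    eta = pref * T * 15.0 * h ** 2 / ((5.0 * z ** 2 + 2.0) * kn(2, 2 * z) + (3.0 * z ** 3 + 49.0 * z) * kn(3, 2 * z))
    return zeta / (3.0 * gamma - 4.0) ** 2, lam, eta        # zeta_P, lambda, eta

def jacobian(F, G, T, h=1e-5):
    """{F, G} = d(F,G)/d(T,mu) by central finite differences."""
    FT = (F(T + h, mu) - F(T - h, mu)) / (2 * h); Fm = (F(T, mu + h) - F(T, mu - h)) / (2 * h)
    GT = (G(T + h, mu) - G(T - h, mu)) / (2 * h); Gm = (G(T, mu + h) - G(T, mu - h)) / (2 * h)
    return FT * Gm - Fm * GT

def stability_coefficients(T, k):
    n, eps, p = thermo(T)
    zeta_P, lam, eta = transport(T)
    N = lambda T, mu: thermo(T, mu)[0]
    E = lambda T, mu: thermo(T, mu)[1]
    P = lambda T, mu: thermo(T, mu)[2]
    J_en, J_pn, J_ep = jacobian(E, N, T), jacobian(P, N, T), jacobian(E, P, T)
    dn_dmu, de_dmu, dp_dmu = n / T, eps / T, p / T          # n, eps, p all scale as exp(mu/T)
    a0 = (eps + p) * J_en
    b0 = k ** 2 * ((4 * eta / 3 + zeta_P) * J_en - 3 * zeta_P * J_pn)
    c0 = k ** 2 * ((eps + p) * J_pn + n * J_ep)
    a = a0 + k ** 2 * 3 * zeta_P * lam * dn_dmu
    b = b0 + n * lam * k ** 2 * de_dmu
    c = c0 + k ** 4 * (4 * eta / 3 + zeta_P) * lam * dn_dmu
    d = k ** 4 * n * lam * dp_dmu
    return a, b, d, b * c - a * d

for z in (0.5, 1.0, 5.0):
    a, b, d, bc_ad = stability_coefficients(T=m / z, k=1.0)
    print(f"z={z}: a={a:.3e}  b={b:.3e}  d={d:.3e}  bc-ad={bc_ad:.3e}")
```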
We have confirmed that the positivity of these quantities holds for a wide range of values of the cross section: $\sigma \, T^2 = 0.01$ - $10$. We point out that a rarefied gas is a system in which dissipative effects are most significant. Thus, we have demonstrated that the thermal equilibrium solution is stable within the description provided by our hydrodynamic equation in the particle frame. Obviously, a solution with flow is unstable in a viscous fluid, as it must relax to the equilibrium state. ![ The $z = m/T$ dependence of $a/( T^{9} \, \mathrm{e}^{3\mu/T})$, $b/( T^{10} \, \mathrm{e}^{3\mu/T})$ and $(b \, c - a \, d)/({8 \, a^2 \, T^3})$ for $|\Vec{k}|/T=0.1, \, 1, \, 10$. Some factors have been multiplied so that the variables become independent of $\mu$. It is seen that all these quantities are positive in all cases. []{data-label="fig:002"}](fig_a.eps){width="\linewidth"} ![ The $z = m/T$ dependence of $a/( T^{9} \, \mathrm{e}^{3\mu/T})$, $b/( T^{10} \, \mathrm{e}^{3\mu/T})$ and $(b \, c - a \, d)/({8 \, a^2 \, T^3})$ for $|\Vec{k}|/T=0.1, \, 1, \, 10$. Some factors have been multiplied so that the variables become independent of $\mu$. It is seen that all these quantities are positive in all cases. []{data-label="fig:002"}](fig_b.eps){width="\linewidth"} ![ The $z = m/T$ dependence of $a/( T^{9} \, \mathrm{e}^{3\mu/T})$, $b/( T^{10} \, \mathrm{e}^{3\mu/T})$ and $(b \, c - a \, d)/({8 \, a^2 \, T^3})$ for $|\Vec{k}|/T=0.1, \, 1, \, 10$. Some factors have been multiplied so that the variables become independent of $\mu$. It is seen that all these quantities are positive in all cases. []{data-label="fig:002"}](fig_est.eps){width="\linewidth"} In this Letter, we first pointed out that the constraint proposed by Eckart, $u_{\mu} \, u_{\nu} \, \delta T^{\mu\nu} = 0$ (PT), is incompatible with the underlying relativistic Boltzmann equation. This important point has been largely unnoticed. We then showed that the reduction of the Boltzmann equation employing the RG method leads to an RHDE in the particle frame, that satisfies the constraint $\delta T^{\mu}_{\,\,\,\mu} = 0$, while $u_{\mu} \, u_{\nu} \, T^{\mu\nu} = \epsilon - 3 \, \zeta_P \, \nabla \cdot u$, which includes a contribution from the flow as well as the internal energy. This equation might imply that the energy density of an expanding system extracted from the hydrodynamic analysis can be erroneous. We have demonstrated that the solution around the equilibrium state in the new equation is stable. This was done by carrying out a linear stability analysis using the EOS and the transport coefficients for a rarefied gas. We conclude that Eq. (1) represents the first viable possibility as a stable, first-order, particle-frame RHDE for a viscous fluid. This is significant because the Israel-Stewart causal equation is usually constructed in the particle frame with PT. A detailed presentation of this work and applications of the new equation studied here will be reported elsewhere. We are grateful to Dirk Rischke and Miklos Gyulassy for comments. T. K. is supported by a Grant-in-Aid for Scientific Research by Monbu-Kagakusyo of Japan (No. 17540250), by the 21st Century COE “Center for Diversity and Universality in Physics" of Kyoto University and by YIPQS at the Yukawa Institute for Theoretical Physics, Kyoto. [40]{} As a review article, see, for example, D. Balsara, [Astrophys. J. Suppl.]{} **132** (2001) 83. See review articles, P.Huovinen, in “Quark Gluon Plasma 3”, ed. 
R.C.Hwa and X.N.Wang, (World Scientific, Singapore), p.600; P.F.Kolb and U.W.Heinz, ibid , p.634. T. Hirano and K. Tsuda, [Phys. Rev. C]{} **66** (2002) 054905; D. Teaney, [Phys. Rev. C]{} **68** (2003) 034913. M.Gyulassy and L.McLerran, [Nucl.Phys.A]{} **750**(2005)30. A. Muronga, [Phys. Rev. C]{} **69** (2004) 034903; A. Muronga and D. H. Rischke, arXiv:nucl-th/0407114; A. Muronga, Phys. Rev.  C [**76**]{} (2007) 014910. A. K. Chaudhuri and U. W. Heinz, nucl-th/0504022; A. K. Chaudhuri, arXiv:nucl-th/0703027; arXiv:nucl-th/0703029; arXiv:0704.0134. R. Baier, P. Romatschke and U. A. Wiedemann, Phys. Rev.  C [**73**]{} (2006) 064903; Nucl. Phys.  A [**782**]{} (2007) 313; R. Baier and P. Romatschke, Eur. Phys. J.  C [**51**]{} (2007) 677; P. Romatschke, HBT radii,” Eur. Phys. J.  C [**52**]{} (2007) 203; P. Romatschke and U. Romatschke, arXiv:0706.1522. P. Van and T. S. Biro, arXiv:0704.2039, where the authors also derived hydrodynamic equation that involves only space-like derivatives in a totally different context. P. K. Kovtum, D. T. Son and A. O. Starinets, [Phys. Rev. Lett.]{} **94** (2005) 111601. C. Eckart, [Phys. Rev.]{} **58** (1940) 919. K. Tsumura, T. Kunihiro and K. Ohnishi, Phys. Lett. B **646** (2007) 134\[arXiv:hep-ph/0609056\]. L. D. Landau and E. M. Lifshitz, *Fluid Mechanics*, (Pergamon Press, London, 1959). Ch. G. van Weert, *Some Problems in Relativistic Hydrodynamics*, in Lecture Notes in Mathematics **1385**, p.290, ed. by A. Anile and Y. Choquet-Bruhat (Springer, 1987). N.G. van Kampen, [J. of Stat. Phys.]{} **46** (1987) 709. W. A. Hiscock, L. Lindblom, [Phys. Rev. D]{} **31**(1985)725. H. Grad, [Comm. Pure Appl. Math.]{} **2** (1949) 331. S. R. de Groot, W. A. van Leeuwen and Ch. G. van Weert, *Relativistic Kinetic Theory*, (Elsevier North-Holland, 1980). W. Israel and J. M. Stewart, [Ann. Phys.]{} **118** (1979) 341. L. Y. Chen, N. Goldenfeld and Y. Oono, [Phys. Rev. Lett.]{} **73** (1994) 1311; [Phys. Rev. E]{} **54** (1996) 376. T. Kunihiro, [Prog. Theor. Phys.]{} **94** (1995) 503; **95**(1996) 835 (E); [Jpn. J. Ind. Appl. Math.]{} **14** (1997) 51; [Prog. Theor. Phys.]{} **97** (1997) 179; S.-I. Ei, K. Fujii, and T. Kunihiro, [Ann. Phys.]{} **280** (2000) 236. Y. Hatta and T. Kunihiro, [Ann. Phys.]{} **298** (2002) 24; T. Kunihiro and K. Tsumura, J. Phys. A **39** (2006) 8089. J. M. Stewart, *Non-Equilibrium Relativistic Kinetic Theory* (Lecture Notes in Physics No. 10; Springer, Berlin, 1971). K. Tsumura, T. Kunihiro and K. Ohnishi, in preparation.
---
abstract: 'Nuclear spirals naturally form as a gas response to non-axisymmetry in the galactic potential, even if the degree of this asymmetry is very small. Linear wave theory describes weak nuclear spirals well, but spirals induced by stronger asymmetries in the potential are clearly beyond the linear regime. Hydrodynamical models indicate spiral shocks in this latter case that, depending on how the spiral intersects the $x_2$ orbits, either get damped, leading to the formation of a nuclear ring, or get strengthened, and propagate towards the galaxy centre. A central massive black hole of sufficient mass can allow the spiral shocks to extend all the way to its immediate vicinity, and to generate gas inflow of up to 0.03 M$_\odot$ yr$^{-1}$, which coincides with the accretion rates needed to power luminous local Active Galactic Nuclei.'
author:
- |
    Witold Maciejewski\
    Obserwatorium Astronomiczne Uniwersytetu Jagiellońskiego, ul. Orla 171, 30-244 Kraków, Poland
title: 'Nuclear spirals in galaxies: gas response to asymmetric potential. II. Hydrodynamical models'
---

hydrodynamics — shock waves — galaxies: kinematics and dynamics — galaxies: ISM — galaxies: spiral — galaxies: structure — galaxies: nuclei — ISM: kinematics and dynamics

Introduction
============

It has been recently proposed that nuclear spirals in galaxies may be related to the fueling of Seyfert activity (Regan & Mulchaey 1999). This was a straightforward conclusion when a search for the fueling mechanism using the highest-resolution optical observations of a sample of Seyfert nuclei with the Hubble Space Telescope returned nuclear spirals in 6 out of 12 galaxies. Nuclear spirals turned out to be much more frequent in Seyfert galaxies than gas inflow related to nuclear bars, the commonly proposed feeding mechanism. Observations of a larger sample that followed (Martini & Pogge 1999) found nuclear spirals in 20 out of 24 Seyfert 2 galaxies, and in a later sample of 46 Seyfert 1 and 2 galaxies (Pogge & Martini 2002) almost all were classified as having nuclear spirals. The authors of these latter surveys also showed that nuclear spirals are not self-gravitating, and that they are likely to be shocks in nuclear gas discs. The most recent study (Martini et al. 2003a,b) involves a sample of 64 Seyfert galaxies, as well as a control sample, which together are large enough that trends can be noticed. In particular, the authors of this study point out that all grand-design nuclear spirals occur in barred galaxies, but not all barred galaxies develop nuclear spirals — some of them have nuclear rings. Tightly-wound nuclear spirals tend to avoid barred galaxies instead. 
  ----  ------------  -------------  ----------------  ------------  ---------------  ------------
  \#    model name    MBH mass       sound speed       type of       radial extent    inner grid
                      (M$_\odot$)    in gas            asymmetry     of the grid      boundary
  1     0W20o         0              20 km s$^{-1}$    weak oval     0.02 - 16 kpc    outflow
  2     0W05o         0              5 km s$^{-1}$     weak oval     0.02 - 16 kpc    outflow
  3     8W20o         $10^8$         20 km s$^{-1}$    weak oval     0.02 - 16 kpc    outflow
  4     8W20r         $10^8$         20 km s$^{-1}$    weak oval     0.02 - 16 kpc    reflection
  5     8W20c         $10^8$         20 km s$^{-1}$    weak oval     0.005 - 4 kpc    reflection
  6     8W05o         $10^8$         5 km s$^{-1}$     weak oval     0.02 - 16 kpc    outflow
  7     0S20r         0              20 km s$^{-1}$    strong bar    0.02 - 16 kpc    reflection
  8     0S05r         0              5 km s$^{-1}$     strong bar    0.02 - 16 kpc    reflection
  9     8S20r         $10^8$         20 km s$^{-1}$    strong bar    0.02 - 16 kpc    reflection
  10    0D20o         0              20 km s$^{-1}$    double bar    0.02 - 16 kpc    outflow
  ----  ------------  -------------  ----------------  ------------  ---------------  ------------

Why do some barred galaxies develop nuclear spirals, while others develop nuclear rings? Why do tightly wound nuclear spirals prefer galaxies that do not have a bar? Can any type of nuclear spiral generate inflow sufficient to feed local Seyfert nuclei? Seyfert galaxies require mass accretion rates of $\sim 0.01$ M$_\odot$/yr (e.g. Peterson 1997). Here I attempt to answer these questions under the assumption that nuclear spirals are density waves generated in gas by a rotating potential, as described in the accompanying Paper I (Maciejewski 2004). Implications of the linear theory (originally proposed by Goldreich & Tremaine 1979) derived in Paper I will serve here as a guideline, but by themselves they cannot provide answers to the questions above, since the amplitude of a strong bar far exceeds the range of validity of the linear theory. In the linear approximation the arm/interarm density ratio scales with the perturbation, as long as the perturbation is small. To estimate how large this ratio is for a given asymmetry in the potential, one can search for nonlinear solutions (e.g. Yuan & Cheng 1989, 1991) or turn directly to hydrodynamical modeling. The second approach has the advantage that a whole range of non-axisymmetries, from small ones to ones of the order of the axisymmetric component, can be studied with the same tool. I take this approach, and in this paper I construct hydrodynamical models of gas flow in the nuclear regions of weakly and strongly non-axisymmetric potentials.

To study structures in the gaseous nuclear discs of galaxies with hydrodynamical models, exceptionally high resolution is required, which has been achieved only recently. Athanassoula’s models of gas flow in barred galaxies (1992b) show curling of the inner parts of the straight principal shock, but the resolution of these models prevents us from following this feature further inwards. Nuclear spirals generated by the bar inside the straight principal shocks, and winding by more than a $\pi$-angle, were first noticed in hydrodynamical simulations by Maciejewski (1998, 2000), and Englmaier & Shlosman (2000). The latter work interpreted these features in terms of spiral density waves, weak enough so that the linear theory is applicable. On the other hand, Maciejewski, Teuben, Sparke & Stone (2002, hereafter MTSS02) point out that nuclear spirals in their models take the form of a shock, which is beyond the scope of the linear treatment. In this paper I construct hydrodynamical models of nuclear spirals for realistic gravitational potentials represented by rotation curves characterized in Paper I. 
In particular, I am interested in how the gas flow in the nucleus is modified by the presence of a central massive black hole (MBH) or a density cusp. As pointed out in Paper I, the central MBH significantly changes the nuclear gravitational potential, and therefore it should be able to regulate gas flow around itself. Here, I investigate how its presence modifies gas inflow onto the centre. In Section 2, I list the models to be analyzed in this paper, and I describe the code with which they were built. In order to link the models with the linear theory, in Section 3 I analyze models of gas flow in a weak oval, where they should not depart significantly from the linear prediction. In Section 4, I apply the same analysis to gas flow in strong bars. Preliminary results about nuclear spirals in double bars are listed in Section 5. The code and setup of the models ================================ The models were calculated using the CMHOG hydrodynamical code (Piner et al. 1995, MTSS02), which solves the single-fluid equations in their Eulerian form on a fixed polar grid. The gas is isothermal with the sound speed $c=20$ km/s (hereafter termed hot gas), the value suitable for centres of galaxies (Englmaier & Gerhard 1997), but runs with $c=5$ km/s (hereafter termed cold gas) have also been done for comparison. The gas is not self-gravitating. All models are built on the grid covering half of the plane, and point symmetry is assumed. The grid has 174 cells spaced logarithmically in the radial direction, and 80 cells covering a 180 angle. In order to discuss the physical processes operating in relation to nuclear spirals, I chose 10 hydrodynamical models showing typical features for a given gravitational potential and gas characteristics. The models, with their parameters, are listed in Table 1. Models starting with ’0’ are built for the potential characterized by rotation curve $A$ from Paper I (linear inner rise), and models starting with ’8’ are for the potential characterized by rotation curve $B$ (a $10^8$   MBH added in the centre). The potential in all models includes an $n=2$ Ferrers bar. The bar is either identical to the primary bar in models of MTSS02 (termed ’strong bar’ in Table 1, model names with ’S’), or its quadrupole moment is 10 times smaller than in MTSS02 models, and the axial ratio of the bar is decreased to 1.5 from the original 2.5 for strong bar (hereafter I call it ’weak oval’, model names with ’W’). The potential with a double bar is identical to that in MTSS02. In the linear approach, the potentials of both the strong bar and the weak oval have the outer Inner Lindblad Resonance (ILR) at 2.3 kpc, and, in the absence of a central MBH (models starting with ’0’), they also have an inner ILR (iILR) at 0.13 kpc. Models with a central MBH (models starting with ’8’) have no iILR. For each model, the initial gas density is constant throughout the grid (10  pc$^{-2}$), and the initial kinematics is gas motion on circular orbits with rotation velocity derived from the axisymmetrized potential, where the bar mass was incorporated into the bulge component. Then, through the first 0.1 Gyr of each run, the bar or oval is extracted continuously from the bulge, and its strength remains unchanged afterwards till the end of the simulation. The method to introduce the secondary bar is discussed in Section 5 devoted to models of nested bars. 
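For reference, the grid just described is straightforward to reproduce. The minimal sketch below (in Python/NumPy; the cell counts and radial extent are those of the standard grid in Table 1, while the helper name `log_polar_grid` is only illustrative) builds the cell edges and reports the fractional radial cell size, which sets the finest radial structure the models can resolve.

```python
import numpy as np

def log_polar_grid(r_in_kpc=0.02, r_out_kpc=16.0, n_r=174, n_phi=80):
    """Cell edges of a logarithmically spaced polar grid covering half the plane."""
    r_edges = np.geomspace(r_in_kpc, r_out_kpc, n_r + 1)   # logarithmic radial spacing
    phi_edges = np.linspace(0.0, np.pi, n_phi + 1)          # 180 deg; point symmetry assumed
    return r_edges, phi_edges

r_edges, phi_edges = log_polar_grid()
# On a logarithmic grid the fractional cell size dr/r is the same in every cell:
frac = np.log(r_edges[-1] / r_edges[0]) / (len(r_edges) - 1)
print(f"dr/r per cell = {frac:.3f}")
# A radial arm separation of ~0.13*R (the cold-gas spirals discussed later)
# therefore spans only a few cells, which is why such waves get damped numerically:
print(f"0.13*R corresponds to {0.13 / frac:.1f} cells")
```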
Polar grid has singularity at $r=0$, and models built on it cannot include the galactic centre, but they stop at a certain minimal radius: the inner grid boundary. The calculated gas flow in the innermost parts of the galaxy may depend on the boundary conditions adopted there. Usually outflow conditions are imposed with no inflow onto the grid allowed (e.g. Piner et al. 1995, Englmaier & Shlosman 2000, MTSS02). However, this boundary condition effectively means creation of a sink for gas, which may generate unphysical inflow, and unclear rules for wave reflection or absorption. Therefore in this paper I introduced also a reflection condition at the inner grid boundary. The benefit of this boundary condition is that the sink term is removed from the problem, because no gas leaves the grid. Consequently, more conservative estimates for gas inflow can be given (Section 3.4). With this condition, waves in gas are fully reflected at the inner boundary. In Section 3.3 I show that the reflected wave is unlikely to play important role in gas dynamics. Gas flow in a weak oval ======================= Models 1–6 are built for the potential with a weak oval, whose departure from axisymmetry is much smaller than that for a strong bar studied by MTSS02. The $Q_T$ parameter, defined as the maximum ratio of tangential to radial force (Combes & Sanders 1981), is 0.21 for models with a strong bar, but only 0.01 for models with a weak oval. In a real galaxy, such asymmetry will most likely remain undetected, leading to unbarred classification even in the recent detailed infrared studies (e.g. Laurikainen, Salo & Buta 2004). Global morphology and kinematics of a nuclear spiral in hot gas --------------------------------------------------------------- A snapshot of gas density, representative for a nuclear spiral generated by a weak oval in hot gaseous disc is shown in Fig.1. It was taken from Model 8W20, 0.4 Gyr into the run, once the flow has stabilized. Model 0W20 shows the same morphology, except for its innermost parts, because gravitational potentials in these two models are almost identical at radii above a few hundred parsecs. The nuclear spiral is clearly visible in the top panel of Fig.1, although the straight principal shocks disappeared completely once the bar was replaced by a weak oval. The density contrast between the arm and the inter-arm region is about 2 (Fig.1, middle panel), therefore the spiral should be clearly visible in the color maps. On the other hand, this contrast is small enough, so that the perturbation of the velocity field is small. Therefore there are no shocks forming in the gas, and gas flow in discs with this type of the nuclear spiral is almost circular. ![[**Top:**]{} Snapshot of gas density in model 8W20r, in which a weak oval asymmetry is present in the potential characterized by rotation curve $B$ in Paper I. Gas density is shown at time 0.4 Gyr, after the morphology of the flow has stabilized. Darker color indicates larger densities. The solid ellipse outlines the oval, and the dashed circles mark the oILR at 2.3 kpc and the corotation at 5.6 kpc. The dotted circle marks the position of the 4:1 resonance at 3.9 kpc. Units on axes are in kpc. [**Middle:**]{} Radial density profile (dotted line), and azimuthal velocity (solid line), plotted against rotation curve $B$ from Paper I (dashed line) as a function of radius along the vertical line in the top panel, in models 8W20, at time 0.4 Gyr. 
To show the structure of the innermost regions, at radii smaller than 0.3 kpc data from model 8W20c are used instead of 8W20r. The velocity units are in km s$^{-1}$, the density units are arbitrary. [**Bottom:**]{} Density variation in model 8W20r along two circles: at 1.5 kpc (dashed line) and at 3 kpc (solid line). Because of the assumed bisymmetry of the models, variations over only 180$^{\circ}$ are shown. The density units are arbitrary, the azimuthal angle is in degrees.[]{data-label="f3"}](figII1a.ps){width="0.95\linewidth"} ![](figII1b.eps){width="0.72\linewidth"} ![](figII1c.eps){width="0.72\linewidth"}

Unlike nuclear spirals in strong bars, which rapidly unwind outwards in order to match the principal shock in the bar, nuclear spirals in weak ovals follow the linear mode longer, and wind up to 3 times around the centre (a 6$\pi$ angle). After the flow has stabilized, the bisymmetric nuclear spiral seen in Fig.1 remains unchanged in the frame rotating with the bar: it does not rotate, nor does it wind up or unwind around the centre. 
Between the inner grid boundary, and the outer ILR (oILR) it winds around the centre by about a 5$\pi$ angle. ![image](figII2.ps){width="1.05\linewidth"} The $m=4$ spiral outside the oILR --------------------------------- In the linear approximation, the nuclear spiral should not extend outwards beyond the outer ILR (Paper I, Section 3.1). In fact, from the top panel of Fig.1 one can see that a clear double-arm spiral does not extend out beyond the oILR at 2.3 kpc (dashed circle). However, there [*is*]{} spiral structure detectable in gas morphology out to about 3.5 kpc. The density contrast is much weaker there, and a closer inspection indicates that a four-arm spiral is present outside the two-arm one. Linear theory (see Paper I, Section 3) predicts that such a spiral can be generated by an $m=4$ mode in the potential, and that it should extend from the galactic centre out to the radius where $\Omega - \kappa/4 = \Omega_B$. This is the $(4:-1)$ resonance in notation of Paper I, hereafter called for simplicity 4:1. For our potential it is located at 3.9 kpc (dotted circle in the top panel of Fig.1), which is consistent with the observed extent of the four-arm spiral. Ferrers’ bar can be decomposed into even-$m$ components, among them $m=4$, and this component is responsible for a four-arm spiral outside the proper nuclear spiral. Note that two arms of this four-arm spiral are just continuations of the two-arm nuclear spiral from smaller galactic radii, albeit with much lower density contrast. In addition, two other arms start at the oILR, at position angles $\sim 90$and $\sim 270$, and extend outwards. The transition from a two-arm spiral inside the oILR to a four-arm one outside it is illustrated in the bottom panel of Fig.1, which shows the density profiles as a function of angle along the rim of two circles: one of radius 1.5 kpc, which is located inside the oILR, and the other one of radius 3 kpc, placed between the oILR and the 4:1 resonance. Along the first circle, there is only one clear density maximum per $\pi$ angle, which indicates a two-arm spiral. The density ratio is about 1.8. Along the second circle, two weak but still clear density maxima are seen in a $\pi$ angle. This is characteristic for a four-arm spiral. The density ratio between maxima and minima along this circle ranges between 1.1 and 1.2, depending on which maximum/minimum values are taken. Thus one may expect weak four-arm spiral structures outside grand-design two-arm nuclear spirals. The smooth transition from the nuclear spiral to the four-arm spiral in the hydrodynamical models can be also seen in the radial changes of the pitch angle (Fig.2b, open circles). It closely follows the linear prediction for a two-arm spiral inside the oILR (Fig.2b, solid line), but its value remains almost unchanged also at larger radii, where the nuclear spiral should unwind and disappear. There, spiral arms of the nuclear spiral continue outwards, assuming outside the oILR pitch angle predicted by the linear theory for a four-arm spiral (dotted line in Fig.2b). Thus continuation of the nuclear spiral to larger radii may hide the presence of the oILR in the galaxy. Morphology of the innermost regions ----------------------------------- The linear theory says that if inside the oILR another ILR is present, the nuclear spiral should not propagate inwards of this ILR (the iILR). 
Here I follow the hydrodynamical realization of this rule, using models of gas flow in gravitational potentials with the iILR (models 0W20), and without it (models 8W20). Note that the inclusion of a $10^8$  MBH, which is the sole difference between the gravitational potentials in these models, is sufficient to remove the iILR in a galaxy with a constant-density core. In models 0W20 (rotation curve $A$), the nuclear spiral unwinds rapidly when it approaches the iILR from the outside, with its pitch angle well following the linear prediction (Fig.2b), and it disappears just outside the iILR. It remains strong all the way until reaching the iILR, which may be the reason why the leading spiral predicted by linear theory to form at the iILR (Paper I) is absent. On the other hand, the 8W20 models (rotation curve $B$) have only one ILR, and a clear nuclear spiral extends there all the way to the inner boundary (Fig.2c). ![Density distribution in the innermost parts of models 8W20c ([*left*]{}), and 8W20o ([*right*]{}) at the end of the run at 0.9 Gyr.[]{data-label="f5"}](figII3a.ps "fig:"){width="0.48\linewidth"} ![Density distribution in the innermost parts of models 8W20c ([*left*]{}), and 8W20o ([*right*]{}) at the end of the run at 0.9 Gyr.[]{data-label="f5"}](figII3b.ps "fig:"){width="0.48\linewidth"} In the model 8W20r, the wave generating this nuclear spiral reflects from the inner boundary and interferes with the incoming wave, which may perturb the solution. However, the original wave moving inwards is focused towards the centre, while the reflected wave diverges away from the centre, and it is quickly overcome by the incoming wave. Thus the perturbation caused by reflection does not propagate beyond the innermost 10-20 cells, which on the standard grid corresponds to the range of radii 30 – 45 pc. Moreover, this boundary condition has parallels to the actual physical situation, when the wave propagating inwards encounters the accretion disc of the MBH with density likely higher, which causes reflection, and any inflow in the spiral accumulates in the accretion disc. Nevertheless the steady-state solution for model 8W20r does not reflect the winding of the spiral in the innermost regions predicted by the linear theory (compare the central-bottom panel of Fig.2 in Paper I to Fig.2c here). This is clearly seen as a discrepancy between the value of the pitch angle predicted by the linear theory, and measured in the model (Fig.2d, open circles) at radii below 200 pc. After the flow stabilizes, the maximal pitch angle (36) is clearly larger than the linear prediction (23), and the maximum occurs at a smaller radius. However, when the model is examined before the flow settles down (here at 0.25 Gyr), the measured pitch angle is much closer to the linear prediction (Fig.2d, triangles). To investigate whether this effect is numerical (vicinity of the inner grid boundary), I built two more versions of model 8W20: one with the outflow inner boundary condition (8W20o), and one still with the reflective inner boundary, but extending four times further towards the galaxy centre, down to the radius of 5 pc (8W20c). In this last version, a nuclear spiral winding up towards the centre develops during the early stages of the simulation. Its shape (Fig.2e) is similar to that in the central-bottom panel of Fig.2 in Paper I, and its pitch angle closely follows the linear theory (Fig.2f, triangles). 
However, when the flow stabilizes, the innermost part of the nuclear spiral unwinds, with its pitch angle growing and reaching the same values as in model 8W20r (Fig.2f, open circles). This result remains unchanged when outflow through the inner boundary is allowed (model 8W20o). In fact, once the flow stabilizes, the morphology of the nuclear spiral in all 3 versions of model 8W20 gets identical, and it remains so till the end of each run at 0.9 Gyr (Fig.3). Regardless of whether gas accumulates in the innermost cells of the grid in models with reflective inner boundary condition (8W20c, 8W20r) or whether it is removed from the grid when outflow is allowed (8W20o), the nuclear spiral reaches the same steady state. Thus I conclude that the unwinding of the innermost part of the nuclear spiral is not an effect of proximity of the inner grid boundary, but rather has hydrodynamical origin. However, it is unlikely to be an effect of wave reflection in the galaxy centre, since it also appears in the model version with the outflow boundary condition. Gas inflow triggered by a nuclear spiral in a weak oval ------------------------------------------------------- The rate of inflow can be deduced from how the mass contained within various radii changes with time. For model 8W20r, Fig.4 shows gas mass within a number of radii as a function of time. Only mass enclosed in the innermost circle of the radius of 40 pc changes significantly. Throughout the run it increases from 0.37$\times 10^5$ to 2.14$\times 10^5$, but most of the inflow occurs between 0.25 Gyr and 0.4 Gyr, when over 1.5$\times 10^5$ is dumped into radii below 40 pc. Thus average inflow during this period of 150 Myr is about $10^{-3}$. This inflow occurs exactly when the transition from a tightly wound spiral (Fig.2e) to the steady-state solution (Fig.2c) occurs. Thus the formation of a nuclear spiral results in a single event of gas inflow into innermost parsecs of the galaxy, which dumps there about $10^5$of gas. After this single dump, the mass inflow is negligible, consistent with zero. Such a one-time dump happens only in the models with the nuclear MBH (8W20). In models without it (0W20) the mass accumulated in the innermost regions does not change significantly with time. ![Mass accumulated within various radii (indicated in the plot) as a function of time for model 8W20r.[]{data-label="f6"}](figII4.eps){width="0.9\linewidth"} Nuclear spirals may be generated not only by weak ovals, but also by transient phenomena like a passing globular cluster or a giant molecular cloud. Such nuclear spirals would then also be transient and reoccurent. Model 8W20r indicates that every time the spiral reappears, it dumps some $10^5$ of gas onto the innermost parsecs of the galaxy, which may provide a way to sustain a weak nuclear activity. However, for such reoccurent dumping to take place, material has to be replenished into the inner 100pc, because formation of the spiral leads to mass increase within this radius of a few percent only, while 60% of gas within 100 pc radius ends up within 40 pc radius after formation of the spiral. ![Snapshot of gas density in model 8W05o, in the same galactic potential as in model 8W20r presented in Fig.1, and at at the same evolutionary time. The only difference is the assumed sound speed in the isothermal gas, which is 5  here (cold gas). 
Reference lines and the scale are the same as in the top panel of Fig.1.[]{data-label="f7"}](figII5.ps){width="0.95\linewidth"} Note that by imposing the reflection condition on the inner boundary of the polar grid used in these simulations I get the lower limit for the inflow. If free outflow through the inner boundary is imposed instead, (model 8W20o, as in Piner et al. 1995), a flux of about $2 \times 10^{-3}$ is crossing this boundary continuously after the flow stabilizes. Although this can be interpreted as an inflow, it is likely caused by the assumption here that gas can leave, but not re-enter the grid, which creates a sink term in the problem. The mass enclosed within larger radii does not change significantly under the action of the nuclear spiral. There is some inflow at radii between 0.5 and 1 kpc, since the mass enclosed within these radii increases during the 0.9-Gyr run by 56% and 42% of its initial value, respectively. The mass accumulated there increases gradually throughout the run, which translates into an average inflow up to 0.01 . However, this mass does not get much further inwards than 500 pc from the galaxy centre, since the mass accumulated within the radius of 250 pc actually slightly decreases throughout the run. Thus no mass transport from kpc- to pc-scale is expected by nuclear spirals of this kind. Nuclear spirals in cold gas --------------------------- The linear theory predicts that the pitch angle $i$ of the spiral (see eq.24 in Paper I) is proportional to the speed of sound in the isothermal gas in which the density wave propagates. Thus the spiral in models 0W05 and 8W05, that involve cold gas with sound speed of 5 , is much more tightly wound. Only model 8W05 is shown (Fig.5), since the other one looks almost identical. From the linear theory, the pitch angle in cold-gas models is expected to be four times smaller than in the hot-gas models. For a tightly-wound spiral, at any radius $R$, the radial distance $dR$ between the adjacent density maxima is $\pi R \tan i$. In cold-gas models considered here it can be be as small as $0.13R$. This corresponds to the radial separation of only 4 cells on our grid, which is insufficient to resolve the waves and results in numerical damping. Such an effect is seen in Fig.5, where stronger spiral is only present close to the oILR at 2.3 kpc, where its pitch angle is expectedly larger. There, the arm-interarm density ratio approaches 3. This amplitude gets damped quickly inwards, although the spiral can be traced down to the radius below 1 kpc, where it winds almost by a $6 \pi$ angle. As in the hot-gas case, a four-arm spiral is seen outside the oILR. According to the linear theory (Paper I), a leading nuclear spiral should develop in the vicinity of the iILR in model 0W05o. In cold gas its propagation should not be affected by the interference with the trailing spiral originating at the oILR, as it happens in hot-gas models. Interestingly, such a leading spiral does not form – I discuss reasons for it in Section 6.3. Gas inflow in cold-gas models is negligibly small: in both 0W05 and 8W05 it never exceeds $10^{-5}$even when outflow from the grid is allowed through the inner boundary. Nuclear gas flow generated by a strong bar ========================================== The perturbation in the stellar gravitational potential coming from a typical galactic bar is too strong to be described in linear terms. Thus gas flows generated by such perturbation cannot be well described by the linear wave theory. 
One can get a better insight from the orbital theory of bars (Athanassoula 1992a, see also reviews by Sellwood & Wilkinson 1993 and by Maciejewski 2003). Hydrodynamical models indicate that in the main body of the bar, two symmetric shocks (the principal shocks) form on the leading sides of the bar. If the bar is strong and if it extends to its own corotation, these shocks are straight. Otherwise the shocks curl and start resembling a trailing spiral (Athanassoula 1992b). If there is an ILR in the galaxy, the principal shocks do not point at the galactic centre, but they are offset from it. Gas and dust get compressed in the principal shocks, which are seen as dust lanes in the optical images of barred galaxies (see e.g. NGC 1097, NGC 1300, NGC 4303, and NGC 6951 in the Hubble Atlas of Galaxies, Sandage 1961). Inside the inner ends of the dust lanes, there are nuclear rings (e.g. NGC 4314, Benedict et al. 2002) or nuclear spirals (e.g. NGC 1530, Regan, Vogel & Teuben 1997, Pogge & Martini 2002). ![image](figII6a.ps){width="0.48\linewidth"} ![image](figII6b.ps){width="0.48\linewidth"} Until recently, limited computing resources prevented us from studying these nuclear structures inside the principal dust lanes, because the resolution of the models was not high enough. In the models by Athanassoula (1992b), the straight principal shocks curl inwards in their innermost parts, but they cannot be followed to much smaller radii because of the limited resolution. Piner et al. (1995) employed polar grid in their hydrodynamical code (CMHOG, used also to build models presented in this paper). Resolution of this grid increases inwards, which allowed them to clearly resolve nuclear rings. They explained formation of these rings in terms of the orbital structure in the bars. This explanation is summarized in Section 4.1 below, where I compare the mechanisms leading to the formation of nuclear rings and spirals. The CMHOG code allowed also to resolve for the first time nuclear spirals inside the straight principal shocks in the bar (Maciejewski 1998, 2000). Similar nuclear spirals were seen by Englmaier & Shlosman (2000) in their models with a different code, but also on the polar grid. In this section I analyze in detail gas flow in the central regions of a strongly barred galaxy. How to generate nuclear ring, and how nuclear spiral ---------------------------------------------------- It has been commonly accepted that the nuclear ring in barred galaxies forms when the shocked gas leaves the $x_1$ orbits and settles on the lower-energy $x_2$ ones (Athanassoula 1992a,b, Piner et al. 1995). The $x_2$ orbits are almost round, and they do not intersect one another, making therefore a perfect location for gas to accumulate. However, strong bar, like any asymmetry in the nuclear potential, should also generate nuclear spirals inside its ILR. In the orbital theory, the ILR is defined as the outer limit to which the $x_2$ orbits can extend. Thus the nuclear spiral should be cospatial with the nuclear ring. Yet some barred galaxies show clear nuclear rings, while other display nuclear spirals without rings. Still some show nuclear spirals inside nuclear rings. What is the reason for this variety? Formation of the nuclear ring is explained by the orbital theory, and the shapes of orbits that underlie this ring solely depend on the properties of the gravitational potential. On the other hand, the nuclear spiral has a wave-like nature, and its properties are determined by the dispersion relation (eq. 
16 in Paper I). This relation depends not only on the gravitational potential, but also on the gas characteristics. If gas is assumed to be isothermal, it depends on the sound speed in gas. Thus two mechanisms: orbital and wave-like, compete in shaping the dynamics of gas flow in central parts of galaxies. Using two models of gas flow in a strongly barred galaxy, I examine the outcome of this competition when the gas is hot (model 0S20r), and when it is cold (model 0S05r). Aside for two different sound speeds in gas, all other parameters in these models are identical. On the early stages of evolution, right after the bar has reached its full strength, the shocked gas tends to settle on the $x_2$ orbits marked in Fig.6 with dashed lines. At the same time, the inner parts of the principal shock in the bar curl inwards, and tend to follow the linear dispersion relation inside the ILR. Thus in general, the shocked gas tends to follow a path different from the direction in which the shock propagates, because the pitch angle of the the shock is different from the pitch angle of the orbit at a given location (Fig.6, inserts). Moreover, the linear formula for the pitch angle of the spiral wave (eq. 24 in Paper I) indicates that this angle is larger for larger sound speeds. In the hot gas (model 0S20r, Fig.6, left panel), the sound speed is high, thus the pitch angle of the spiral is large. As shown in the insert, this pitch angle is [*always larger*]{} than the pitch angle of the $x_2$ orbits which the shock crosses. The shock propagates inwards crossing each $x_2$ orbit only once, and then moving to smaller orbits. Thus the post-shock gas, which tends to settle on these orbits, always moves away from the shock front. In other words, the spiral shock propagates [*out of*]{} the post-shock gas condensation, into regions where the density of gas is much lower. This is clearly seen in the left panel of Fig.6, after the shock crosses the second outermost $x_2$ orbit. Propagation of shock from a high-density to low-density medium triggers a shoe-lace effect: the shock gets strengthened. In model 0S20r it gains strength high enough that it continues to propagate all the way to the galactic centre. On the other hand, the post-shock gas tends to settle on the $x_2$ orbits, as the gas condensation between the two outermost $x_2$ orbits indicates. In the cold gas (model 0S05r, Fig.6, right panel), the sound speed is low, thus the pitch angle of the spiral is small. It can become smaller than the pitch angle of the $x_2$ orbits at their intersection with the shock (Fig.6, insert of the right panel). When it happens, the shock, still propagating inwards, crosses the $x_2$ orbits from inside out. This means that the dispersion relation forces the shock to propagate back [*into*]{} the post-shock gas, which tends to settle on these $x_2$ orbits. In this gas condensation the shock gets damped. This happens before the shock reaches the major axis of the bar in the right panel of Fig.6. Once the spiral shock weakens and disappears inwards, the gas settles on closed trajectories originating from the $x_2$ orbits. However, they are not exactly the $x_2$ orbits, since the shock constantly penetrates the gas lane from inside, forcing the steady-state solution to be rounder than the $x_2$ orbits in the same area. This mechanism is confirmed by detailed hydrodynamical simulations of nuclear rings, where they appear almost circular, while the underlying $x_2$ orbits are significantly flattened (Piner et al. 
1995, MTSS02). In short, the nuclear spiral shock, being a continuation of the straight principal shock in the bar, can propagate towards the galactic centre when it is able to escape the gas condensation emerging from this principal shock. It can do it when the pitch angle of the spiral is large enough. If this pitch angle is too small, the spiral shock gets damped in this gas condensation, and a nuclear ring forms. ![[**Top:**]{} Snapshot of gas density (greyscale), and of $div^2 {\bf v}$ (for $div {\bf v} < 0$, contours, shock indicator) in model 0S20r, at the time of 130 Myr, when the spiral shock reaches the inner grid boundary. The dashed circles mark the iILR at 0.13 kpc and the oILR at 2.3 kpc. Overplotted are dotted circles of radii 40 pc, 100 pc, 250 pc, 500 pc, 1 kpc and 2 kpc, in order to help in relating the amount of inflow in Fig.9 to the observed morphology. Units on axes are in kpc. [**Middle:**]{} Radial density profile (dotted line), $div^2 {\bf v}$ for $div {\bf v} < 0$ (short-dashed line), and azimuthal velocity (solid line) with the rotation curve (long-dashed line) as a reference are plotted for the snapshot from the top panel as a function of radius along the line connecting the centre with the bottom-right corner of that panel. The velocity units are in , the density and $div^2 {\bf v}$ units are arbitrary, but the same as in Fig.8. [**Bottom:**]{} Tangens of the pitch angle $i$ of the shock, as indicated by maxima of $div^2 {\bf v}$ in models 0S20r (open circles) and 8S20r (filled triangles) plotted for the same time as the snapshot from the top panel. The lines mark the linear prediction for an $m=2$ spiral in the potential of model 8S20r (solid), and 0S20r (dashed).[]{data-label="f9"}](figII7a.ps){width="0.95\linewidth"} ![[**Top:**]{} Snapshot of gas density (greyscale), and of $div^2 {\bf v}$ (for $div {\bf v} < 0$, contours, shock indicator) in model 0S20r, at the time of 130 Myr, when the spiral shock reaches the inner grid boundary. The dashed circles mark the iILR at 0.13 kpc and the oILR at 2.3 kpc. Overplotted are dotted circles of radii 40 pc, 100 pc, 250 pc, 500 pc, 1 kpc and 2 kpc, in order to help in relating the amount of inflow in Fig.9 to the observed morphology. Units on axes are in kpc. [**Middle:**]{} Radial density profile (dotted line), $div^2 {\bf v}$ for $div {\bf v} < 0$ (short-dashed line), and azimuthal velocity (solid line) with the rotation curve (long-dashed line) as a reference are plotted for the snapshot from the top panel as a function of radius along the line connecting the centre with the bottom-right corner of that panel. The velocity units are in , the density and $div^2 {\bf v}$ units are arbitrary, but the same as in Fig.8. [**Bottom:**]{} Tangens of the pitch angle $i$ of the shock, as indicated by maxima of $div^2 {\bf v}$ in models 0S20r (open circles) and 8S20r (filled triangles) plotted for the same time as the snapshot from the top panel. The lines mark the linear prediction for an $m=2$ spiral in the potential of model 8S20r (solid), and 0S20r (dashed).[]{data-label="f9"}](figII7b.eps){width="0.75\linewidth"} ![[**Top:**]{} Snapshot of gas density (greyscale), and of $div^2 {\bf v}$ (for $div {\bf v} < 0$, contours, shock indicator) in model 0S20r, at the time of 130 Myr, when the spiral shock reaches the inner grid boundary. The dashed circles mark the iILR at 0.13 kpc and the oILR at 2.3 kpc. 
Overplotted are dotted circles of radii 40 pc, 100 pc, 250 pc, 500 pc, 1 kpc and 2 kpc, in order to help in relating the amount of inflow in Fig.9 to the observed morphology. Units on axes are in kpc. [**Middle:**]{} Radial density profile (dotted line), $div^2 {\bf v}$ for $div {\bf v} < 0$ (short-dashed line), and azimuthal velocity (solid line) with the rotation curve (long-dashed line) as a reference are plotted for the snapshot from the top panel as a function of radius along the line connecting the centre with the bottom-right corner of that panel. The velocity units are in , the density and $div^2 {\bf v}$ units are arbitrary, but the same as in Fig.8. [**Bottom:**]{} Tangens of the pitch angle $i$ of the shock, as indicated by maxima of $div^2 {\bf v}$ in models 0S20r (open circles) and 8S20r (filled triangles) plotted for the same time as the snapshot from the top panel. The lines mark the linear prediction for an $m=2$ spiral in the potential of model 8S20r (solid), and 0S20r (dashed).[]{data-label="f9"}](figII7c.eps){width="0.75\linewidth"} Properties of the nuclear spiral -------------------------------- I analyze properties of nuclear spirals in the central regions of a strongly barred galaxy using 2 hot-gas models: 0S20r and 8S20r. The early evolution of both models shows that as the time passes, the nuclear spiral starts at the inner ends of the principal shocks and propagates inwards (Fig.7, top panel). However, the density enhancement related to it is very small (below 40%), and the spiral is only seen as large $div^2 {\bf v}$, for negative $div {\bf v}$, which indicates the shock. Thus on the early stages of evolution, the principal shock in the bar gets extended inwards as a nuclear spiral shock. It has a number of properties that make it different from the density wave predicted by the linear theory: - the strength of the shock does not drop significantly when its shape converts from straight to spiral; to the contrary, as can be seen in the middle panel of Fig.7, the strength of the spiral shock (measured by $div^2 {\bf v}$) at the radius of 400 pc, where it winds by $5\pi/4$ angle, is larger than that of the principal straight shock (at 1.5 kpc at this position angle); - the nuclear spiral in model 0S20r, having the iILR, does not stop at this resonance, but crosses it, and is propagating inwards, while in the linear theory the wave does not extend beyond the resonance; - throughout the extent of the spiral shock, its pitch angle differs significantly from the linear prediction for both models 0S20r and 8S20r (bottom panel of Fig.7), although model 0S05r indicates that it still increases with the sound speed in gas, as in the linear theory. In both models 0S20r and 8S20r, at the simulation time about 130 Myr, the spiral shock reaches the inner boundary of the polar grid located at the radius of 20 pc. All plots in Fig.7 show characteristics of the models at this moment. Due to the imposed reflective inner boundary condition, the wave making the nuclear spiral reflects at this boundary and interferes with the incoming spiral wave. However the reflected wave geometrically diverges, and it perturbs the incoming wave, which converges on the centre, only at the innermost radii (see also Section 3.3). Note that the wave reflecting at the inner boundary may initially be weak, and not a shock, since $div^2 {\bf v}$ along the spiral decreases in the innermost parts of the galaxy at these early stages of evolution (Fig.7, top panel). 
This is consistent with the nuclear spiral in model 8S20r following the linear prediction for the pitch angle at the innermost radii below $\sim 50$ pc (Fig.7, bottom panel). ![[**Top:**]{} Same as in Fig.7, but for model 8S20r, at the time of 0.5 Gyr, after the morphology of the flow has stabilized. The meaning of the circles, and the units are the same as in Fig.7. [**Middle:**]{} Same as in Fig.7, but for the snapshot from this top panel. To achieve better resolution for the innermost features, the plot has two adjacent parts drawn in different radial scales: the left one covers radii from 20 pc to 80 pc, and the right one – from 80 pc to 1.5 kpc. Velocity, density and $div^2 {\bf v}$ units are the same as in Fig.7. [**Bottom:**]{} Same as in Fig.7, but for the snapshot from this top panel. Filled triangles mark the values measured in the hydrodynamical model, while the solid line is the linear prediction. Note how the hydrodynamical shock tends to follow the linear solution at radii 0.2 - 0.5 kpc, but still clearly differs from it.[]{data-label="f10"}](figII8a.ps){width="0.95\linewidth"} ![[**Top:**]{} Same as in Fig.7, but for model 8S20r, at the time of 0.5 Gyr, after the morphology of the flow has stabilized. The meaning of the circles, and the units are the same as in Fig.7. [**Middle:**]{} Same as in Fig.7, but for the snapshot from this top panel. To achieve better resolution for the innermost features, the plot has two adjacent parts drawn in different radial scales: the left one covers radii from 20 pc to 80 pc, and the right one – from 80 pc to 1.5 kpc. Velocity, density and $div^2 {\bf v}$ units are the same as in Fig.7. [**Bottom:**]{} Same as in Fig.7, but for the snapshot from this top panel. Filled triangles mark the values measured in the hydrodynamical model, while the solid line is the linear prediction. Note how the hydrodynamical shock tends to follow the linear solution at radii 0.2 - 0.5 kpc, but still clearly differs from it.[]{data-label="f10"}](figII8b.eps){width="0.75\linewidth"} ![[**Top:**]{} Same as in Fig.7, but for model 8S20r, at the time of 0.5 Gyr, after the morphology of the flow has stabilized. The meaning of the circles, and the units are the same as in Fig.7. [**Middle:**]{} Same as in Fig.7, but for the snapshot from this top panel. To achieve better resolution for the innermost features, the plot has two adjacent parts drawn in different radial scales: the left one covers radii from 20 pc to 80 pc, and the right one – from 80 pc to 1.5 kpc. Velocity, density and $div^2 {\bf v}$ units are the same as in Fig.7. [**Bottom:**]{} Same as in Fig.7, but for the snapshot from this top panel. Filled triangles mark the values measured in the hydrodynamical model, while the solid line is the linear prediction. Note how the hydrodynamical shock tends to follow the linear solution at radii 0.2 - 0.5 kpc, but still clearly differs from it.[]{data-label="f10"}](figII8c.eps){width="0.75\linewidth"} After the reflection of the spiral wave from the inner boundary, its morphology quickly reaches a steady state, and it remains unchanged till the end of the runs at 0.5 Gyr. In model 0S20r, the spiral shock recedes from the centre after the reflection, and in the steady state it is confined to the outside of the iILR. Characteristics of the flow in model 8S20r at the time when its appearance has stabilized are presented in Fig.8. 
Several interesting features of the flow can be observed: - as can be seen in the middle panel of Fig.8, the strength of the shock is roughly the same at 1.1 kpc, 0.3 kpc, and 0.05 kpc — the first location is at the principal straight shock, while the two last locations are at the nuclear spiral shock; comparing middle panels of Fig.8 and Fig.7 one can see that the strength of the spiral shock at $0.3 - 0.4$ kpc has not changed throughout the run (models 0S20r and 8S20r do not differ much at radii that large); - variations in the profile of the tangential velocity (Fig.8, middle panel), which are much larger than in the model 8W20r with a weak oval (Fig.1, middle panel), also indicate that the departures from the circular rotation are nonlinear here, and indicative of a shock; - the structure of the shock is best resolved in the cut through the spiral at 0.3 kpc (Fig.8, middle panel): regions of enhanced density (dotted line) occur directly outside of the regions of large velocity convergence, which indicates the shock (dashed line), with the contact between the two zones at 0.33 kpc; for the trailing spiral it means that the density enhancement occurs downstream from the shock; - contrary to the early stages of evolution (Fig.7, top panel), when largest density concentration occurs around the principal shock, at later stages (Fig.8, top panel) it is located in the nuclear spiral; middle panels of Figs. 7 and 8 (drawn to scale) show that the peak density increased between the early and late stage by a factor of about 2 in the principal shock at $1.1 - 1.9$ kpc, but in the nuclear spiral at $0.3 - 0.4$ kpc the rise is by a factor of more than 20; - the pitch angle of the nuclear spiral (Fig.8, bottom panel) still differs from the linear prediction (it is persistently larger), although it shows similar trends: the linear wave theory proposed by Englmaier & Shlosman (2000) to explain the nuclear spirals in bars points out these trends, but the flow is nonlinear and literal application of the linear theory is not adequate here. Inflow in the spiral shock -------------------------- The inflow in the spiral shock has been determined analogously to that in the weak nuclear spiral presented in section 3.4 (Fig.4). The evolution with time of mass accumulated within a number of radii for models 0S20r and 8S20r is shown in the top panel of Fig.9. The difference between Figures 4 and 9 is clear: there is strong inflow at virtually all radii in models with a bar, and especially in model 8S20r which includes a $10^8$ MBH in the centre. The circle of radius 4 kpc occurs slightly outside the 4:1 resonance in the assumed potential, which is the outer limit of the straight principal shock (see MTSS02), therefore inflow through this circle should be small. In fact, the mass accumulated within this radius oscillates in both models 0S20r and 8S20r between $7.2 \times 10^8$ and $8.9 \times 10^8$  throughout the run. The principal shock in the bar cuts through the circles of radii 2 kpc and 1 kpc (see Fig.8, top panel), and largest inflow is expected here. In fact, in the period between the stabilizing of the the morphology of the large-scale flow at 200 Myr, and the end of the run at 500 Myr, about $2.2 \times 10^8$ of gas crosses inwards through each circle, which corresponds to the average inflow of 0.7 (see also bottom panels of Fig.9). Thus by the end of the simulation, the mass included in each of these circles increases several times compared to its initial value. 
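Before turning to Fig.9, it is worth spelling out the bookkeeping behind these numbers: the mass enclosed within a given radius is the surface density summed over the grid cells inside that radius, and the inflow rate follows from differencing snapshots in time. A minimal sketch is given below (in Python/NumPy); the function names and the uniform placeholder array standing in for a model snapshot are illustrative only.

```python
import numpy as np

def enclosed_mass(sigma, r_edges_kpc, phi_edges, radius_kpc):
    """Gas mass (Msun) inside radius_kpc for a surface density sigma[i_r, i_phi]
    in Msun/pc^2 on a polar grid; cell area = 0.5*(r_out^2 - r_in^2)*dphi."""
    r_in, r_out = r_edges_kpc[:-1], r_edges_kpc[1:]
    area_pc2 = 0.5 * (r_out**2 - r_in**2)[:, None] * np.diff(phi_edges)[None, :] * 1.0e6
    inside = 0.5 * (r_in + r_out) < radius_kpc
    # factor 2: the grid covers half of the plane, point symmetry supplies the rest
    return 2.0 * np.sum(sigma[inside, :] * area_pc2[inside, :])

def mean_inflow(sigma_then, sigma_now, dt_myr, r_edges_kpc, phi_edges, radius_kpc):
    """Average inflow rate (Msun/yr) through radius_kpc between two snapshots."""
    dm = (enclosed_mass(sigma_now, r_edges_kpc, phi_edges, radius_kpc)
          - enclosed_mass(sigma_then, r_edges_kpc, phi_edges, radius_kpc))
    return dm / (dt_myr * 1.0e6)

# Example on the standard grid with the initial uniform disc of 10 Msun/pc^2:
r_edges = np.geomspace(0.02, 16.0, 175)
phi_edges = np.linspace(0.0, np.pi, 81)
sigma0 = np.full((174, 80), 10.0)
print(enclosed_mass(sigma0, r_edges, phi_edges, 0.04))   # mass within 40 pc, in Msun
```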
![[**Top:**]{} Mass accumulated within various radii (indicated in the plot) as a function of time for model 8S20r (solid line) and 0S20r (dotted line). [**Bottom:**]{} Mass inflow averaged over 20-Myr intervals as a function of time in model 8S20r. The inflow is followed through circles of various radii indicated in the plot. Note the small, but not negligible inflow triggered at the innermost radii in model 8S20r after the arrival of the spiral shock there (at about 130 Myr).[]{data-label="f11"}](figII9a.eps){width="0.95\linewidth"} ![[**Top:**]{} Mass accumulated within various radii (indicated in the plot) as a function of time for model 8S20r (solid line) and 0S20r (dotted line). [**Bottom:**]{} Mass inflow averaged over 20-Myr intervals as a function of time in model 8S20r. The inflow is followed through circles of various radii indicated in the plot. Note the small, but not negligible inflow triggered at the innermost radii in model 8S20r after the arrival of the spiral shock there (at about 130 Myr).[]{data-label="f11"}](figII9b.eps){width="0.75\linewidth"} The circles of radius 500 pc and smaller are located in the region of the nuclear spiral shock. The mass accumulated within radii of 500 and 250 pc evolves identically in models 0S20r and 8S20r, therefore inflow at such radii most likely is not influenced by the presence of a MBH in the galaxy centre. Nevertheless, this inflow is considerable. It is so because the nuclear spiral hosts a shock, whose nature is dissipative. This spiral shock, like the principal shock in the bar, takes away angular momentum from gas. However, the velocity jump in the spiral shock is much smaller than that in the principal shock, and the extraction of angular momentum is less efficient. The timescale of inflow in the nuclear spiral is much longer than in the principal shock, and gas accumulates in the spiral, as is seen when comparing top panels of Figs. 7 and 8. This results in extreme over-densities in the nuclear spiral (Fig.8, middle panel): over hundred times larger than the initial density. Note that the mechanism extracting angular momentum from gas in the the nuclear spiral continues to work at the same efficiency per density unit. The increasing density means increasing inflow. Thus gas from the inflow in the principal shock gets collected on the nuclear spiral, and when its highest condensation moves inwards along this spiral towards a certain radius, the inflow at this radius increases. In the models it can be seen on the example of the 500 and 250 pc radii. Top panel of Fig.8 indicates that the maximum density in the spiral has already passed the radius of 500 pc, and keeps propagating inwards along the spiral. This is consistent with the inflow through this radius plotted in the bottom panels of Fig.9: it was constantly increasing between 100 and 400 Myr, when the density peak in the spiral was propagating inwards to reach the radius of 500 pc. When it reached this radius, the amplitude of inflow stabilized. However, the inflow does not decay after that time, because the principal shock keeps the gas supply open. Note that at the end of the run, the inflow in the nuclear spiral at 500 pc is the same as that in the principal shock at 1 kpc, which indicates that some kind of equilibrium has been established. On the other hand, the density peak in the spiral does not reach the radius of 250 pc within the simulation time (Fig.8, top panel), and inflow through this circle keeps increasing (Fig.9, middle panel). 
At the end of the run it reaches the value of 0.2 M$_\odot$ yr$^{-1}$, 3.5 times smaller than that in the principal shock and in the nuclear spiral at 500 pc. However, the evolution of the models suggests that when the density peak reaches this radius as well, the inflow will stabilize at a value equal to that at the larger radii, and it will be the same for each smaller radius, so that eventually a steady state develops throughout the spiral, with the inflow in the spiral equal to that in the principal shock. However, it takes some 0.5 Gyr to establish such inflow at 500 pc, and likely over 2 Gyr to establish it at 250 pc. Thus although in principle nuclear spirals can cause strong inflow of about 1 M$_\odot$ yr$^{-1}$ at arbitrarily small radii, it is uncertain whether such spirals exist for periods long enough for the inflow to reach these small radii.

However, another mechanism of inflow in nuclear spirals generated by a strong bar takes place at the innermost radii. In Section 3.4, I noticed a period of inflow in the weak spiral related to the change in its innermost morphology. It is best seen in Fig.4 as a change of mass accumulated within the radius of 40 pc. A similar increase within this radius is seen in model 8S20r after the spiral shock reaches the inner grid boundary at about 130 Myr (Fig.9, top panel). Between that time and 200 Myr, the mass within the radius of 40 pc increases from $0.37 \times 10^5$ M$_\odot$ to about $6 \times 10^5$ M$_\odot$. However, contrary to the weak spiral in model 8W20r, inflow in the spiral shock never ceases (Fig.9, bottom panel), but rather increases with time, reaching a value of about 0.03 M$_\odot$ yr$^{-1}$ at the end of the run. A similar rate of inflow is present at the radius of 100 pc, and both rates evolve similarly in time.

The inflow of $\sim 0.03$ M$_\odot$ yr$^{-1}$ takes place only in the model with the MBH in the centre (8S20r). The top panel of Fig.9 indicates that the evolution of mass accumulated in the inner 40 and 100 pc is significantly different in model 0S20r without the MBH. In this model, after the spiral shock reaches the inner grid boundary, the mass enclosed within 40 pc increases initially by some 70%, but later it decreases, and oscillates around lower values. A quasi-monotonic mass increase occurs after 300 Myr, but it is likely related to the first mechanism of inflow described above, which propagates inwards from larger radii. In any case, the mass enclosed within the radius of 40 pc at the end of the run in model 0S20r is $3.2 \times 10^5$ M$_\odot$, which corresponds to an average inflow of 0.0015 M$_\odot$ yr$^{-1}$ in the period between 300 and 500 Myr. This inflow is 20 times smaller than in model 8S20r with a MBH.

Nuclear spirals in double bars
==============================

Hydrodynamical models of gas flow in dynamically possible double bars (each bar supported by orbits calculated in this potential) were built by MTSS02. Already the orbital analysis (Maciejewski & Sparke 2000) indicated that straight principal shocks cannot form in the inner bar in such systems, but gas should rather settle in rings elongated along the inner bar. Hydrodynamical models of MTSS02 confirmed these predictions, and evolutionary stars+gas models of Rautiainen et al. (2002) showed that gas settles on orbits calculated by Maciejewski & Sparke (2000). However, these models have been constructed for cold gas only. When the outer bar is identical to that in the models analyzed in the previous section, it should by itself generate a nuclear spiral in the hot gas (see model 8S20r, Fig.8, top panel).
On the other hand, the orbital structure of the inner bar supports the formation of gaseous rings, like the ring in model D05 in MTSS02, which is elongated along that bar. Again, the ring and the spiral should occur at the same location since the inner bar is confined within the ILR of the outer one. Thus also in the case of a double bar, there is a competition between the orbital structure and the propagating wave. In order to see what comes out of this competition, I built a model of gas flow in the potential of a doubly barred galaxy identical to that in the models of MTSS02, but this time for the hot gas. In this model, labeled 0D20o, the sound speed in gas is 20 km s$^{-1}$, and both bars are introduced simultaneously in the first 100 Myr of the run. In the linear approximation outlined in Paper I, the solution is additive, and each independently rotating perturbation in the potential generates its own spiral mode in gas, which propagates with a specific dispersion relation. Note however that in the model considered here, the inner bar rotates with a pattern speed of 110 km s$^{-1}$ kpc$^{-1}$, and therefore it has no ILR (see the top-left panel of Fig.2 in Paper I). Thus this inner bar does not generate a nuclear spiral on its own. Only the outer bar, which has a wide ILR, generates a nuclear spiral in the inner kiloparsec, as model 0S20r indicates.

![Snapshots of gas density in model 0D20o of hot gas in the gravitational potential of a doubly barred galaxy identical to that used in MTSS02, taken at 145 Myr ([*left*]{}) and 200 Myr ([*right*]{}). The outer bar is vertical; the position of the inner bar can be deduced from oval density enhancements. Units on axes are in kpc.[]{data-label="f12"}](figII10a.ps "fig:"){width="0.48\linewidth"}
![Snapshots of gas density in model 0D20o of hot gas in the gravitational potential of a doubly barred galaxy identical to that used in MTSS02, taken at 145 Myr ([*left*]{}) and 200 Myr ([*right*]{}). The outer bar is vertical; the position of the inner bar can be deduced from oval density enhancements. Units on axes are in kpc.[]{data-label="f12"}](figII10b.ps "fig:"){width="0.48\linewidth"}

Fig.10 shows two snapshots of gas density in model 0D20o for hot gas in a doubly barred galaxy. Since the pitch angle of the spiral shock in such gas is high, the spiral shock usually propagates out of the density enhancement emerging from the straight principal shock. Therefore the nuclear spiral propagates inwards in this doubly barred galaxy despite the action of the inner bar. It reaches the inner grid boundary at about 145 Myr (Fig.10, left panel). Although according to the linear theory the curvature of the spiral does not change with time at any point in the frame rotating with the outer bar, the curvature of the trajectories on which gas parcels move changes at a given point in this frame with the rotation of the [*inner bar*]{}. The pitch angle of these trajectories may become larger than the pitch angle of the spiral, causing discontinuities in the spiral shock propagating inwards. At later evolutionary stages, when the spiral shock traps considerable amounts of gas around itself, its shape is more influenced by the motion of the inner bar. At times, it may resemble straight principal shocks in the inner bar (Fig.10, right panel), although such shapes are transient. Note that this structure is still caused by the outer bar, even if it resembles the principal shock in the inner bar, to which it may be wrongly ascribed.
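For reference, the resonance argument used above relies on the standard condition, which is not written out in the text: in the epicyclic approximation, the inner Lindblad resonances of a perturbation rotating with pattern speed $\Omega_{\rm p}$ are the radii where
$$\Omega_{\rm p} = \Omega(R) - \frac{\kappa(R)}{2} \ ,$$
with $\Omega$ the circular angular frequency and $\kappa$ the epicyclic frequency. A pattern speed exceeding the maximum of $\Omega - \kappa/2$ admits no ILR, which is why the rapidly rotating inner bar cannot drive a nuclear spiral of its own, while the slower outer bar has a wide ILR region.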
Also with time a broad ring forms around the inner bar, but with moderate over-density. Because of the complexity of this problem, detailed investigation of gas dynamics in nested bars requires further work, but this brief analysis has already returned some important information: nuclear spirals generated by the outer bar in doubly barred systems can propagate inside the inner bar. Thus the presence of nuclear spirals in galaxies does not exclude the coexistence of cospatial inner bars in the same galaxies. Moreover, nuclear spirals can hide the presence of inner bars, as there is not much difference in the gas kinematics between the systems displayed in the left panel of Fig.10 and in the top panel of Fig.7.

Discussion
==========

Morphology of nuclear spirals
-----------------------------

The number of galaxies with colour or structure maps of their central regions has recently become large enough that first attempts at morphological classification have been made (Malkan et al. 1998, Martini et al. 2003a,b). The second of these attempts seems to reflect better the characteristic structures observed in galactic nuclei. There, nuclear spirals are classified into one of four classes: grand design, tightly wound, loosely wound and chaotic. Here I attempt to link this classification to the morphology observed in the models built in this paper.

Hydrodynamical models of gas flow in rotating potentials presented here show that nuclear spirals are triggered even by small asymmetries in the potential. There they propagate as weak density waves, and they are not bracketed by straight principal shocks, as is the case for the strong asymmetries induced by galactic bars. This should be expected, because the $x_1$ orbits supporting a weak oval asymmetry are round, with no cusps, and thus do not induce shocks in gas. Since there are no straight shocks to join, the spirals can continue winding around the centre, closely following predictions of the linear theory. Thus tightly-wound nuclear spirals, which can propagate freely in weak asymmetries of the potentials, may be observationally associated with galaxies where the bar is too weak to be detected. However, if the potential or conditions in gas imply a large pitch angle of the spiral wave in a given galaxy, then a loosely wound spiral will appear in a galaxy classified as unbarred. On the other hand, in strong bars nuclear spirals rapidly unwind outwards to match the shape of the straight principal shock in the bar. Thus regardless of the underlying potential or velocity dispersion in gas, nuclear spirals in strong bars are not likely to appear observationally as tightly-wound spirals. This is consistent with the statistics of Martini et al. (2003a,b): tightly wound spirals avoid barred galaxies.

Grand-design nuclear spirals require a strong driver which acts continuously over long time periods. In the statistics of Martini et al., they appear only in galaxies classified as barred. This implies that the galactic bar can serve as such a driver, and that there may be no other driver that fulfills the criterion above. Tightly or loosely wound spirals in the classification of Martini et al. can be generated by a weak oval, or, when they show clear discontinuities, by a passing perturbation in the potential (a globular cluster or a giant molecular cloud). With growing discontinuities, one moves to the class of chaotic spirals, whose generation mechanism is likely different (acoustic noise, see Elmegreen et al. 1998).

Gas kinematics. Feeding of the AGN.
-----------------------------------

Recent observational statistics of the central morphology in a sample of active galaxies accompanied by a control sample (Martini et al. 2003a,b) indicate that nuclear spirals occur with comparable frequency in active and non-active galaxies. On the other hand, models presented in this paper show that nuclear spirals generated by a strong bar take the form of shocks in gas and trigger moderate gas inflow onto the central MBH, while nuclear spirals generated by a weak oval do not cause inflow. I propose that what determines the inflow is not the driver, but the nature of the spiral. If it is a shock, then it is likely to trigger inflow. If it is not, inflow will not occur. Note that the morphology of the spiral shock departs from the linear prediction (Fig.8, bottom panel) in the sense that the pitch angle of the spiral is larger than in the linear theory. Thus spiral shocks that do not appear as grand-design spirals are likely to be observationally classified as [*loosely wound*]{} spirals. In the full sample of Martini et al. (2003a), grand-design and loosely wound spirals occur in 60% of active galaxies, and only in 23% of inactive ones. This difference is statistically significant, and it may indicate that although not all nuclear spirals are fueling the AGN, some most likely do. However, morphological considerations are not sufficient to verify this hypothesis, and observations of gas kinematics in the spirals are needed. Clear departures from circular motion are expected in spiral shocks (Fig.8), but not in weak density waves that do not trigger inflow (Fig.1).

It has recently become generally accepted that all galaxies may host a MBH at their centres (see e.g. Kormendy & Gebhardt 2001 for a recent review), and there are attempts to measure the mass of this MBH from the gas kinematics around it (e.g. Macchetto et al. 1997, Bower et al. 1998, Maciejewski & Binney 2001). This method can be derailed by non-circular gas motions in the nuclear discs, especially when they exhibit spiral structure. Spiral shocks can strongly perturb the velocity field (Fig.8). However, if tightly wound nuclear spirals in fact correspond to models where the spiral is a weak density wave, then gas flow in such a spiral is almost circular (Fig.1), and methods based on gas kinematics should yield a reliable MBH mass here.

Leading spirals, nuclear rings
------------------------------

According to the linear theory outlined in Paper I, nuclear spirals generated at the OLR and at the oILR are trailing, and hydrodynamical models built in this paper reproduce well the trailing spirals related to the oILR. These spirals propagate inwards, as expected. However, the linear theory also predicts the formation of a [*leading*]{} spiral at the iILR that propagates outwards. Such combinations of a leading spiral inside a trailing one are very unusual in galaxies, with the only familiar example being NGC 6902 (Grosb[ø]{}l 2003). In the models presented here, leading spirals do not form, even when the iILR is present. One reason for this may be the extent of the trailing spiral generated at the oILR, which propagates inwards to the vicinity of the iILR in model 0W20, and even past it in model 0S20 of a spiral shock. This may suppress formation of the leading spiral. However, in model 0W05, where the trailing spiral gets damped not far inwards from the oILR, a leading spiral does not form either.
Some explanation here may come from the applications of the non-linear theory of density waves developed by Yuan & Kuo (1997). It allows one to investigate the effect of viscosity on the nuclear spirals. In the results presented by Yuan, Lin & Chen (2003) it can be seen that the leading spiral forms only for high viscosity. Viscosity in the hydrodynamical code used to build models in this paper is very low, and this may be the reason for the absence of the leading spiral. On the other hand, its absence in the observed nuclei of galaxies may indicate low effective viscosity in the ISM, much lower than the numerical viscosity that the quality of the code often imposes on the available hydrodynamical models.

Another feature promoted by numerical models, and definitely under-abundant in the observed galaxies, is nuclear rings. Only two nuclear rings are found in the sample of 43 Seyfert galaxies observed by Pogge & Martini (2002), and an eye-examination of the larger sample of 123 galaxies (Martini et al. 2003a) picks up a dozen nuclear rings. In Section 4.1 I showed that nuclear rings form as an effect of the interaction between the wave nature of the principal shock in the bar and the orbital structure there. This interaction creates conditions favourable to the formation of nuclear rings when the velocity dispersion in gas is low. In fact, low velocity dispersion in gas is assumed in models that form nuclear rings (e.g. Piner et al. 1995, Regan & Teuben 2003). However, velocity dispersion in the inner discs of spiral galaxies is likely to be higher (Englmaier & Gerhard 1997, Elmegreen et al. 1998), which is consistent with nuclear spirals being observed more frequently than nuclear rings. Thus studies of nuclear rings may only partially reflect gas dynamics in the centres of galaxies. In particular, stagnation of gas inflow in the bar caused by these rings is not general evidence against the possibility that bar-related inflow can occur at radii typical for rings. Nuclear rings constitute only one type of nuclear gas flow, and another type, nuclear spirals, may extend bar-related inflow to the innermost regions of galaxies.

Conclusions
===========

In this paper I analyzed high-resolution hydrodynamical models of gas flow in nuclear spirals generated in the gaseous disc by a rotating potential. Nuclear spirals form naturally even if the asymmetry in the potential is very small, with the maximal ratio of the tangential to the radial force (the $Q_T$ parameter of Combes & Sanders 1981) of about 0.01. Thus asymmetries in galaxies often too weak to be detected observationally, like a weak triaxiality of the bulge, may be sufficient to generate nuclear spirals. Models with a weak asymmetry in the potential conform well to the linear prediction, while the nature of nuclear spirals in strong bars differs considerably from what is predicted by the linear density-wave theory.

Models of galaxies with weak ovals indicate that nuclear spirals form even if the asymmetry in the potential is too weak to generate straight shocks along its major axis. In such potentials, nuclear spirals are not forced to unwind rapidly outwards to match the straight shocks: they can follow predictions of the linear density-wave theory for longer, and wind more tightly than in the presence of a strong bar. This is consistent with the recent statistical analysis by Martini et al. (2003a,b) which finds that tightly wound nuclear spirals tend to avoid galaxies that are barred.
The smooth continuation of the nuclear spiral into a four-arm spiral seen in the models with weak ovals indicates that the extent of such tightly-wound spirals may not be a good indicator of the location of the ILR in galaxies without a clear bar. Nuclear spirals in weak ovals are not efficient in transporting gas from kiloparsec to parsec scales, but some inflow in the innermost parsecs of the galaxy occurs during the formation of such a spiral in models without an iILR (with a central MBH). Since these nuclear spirals can naturally re-appear as a response to the driver, gas dumped each time onto the MBH can maintain a weak nuclear activity.

In a strong bar the nuclear spiral has the nature of a shock in gas. The spiral shock is less tightly wound than the linear theory predicts (compare Fig.1 to Fig.8). This may suggest a correspondence between spiral shocks and loosely wound spirals in the classification of Martini et al. (2003a). Hydrodynamical models built in this paper show that such spiral shocks trigger gas inflow, although of a different nature from that in the straight principal shock in the bar. In the outer regions of the nuclear spiral, the inflow timescale is longer than in the principal shock bracketing it from outside. Therefore gas initially accumulates there, but with time it is transported inwards along the spiral. The inflow rate at density peaks along the spiral equals that in the principal shock. Another inflow mechanism is present in the innermost tens of parsecs of the nuclear spiral in the presence of a central MBH: after the initial dumping of matter onto the centre, common with the models for weak ovals, the mass inflow does not stop, but continues at a steady rate of up to 0.03 M$_\odot$ yr$^{-1}$. Local Seyfert galaxies require mass accretion rates of $\sim 0.01$ M$_\odot$ yr$^{-1}$ (e.g. Peterson 1997), therefore the inflow rate in the models presented here is sufficient to feed luminous local Active Galactic Nuclei, and the feeding can continue over long timescales. Observational support for this mechanism comes from the fact that when one groups together grand-design nuclear spirals (explicitly linked to the bars) and loosely wound spirals, they appear considerably more often in active than in non-active galaxies (Martini et al. 2003a,b).

Nuclear spirals are more common in galaxies than nuclear rings. Which of these two will be triggered by a barred potential depends on the interplay between the post-shock gas condensations, which tend to follow the lowest-energy orbits, and the shock, whose inner shape adheres to the rules of wave propagation. In an ISM with low velocity dispersion, the shock is damped in the gas condensation, and a nuclear ring forms. When velocity dispersion in the ISM is high, the shock propagates away from the gas condensation, gets strengthened, and continues inwards as a nuclear spiral. The higher frequency of nuclear spirals than of nuclear rings in galaxies favours an ISM with high velocity dispersion in the centres of disc galaxies.

Secondary inner bars in barred galaxies do not halt the propagation of nuclear spirals inwards. Thus nuclear spirals can co-exist with inner bars in galaxies. Moreover, they can mask the presence of inner bars in galactic nuclei.

Acknowledgments {#acknowledgments .unnumbered}
===============

The CMHOG hydrodynamical code was written by James Stone, and I am grateful to him for the right to use it. I would like to thank Lia Athanassoula, Peter Erwin, and Paul Martini for useful discussions and for comments on this paper.
Helpful comments of the anonymous referee provided good guidance in improving the clarity of this paper. I acknowledge a post-doctoral fellowship from Osservatorio Astrofisico di Arcetri, where most of this research has been done. I am grateful to Instytut Astronomiczny Uniwersytetu Wroc[ł]{}awskiego for its hospitality during the writing of this paper.

Athanassoula E., 1992a, MNRAS, 259, 328
Athanassoula E., 1992b, MNRAS, 259, 345
Benedict G. F., Howell D. A., J[ø]{}rgensen I., Kenney J. D. P., Smith B. J., 2002, AJ, 123, 1411
Bower G. A., 1998, ApJ, 492, L111
Combes F., Sanders R. H., 1981, A&A, 96, 164
Elmegreen B. G. et al., 1998, ApJ, 503, L119
Englmaier P., Gerhard O., 1997, MNRAS, 287, 57
Englmaier P., Shlosman I., 2000, ApJ, 528, 677
Goldreich P., Tremaine S., 1979, ApJ, 233, 857
Grosb[ø]{}l P., 2003, in Contopoulos G., Voglis N., eds, Lecture Notes in Physics Vol.626, Galaxies and Chaos. Springer-Verlag, Berlin, Heidelberg, p.201
Kormendy J., Gebhardt K., 2001, in 20th Texas Symposium on relativistic astrophysics, p.363
Laurikainen E., Salo H., Buta R., 2004, ApJ, 607, 103
Macchetto F., Marconi A., Axon D. J., Capetti A., Sparks W., Crane P., 1997, ApJ, 489, 579
Maciejewski W., 1998, Ph.D. Thesis, Univ. of Wisconsin
Maciejewski W., 2000, in Combes F. et al., eds, ASP Conf. Ser. Vol. 197, Dynamics of Galaxies: from the Early Universe to the Present. Astron. Soc. Pac., San Francisco, p.63
Maciejewski W., 2003, in Boily C. M. et al., eds, EAS Publication Series Vol.10, Galactic & Stellar Dynamics. EDP Sciences, Les Ulis, p.3
Maciejewski W., 2004, MNRAS submitted (Paper I)
Maciejewski W., Binney J., 2001, MNRAS, 323, 831
Maciejewski W., Sparke L. S., 2000, MNRAS, 313, 745
Maciejewski W., Teuben P. J., Sparke L. S., Stone J. M., 2002, MNRAS, 329, 502
Malkan M. A., Gorjian V., Tam R., 1998, ApJS, 117, 25
Martini P., Pogge R. W., 1999, AJ, 118, 2646
Martini P., Regan M. W., Mulchaey J. S., Pogge R. W., 2003a, ApJS, 146, 353
Martini P., Regan M. W., Mulchaey J. S., Pogge R. W., 2003b, ApJ, 589, 774
Peterson B. M., 1997, An Introduction to Active Galactic Nuclei (Cambridge: Cambridge Univ. Press)
Piner B. G., Stone J. M., Teuben P. J., 1995, ApJ, 449, 508
Pogge R. W., Martini P., 2002, ApJ, 569, 624
Rautiainen P., Salo H., Laurikainen E., 2002, MNRAS, 337, 1233
Regan M. W., Mulchaey J. S., 1999, AJ, 117, 2676
Regan M. W., Teuben P. J., 2003, ApJ, 582, 723
Regan M. W., Vogel S. N., Teuben P. J., 1997, ApJL, 482, L143
Sandage A., 1961, The Hubble Atlas of Galaxies (Washington: Carnegie Institution)
Sellwood J. A., Wilkinson A., 1993, Rep. Prog. Phys., 56, 173
Yuan C., Cheng Y., 1989, ApJ, 340, 216
Yuan C., Cheng Y., 1991, ApJ, 376, 104
Yuan C., Kuo C.-L., 1997, ApJ, 486, 750
Yuan C., Lin L.-H., Chen Y.-H., 2003, in Ho L. C., ed., Carnegie Observatories Astrophysics Series, Vol. 1: Coevolution of Black Holes and Galaxies (Pasadena: Carnegie Observatories, http://www.ociw.edu/ociw/symposia/series/symposium1/proceedings.html)
--- author: - | [^1]\ Institut für Theoretische Physik II, Ruhr-Universität Bochum, D-44780 Bochum, Germany\ E-mail: - | I. O. Cherednikov[^2]\ INFN Cosenza, Universit$\grave{a}$ della Calabria, I-87036 Rende (CS), Italy and\ Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna, Russia\ E-mail: - | A. I. Karanikas\ University of Athens, Department of Physics, Nuclear and Particle Physics Section, Panepistimiopolis, GR-15771 Athens, Greece\ E-mail: title: 'Role and Properties of Wilson Lines in Transverse-Momentum-Dependent Parton Distribution Functions' ---

Introduction and Theoretical Framework {#sec:intro}
======================================

One of the problems inherent in the definition of hadronic observables is how to ensure gauge invariance. This problem arises because correlators are nonlocal quantities which contain local operators that transform differently under gauge transformations, hence entailing a dependence on the gauge adopted. Clearly, physical quantities should not depend on the gauge in which we choose to work — this should merely be a matter of (calculational) convenience. To render *integrated* parton distribution functions (PDFs) gauge invariant, it is sufficient to insert into their definition a Wilson line — a gauge link — between the two Heisenberg quark operators that renders their product gauge invariant [@CS81]. From the point of view of renormalizability, this operation introduces additional contributions to the anomalous dimension of the PDFs. These contributions stem from the local obstructions of the gauge contours: endpoints, cusps, and self-crossing points (see [@CS08] for a technical exposition and references). It is to be emphasized that although the gauge link is nonlocal, no explicit path dependence is introduced, e.g., on the gauge-contour length. Actually, to ensure the gauge invariance of the PDFs it is even sufficient to use a straight lightlike line, because integrated PDFs are defined on the light cone and the only contribution from the gauge link to the anomalous dimension of the PDF comes from its endpoints (see, for instance, [@Ste83] and references cited therein). Hence, one has for the integrated PDF of a quark $i$ in a quark $a$ $$\begin{aligned} f_{i/a}(x) = \frac{1}{2} \int \frac{d\xi^{-}}{2\pi} e^{- i k^{+} \xi^{-}} \langle P |\bar{\psi}_i (\xi^{-}, \mathbf{0}_\perp) \gamma^{+} [\xi^-, 0^-|\mathcal{C}] \psi_{i}(0^{-},\mathbf{0}_\perp) | P \rangle \ , \label{eq:int-pdf}\end{aligned}$$ where $$\begin{aligned} [\xi^-, 0^-|\mathcal{C}] = {\cal P} \exp \Bigg[-i g \int_{0^{-}[\mathcal{C}]}^{\xi^-} dz^\mu A_{\mu}^{a}(0,z^-, \mathbf{0}_\perp) t_{a} \Bigg] \label{eq:link}\end{aligned}$$ is a path-ordered gauge link (Wilson line) in the lightlike direction from $0$ to $\xi$ along the contour $\mathcal{C}$. One may insert a complete set of states and split the gauge link $[\xi^-, 0^-]$ into two gauge links connecting the points $0$ and $\xi$ through $\infty$. This is mathematically sound, provided the junction (hidden at infinity) of the two involved contours is smooth, i.e., entails only a trivial renormalization of the junction point so that the validity of the algebraic identity $ [x_2,z \ |\ \mathcal{C}_1] \ [z,x_1\ | \ \mathcal{C}_2] = [x_2,x_1\ | \ \mathcal{C}=\mathcal{C}_1\cup \mathcal{C}_2] $ is ensured. This being the case, it is possible to associate each of the quark fields with its own gauge link because the attached contour has no bearing on the definition of $f_{i/a}(x)$.
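For completeness, recall the standard transformation property that underlies this construction: under a gauge transformation $U(x)$ the quark field and the gauge link transform as
$$\psi(x) \to U(x)\, \psi(x) \ , \qquad [x_2, x_1 | \mathcal{C}] \to U(x_2)\, [x_2, x_1 | \mathcal{C}]\, U^{\dagger}(x_1) \ ,$$
so that the bilocal combination $\bar{\psi}(x_2)\, [x_2, x_1 | \mathcal{C}]\, \psi(x_1)$ is gauge invariant for any contour $\mathcal{C}$ connecting $x_1$ and $x_2$.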
Then, the struck quark can be replaced by an “eikonalized quark” $$\Psi (x^-|\Gamma) = \psi(x^-) [x^-,\infty^-|\Gamma] \equiv \psi(x^-) \mathcal{P}{\rm exp} \left[ -ig \int_{\infty^- [\Gamma]}^{x^-} dz_{\mu} A^{\mu}_{a}(0^+, z^-, \mathbf{0}_\perp) t_{a} \right] \label{eq:eikonal-quark}$$ which is a contour-dependent Mandelstam fermion field [@Man68YM] (with an analogous definition for the antifermion field). In this scheme, the gluon reconstitution in the gauge-invariant correlator for the integrated PDF involves gluons emanating either from the gauge links — giving rise to selfenergy-like diagrams — or contractions with the gluon self-fields of the Heisenberg operator for the struck quark which generate crosstalk-type diagrams. Note that for the sake of clarity and simplicity, we ignore bound states (spectators). As a result, one has $$\begin{aligned} f_{i/a}^{\rm split}(x) = \frac{1}{2} \sum_n \int \frac{d\xi^-}{2\pi} e^{- i k^{+} \xi^{-}} \langle P |\bar{\Psi}_{i}\left(\xi^-,\mathbf{0}_\perp|{\cal C}_{1}\right) | n \rangle \gamma^{+} \langle n |\Psi_{i}\left(0^-,\mathbf{0}_\perp|{\cal C}_{2}\right) |P\rangle \ . \label{eq:pdf-link-split}\end{aligned}$$ This concept was carried over to *unintegrated* PDFs, i.e., to those PDFs which still depend on the transverse momenta — hence termed TMD PDFs. However, TMD PDFs with only longitudinal gauge links are not completely gauge invariant under different boundary conditions on the gluon propagator in the light-cone gauge $A^+=0$. The reason is that $x^-$-independent gauge transformations are still possible under the same gauge condition. Hence, the naive collinear gauge-invariant TMD PDF definition as for the integrated case is inapplicable. Refurbishment is provided via the introduction of transverse gauge links which necessarily stretch out off the light cone to infinity [@BJY03; @BMP03]. This generalizes Eq. (\[eq:pdf-link-split\]) for a quark with $k_\mu = (k^+, k^-, \mathbf{k}_\perp)$ in a quark with $p_\mu = (p^+, p^-, \mathbf{0}_\perp)$ to the expression $$\begin{aligned} f_{q/q}(x, \mathbf{k}_\perp) = \frac{1}{2} \int \frac{d\xi^{-}}{2\pi} \frac{d^2{\bm \xi}_\perp}{(2\pi)^2} \exp\left( - i k^{+} \xi^{-} + i \mathbf{k}_\perp \cdot {\bm \xi}_\perp \right) \Big\langle q(p) |\bar \psi (\xi^-, {\bm \xi}_\perp) [\xi^-, {\bm \xi}_\perp; \infty^-, {\bm \xi}_\perp]^{\dagger}\nonumber \\ \times [\infty^-, {\bm \xi}_\perp; \infty^-, {\bm \infty}_\perp]^\dagger \gamma^+ [\infty^-, {\bm \infty}_\perp; \infty^-, \mathbf{0}_\perp] [\infty^-, \mathbf{0}_\perp;0^-, \mathbf{0}_\perp] \psi (0^-, \mathbf{0}_\perp) |q(p) \Big\rangle \Big|_{\xi^+ =0} \label{eq:tmd-pdf}\end{aligned}$$ in which $$[\infty^{-}, {\bm \xi}_{\perp}; \xi^{-}, {\bm \xi}_{\perp}] \equiv \mathcal{P} \exp \left[ i g \int_{0}^{\infty} d\tau \, n_{\mu}^{-} A_{a}^{\mu} t^{a} (\xi + n^- \tau) \right] \ , \label{eq:lightlike-link}$$ $$[ \infty^{-}, {\bm \infty}_{\perp}; \infty^{-}, {\bm \xi}_{\perp} ] \equiv \mathcal{P} \exp \left[ i g \int_{0}^{\infty} d\tau \, \mathbf{l} \cdot \mathbf{A}_{a} t^{a} ({\bm \xi}_{\perp} + \mathbf{l}\tau) \right] \label{eq:transverse-link}$$ are the lightlike and the transverse gauge link, respectively. One-Loop Gluon Virtual Corrections in the $A^+=0$ Gauge {#sec:one-loop-corr} ======================================================= The pursuit of a proper definition of TMD PDFs is a long-standing problem that was not accomplished with the definition above. 
The reason is — frankly speaking — that nobody knows how the contour behaves at light-cone infinity when it ventures out in the transverse directions. This behavior has influence on the singularity structure of the gluon propagator in the light-cone gauge $A^+=0$, notably, $ D_{\mu\nu}^{\rm LC} (q) = \frac{-i}{q^2 - \lambda^2 + i0} \Big( g_{\mu\nu} -\frac{q_\mu n^-_\nu + q_\nu n^-_\mu}{[q^+]} \Big) , $ via the boundary conditions to go around its singularities. To estimate this influence, one has to calculate the one-loop virtual corrections in the $A^+=0$ gauge in conjunction with various boundary conditions (which absorb large-scale effects) and carry out the renormalization of the contour-dependent quark operators defined in Eq. (\[eq:eikonal-quark\]). Two of us undertook this calculation, announced in [@CS07; @CS08; @CS09], with a summary of the approach being given in [@SC09Trento]. The contributing diagrams are shown here in Fig. \[fig:fig1\], while the corresponding algebraic expressions are given in Table \[tab:virt-corr\] using the following symbolic abbreviations (the couplings $g$ and $g^{\prime}$ below are labeled differently only in order to keep track of their origin; ultimately, they will be set equal):\ (i) $\mathbf{Q}$: struck quark $ \psi_i(\xi) = e^{- ig [ \int\! d\eta \bar \psi \hat {\cal A} \psi ]} $ $\psi_i^{\rm free} (\xi)$ — Heisenberg operator,\ (ii) longitudinal gauge link: $[n^-]$,\ (iii) transverse gauge link: $[\mathbf{l}_{\perp}]$,\ (iv) $g$ refers to the QCD Lagrangian — see item (i),\ (v) coupling $g^{\prime}$ refers to the exponent of the gauge links, i.e., $g^{\prime}\int_{0}^{\infty} d\tau \ldots$  ,\ (vi) product $g^{\prime}g^{\prime}$ corresponds to path-ordered line integrals in the exponent of the gauge links, i.e., $g^{\prime}g^{\prime}\int_{0}^{\infty} d\tau \int_{0}^{\tau} d\sigma \ldots \ .$ struck quark longitudinal gauge link transverse gauge link ------------------------- -------------------------------------------------- ------------------------------------------------ ----------------------------------------------------------------------- struck quark $\mathbf{Q}\mathbf{Q}$ $\Longleftrightarrow$ (a) $\mathbf{Q}[n^-]$    $\Longleftrightarrow$ (b) $\mathbf{Q}[\mathbf{l}_{\perp}]$    $\Longleftrightarrow$ (d) longitudinal gauge link $[n^-][n^-]$ $\Longleftrightarrow$ (c) $[n^-][\mathbf{l}_{\perp}]$     $\Longleftrightarrow$ (f)=0 transverse gauge link $[\mathbf{l}_{\perp}][\mathbf{l}_{\perp}]$  $\Longleftrightarrow$ (e) : Structure of the one-loop gluon virtual corrections to $f_{q/q}(x, \mathbf{k}_{\perp})$ shown in Fig. 1.[]{data-label="tab:virt-corr"} Without going into too much detail, the results of this study show that the overlapping ultraviolet (UV) and rapidity divergences cannot be solely controlled by the dimensional (or any other) regularization. The ensuing divergence is of the type $(1/\epsilon)\ln (\eta/p^{+})$, which becomes infinite when $\eta \to 0$, and has, therefore, to be cured by an appropriate renormalization procedure. At this point it is important to mention that the terms on the diagonal in Table \[tab:virt-corr\] represent selfenergy contributions, while all other terms are of the crosstalk type. In the gauge $A^+=0$ only the terms $\mathbf{Q}\mathbf{Q}$ and $\mathbf{Q}[\mathbf{l}_{\perp}]$ are non-vanishing. Moreover, the pole-prescription dependence in diagram (a) is canceled by its counterpart in (d) — see Fig. \[fig:fig1\]. ![One-loop gluon virtual corrections to $f_{q/q}$ in the $A^+=0$ gauge. 
The double lines describe the gauge links attached to the fermions (heavy lines), while the curly lines represent gluons, and the symbol $\otimes$ denotes a line integral. The Hermitian-conjugate (mirror) diagrams are not shown.[]{data-label="fig:fig1"}](virt-gluon-corr-all4_1.eps){width=".5\textwidth"} Taking into account the mirror contributions to (a) and (d) (not shown in Fig. \[fig:fig1\]), one finds the following total contribution from virtual gluon corrections [@CS08; @CS07]: $$\Sigma_{\rm UV}^{\rm (a+d)}(\alpha_s, \epsilon) = 2\frac{\alpha_s}{\pi}C_{\rm F} \left[ \frac{1}{\epsilon} \left( \frac{3}{4} + \ln \frac{\eta}{p^+} \right) - \gamma_E + \ln 4\pi \right] \ . \label{eq:total-virt}$$ From this expression one obtains for $f_{q/q}(x, \mathbf{k}_{\perp})$ the anomalous dimension $\left(\gamma = \frac{\mu}{2}\frac{1}{Z}\frac{\partial\alpha_s}{\partial\mu} \frac{\partial Z}{\partial\alpha_s} \right)$ $$\gamma_{\rm one-loop}^{\rm LC} = \frac{\alpha_s}{\pi} C_{\rm F} \left( \frac{3}{4} + \ln \frac{\eta}{p^{+}} \right) = \gamma_{\rm smooth} -\delta\gamma \ , \label{eq:anom-dim}$$ where $\eta$ is the rapidity parameter with $[\eta]=[\rm mass]$ and $\delta\gamma$ represents the deviation from the anomalous dimension of the gauge-invariant quark propagator in a covariant gauge (see [@Ste83] and earlier references cited therein). As argued in [@CS07; @CS08; @SC09Trento], such an anomalous dimension can be associated with a cusp in the gauge contour at infinity and originates from the renormalization of the gluon interactions with this local contour obstruction. Therefore, one can claim that $\delta\gamma$ can be identified with the universal cusp anomalous dimension [@KR87] at the one-loop order. But the choice of the gauge $A^+=0$ should not affect the renormalization properties of the TMD PDF. Thus, the definition of $f_{q/q}(x, \mathbf{k}_{\perp})$ given by Eq. (\[eq:tmd-pdf\]) has to be modified by a soft factor (counter term) [@CH00] $$R \equiv \Phi (p^+, n^- | 0) \Phi^\dagger (p^+, n^- | \xi) \ , \label{eq:soft-factor}$$ where $\Phi$ and $\Phi^\dagger$ are appropriate eikonal factors to be evaluated along a jackknifed contour off the light cone (the explicit expressions and a graphic illustration can be found in [@CS07; @CS08; @SC09Trento]). We have shown there by explicit calculation that in the $A^+=0$ gauge with $q^-$-independent pole prescriptions (advanced, retarded, principal value), the anomalous dimension associated with this quantity exactly cancels $\delta\gamma$, rendering the modified definition of the TMD PDF free from gauge artifacts. On the other hand, adopting instead a $q^-$-dependent pole prescription (Mandelstam [@Man82], Leibbrandt [@Lei83]), no anomalous-dimension anomaly appears and the soft factor reduces benignly to unity [@CS09]. Inclusion of Pauli Spin Interactions {#sec:Pauli} ==================================== The conventional way to restore the gauge invariance of hadronic matrix elements is to use gauge links as those defined in Eqs. (\[eq:lightlike-link\]) and (\[eq:transverse-link\]). However, this is only the minimal way to achieve this goal; it ignores the direct spin interactions because the gauge potential $A_{\mu}^{a}$ is spin-blind. To accommodate the direct interaction of spinning particles with the gauge field, one has to take into account the so-called Pauli term $\sim F^{\mu\nu}S_{\mu\nu}$, where $S_{\mu\nu}=\frac{1}{4}[\gamma_{\mu}, \gamma_{\nu}]$ is the spin operator. 
Following this generalized conception of gauge invariance, we promote the definition of the TMD PDF to [@CKS10] $$\begin{aligned} f_{i/h}^{\Gamma}(x, \mathbf{k}_{\perp}) = \frac{1}{2} {\rm Tr} \int\! dk^- \int \frac{d^4 \xi}{(2\pi)^4} e^{- i k \cdot \xi} \langle h |\bar \psi_i (\xi) [[ \xi^{-}, {\bm \xi}_{\perp};\infty^{-}, {\bm \xi}_{\perp} ]]^\dagger [[ \infty^-, {\bm \xi}_{\perp};\infty^-, {\bm \infty}_{\perp} ]]^\dagger \nonumber \\ \times \Gamma {[[} \infty^-, {\bm \infty}_{\perp};\infty^{-}, \mathbf{0}_{\perp} {]]} [[\infty^-, \mathbf{0}_{\perp}; 0^-, \mathbf{0}_{\perp}]] \psi_i (0) | h \rangle \cdot R \ , \label{eq:TMD-PDF-Pauli}\end{aligned}$$ where $\Gamma$ denotes one or more $\gamma$ matrices in correspondence with the particular distribution in question, and the state $|h\rangle$ stands for the appropriate target. In the unpolarized case we have $|h\rangle=|h(P)\rangle$, with $P$ being the momentum of the initial hadron, whereas for a (transversely) polarized target the state is $|h\rangle=|h(P), S_{\perp}\rangle$. The enhanced lightlike and transverse gauge links (denoted by double square brackets) contain the Pauli term and are given, respectively, by the following expressions: $$\begin{aligned} [[\infty^-, \mathbf{0}_{\perp}; 0^-, \mathbf{0}_\perp]] = \mathcal{P} \exp \left[ - i g \!\!\! \int_{0}^{\infty} \! d\sigma \ u_{\mu} A_{a}^{\mu}(u \sigma)t^a - i g \!\!\! \int_{0}^{\infty} \! d\sigma \ S_{\mu\nu} F_{a}^{\mu\nu}(u \sigma)t^a \right] \ ,\end{aligned}$$ $$\begin{aligned} [[\infty^-, {\bm \infty}_{\perp}; \infty^-, \mathbf{0}_\perp]] = \mathcal{P} \exp \left[ - i g \!\!\! \int_{0}^{\infty} \! d\tau \mathbf{l}_{\perp} \! \cdot \! \mathbf{A}_{\perp}^{a}(\mathbf{l}\tau)t^a - i g \!\!\! \int_{0}^{\infty} \!\!\! d\tau S_{\mu\nu}F_{a}^{\mu\nu}(\mathbf{l}\tau)t^a \right] \ .\end{aligned}$$ Gauge links with Pauli terms up to $\mathcal{O}(g^2)$ {#subsec:virtual-Pauli} ----------------------------------------------------- Adopting this reasoning, we have to calculate in the $A^+=0$ gauge the expression $$[[ \infty^-, {\bm \infty}_\perp;\infty^-, \mathbf{0}_\perp ]] \cdot [[\infty^-, \mathbf{0}_\perp;0^-, \mathbf{0}_\perp ]] = 1 - i g \left( \mathcal{U}_{1} + \mathcal{U}_{2} + \mathcal{U}_{3} \right) - g^2 \left( \mathcal{U}_{4} + \mathcal{U}_{5} + \ldots \mathcal{U}_{10} \right) \label{eq:gauge-links-product}$$ with $ F_{a}^{\mu\nu} (\infty^-, 0^+, {\bm \xi}_\perp) = 0, ~~~ \psi_i(\xi) = e^{[- ig \int\! d\eta \ \bar \psi \hat {\cal A} \psi ]} \psi_i^{\rm free} (\xi) $ and an analogous expansion for the transverse gauge links (see Table \[tab:link-terms\]), whereas the contributing diagrams are displayed in Fig. \[fig:pauli-virt-dia\]. ------------------------------------------------------------------------------------------------------------------------------------- Symbols                 Expressions Figure 2     Value     -------------------- ------------------------------------------------------------------------------------- ---------- --------------- $\mathcal{U}_{1}$ $ \int_0^\infty \! d\tau \ \mathbf{l} (a) $\neq 0$ \cdot \mathbf{{\cal A}}(\mathbf{l} \tau)$ $\mathcal{U}_{2}$ $ \int_0^\infty \! d\tau \ S \cdot {\cal F}(u \tau)$ (b) $\neq 0$ $\mathcal{U}_{3}$ $ \int_0^\infty \! d\tau \ S \cdot {\cal F}(\mathbf{l} \tau)$ — $0$ $\mathcal{U}_{4}$ $\int_0^\infty \! d\tau \int_0^{\tau} d \sigma — $0$ \ (\mathbf{l} \cdot \mathbf{{\cal A}}(\mathbf{l} \tau)) \ (\mathbf{l} \cdot \mathbf{{\cal A}}(\mathbf{l} \sigma)) $ $\mathcal{U}_{5}$ $\int_0^\infty \! 
d\tau \int_0^{\tau} d \sigma — $0$ \ (\mathbf{l} \cdot \mathbf{{\cal A}}(\mathbf{l} \tau)) \ (S\cdot {\cal F}(\mathbf{l} \sigma)) $ $\mathcal{U}_{6}$ $\int_0^\infty \! d\tau \int_0^{\tau} d \sigma — $0$ \ (S\cdot {\cal F}(\mathbf{l} \tau))\ (\mathbf{l} \cdot \mathbf{{\cal A}}(\mathbf{l} \sigma)) $ $\mathcal{U}_{7}$ $ \int_0^\infty \! d\tau \int_0^{\tau} d \sigma (c) $0$ \ (S\cdot {\cal F}(u \tau))\ (S \cdot {\cal F}(u \sigma)) $ $\mathcal{U}_{8}$ $\int_0^\infty \! d\tau \int_0^{\tau} d \sigma — $0$ \ (S\cdot {\cal F}(\mathbf{l} \tau))\ (S \cdot {\cal F}(\mathbf{l} \sigma)) $ $\mathcal{U}_{9}$ $\int_0^\infty \! d\tau \int_0^{\infty} d \sigma (d) $\neq 0$ \ (\mathbf{l} \cdot \mathbf{{\cal A}}(\mathbf{l} \tau))\ (S\cdot {\cal F} (u \sigma) )$ $\mathcal{U}_{10}$ $\int_0^\infty \! d\tau \int_0^{\infty} d \sigma — $0$ \ (S\cdot {\cal F}(\mathbf{l} \tau))\ (S \cdot {\cal F}(u \sigma)) $ ------------------------------------------------------------------------------------------------------------------------------------- : Individual virtual-gluon contributions appearing in the evaluation of Eq. (\[eq:gauge-links-product\]) up to $\mathcal{O}(g^2)$.[]{data-label="tab:link-terms"} Let us quote here some important features of the presented theoretical framework referring for details to our recent work in Ref.[@CKS10]: (i) The Pauli term is not reparameterization invariant — unlike the usual Dirac term. Therefore, we have to use the dimensionful vectors $ n^+_\mu \to u^*_\mu = p^- n^+_\mu \ , \quad n^-_\mu \rightarrow u_\mu = p^+ n^-_\mu \ , \quad \mathbf{l}_{\perp} \rightarrow p^+ \mathbf{l}_{\perp} $. (ii) The Pauli spin-interaction terms do not completely vanish along $n^-$ in the $A^+=0$ gauge, whereas terms containing ${\cal F} (\mathbf{l} \tau)$ (or ${\cal F} (\mathbf{l} \sigma)$) cancel out in the product of the gauge links and $ F_{a}^{\mu\nu} (\infty^-, 0^+, {\bm \xi}_\perp) = 0 $. (iii) To the $g^2$-order level, the Pauli term reads $$S \cdot {\cal F} \equiv S_{\mu\nu} {\cal F}^{\mu \nu} = 2 S_{+-} {\cal F}^{+-} + 2 S_{+i} {\cal F}^{+i} + 2 S_{-i} {\cal F}^{-i} + S_{ij} {\cal F}^{ij}$$ and has the following non-zero components: $$\begin{aligned} {\cal F}^{+-} & = & \partial^+ {\cal A}^- \ , \ {\cal F}^{+i} = \partial^+ {\cal A}^i \ , \\ \ {\cal F}^{-i} & = & \partial^- {\cal A}^i - \partial^i {\cal A}^- \ , \ {\cal F}^{ij} = \partial^i {\cal A}^j - \partial^j {\cal A}^i \ .\end{aligned}$$ (iv) The diagrams (a)–(d) in Fig \[fig:pauli-virt-dia\] represent virtual gluon corrections and contain UV and rapidity divergences that give rise to the anomalous dimension of the TMD PDF. In contrast, the diagrams (e)–(g), which describe real-gluon exchanges across the cut (vertical dashed line), contribute only finite terms. ![One-loop gluon virtual corrections to $f_{q/q}$ in the $A^+=0$ gauge. Graphs (a), (b), (c), and (d) describe virtual gluon corrections; graphs (e), (f), and (g) represent real-gluon exchanges across the cut (vertical dashed line). The double lines decorated with a ring represent enhanced gauge links containing the Pauli term.[]{data-label="fig:pauli-virt-dia"}](pauli_dia_160410.eps){width="40.00000%"} From Fig. \[fig:pauli-virt-dia\], we see that the gauge-link correlator contains contributions of two different types related to selfenergy- and crosstalk-type diagrams. 
To discuss the structure of the correlator in a compact way, it is useful to use the following symbolic abbreviations:\ $\displaystyle\mathbb{Q}$: Gauge self-field in the Heisenberg quark operator $ \psi_i(\xi) = e^{- ig[ \int\! d\eta \ \bar \psi \hat {\cal A} \psi ]} \ \psi_i^{\rm free} (\xi) $\ $\displaystyle \mathbf{l} \cdot \mathbf{{\cal A}}(\mathbf{l} \tau) \equiv \mathbb{A^\perp}$: Standard transverse gauge potential\ $\displaystyle S \cdot {\cal F}(\mathbf{l} \tau) \equiv \mathbb{F}$: Tensor (Pauli) term Then we obtain at $\mathcal{O}(g^2)$ the following results (consult Fig. \[fig:pauli-virt-dia\] in conjunction with Table \[tab:link-terms\]): **Selfenergy-type contributions** - $\displaystyle\mathbb{A}^\perp \displaystyle\mathbb{A}^\perp$: $ \langle \mathcal{U}_{4} \rangle = 0 $       not shown - $\mathbb{F^-} \mathbb{F^-}$ : $ \langle \mathcal{U}_{7} \rangle = 0 $       diagram (c) in Fig. \[fig:pauli-virt-dia\] - $\mathbb{F^\perp} \mathbb{F^\perp}$ : $ \langle \mathcal{U}_{8} \rangle = 0 $       not shown **Crosstalk-type contributions** - $ \displaystyle\mathbb{Q} \mathbb{A^\perp} $: $ \langle \mathcal{U}_{1} \rangle^{\rm UV} = - \alpha_s \, C_{\rm F}\, \frac{1}{\varepsilon}\, i \, C_\infty $ with $C_\infty = \{ 0 (\rm adv); -1 (ret); -\frac{1}{2} (PV) \}$ — diagram (a). This term cancels the pole-prescription-dependent term in the UV-divergent part of the fermion selfenergy $\displaystyle\mathbb{Q} \displaystyle\mathbb{Q}$. - $ \displaystyle\mathbb{Q} \mathbb{F^-} $: $ \langle \mathcal{U}_{2} \rangle $ with $ (\displaystyle\mathbb{Q} \mathbb{F^-})^- = \langle \mathcal{U}_{2}^- \rangle $ and $ (\displaystyle\mathbb{Q} \mathbb{F^-})^\perp = \langle \mathcal{U}_{2}^\perp \rangle $ — diagram (b). Accordingly, for the leading twist-two TMD PDF, we find for the semi-inclusive DIS (SIDIS) $$\begin{aligned} \Gamma_{\rm tw-2} \langle \mathcal{U}_{2}^- \rangle + \langle \mathcal{U}_{2}^- \rangle^\dagger \Gamma_{\rm tw-2} & = & \frac{i}{2} C_{\rm F} \Gamma_{\rm tw-2} \ , \\ \Gamma_{\rm tw-2} \langle \mathcal{U}_{2}^\perp \rangle + \langle \mathcal{U}_{2}^\perp \rangle^\dagger \Gamma_{\rm tw-2} & = & -\frac{i}{4} C_{\rm F} \ \Gamma_{\rm tw-2} \, . \label{eq:U-2}\end{aligned}$$ These two results combine to produce a constant phase (unrelated to that found in [@BJY03]) $$\delta_{\rm tw-2} = \alpha_s C_{\rm F} \pi \label{eq:phase}$$ which is also valid for the twist-three TMD PDF, i.e., $ \delta_{\rm tw-3} = \alpha_s C_{\rm F} \pi $, but *flips sign* for the Drell-Yan (DY) process because it depends on the direction of the longitudinal gauge link. Hence, our analysis [@CKS10] predicts the important relation $$\delta_{\rm SIDIS} = - \delta_{\rm DY} \, . \label{eq:DY-phase}$$ - $ \displaystyle\mathbb{Q} \mathbb{F^\perp} $: $ \langle \mathcal{U}_{3} \rangle = 0 $       not shown - $ \mathbb{A^\perp} \mathbb{F^\perp} $: $ \langle \mathcal{U}_{6} \rangle = - \langle \mathcal{U}_{5} \rangle $, hence mutually canceling - $ \mathbb{A^\perp} \mathbb{F^-} $: $ \langle \mathcal{U}_{9} \rangle = \langle \mathcal{U}_{9} \rangle^\dagger $ — diagram (d) in Fig. \[fig:pauli-virt-dia\] — (“gluon mass” $\lambda^2$ drops out at the end): $$\langle \mathcal{U}_{9} \rangle = - \frac{1}{8\pi} \ C_{\rm F} [\gamma^+, \gamma^-] \Gamma (\epsilon) \left( 4\pi \frac{\mu^2}{\lambda^2} \right)^\epsilon \ . \label{eq:U-9}$$ This nontrivial Dirac structure entails $$\begin{aligned} & & \Gamma_{\rm unpol.} = \gamma^+ ~~~~~~~\! 
: \Gamma_{\rm unpol.} [\gamma^+, \gamma^-] = - [\gamma^+, \gamma^-] \Gamma_{\rm unpol.} \ , \\ & & \Gamma_{\rm helic.} \; = \gamma^+ \gamma^5 ~~~ : \, \Gamma_{\rm helic.} [\gamma^+, \gamma^-] ~\! = - [\gamma^+, \gamma^-]\Gamma_{\rm helic.} \ , \\ & & \Gamma_{\rm trans.} \; = i \sigma^{i+} \gamma^5 : \, \Gamma_{\rm trans.} [\gamma^+, \gamma^-] ~\! = - [\gamma^+, \gamma^-] \Gamma_{\rm trans} \, ,\end{aligned}$$ where obvious acronyms have been used. Taking into account the mirror diagrams (not shown in Fig. \[fig:pauli-virt-dia\]), the twist-two terms mutually cancel by virtue of the relation $$[\gamma^+, \gamma^-] \Gamma_{\rm tw-2} = - \Gamma_{\rm tw-2} [\gamma^+, \gamma^-] = 2 \Gamma_{\rm tw-2} \ ,$$ which permits a probabilistic interpretation of the twist-two TMD PDF as a density on account of $ \mathbb{A^\perp} \mathbb{F^-}\rightarrow 0 $. On the other hand, the twist-three TMD PDF gets a non-vanishing contribution to its anomalous dimension as one sees from $$\Gamma_{\rm tw-3} \langle \mathcal{U}_{9} \rangle + \langle \mathcal{U}_{9} \rangle^{\dagger} \Gamma_{\rm tw-3} = -\frac{C_{\rm F}}{4\pi} [\gamma^+, \gamma^-] \Gamma (\epsilon) \left(\! 4\pi \frac{\mu^2}{\lambda^2}\! \right)^\epsilon \ .$$ - $ \mathbb{F^\perp} \mathbb{F^-} $: $ \langle \mathcal{U}_{10} \rangle = 0 $ without assuming any particular form of the gauge field at light cone $\infty$. Real-Gluon Contributions at $\mathcal{O}(g^2)$ {#subsec:real-Pauli} ---------------------------------------------- Besides the virtual gluon corrections, there are also real gluon exchanges that contribute finite contributions to the TMD PDF. The main difference from the previously considered case is that now the discontinuity goes across the gluon propagator that has to be replaced by the cut one. Moreover, the Dirac structures, marked above by the symbol $\Gamma$, are sandwiched between Dirac matrices stemming from Pauli terms standing on different sides of the cut. The real-gluon contributions are specified in Table 3. ----------------------------------------------------------------------------------------------------------------------- Symbols                 Expressions Figure \[fig:pauli-virt-dia\] -------------------- ------------------------------------------------------------------ ------------------------------- $\mathcal{U}_{11}$ $ \int_0^\infty \! d\tau \int_0^{\infty} d \sigma (e) \ (S \cdot {\cal F}(u \tau))\ \Gamma \ (S \cdot {\cal F}(u \sigma + \xi^-; {\bm \xi}_\perp)) $ $\mathcal{U}_{12}$ $\int_0^\infty \! d\tau \int_0^{\infty} d \sigma (f) \ (\mathbf{l} \cdot {\cal A}(\mathbf{l} \tau))\ \Gamma \ (S \cdot {\cal F}(u \sigma + \xi^-; {\bm \xi}_\perp)) $ $\mathcal{U}_{13}$ $ \int_0^{\infty} d \sigma (g) \Gamma \ (S\cdot {\cal F} (u \sigma + \xi^-; {\bm \xi}_\perp) ) $ ----------------------------------------------------------------------------------------------------------------------- : Individual real-gluon contributions to $\mathcal{O}(g^2)$ corresponding to the diagrams $(e), (f), (g)$ in Fig. 
\[fig:pauli-virt-dia\].[]{data-label="tab:link-terms_real"} Using the same symbolic notation as in the previous subsection, we briefly remark that - $ \mathbb{F^-} \mathbb{F^-} $: $ \langle \mathcal{U}_{11} \rangle \rightarrow 0$ (at least power-suppressed $\sim p^-$) - $ \mathbb{A^\perp} \mathbb{F^-} $: $ \langle \mathcal{U}_{12} \rangle + \langle \mathcal{U}_{12} \rangle^\dagger \sim \Gamma [\gamma^+, \gamma^-] + [\gamma^+, \gamma^-] \Gamma = 0 $ - $ \displaystyle\mathbb{Q} \mathbb{F^-} $: $ \langle \mathcal{U}_{13}^- \rangle + \langle \mathcal{U}_{13}^- \rangle^\dag $ and $ \langle \mathcal{U}_{13}^\perp \rangle + \langle \mathcal{U}_{13}^\perp \rangle^\dag $ mutually cancel up to a power-suppressed term. Highlights and Conclusions {#sec:concl} ========================== We argued that the dimensional regularization of overlapping UV and rapidity divergences in TMD PDFs is not sufficient to render the TMD PDF finite — one needs renormalization [@CS07; @CS08]. To remedy this deficiency, a soft factor [@CH00] along a jackknifed contour off the light cone was introduced into the definition of the TMD PDF [@CS07] whose anomalous dimension cancels in leading loop order the cusp anomalous dimension entailed by this overlapping divergence (with a full-fledged discussion being given in [@CS08]). The modified TMD PDF reproduces the standard integrated PDF and is controlled by an evolution equation with the same anomalous dimension as one finds in covariant gauges with no dependence on the adopted pole prescription for the gluon propagator — this would be impossible without the soft renormalization factor (see [@CS08] and for a more dedicated discussion [@CS09Yer]). In particular, using the $A^+=0$ gauge in conjunction with the Mandelstam-Leibbrandt pole prescription [@Man82; @Lei83], no anomalous-dimension defect appears and thus the soft factor becomes trivial. An important finding of this approach is that the anomalous dimension of the unpolarized TMD PDF for SIDIS and the DY process is the same, i.e., $\gamma_{f_{q/q}}^{\rm SIDIS}=\gamma_{f_{q/q}}^{\rm DY}$, albeit the sign of the $\epsilon$ term in the gluon propagator $ \frac{1}{q^+ + i \epsilon} $ is different for these two processes — irrespective of the boundary condition applied. Quite recently, Collins discussed alternative ways to redefine the TMD PDFs in such a way as to avoid rapidity divergences [@Col02]. We also presented a new scheme for gauge-invariant TMD PDFs which includes the direct interaction of spinning particles with the gauge field by means of the Pauli term in the longitudinal and transverse gauge links. In some sense, the Pauli spin interaction is the abstract analogue of a Stern-Gerlach apparatus — sort of — and gives rise through the transverse gauge link to a constant phase $\delta=\alpha_s C_{\rm F} \pi$, which is the same for twist-two and twist-three TMD PDFs, but flips sign when the direction of the gauge link is reversed — thus breaking universality. As a result, one finds $\delta_{\rm DY} = -\delta_{\rm SIDIS}$. To facilitate calculations, we developed in Ref. [@CKS10] Feynman rules for enhanced gauge links — longitudinal and transverse — which supplement those derived before for the standard gauge links by Collins and Soper [@CS81]. Because the Pauli term contributes to the anomalous dimension of the twist-three TMD PDF, the evolution of such quantities is more delicate and may require the modification of the renormalization factor to preserve its density interpretation. 
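For orientation, and as an illustrative estimate that is not part of the original analysis (the value of $\alpha_s$ below is an assumption chosen for illustration), the size of this constant process-dependent phase is
$$\delta_{\rm SIDIS} = \alpha_s C_{\rm F} \pi \approx 0.3 \times \tfrac{4}{3} \times \pi \approx 1.3 \ {\rm rad} \ , \qquad \delta_{\rm DY} = - \delta_{\rm SIDIS} \ .$$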
Bottom line: Our results — most significant amongst them the appearance of a non-universal phase — may stimulate both theoretical and experimental activities. On the other hand, T-even and T-odd TMD PDFs may become “measurable” on the lattice, so that it seems possible that non-trivial Wilson lines, as those we discussed in this presentation, may be revealed in the future. We are grateful to Anatoly Efremov for useful discussions. We acknowledge financial support from the Heisenberg–Landau Program, Grant 2010, the DAAD, and the INFN. N.G.S. is thankful to the DAAD for a travel grant and to the Organizers for the hospitality and financial support. [99]{} J.C. Collins and D.E. Soper, *Back-to-back jets in QCD*, *Nucl. Phys.* [**B193**]{} (1981) 381; *Nucl. Phys.* [**B213**]{} (983) 545 (E). I.O. Cherednikov and N.G. Stefanis, *Wilson lines and transverse-momentum dependent parton distribution functions: A renormalization-group analysis*, *Nucl. Phys.* [**B802**]{} (2008) 146 \[[arXiv:0802.2821 \[hep-ph\]]{}\]. N.G. Stefanis, *Gauge Invariant Quark Two Point Green’s Function Through Connector Insertion To $\mathcal{O}(\alpha_s)$*, *Nuovo Cim.* [**A83**]{} (1984) 205. S. Mandelstam, *Feynman rules for electromagnetic and Yang-Mills fields from the gauge independent field theoretic formalism*, *Phys. Rev.* [**175**]{} (1968) 1580. A.V. Belitsky, X. Ji and F. Yuan, *Final state interactions and gauge invariant parton distributions*, *Nucl. Phys.* [**B656**]{} (2003) 165 \[[arXiv:hep-ph/0208038]{}\]. D. Boer, P.J. Mulders and F. Pijlman, *Universality of T-odd effects in single spin and azimuthal asymmetries*, *Nucl. Phys.* [**B667**]{} (2003) 201 \[[arXiv:hep-ph/0303034]{}\]. I.O. Cherednikov and N.G. Stefanis, *Renormalization, Wilson lines, and transverse-momentum dependent parton distribution functions*, *Phys. Rev.* [**D77**]{} (2008) 094001 \[[arXiv:hep-ph/0710.1955]{}\]. I.O. Cherednikov and N.G. Stefanis, *Renormalization-group properties of transverse-momentum dependent parton distribution functions in the light-cone gauge with the Mandelstam-Leibbrandt prescription*, *Phys. Rev.* [**D80**]{} (2009) 054008 \[[arXiv:hep-ph/0904.2727]{}\]. N.G. Stefanis and I.O. Cherednikov, *Renormalization-group anatomy of transverse-momentum dependent parton distribution functions in QCD*, *Mod. Phys. Lett.* [**A24**]{} (2009) 2913 \[[arXiv:hep-ph/0910.3108]{}\]. G.P. Korchemsky and A.V. Radyushkin, *Renormalization of the Wilson Loops Beyond the Leading Order*, *Nucl. Phys.* [**B283**]{} (1987) 342. J.C. Collins and F. Hautmann, *Infrared divergences and non-lightlike eikonal lines in Sudakov processes*, *Phys. Lett.* [**B472**]{} (2000) 129 \[[hep-ph/9908467]{}\]. S. Mandelstam, *Light Cone Superspace And The Ultraviolet Finiteness Of The N=4 Model*, *Nucl. Phys.* [**B213**]{} (1983) 149. G. Leibbrandt, *The Light Cone Gauge In Yang-Mills Theory*, *Phys. Rev.* [**D29**]{} (1984) 1699. I.O. Cherednikov, A.I. Karanikas and N.G. Stefanis, *Wilson lines in transverse-momentum dependent parton distribution functions with spin degrees of freedom*, *Nucl. Phys.* [**B840**]{} (2010) 379 \[[arXiv:1004.3697 \[hep-ph\]]{}\]. I.O. Cherednikov and N.G. Stefanis, *Understanding the evolution of transverse-momentum dependent parton densities*, Talk given at Workshop on Transverse Partonic Structure of Hadrons (TPSH 2009), Yerevan, Armenia, 21-26 Jun 2009 \[[arXiv:0911.1031 \[hep-ph\]]{}\]. J. 
Collins, *Rapidity divergences and valid definitions of parton densities*, in proceedings of *LIGHT CONE 2008 Relativistic Nuclear and Particle Physics*, July 7-11, 2008, Mulhouse, France, \[[arXiv:0808.2665 \[hep-ph\]]{}\]. [^1]: Also at Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna, Russia. [^2]: Also at ITPM, Moscow State University, 119899 Moscow, Russia.
--- abstract: 'We quantum-simulated particle-antiparticle pair production with a bosonic quantum gas in an optical lattice by emulating the requisite 1d Dirac equation and uniform electric field. We emulated field strengths far in excess of Sauter-Schwinger’s limit for pair production in quantum electrodynamics, and therefore readily produced particles from “the Dirac vacuum” in quantitative agreement with theory. The observed process is equivalently described by Landau-Zener tunneling familiar in the atomic physics context.' address: 'Joint Quantum Institute, National Institute of Standards and Technology and University of Maryland, Gaithersburg, MD 20899, USA' author: - 'A. M. Piñeiro, D. Genkina, Mingwu Lu and I. B. Spielman' bibliography: - 'SPP.bib' title: 'Exceeding the Sauter-Schwinger limit of pair production with a quantum gas' --- The creation of particle-antiparticle pairs from vacuum by a large electric field is a phenomenon arising in quantum electrodynamics (QED) [@Dirac1928; @Sauter1931; @Heisenberg1934; @Heisenberg1936], with threshold electric field strength ${E_{\rm c} \approx 10^{18}\ {\rm V}/ {\rm m}}$ first computed by J. Schwinger [@PhysRev.82.664]. Electric fields on this scale are not experimentally accessible; even the largest laboratory fields produced by ultrashort laser pulses [@Bulanov2010] fall short, making direct observation of pair production out of reach of current experiments. To experimentally probe this limit with a bosonic quantum gas, we engineered the relativistic 1d Dirac Hamiltonian with $mc^{2}$ reduced by $17$ orders of magnitude, allowing laboratory scale forces to greatly exceed Sauter-Schwinger’s limit. We readily measured pair production and demonstrated that this high-energy phenomenon is equivalently described by Landau-Zener tunneling [@Landau; @Zener696]. In the Dirac vacuum, the enormous electric field required is $$\begin{aligned} {E_{\rm c} = m_{\rm e}^2c^3/\hbar q_{\rm e}} ,\end{aligned}$$ determined by the particle/antiparticle mass $m_{\rm e}$ and charge $q_{\rm e}$. For an applied electric field $E$, the pair production rate is governed only by the dimensionless ratio ${E/E_{\rm c}}$, allowing our physical system with very different characteristic scales to be used to realize the underlying phenomenon. Our system was well described by the 1d Dirac Hamiltonian [@Gerritsma_2010; @Weitz_2010; @Weitz_2011; @LeBlanc_2013] $$\begin{aligned} \hat{H}_{\rm D} = c\hat{p} \sigma_z + m c^2 \sigma_x,\end{aligned}$$ where $\hat{p}$ is the momentum operator and ${\sigma_{x,y,z}}$ are the Pauli operators. Starting with $m=0$, the 1d Dirac Hamiltonian describes particles with velocities $\pm c$, that are then coupled with strength $mc^2$ to give the familiar ${\mathcal{E} (p) = \pm (p^{2} c^{2} + m^{2} c^{4})^{1/2}}$ dispersion relation for relativistic particles and antiparticles. At zero momentum, this dispersion has a gap equal to twice the rest mass, at which point the curvature is inversely proportional to the rest mass. The Dirac vacuum consists of occupied states in the lower (antiparticle) band of the Dirac dispersion and vacant states in the upper (particle) band. Vacancies in the antiparticle band represent antiparticles and occupied states in the particle band represent particles. We probed Schwinger’s limit for pair production by measuring the probability and rate of ‘pairs’ out of this vacuum as a function of rest mass and applied force. 
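The statements above about the spectrum of $\hat{H}_{\rm D}$ are easy to check numerically. The following minimal sketch (ours, in natural units with an illustrative mass, not the experimental parameters) diagonalises the $2\times2$ Hamiltonian at a few momenta and confirms the quoted dispersion and the gap of twice the rest energy at $p=0$:

```python
import numpy as np

# Minimal check of the 1d Dirac spectrum quoted above: diagonalising
# H_D(p) = c p sigma_z + m c^2 sigma_x reproduces E(p) = +/- sqrt(p^2 c^2 + m^2 c^4),
# with a gap of 2 m c^2 at p = 0.  Natural units and an illustrative mass.
c, m = 1.0, 0.5
sigma_z = np.diag([1.0, -1.0])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

for p in np.linspace(-2.0, 2.0, 5):
    H = c * p * sigma_z + m * c**2 * sigma_x
    numeric = np.linalg.eigvalsh(H)                      # ascending eigenvalues
    analytic = np.array([-1.0, 1.0]) * np.hypot(p * c, m * c**2)
    assert np.allclose(numeric, analytic)

print("gap at p = 0:", 2 * m * c**2)                     # twice the rest energy
```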
We emulated $\hat{H}_{\rm D}$ with the lowest two bands of a 1d optical lattice, giving the pair of relativistic modes at the edge of the Brillouin zone shown in figure 1. The rest mass ${m^{*}c^{*2} = V/4}$ is set by the peak-to-valley lattice depth $V$ generated by our ${\lambda_{\rm L} = 1064\ {\rm nm}}$ laser light. The speed of light is replaced by the greatly reduced speed of light ${{c^{*} = \hbar k_{\rm{L}}/m_{\rm Rb} \approx 4.3\ {\rm mm/s}}}$ equal to the single photon recoil velocity. The single photon recoil momentum ${\hbar k_{\rm{L}} = 2\pi \hbar/\lambda_{\rm{L}}}$ specifies the recoil energy ${E_{\rm{L}} = \hbar^2 k_{\rm{L}}^{2}/2m_{\rm Rb} = h \times 2.02\ {\rm kHz}}$. These recoil units set the scale for all physical quantities in our analog system. The effective Compton wavelength ${\lambda_{\rm{C}} = h/m^{*}c^{*} = 8 \lambda_{\rm L} E_{\rm L}/V}$ is about $10^{6}$ times larger than that of an electron. Our simulations consisted of ultracold bosons first prepared in the antiparticle band, then subjected to a constant force, ${F_{\rm e} = q_{\rm e} E = \hbar dq/dt}$, modeling an electric field. During the application of this force atoms may transfer from the antiparticle band to the particle band, emulating the pair production phenomenon. We measured the fraction of atoms transferred to the particle band as a function of the effective rest mass and the applied force. This transfer between bands is described by Landau-Zener tunneling. In this manuscript, we begin by using a Bose-Einstein condensate (BEC) to elucidate the connection between Landau-Zener tunneling and pair production; we then model the Dirac vacuum by uniformly filling the Brillouin zone of the antiparticle band and observe the predicted rate of pair production. Our experiments began with nearly pure $^{87}$Rb Bose-Einstein condensates (BECs) in the ${\vert F = 1, m_{F} = -1 \big>}$ internal state, in a crossed optical dipole trap [@Lin2009] formed at the intersection of two laser beams traveling along ${\bf e}_{x}$ and ${\bf e}_{y}$, giving trap frequencies ${(f_x, f_y, f_z) = (44, 45, 94) \ {\rm Hz}}$. The low density of our ${N\! \approx10^3}$ atom BECs limited unwanted scattering processes in regimes of dynamical instability [@PhysRevLett.96.020406]. The optical lattice potential was formed by a retro-reflected ${\lambda = 1064\ {\rm nm}}$ laser beam with a waist of ${\approx 150\ \mu m}$. Emulated electric forces were applied by spatially displacing the optical dipole beam providing longitudinal confinement (by frequency shifting an acousto-optic modulator). This effectively added a linear contribution to the existing harmonic potential for displacements small compared to the beam waist. We loaded BECs into the optical lattice by linearly increasing the lattice laser intensity from zero to the final intensity in $300\ {\rm ms}$, a time-scale adiabatic with respect to all energy scales. Once the final lattice depth—determined by the laser intensity$\!$$\!^{~\footnote{The lattice was calibrated by using Kapitza-Dirac diffraction of the BEC off a pulsed lattice potential.~\cite{Kapitza-Dirac-diffraction}}}$—was achieved, we applied a force for a time $t_{F}$. Immediately thereafter, the lattice was linearly ramped off in $1\ {\rm ms}$, mapping crystal momentum states to free-particle states [@PhysRevLett.74.1542; @PhysRevLett.87.160405; @PhysRevLett.94.080403; @RevModPhys.80.885].
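For orientation, the analogue scales quoted above follow directly from the laser wavelength and the $^{87}$Rb mass. The short sketch below (an illustration we add here, with an illustrative lattice depth $V$ of a few recoil energies) reproduces $c^{*}\approx 4.3\ {\rm mm/s}$, $E_{\rm L}\approx h\times 2.02\ {\rm kHz}$ and an effective Compton wavelength roughly $10^{6}$ times the electronic value:

```python
import numpy as np

# Back-of-the-envelope check of the analogue scales quoted above, in SI units;
# the lattice depth V is illustrative (a few recoil energies).
hbar = 1.054571817e-34                    # J s
h = 2.0 * np.pi * hbar
m_Rb = 86.909 * 1.66053907e-27            # 87Rb mass in kg
lam_L = 1064e-9                           # lattice laser wavelength in m

k_L = 2.0 * np.pi / lam_L                 # recoil wavevector
c_star = hbar * k_L / m_Rb                # effective speed of light
E_L = (hbar * k_L) ** 2 / (2.0 * m_Rb)    # recoil energy

V = 3.4 * E_L                             # illustrative lattice depth
lam_C = 8.0 * lam_L * E_L / V             # effective Compton wavelength

print(f"c*        = {c_star * 1e3:.2f} mm/s")      # ~4.3 mm/s
print(f"E_L / h   = {E_L / h / 1e3:.2f} kHz")      # ~2.02 kHz
print(f"lambda_C  = {lam_C * 1e6:.2f} um")         # ~10^6 times the electron value
```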
This process mapped atoms in the anti-particle band to free particle states with momentum between ${-k_{\rm L}}$ and ${k_{\rm L}}$, and mapped atoms in the particle band to states ${\pm k_{\rm L}}$ and ${\pm 2k_{\rm L}}$. The resulting momentum distribution was absorption imaged after a $15.7\ {\rm ms}$ time-of-flight (TOF). We began with an experiment that is natural in the cold atom setting but is unphysical in high energy physics: we applied an effective electric field and varied the rest mass. The result of this changing mass is schematically shown in figure 2(a) by an increasing gap at zero momentum. The decreasing probability of pair production with increasing mass anticipated by (1) is schematically illustrated by the filled or partially filled circles. Figure 2(b) shows the distribution of atoms at time $t_{F}$ selected so that the atoms traversed a full Brillouin zone, i.e., underwent a single Bloch oscillation [@PhysRevLett.76.4508]. Atoms in the top portion of the panel (blue tone) represent particles and atoms in the bottom portion of the panel (red tone) denote filled vacuum states. Figure 2(c) quantifies this effect in terms of the fractional population of a BEC transferred into the particle band, and as expected, the probability of pair production monotonically decreased with increasing effective rest mass. For small rest masses, the atoms almost completely populated the particle band, while for large mass the particle band was nearly empty. The solid curve in figure 2(c) plots the Landau-Zener diabatic transition probability [@Landau; @Zener696; @PhysRevA.23.3107; @Raizen_1998; @Weitz_2010] given by $$\begin{aligned} P_{\rm LZ} = e^{-2\pi\Gamma}, \textrm{with}\ \Gamma = \frac{a^{2}}{\hbar |\frac{d}{d t}\Delta E|}, \end{aligned}$$ describing the transit through a crossing with gap $a=2 m^*c^{*2}$ while the massless energy difference $E(p) = 2 c^* p$ changes as $p$ is swept at constant rate $dp/dt$. The rate ${\hbar dq/dt}$ sets the electric force ${F_{\rm e} = q_{\rm e} E}$. Remarkably, in terms of these parameters, the Landau-Zener coefficient is defined by ${F_{\rm e}/F_{\rm c} = 1/(2 \Gamma)}$ where ${F_{\rm c} = q_{\rm e} E_{\rm c}}$ is exactly Schwinger’s limit of pair production. This prediction is in near perfect agreement with data in figure 2(c). As suggested by the quadratic dependence on mass in (1), figure 3(a) plots the probability of pair production as a function of rest mass squared for 5 different forces, illustrating their similar behavior. The inset figure confirms the agreement with the Landau-Zener expression by plotting the rest mass required to achieve a ${50 \%}$ probability of pair production as a function of the field strength, and the solid curve shows good agreement with the Landau-Zener prediction. Figure 3(b) displays the same data (circles) now as a function of ${F_{\rm e}/F_{\rm c}}$ that collapses onto the predicted transition probability (solid curve). This collapse confirms that the physics of pair production exhibits universal behavior when the field is expressed in units of $F_{\rm c}$ as predicted by the Landau-Zener expression. The vertical dashed line marks the ratio ${F/F_{\rm c} = 1}$, Sauter-Schwinger’s limit for pair production; much of our data is in the high-field limit, which is extremely difficult to achieve in other physical contexts. Our data densely samples the critical threshold regime near $F_{\rm c}$ and spans the full gamut from vanishing pair production to nearly complete pair production.
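Combining $P_{\rm LZ}=e^{-2\pi\Gamma}$ with the stated identification $F_{\rm e}/F_{\rm c}=1/(2\Gamma)$ gives $P_{\rm LZ}=\exp(-\pi F_{\rm c}/F_{\rm e})$, which depends only on the ratio of the applied force to Schwinger's critical force. A few representative values (a sketch we add for illustration, not data from the experiment):

```python
import numpy as np

# Pair-production (band-transfer) probability implied by the Landau-Zener
# expression above together with the stated identification
# F_e / F_c = 1 / (2 Gamma):  P_LZ = exp(-2 pi Gamma) = exp(-pi F_c / F_e).
def p_pair(force_ratio):
    """Transfer probability for a given F_e / F_c."""
    return np.exp(-np.pi / force_ratio)

for ratio in (0.25, 0.5, 1.0, 2.0, 4.0, 16.0):
    print(f"F_e/F_c = {ratio:5.2f}  ->  P = {p_pair(ratio):.3f}")
# Far below the Sauter-Schwinger limit the probability is exponentially small;
# far above it the transfer approaches unity, as in figure 3(b).
```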
Together these data show the clear connection between the Landau-Zener tunneling of a single quantum state and the phenomenon of pair production. Still, the single occupied state defined by our BEC is far from the Dirac vacuum state. We therefore created an initial state mimicking the Dirac vacuum in which the negative energy states were occupied with equal probability and the positive energy states were vacant. We prepared this state by first adiabatically loading a BEC into a fairly deep optical lattice (${V \approx 3.4E_{\rm L}}$, making the lowest bands well separated) and applied a force for $1\ {\rm s}$, sufficient for about 300 Bloch oscillations to take place [@PhysRevLett.76.4508]. During this time, crystal momentum changing collisions [@PhysRevLett.96.020406] and dephasing processes uniformly filled the antiparticle band. We then adiabatically reduced the lattice depth in $300\ {\rm \mu s}$ giving $m^{*}c^{*2}/E_{\rm L} = 0.2(1)$, and proceeded as described in the experimental methods section. Here, pairs were produced at a constant rate for the entire time $t_{F}$ the force was applied, filling initially vacant states in the particle band while depleting initially occupied states in the antiparticle band. For these data ${F_{\rm e}/F_{\rm c} = (2.3(6), 2.7(5), 3.3(1), 4.1(3), 5.5(1), 8.2(6), 16.5(3))}$. Figure 4(a) shows the atomic momentum distribution following this procedure for small (dashed curve) and larger (solid curve) values of $t_F$. In these data the states in the antiparticle band fall in the pink region, and states in the particle band fall in the blue regions. A uniformly filled initial band would ideally produce a top-hat distribution in the pink region; we confirmed that the observed time-independent peaks in the distribution are consistent with a slight defocus of our imaging system [@Turner_thesis; @Turner_2004; @Putra_2014]. The evolution from the dashed curve to the solid curve clearly shows particles that have been created and then accelerated by $F_{\rm e}$. Figure 4(b) summarizes data of this type for variable $t_F$, making clear the appearance of particles already visible in figure 4(a), but also showing a similar reduction of occupation in the antiparticle band. The white dashed line marks the anticipated acceleration given by $F_{\rm e}$; the slightly reduced observed acceleration results from the harmonic confining potential. Finally, figure 4(c) plots the fractional occupation probability of the particle band, clearly showing the production of pairs at a constant rate. The line plots the rate, ${dq/dt = 2\hbar k_{\rm L}/t_{F}}$ and ${t_{F} = 3.4\ {\rm ms}}$, derived from the transition probability in (3). This then is the direct analog of the constant rate of pair production from vacuum by a uniform electric field. In this experiment, we used a cold atom system to quantitatively probe both the underlying mechanism and the overall phenomenon of pair production as initially conceived of in high-energy physics. Our analysis shows that pair production can be equivalently understood as a quantum tunneling process, and our data spans the full parameter regime from low applied field (negligible pair production) below the Sauter-Schwinger limit, to high field (maximum rate of pair production) far in excess of the Sauter-Schwinger limit.
High-intensity pulsed laser experiments [@PhysRevLett.101.200403; @PhysRevLett.102.080402; @Kirk_2009; @Bulanov2010; @Hill_2017] promise to measure the vacuum non-linearity and ultimately exceed Sauter-Schwinger’s limit in its original context. Current theory suggests that the actual threshold for pair production will be somewhat in excess of $E_{\rm c}$, resulting from the Coulomb attraction between electron-positron pairs. Future cold atom experiments with repulsively interacting fermions could probe this “excitonic” shift as well, allowing more quantitative comparison with higher order corrections to the threshold field strength. In addition, the pair-production phenomena occurring in strongly interacting field theories, even absent applied electric fields, may also be realized using mixtures of ultracold bosons and fermions [@Kasper2016; @Kasper2017]. This work was partially supported by the National Institute of Standards and Technology, Air Force Office of Scientific Research’s Quantum Matter Multidisciplinary University Research Initiative, and the National Science Foundation through the Physics Frontier Center at the Joint Quantum Institute. A.M.P. was supported by the U.S. National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1322106.\ References {#references .unnumbered} ==========
--- author: - Shaon Ghosh and Gijs Nelemans bibliography: - 'References.bib' title: Localizing gravitational wave sources with optical telescopes and combining electromagnetic and gravitational wave data --- Introduction ============ The second generation GW detectors are scheduled to come online from 2015 and should be operating at their design sensitivities by the end of 2019 [@prospectsOfSkyloc]. A worldwide network of these advanced detectors will enable us to conduct GW astronomy for the first time in history. Compact binary coalescence (CBC) systems are the most promising candidates for detection of gravitational waves by these detectors as their GW frequency sweeps through the sensitivity band towards merger. Binary neutron stars (BNS) and neutron star black hole binaries (NSBH) are also prospective progenitors of short duration gamma ray bursts (GRB). Current models of GRB mechanisms [@Nakar:2007yr; @Piran:1999kx] predict that the prompt emissions are launched just before the merger of the compact binary. This presents a unique opportunity for joint EM-GW astronomy. EM astronomy could complement GW astronomy in a number of ways. On one hand, gravitational wave sky-localization for the advanced ground based detectors could have large uncertainties, typically $\sim 10-100$ square degrees [@TriangulationSteve]. In conventional EM astronomy, by contrast, sky localization with arc-second accuracy is typically achieved. On the other hand, gravitational wave observations allow us to make an independent and direct measurement of masses and luminosity distances of these sources. Evidently, a tremendous amount of scientific merit lies in combining EM and GW observations. Implementation of EM information for detection of gravitational waves is already an active area of research. The LIGO Scientific Collaboration (LSC) and the Virgo Collaboration have been conducting searches for gravitational waves from progenitors of short duration gamma ray bursts (sGRB) by following up on GCN (Gamma-ray Coordinates Network) triggers [@s5grb; @s6grb]. In the context of GW parameter estimation, the use of EM source information gets an even richer dimension. A non-spinning compact binary system can be characterized by nine parameters: the luminosity distance $d_L$, the binary component masses $m_1$ and $m_2$, the sky position of the binary $(\alpha, \delta)$, the inclination angle $\iota$ of the axis of the binary w.r.t. the observer’s line of sight, the polarization angle $\psi$ of the binary orbit, the time of arrival of the signal $t_a$ and the coalescence phase of the binary $\delta_0$. EM information can fix (or constrain) a subset of these parameters, reducing the dimensionality of the posterior probability distribution [@RoeverMCMC; @sweta], which directly translates into a considerable reduction in the computational power required for the estimation of the source parameters. In this paper, we first show in Sec. \[sec:EMPriors\] how the EM information can improve the estimation of the parameters. We examine how different types of EM information can help in the estimation of the inclination angle of the binary and the luminosity distance to the source. Then in Sec. \[sec:GWskyLoc\] we explore how GW parameter estimation can in turn aid EM astronomy. Here we focus entirely on sky localization of gravitational wave sources. We discuss how to generate telescope pointings for optical telescopes from the error regions of the sky localization.
In that context, we acknowledge the efforts towards construction of a dedicated GW follow-up facility. Gravitational wave parameter estimation in the presence of electromagnetic information {#sec:EMPriors} ================================================================================== The most likely sources of electromagnetic information for gravitational wave parameter estimation are prompt emissions such as short duration gamma ray bursts, or associated afterglows and kilonovae. A fully coherent parameter estimation incorporates a Bayesian Markov chain Monte Carlo technique, which is computationally expensive. Thus it is extremely important that we choose a representative system. For this study, we injected inspiral gravitational wave signals from the TaylorT4 waveform family at order 3.5PN [@TaylorT4] into initial LIGO colored Gaussian noise. The source was chosen to be a neutron star-black hole binary (NSBH), with a $1.4 M_{\odot}$ neutron star and a $10 M_{\odot}$ black hole, inclined at an angle of $10^{\circ}$ w.r.t. the observer’s line of sight, located at a distance of $\sim 30$ Mpc from Earth.\ The choice of an NSBH system is motivated by the fact that a fair amount of research is ongoing on binary neutron star (BNS) systems while parameter estimation studies of NSBH systems are virtually non-existent. The choice of the inclination angle was motivated by the fact that even though there exist some uncertainties regarding the bounds on short GRB opening angles, recent studies [@grbOpeningAngle] estimated the median opening angles of short GRBs to be $\approx 10^{\circ}$. For the comparative analysis of how EM information can help GW parameter estimation we conducted three classes of studies. The first study constitutes the control of our experiment in the form of ‘blind’ parameter estimation where we have no EM information at our disposal. In the second class of analysis, we used the sky position of the source, an astrophysical instance of which would be a GRB detected and located by a gamma-ray observatory. And finally the third class of analysis involved the use of sky position and distance information. This corresponds to the case where a GRB and an associated afterglow were discovered, conveying the sky position and, in the form of a redshift, the distance. We found that not all parameters displayed improvement in estimation upon use of EM information. However, parameters that are strongly correlated with another parameter showed considerable improvement in their estimation when EM information on the correlated parameter was used. As an example, we show the estimation of the inclination angle in the presence of various types of EM information in figure \[fig:iotaCorrel\]. ![[*Top left*]{}: Estimation of inclination angle for the three classes of studies: blind, sky position known, and sky position and distance known. The shaded region denotes the $2\sigma$ spread of the distribution and the vertical magenta line denotes the actual injected inclination angle. Note that in the presence of distance information the inclination angle estimation gets better. [*Top right*]{}: The $2D$ posterior probability distribution plot for the inclination angle and distance shows the strong correlation between the two parameters. [*Bottom left*]{}: No measurable improvement in estimation of luminosity distance upon utilizing sky position information.
[*Bottom right*]{}: Improvement in estimation of distance in the presence of inclination angle constraints. []{data-label="fig:iotaCorrel"}](inclinationCompare.png "fig:"){width="5cm"} ![](distance_inclination_density_strong.png "fig:"){width="5cm"} ![](distanceCompare.png "fig:"){width="5cm"} ![](compareGRB_noGRB.png "fig:"){width="5cm"} We note from this study that sky position information alone does not improve the inclination angle estimation. However, knowledge of the distance greatly improves the precision and accuracy of the estimation, as shown in figure \[fig:iotaCorrel\] (top left). This is due to the well-known strong correlation between the inclination angle and distance, shown in figure \[fig:iotaCorrel\] (top right). Although figure \[fig:iotaCorrel\] (bottom left) shows that knowledge of the sky position by itself does not improve the estimation of the distance, one should note that if the knowledge of the sky position came from the detection of a GRB, then one might place constraints on the inclination angle of the source.
This stems from the fact that observing a GRB indicates that the source is likely favorably oriented towards the observer. We show that, as a result of constraining the inclination angle to $20^{\circ}$, the estimation of the distance in figure \[fig:iotaCorrel\] (bottom right) improves compared to the unconstrained distance estimation. Studies of edge-on systems revealed some interesting features. Edge-on systems are different from face-on systems owing to the lack of the cross-polarization. The parameter estimation algorithm very quickly registers its absence in the data and consequently estimates the inclination angle very precisely. This is observed in figure \[fig:edgeOnvsFaceOn\] (left), where we have compared the estimation of the inclination angle for an edge-on source with that for a face-on oriented source. We conducted a similar study for sources with EM information in the form of sky position in figure \[fig:edgeOnvsFaceOn\] (right). While sky position and inclination do not have a strong correlation, so that not much benefit is expected in the inclination angle estimation from sky position information (as seen in the red and blue curves of figure \[fig:iotaCorrel\], bottom left), the situation is very different for edge-on systems. In figure \[fig:edgeOnvsFaceOn\] we compare the results of inclination angle estimation of edge-on and face-on systems in the absence (left) and presence (right) of sky position information. It is clear that the presence of sky position information strongly aids the inclination angle estimation for the edge-on configuration, unlike the previously shown face-on case. This can be explained as follows. The strain at a detector due to a particular polarization does not depend only on the inclination angle but also depends on the sky position, $$h(t) = F_+(\theta, \phi, \psi)\,h_+(r, \iota, m_1, m_2, \delta_0, t_c) + F_{\times}(\theta, \phi, \psi)\,h_{\times}(r, \iota, m_1, m_2, \delta_0, t_c),$$ where the $F_+(\theta, \phi, \psi)$ and $F_{\times}(\theta, \phi, \psi)$ are the antenna pattern functions and $h_+$ and $h_{\times}$ are the two gravitational wave polarization components. In the absence of sky position information, we essentially have the freedom to change the antenna pattern functions during the estimation of the inclination angle and a wider posterior probability distribution could be accommodated due to this freedom. Thus, when the sky position information is used, we are likely to get a narrower estimate of the inclination angle. ![[*Left*]{}: Inclination angle posterior probability distributions showing dependence on the inclination angle of the injected source. We observe that the edge-on source has the smallest spread. Note that higher PDFs are narrower. [*Right*]{}: Similar study conducted for systems for which the sky position is known from EM observations. []{data-label="fig:edgeOnvsFaceOn"}](compareIncl_estimation.png "fig:"){width="5cm"}
![](compareIncl_estimation_skyPosKnown.png "fig:"){width="5cm"} Gravitational wave sky localization for electromagnetic follow-up studies {#sec:GWskyLoc} ========================================================================= Past observations have not yielded any detection of a short duration GRB within the typical advanced LIGO range $\sim 300$ Mpc for which we have redshift information [@Nakar:2007yr]. This could partly be explained by the lower event rate compared to their longer duration counterparts and by the strong beaming mechanism. Gravitational wave radiation, on the other hand, is expected to be more isotropic. Thus, gravitational wave source localization could act as a pointer for optical follow-up studies of short GRBs. Short GRB afterglows get progressively more isotropic as we go from the shorter-wavelength X-ray band to the longer-wavelength optical band. An optical afterglow associated with a prompt emission is expected to be visible by ground based telescopes for typically $\sim 1$ day. Kilonovae produced by r-process decay of neutron rich radioactive elements created due to the disruption of the neutron star in a neutron star binary are expected to be highly isotropic. The emission from these events could predominantly be in the near-infrared regime [@kilonova; @kilonovaOpacity]. These events might or might not accompany a short GRB, and thus have the potential of being an optical/near-infrared transient counterpart to a class of compact binary mergers, independent of whether or not they produce a prompt emission.\ In order to meaningfully follow up EM counterparts of gravitational wave triggers it is of paramount importance to identify these events very quickly [@svdPaper], and then localize them with very low latency. The typical time scale of a fully Bayesian Markov chain Monte Carlo (MCMC) study is of the order of a day, hence not useful for the purpose of optical follow-up. In order to address this issue, a rapid sky localization technique, known as BAYESTAR, has been developed that is capable of localizing gravitational wave events in the sky within minutes of receiving the trigger using techniques based on timing, phase and amplitude consistency [@first2years]. We generate gravitational wave error regions for these events which can then be followed up by optical and infrared telescopes around the world. However, it is important to recognize that in the first year of the advanced detector era only the LIGO detectors will be operating; the less sensitive Virgo detector will come online a year later. The error regions in these first two years are going to be elongated and large, with a median of $500$ ($200$) square degrees at $90\%$ confidence level in the first (second) year of operation [@first2years]. Under such circumstances, it is important to have telescopes that can cover such error regions within the optical transient timescales. We are currently constructing an array of telescopes in La Silla, Chile, called BlackGEM, that will initially (2016) have four telescopes, each with a field of view of $2.7$ square degrees, which can potentially reach 23rd magnitude in 5 minutes of integration time. The full BlackGEM array of 15 telescopes is aimed to become operational in 2018. The multiple telescopes in the array can adjust their configurations to morph the field of view of the array to the shape of the error region.
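To make the tiling idea concrete, the toy sketch below (our illustration, not the BlackGEM scheduling software) greedily places fixed field-of-view pointings on a synthetic elongated probability map until a target credible fraction is enclosed; a real implementation would work on HEALPix sky maps and fold in visibility constraints:

```python
import numpy as np

# Toy illustration (not the BlackGEM scheduler) of covering a localization
# probability map with fixed field-of-view pointings: greedily place each tile
# on the highest-probability pixel not yet covered until a target credible
# fraction is enclosed.  A flat grid of 1-degree pixels and a synthetic
# elongated Gaussian map stand in for a real HEALPix sky map.
nx, ny = 180, 90
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
prob = np.exp(-((x - 90.0) ** 2 / 800.0 + (y - 45.0 - 0.2 * (x - 90.0)) ** 2 / 20.0))
prob /= prob.sum()

side = 2                                   # ~1.6 x 1.6 deg, roughly 2.7 deg^2 per pointing
covered = np.zeros_like(prob, dtype=bool)
pointings, enclosed = [], 0.0
while enclosed < 0.90 and len(pointings) < 500:
    remaining = np.where(covered, 0.0, prob)
    i, j = np.unravel_index(np.argmax(remaining), prob.shape)
    tile = (slice(i, min(i + side, nx)), slice(j, min(j + side, ny)))
    enclosed += prob[tile][~covered[tile]].sum()
    covered[tile] = True
    pointings.append((i, j))

print(f"{len(pointings)} pointings enclose {enclosed:.0%} of the probability")
```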
We created telescope pointings for BlackGEM using a number of sky-localizations received from the low latency timing pipeline to see how well BlackGEM is able to cover them. In our preliminary studies, we used 10 injections that were localized using the low latency pipeline. Half of the injections were found as doubles and half as triples. We found that a four-telescope array will be hard pressed to cover the error regions from two-detector localizations. Typical error regions from LIGO-only sky localizations required $> 50$ pointings from BlackGEM. With each pointing requiring 5 minutes of integration time, it will require many hours to cover the error regions completely. Moreover, we found that the error regions can span both hemispheres, indicating that it may not always be feasible to cover the entire error region from one observatory. ![BlackGEM pointings of the $2\sigma$ error region for a double coincident detection in the LIGO interferometers. Note that the long arcs span both hemispheres. One observatory might not be able to fully follow up the entire error region in all cases. ](BGPointing_2Det.png){width="10.0cm"} The situation gets better when Virgo comes online. With the error regions generally more localized and located within a single hemisphere, we found that we are in a position to cover the desired region within $6-20$ BlackGEM pointings for sources lasting on time scales of hours. With more telescopes in the array, it will be realistically possible to cover most of the error regions from three detector localizations, and we expect that the full array of 15 telescopes should be able to cover a large fraction of two detector error regions. ![BlackGEM pointings of the $2\sigma$ error region for a triple coincident detection in the LIGO-Virgo interferometers. The error region in this case can be more realistically followed up with BlackGEM.](BGPointing_3Det.png){width="10.0cm"} Discussion ========== In this work, we explored various avenues of improving GW parameter estimation of NSBH systems using EM information from GRBs and/or associated afterglows and kilonovae. Our results show that EM information in the form of a redshift from afterglows could be very useful for GW parameter estimation due to the strong $d_L-\iota$ correlation. We also examined the improvement upon constraining the inclination angle to $20^{\circ}$ for known GRB triggers. We then explored the methods of conducting optical follow-up of GW triggers and presented some preliminary results of this study for the BlackGEM array of telescopes. We observed that it will be difficult to cover the full LIGO $2\sigma$ error regions in the first year when the Virgo detector is not operating. In the second year of LIGO operation, as the sensitivities improve, Virgo comes online and the full BlackGEM array is constructed, we see realistic possibilities of covering the GW error regions and possibly starting to make concurrent observations of EM counterparts to GW events. Acknowledgement =============== We would like to acknowledge Steven Bloemen and Paul Groot for providing us the sky pixel data of BlackGEM that was utilized to construct the pointings. SG is thankful to Marc van der Sluys for the discussions on the techniques of parameter estimation and MCMC. We would also like to thank Larry Price and Leo Singer for their assistance regarding the use of the low latency sky localization pipeline and constructive feedback regarding the work.
--- author: - 'Julija Markeviciute,' - 'Jorge E. Santos' bibliography: - 'hairybh.bib' title: 'Hairy Black Holes in AdS$_5\times S^5$' --- \[sec:introduction\]Introduction ================================ Black holes play a paramount role in gauge-gravity dualities. In particular, they are typically the dominant saddle points in the high temperature regime of the canonical ensemble. One can thus gain insight into typical states of conformal field theories at high temperature by studying novel black hole solutions that are asymptotically anti-de Sitter (AdS). In this paper we will devote our attention to a particular conformal field theory (CFT) living on the Einstein static universe $\mathbb{R}_t\times S^3$, namely $\mathcal{N}=4$ SYM with gauge group $SU(N)$. There are many reasons to study this particular CFT, perhaps the most important being that this is the theory for which AdS/CFT was first formulated in [@Maldacena:1997re], and for which our holographic dictionary is best understood [@Gubser:1998bc; @Witten:1998qj; @Aharony:1999ti]. In [@Maldacena:1997re], the strong coupling limit of $\mathcal{N}=4$ SYM at large t’Hooft coupling and at infinite gauge group rank $N$ was conjectured to be IIB supergravity on AdS$_5\times S^5$. We are thus led to consider black hole solutions of IIB supergravity with AdS$_5\times S^5$ asymptotics if we want to understand the thermodynamic saddle points of $\mathcal{N}=4$ SYM living on the Einstein static universe. Since we will be working in the supergravity limit, we will be considering states on the CFT at energies of order $N^2$. However, even studying black hole solutions in IIB supergravity is far from being an easy task. For instance, small Schwarzschild-AdS black holes can be shown to be unstable to a localisation on the $S^5$ if their radius in AdS units is sufficiently small [@Banks:1998dd; @Peet:1998cr; @Hubeny:2002xn; @Buchel:2015gxa]. The bumpy black holes in AdS$_5\times S^5$ that branch from the onset of this instability were only recently constructed and necessarily require solving partial differential equations [@Dias:2015pda]. The work developed in this paper ignores such instabilities and focuses on solutions of five-dimensional $\mathcal{N}=8$ gauged supergravity, which is thought to be a consistent truncation of IIB supergravity on AdS$_5\times S^5$ [^1]. This truncation is such that enough symmetry is assumed so that the localisation phenomenon described above does not occur. Even dealing with all the supergravity fields of five-dimensional $\mathcal{N}=8$ gauged supergravity proves to be a rather complicated task. In order to bypass this, we will focus on a truncation of $\mathcal{N}=8$ supergravity which, to our knowledge, was first proposed in [@Bhattacharyya:2010yg]. The spectrum of five-dimensional gauged $\mathcal{N}=8$ supergravity comprises one graviton, 42 scalars, 15 gauge fields and 12 form fields. The consistent truncation that we are going to consider contains the graviton, a complex scalar field and a Maxwell field, under which the scalar field is charged. For more details on this truncation we refer the reader to [@Bhattacharyya:2010yg]. 
Once the dust settles, the action reads: $$\begin{gathered} \label{eq:action} S=\frac{1}{16\pi G_5}\int \mathrm{d}^5x\sqrt{g}\left\{R[g]+12-\frac{3}{4}F_{\mu\nu}F^{\mu\nu}-\frac{3}{8}\left[(D_\mu\phi)(D^\mu\phi)^\dagger-\frac{\nabla_\mu\lambda\,\nabla^\mu\lambda}{4(4+\lambda)}-4\lambda\right]\right\} \\ -\frac{1}{16\pi G_5}\int \mathrm{d}^5x\, F\wedge F\wedge A,\end{gathered}$$ where $F_{\mu\nu}=2\partial_{[\mu}A_{\nu]}$, $D_\mu\phi=\nabla_\mu\phi-i\,e A_\mu\phi$, $e=2$, $\lambda = \phi \phi^\dagger$, the radius of AdS$_5$ is set to unity and $G_5=\pi/(2N^2)$ is the five-dimensional Newton’s constant. We note that at this stage we have already used AdS/CFT, in the sense that $G_5$ is given in terms of the rank of the gauge group of $\mathcal{N}=4$ SYM. The tachyonic scalar field $\phi$ has the charge $e=2$ and $m_\phi^2=-4$, which saturates the five-dimensional Breitenlöhner-Freedman (BF) bound [@Breitenlohner:1982jf]. The couplings and scalar field charges that come from this embedding in IIB have very particular forms and values. Indeed, in [@Basu:2010uz; @Dias:2011tj] a bottom up model that shares many features with (\[eq:action\]) was considered. There, the scalar field charge $e$ was a free parameter and the self-coupling potential of the scalar field was a simple mass term - the action was that of the Abelian Higgs model in AdS. The details of the phase diagram will turn out to depend rather nontrivially on the specific form of the action (\[eq:action\]). Nevertheless, the authors of [@Basu:2010uz; @Dias:2011tj] concluded that small black holes in AdS are afflicted by an instability, so long as the charged scalar field has sufficiently large $e$. The origin of this instability goes back to the so-called superradiant scattering [@Starobinsky:1973], which can only occur for charged scalar field satisfying $\omega< e\,\mu$, where $\omega$ is the frequency used in the scattering process and $\mu$ the chemical potential of the background solution. Interestingly enough, from the analysis of [@Basu:2010uz; @Dias:2011tj], it was not clear whether scalar fields with charge $e=2$ could be unstable to the superradiant instability. The reason for this is worth emphasising: normal modes of scalar fields saturating the BF bound around pure AdS have an energy gap given by $\omega = 2$. Furthermore, we will see that charged black holes maximise $\mu$ at fixed energy when they are extremal. Finally, if the black hole is small, one can show that $\mu\simeq 1$ for extremal holes. If we now use our condition for superradiant scattering, one concludes that the system will be unstable if $e> 2$. We note that this does not mean that *other* types of instabilities cannot exist even for small values of the scalar field charge[^2]. We shall see that the superradiant instability is pervasive even for small charged black hole solutions of (\[eq:action\]). There are a handful of solutions to the equations of motion derived from (\[eq:action\]) that are known to be analytic, the most general being the Kerr-Reissner-Nordström black hole [@Gibbons:2004uw; @Gibbons:2004ai; @Cvetic:2004ny; @Chong:2005hr; @Chong:2005da; @Chong:2006zx; @Cvetic:2005zi; @Gutowski:2004yv; @Gutowski:2004ez; @Kunduri:2006ek; @Wu:2011gq]. They all have one thing in common, namely that the charged scalar field vanishes. Hairy solutions, *i.e.* solutions with a nontrivial scalar field profile $\phi$, were first constructed in a matched asymptotic expansion in [@Bhattacharyya:2010yg], where the black holes were taken to be arbitrarily small. 
In this paper we construct novel hairy black hole solutions of (\[eq:action\]) at the full nonlinear level, *i.e.* our black holes are not necessarily small. As a test of our numerical procedure, we give a detailed comparison with the analytic results of [@Bhattacharyya:2010yg]. Our findings will be consistent with those of [@Bhattacharyya:2010yg] for sufficiently small asymptotic charges. We thus start by reviewing their conjectured phase diagram, which is depicted in Fig. \[fig:eg\]. The perturbative analysis carried out by Bhattacharyya *et al.* shows that in the phase diagram infinitesimally small hairy black holes smoothly join onto a horizonless solitonic solution saturating the five-dimensional BPS bound (red solid line in Fig. \[fig:eg\]). Note that because of the unusual normalisation of the kinetic term for the photons in (\[eq:action\]) the supersymmetric bound occurs for solutions satisfying $M=3|Q|$. This supersymmetric soliton was then numerically constructed for large values of the charge $|Q|$, and was found to become singular at a specific charge $Q_c$. The approach to this critical charge revealed an intricate spiralling behaviour. Bhattacharyya *et al.* went further, and constructed a singular solitonic solution that extended to infinite values of $Q$, and approached the same spiral as the regular soliton, but from values above $Q_c$ (wiggly black line in Fig. \[fig:eg\]). In [@Bhattacharyya:2010yg] a number of possibilities were envisaged for the behaviour of large black holes in this system. We aim to finally unravel the full nonlinear picture. This paper is organised as follows: section \[sec:setup\] introduces in more detail the setup that we are considering, including the equations of motion derived from (\[eq:action\]). In section \[sec:num\], we detail the numerical method we used to solve this problem. In section \[sec:results\], we present our main results; in section \[sec:comparison\], we compare the full nonlinear results to those obtained in [@Bhattacharyya:2010yg], and section \[sec:discussion\] concludes the paper with a discussion and future directions. ![\[fig:eg\]The microcanonical phase diagram proposed by Bhattacharyya *et al.* (taken from [@Bhattacharyya:2010yg], not drawn to scale). The lower solid line is the BPS bound on which the supersymmetric soliton resides. The straight red segment represents the smooth branch and the wiggly black part the singular soliton. Hairy black holes were proposed to exist between the curve indicating the onset of the superradiant instability (solid blue) and the BPS bound. The dotted black curve shows the extremal RNAdS black holes. The grey solid line shows a possible phase transition between two different types of hairy black holes, with different zero size limits.](phase_diagram.png){width="\textwidth"} \[sec:setup\]Setup ================== The consistent truncation is described by five-dimensional Einstein-Maxwell AdS gravity coupled to a charged complex scalar field with action given as in (\[eq:action\]).
The equations of motion derived from are \[eq:a\] $$\begin{aligned} \label{eq:a:eeq} &G_{\mu\nu}-6 g_{\mu\nu}=\frac{3}{2}T_{\mu\nu}^{EM}+\frac{3}{8}T_{\mu\nu}^{mat}\\ \label{eq:a:maxwell} &\nabla_\lambda F_\mu{}^\lambda=\frac{1}{4}\varepsilon^{\mu\nu\rho\alpha\lambda}F_{\mu\nu}F_{\rho\alpha}+\frac{i}{4}\left[\phi(D_\mu\phi)^\dagger-\phi^\dagger D_\mu\phi\right]\\ \label{eq:a:scalar} &D_\mu D^\mu\phi+\phi\left[\frac{(\nabla_\mu\lambda)(\nabla^\mu\lambda)}{4(4+\lambda)^2}-\frac{\nabla_\mu \nabla^\mu \lambda}{2(4+\lambda)}+4\right]=0,\end{aligned}$$ where $$\begin{aligned} T_{\mu\nu}^{EM}&=F_\mu{}^\lambda F_{\nu\lambda}+\frac{1}{4}g_{\mu\nu}\,F^2\\ T_{\mu\nu}^{mat}&=\frac{1}{2}\left[D_\mu\phi\,(D_\nu\phi)^\dagger+D_\nu\phi\,(D_\mu\phi)^\dagger\right]-\frac{1}{2}g_{\mu\nu}(D_\alpha\phi)(D^\alpha\phi)^\dagger+2g_{\mu\nu}\, \lambda\\ &-\frac{1}{4(4+\lambda)}\left[(\nabla_\mu\lambda)(\nabla_\nu\lambda)-\frac{1}{2}g_{\mu\nu}(\nabla_\alpha\lambda)(\nabla^\alpha\lambda)\right].\end{aligned}$$ We look for static, spherically symmetric and asymptotically global AdS$_5$ solutions and for now we will not specify our gauge choice[^3]: $$\mathrm{d}s^2=-f(r)\mathrm{d}t^2+g(r)\mathrm{d}r^2+\Sigma(r)^2\mathrm{d}\Omega^2_3,\quad A_\mu\mathrm{d}x^\mu=A(r)\mathrm{d}t,\quad \phi=\phi^\dagger=\phi(r).$$ Since our solutions are only electrically charged, the Chern-Simons term (first term on the left hand side of Eq. (\[eq:a:maxwell\])) plays no role. The Einstein equation, the Maxwell equation and and the scalar equation yield a system of four equations [@Bhattacharyya:2010yg]: $$\begin{aligned} \label{eq:eom} \begin{split} f'&-\frac{f}{2 \Sigma \Sigma'}\left[4 g+g\Sigma^2 \left(8+\phi^2\right)-4 \Sigma'^2+\frac{\Sigma^2 \phi'^2}{4+\phi^2}\right]-\frac{\Sigma}{2 \Sigma'}\left(A^2\phi^2g-A'^2\right)=0, \\ g'&+g^2 \left(\frac{4}{\Sigma \Sigma'}+\frac{8 \Sigma}{\Sigma'}+\frac{\Sigma\phi^2}{\Sigma'}\right)+g\left(\frac{f'}{f}+\frac{\Sigma A'^2}{f\Sigma'}+\frac{4 \Sigma'}{\Sigma}+\frac{2 \Sigma''}{\Sigma'}\right)=0, \\ A''&+\frac{1}{2}\left(\frac{6\Sigma'}{\Sigma}-\frac{f'}{f}-\frac{g'}{g}\right)A'-g\,\phi^2A=0, \\ \phi''&+\frac{1}{2}\left(\frac{6 \Sigma'}{\Sigma}+\frac{f'}{f}-\frac{g'}{g}\right)\phi'-\frac{\phi}{4+\phi^2}\,\phi'^2+\frac{g}{f}\left(A^2+f\right)\left(4+\phi^2\right)\phi=0. \end{split}\end{aligned}$$ The $'$ denotes the derivative with respect to $r$. At this point we pick a gauge where $\Sigma(r)=r$, so that $r$ measures the radius of the round $S^3$ in AdS$_5$. In this gauge, we require that our solutions are asymptotically AdS$_5$, *i.e.* at large $r$ they must satisfy the following expansion [@Ashtekar:1984zz; @Henneaux:1985tv; @Henningson:1998gx; @deHaro:2000vlm] $$\begin{aligned} \begin{split} f(r)&=r^2+1+\mathcal{O}(r^{-2}),\quad g(r)=\frac{1}{1+r^2}+\mathcal{O}(r^{-6}),\\ A(r)&=\mu+\mathcal{O}(r^{-2}),\quad\phi(r)=\frac{\varepsilon}{r^2}+V\frac{\log r}{r^2}+\mathcal{O}(r^{-4})\,, \end{split} \label{eq:expansion}\end{aligned}$$ where $\mu$ is the chemical potential and the constants $V$ and $\varepsilon$ will shortly be identified. Using the AdS/CFT correspondence [@Gubser:1998bc; @Witten:1998qj], $V$ is regarded as the source for the operator dual to $\phi$ and $\varepsilon$ is its expectation value, *i.e.* $\varepsilon = \langle \mathcal{O}_\phi \rangle$. This choice implicitly assumes standard quantisation. The operator dual to $\phi$ has conformal scaling dimension $\Delta=2$. 
We will be interested in solutions representing states of the conformal field theory that are not sourced, so we will set $V=0$. These normalisable boundary conditions give rise to a four-parameter set of asymptotically AdS$_5$ solutions [@Bhattacharyya:2010yg]. Further imposing suitable regularity and normalisability conditions results in a two-parameter space of solutions which may be taken to be the mass ($M$) and charge $(Q)$ of the black hole, with $\varepsilon$ and $\mu$ being determined as a function of $M$ and $Q$. The frequency of the lowest normal mode of $\phi$ is $\Delta = 2$. In [@Basu:2010uz] it was shown that small Reissner-Nordström AdS (RNAdS) black holes suffer from superradiant instability whenever $e\mu>\Delta$, where $\mu$ is the chemical potential of the black hole. For RNAdS black holes $\mu^2\leq (1+2R^2)$, where $R$ is the Schwarzschild radius of the black hole[^4], therefore, small black holes satisfy $\mu\lesssim 1$ (saturating at extremality). Hence small charged black holes are always stable when $e<e_c=\Delta$ and in our setup small near extremal black holes lie at the edge of the instability. These small near extremal charged black holes are unstable to the superradiant tachyon condensation and evolve towards a small black hole with charged scalar hair. Known solutions --------------- All known solutions are found in the radial gauge where $\Sigma(r)=r$. ### The Reissner-Nordström black hole If we switch off the scalar field we recover the familiar Reissner-Nordström two-parameter family of solutions $$\begin{aligned} \label{eq:RN} \begin{split} &f(r)=\frac{\mu^2 R^4}{r^4}-\frac{(R^2+\mu^2+1)R^2}{r^2}+r^2+1,\\ &g(r)=\frac{1}{f(r)},\quad A(r)=\mu\left(1-\frac{R^2}{r^2}\right),\quad\phi(r)=0. \end{split}\end{aligned}$$ We record the thermodynamic formulae for later use (henceforth all the thermodynamic quantities will be scaled by $N^2$) $$\begin{aligned} \begin{split} M&=\frac{3}{4}R^2\left(1+R^2+\mu^2\right)\\ Q&=\frac{1}{2}\mu R^2\\ S&=\pi R^3\\ T&=\frac{1}{2\pi R}\left(1+2R^2-\mu^2 \right). \end{split}\end{aligned}$$ Note that $R$ is the outer horizon if the condition $\mu^2\leq(1+2R^2)$ is satisfied. This inequality is saturated at extremality, where $T=0$. The resulting extremal black hole is regular, and has a degenerate bifurcating Killing horizon. ### The BPS solitons In this section we briefly outline the numerical study of the spherically symmetric smooth and singular solitons given in [@Bhattacharyya:2010yg]. We shall see later that these can be regarded as the BPS limit of the hairy black hole configurations. Soliton solutions are easier to determine, since they are known to be supersymmetric. Instead of solving the equations of motion (\[eq:eom\]) directly one resorts to searching for nontrivial solutions of the Killing spinor equations, which are first order in space. After some nontrivial manipulations, one can cast *any* supersymmetric solution of the action (\[eq:action\]) into the following form $$\begin{gathered} f(r)=\frac{1+\rho^2h^3}{h^2}\,,\qquad g(r)=\frac{4\rho^2h^2}{(2\rho h+\rho^2\dot{h})^2(1+\rho^2h^3)} \\ A(r)=\frac{1}{h(r)}\,,\qquad \phi(r)=2\left[\left(h+\frac{\rho \dot{h}}{2}\right)^2-1\right]^{1/2} \end{gathered}$$ where the $\dot{\,}$ denotes the derivative with respect to the variable $\rho$, given by $r^2=\rho^2h$, and $h$ has to satisfy the following second order differential equation $$\rho\left(1+\rho^2h^3\right)\ddot{h}+\left(3+7\rho^2h^3+\rho^3h^2\dot{h}\right)\dot{h}-4\rho\left(1-h^2\right)h^2=0.
\label{eq:soliton}$$ This equation has a number of remarkable properties. Perhaps the most striking being that at large $\rho$ it demands $h(\rho)|_{\rho\rightarrow\infty}=1$. This condition automatically ensures normalisability of the physical fields $f$, $g$, $A$ and $\phi$. At the origin, $r=0$ or equivalently $\rho=0$, there are a number of possibilities. Assuming that solutions to (\[eq:soliton\]) behave as $$\lim_{\rho\to0}h = \frac{h_\alpha}{\rho^\alpha}\,,$$ gives the following possible exponents $\alpha=0,1,2/3,2$. Solutions with $\alpha = 0$ are regular, and the remaining are singular. For each of these exponents we can find solitonic solutions, but the dimension of their moduli space strongly depends on $\alpha$. For $\alpha=0,1$ there is a one parameter family of solutions, for $\alpha=2/3$ there is a unique solution and for $\alpha =2$ the solution spans a two dimensional moduli space. In addition, in [@Bhattacharyya:2010yg] it was shown that the smooth soliton solutions with $\alpha=0$ exist for small values of the charge $Q$ and that the singular solitonic solution with $\alpha = 1$ exists for large values of $Q$. The two families merge precisely at a special point which is given by the singular soliton with $\alpha = 2/3$. ![*Left*: Charge of the solitonic solutions $Q$ versus the vacuum expectation value of the dual operator $\langle\mathcal{O_\phi}\rangle$. The black solid line from above is the singular soliton and the red line from below is the smooth solution. The dotted gridlines show coordinates of the special solution with $\alpha=2/3$. *Right:* The $\alpha=2$ soliton solutions. The wedges are for constant $h_2$ which decreases from $1$. These solution curves appear to extend to $|Q|\rightarrow+\infty$. We also did not find any limiting value for $\langle\mathcal{O_\phi}\rangle$.[]{data-label="fig:solitons"}](sol6.pdf){width="\textwidth"} ![*Left*: Charge of the solitonic solutions $Q$ versus the vacuum expectation value of the dual operator $\langle\mathcal{O_\phi}\rangle$. The black solid line from above is the singular soliton and the red line from below is the smooth solution. The dotted gridlines show coordinates of the special solution with $\alpha=2/3$. *Right:* The $\alpha=2$ soliton solutions. The wedges are for constant $h_2$ which decreases from $1$. These solution curves appear to extend to $|Q|\rightarrow+\infty$. We also did not find any limiting value for $\langle\mathcal{O_\phi}\rangle$.[]{data-label="fig:solitons"}](sol4.pdf){width="\textwidth"} We reproduce the results of [@Bhattacharyya:2010yg]. The line of smooth solitons terminates at the singular solution with the “critical” value $Q_c\simeq0.2613$ as the central density $h_0\rightarrow\infty$; the family of singular solitons with $\alpha=1$ branches out of this point at $h_1\rightarrow 0$, extending to higher charges. The critical charge $Q_c$ can also be obtained by solving for the solution with $\alpha = 2/3$ and $h_{2/3}=1$, thus confirming the picture of [@Bhattacharyya:2010yg]. In addition, Bhattacharyya *et al.* analysed the asymptotic behaviour around the limiting solution analytically and proposed that these two soliton branches exhibit damped (possibly periodic) oscillations around $Q_c$ in the space parametrized by $Q$ and $\left<\mathcal{O}_\phi\right>$, resulting in an infinite discrete non-uniqueness of the soliton solutions as $Q\rightarrow Q_c$ (see Fig. \[fig:solitons\]). 
Note that there exists a maximum charge $Q_{max}\simeq 0.2643$ for the $\alpha=0$ family and a minimum charge $Q_{min}\simeq 0.2605$ for the $\alpha=1$ singular soliton. The limiting expectation value for the operator dual to the scalar field is $\left<\mathcal{O}_\phi\right>_c\sim 1.8710$, and corresponds to the $\alpha=2/3$ singular solution. We also compute the singular $\alpha=2$ case which provides a two-parameter class of solutions parametrized by $h_2$ and $\left.\rho^3 \partial_{\rho}h\right|_{\rho\rightarrow\infty}$ [^5]. The latter can be regarded as setting the charge $Q$, therefore, for any $h_2$, solutions exist with any value of the charge. It appears that these solutions are not connected to the other solutions studied in this paper (see Fig. \[fig:solitons\]). \[sec:num\]Numerical Construction of Hairy Black Holes ====================================================== We use the DeTurck method [@Headrick:2009pv] (for an extensive review see [@Dias:2015nua]) which allows us to instead solve the Einstein-DeTurck or harmonic Einstein equation $$\label{eq:deturck} G_{\mu\nu}-\nabla_{(\mu}\xi_{\nu)}=0,$$ where $\xi^{\mu}=g^{\nu\rho}\left[\Gamma^\mu_{\nu\rho}(g)-\Gamma^\mu_{\nu\rho}(\tilde{g})\right]$ is the DeTurck vector and $\tilde{g}$ is a reference metric of our choice such that it possesses the same causal structure as our desired solution $g$. This method is very useful because as we solve (\[eq:deturck\]) the gauge is automatically fixed by the condition ${\xi^{\mu}=0}$. Static solutions to the harmonic Einstein equation under certain regularity assumptions will also satisfy the Einstein equation [@Figueras:2011va]. However, in this case we do not know whether there exist solutions with $\xi^{\mu}\ne 0$ (so-called Ricci solitons). We check *a posteriori* that the solutions presented in this paper satisfy $\xi^{\mu}=0$ at least to $\mathcal{O}(10^{-10})$ precision and also demonstrate good convergence (see Appendix \[sec:numval\], Figs. \[fig:deturck\]-\[fig:planarconv\]). To solve (\[eq:deturck\]) we use the Newton-Raphson method with pseudospectral collocation on a Chebyshev grid to discretise the equations. We make a compact coordinate change $r=\dfrac{y_+}{\sqrt{1-y^2}}$ so that $y=1$ corresponds to $r=\infty$ and $y=0$ to $r=y_+$. The first metric *ansatz* that we use is $$\begin{aligned} \label{eq:metric1} \mathrm{d}s^2_1&=\frac{1}{1-y^2}\left[-y^2\Delta(y)q_1 \mathrm{d}t^2+\frac{y_+^2 q_2 \mathrm{d}y^2}{\left(1-y^2\right)\Delta(y)}+y_+^2 q_3\mathrm{d}\Omega_3^2\right]\end{aligned}$$ together with $$\begin{gathered} A(r)=y^2q_4(y)\,,\quad \phi(r)=\left(1-y^2\right)q_5(y)\quad \\ \text{and}\quad \Delta(y)=1+ 2y_+^2-\tilde{\mu} ^2-\left(1+y_+ ^2-2 \tilde{\mu} ^2\right)y^2-\tilde{\mu} ^2 y^4\,. \label{eq:ansatz1gauge}\end{gathered}$$ Our reference metric $\tilde{g}$ used in the DeTurck method is obtained from (\[eq:metric1\]) by setting $q_1=q_2=q_3=1$. This is simply the metric of an RNAdS black hole when $\tilde{\mu}=\mu$. The parameter $\tilde{\mu}$ is left to be specified freely as it just sets the reference metric and is in general different from the chemical potential of the physical metric. As we want to explore the solution space we start somewhere on the merger line, *i.e.* on a solution with some parameter coordinates ($\mu$, $y_+$) for which $\phi$ is arbitrarily small. If we want to probe low temperatures a natural choice is $\mu=1$ (as black holes with $y_+<1/2$ are small and have $\mu\sim 1$).
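The collocation-plus-Newton strategy just described can be illustrated on a much simpler problem. The toy sketch below (ours; it solves $u''=e^{u}$ with homogeneous Dirichlet data rather than the Einstein-DeTurck system) builds the Chebyshev differentiation matrix, maps it to the computational interval and iterates Newton-Raphson on the collocated residual:

```python
import numpy as np

# Toy version of the scheme described above (pseudospectral collocation on a
# Chebyshev grid plus Newton-Raphson), applied to a simple nonlinear
# boundary-value problem, u'' = exp(u) on [0, 1] with u(0) = u(1) = 0, instead
# of the Einstein-DeTurck system.
def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D = 2.0 * D                    # chain rule for the map y = (x + 1) / 2 onto [0, 1]
D2 = D @ D

u = np.zeros(N + 1)            # initial guess for Newton-Raphson
for _ in range(20):
    res = D2 @ u - np.exp(u)
    jac = D2 - np.diag(np.exp(u))
    res[0], res[-1] = u[0], u[-1]          # Dirichlet rows at the two endpoints
    jac[0, :] = 0.0; jac[0, 0] = 1.0
    jac[-1, :] = 0.0; jac[-1, -1] = 1.0
    du = np.linalg.solve(jac, -res)
    u += du
    if np.linalg.norm(du) < 1e-12:
        break

print("max |u| =", np.abs(u).max())        # smooth bump of magnitude ~0.1
```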
The physical chemical potential of the black hole is then given by the gauge field on the boundary, $\mu=A(r)|_{r\to \infty}=q_4(1)$ (see the expansion (\[eq:expansion\])). At the conformal boundary, located at $y=1$, we demand that ${q_1(1)=q_2(1)=q_3(1)=1}$, $q_5(1)=\epsilon$ and a Robin condition $\left[y_+^2q_5'-2q_4^2q_5\right]_{y=1}=0$ for the gauge field, which ensures that Newton’s method converges to the hairy solution if we specify nonzero $\epsilon$. Note that $\epsilon$ is related to $\varepsilon$ via $\epsilon = y_+^2 \varepsilon$. Regularity at the horizon demands $q_1(0)=q_2(0)$ and pure Neumann conditions for the remaining functions, *i.e.* $q_i'(0)=0$. In many regions of the parameter space, $\epsilon$ will not uniquely parametrise a solution; however, the strength of the scalar field at the horizon, $q_5(0)\equiv\epsilon_0$, will. Depending on which region of parameter space we want to probe, we might decide to parametrise our solution with $\epsilon$ or $\epsilon_0$. Thus we are left with two parameters after we fix $\tilde{\mu}$, namely $(y_+,\epsilon)$ or $(y_+,\epsilon_0)$. However, the RNAdS-like *ansatz* (\[eq:metric1\]) has poor convergence properties almost everywhere in moduli space. We found that the following *ansatz* has better convergence properties (at least a few orders of magnitude better!) if we simply set $\Delta(y)=y_+^2$ in (\[eq:metric1\]), yielding: $$\begin{aligned} \label{eq:metric2} \mathrm{d}s^2_2&=\frac{1}{1-y^2}\left[-y^2y_+^2q_1 \mathrm{d}t^2+\frac{q_2 \mathrm{d}y^2}{\left(1-y^2\right)}+y_+^2 q_3\mathrm{d}\Omega_3^2\right].\end{aligned}$$ The trade-off is that now the functions at high central field density $\epsilon_0$ are more peaked at high temperatures; we therefore use this *ansatz* to extend our solution curves in the high-$\epsilon_0$, high-$T$ regime. The boundary conditions remain the same except for the gauge field, which in the new *ansatz* obeys the following boundary condition $\left[y_+^2(q_5+q_5')+q_5-2q_4^2q_5\right]_{y=1}=0$. This boundary condition can be obtained by solving the Einstein-DeTurck equations near the boundary. It is not always easy to find a reference metric for the DeTurck method, but here we have the luxury of having two good reference metrics. The results obtained with the two different reference metrics match at least to $0.1\%$ numerical accuracy in all the physical quantities such as energy (for the quantitative comparison of the two *ansätze* see Fig. \[fig:ansatz\], Appendix \[sec:numval\]). We present the thermodynamic formulae for the line element (\[eq:metric2\]), since this was the *ansatz* we used the most. The electric charge is obtained by computing the flux of the electromagnetic field tensor at infinity $$Q=\dfrac{1}{4}A'(r)|_{r\to \infty}=\frac{y_+^2}{4}\left.\left(2q_4+\frac{\mathrm{d} q_4}{\mathrm{d}y}\right)\right|_{y=1}.$$ We compute the Hawking temperature of the black hole by requiring smoothness of the Euclidean spacetime, and it is simply given by $$T=\frac{y_+}{2\pi}\,.$$ The entropy of a BH is proportional to its horizon area and is given by $$S=\pi y_+^3\,q_3(0)^{3/2}\,.$$ To compute the mass of the black hole we use the Ashtekar-Das formalism [@Ashtekar:1999jx] $$M=\frac{y_+^2}{8}\left[1+3y_+^2+y_+^4\left(2-q_5^2-q_1''\right)\right]_{y=1}.$$ We further checked that this matched the holographic renormalization technique of [@Henningson:1998gx; @Balasubramanian:1999re; @deHaro:2000vlm] up to the energy of the ground state of global AdS$_5$. Our mass is computed with respect to pure AdS$_5$.
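For concreteness, the formulae above translate directly into the collocation framework. The following sketch assumes (hypothetically) that each $q_i$ is stored as a vector of collocation values ordered from the conformal boundary ($y=1$, index 0) to the horizon ($y=0$, last index), with `Dy` the corresponding $\mathrm{d}/\mathrm{d}y$ differentiation matrix; it is a transcription of the expressions quoted above, not the code behind our results.

```python
import numpy as np

def thermodynamics(q1, q3, q4, q5, Dy, y_plus):
    """Transcribe the quoted expressions for (Q, T, S, M), assuming each q_i is a
    vector of collocation values ordered from the boundary (index 0, y = 1) to the
    horizon (index -1, y = 0), and Dy is the d/dy differentiation matrix."""
    dq4_b = (Dy @ q4)[0]           # dq4/dy evaluated at y = 1
    d2q1_b = (Dy @ (Dy @ q1))[0]   # q1'' evaluated at y = 1
    Q = 0.25 * y_plus**2 * (2.0 * q4[0] + dq4_b)                       # electric charge
    T = y_plus / (2.0 * np.pi)                                         # Hawking temperature
    S = np.pi * y_plus**3 * q3[-1]**1.5                                # entropy from horizon area
    M = 0.125 * y_plus**2 * (1.0 + 3.0 * y_plus**2
                             + y_plus**4 * (2.0 - q5[0]**2 - d2q1_b))  # Ashtekar-Das mass
    return Q, T, S, M
```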
We verified that these quantities obey the first law of black hole thermodynamics $\mathrm{d}M=T\mathrm{d}S+3\mu \mathrm{d}Q$ at least to $0.01\%$. \[sec:results\]Results ====================== Phase diagram of hairy AdS$_5\times S^5$ black holes {#subsec:Phase} ---------------------------------------------------- In this subsection we present a comprehensive picture of the phase diagram of the hairy black holes in the microcanonical ensemble and analyse its rich structure. ![*Left*: Phase diagram for the hairy black holes. The merger curve (solid black) indicates the onset of the superradiant instability. The line of extremal RNAdS solutions is shown as a dashed gray line. The BPS bound is given by $M_\mathrm{BPS}(Q)=3Q$ (dashed black). The gray dotted gridlines indicate the position of the special soliton with $\alpha = 2/3$.\ *Right*: For clarity, we plot the mass difference $\Delta M=M-M_{\mathrm{ext}}$, where $M_{\mathrm{ext}}$ is the mass of an extremal RNAdS black hole with the same charge $Q$.[]{data-label="fig:mq"}](phase01.pdf){width="\textwidth"} ![*Left*: Phase diagram for the hairy black holes. The merger curve (solid black) indicates the onset of the superradiant instability. The line of extremal RNAdS solutions is shown as a dashed gray line. The BPS bound is given by $M_\mathrm{BPS}(Q)=3Q$ (dashed black). The gray dotted gridlines indicate the position of the special soliton with $\alpha = 2/3$.\ *Right*: For clarity, we plot the mass difference $\Delta M=M-M_{\mathrm{ext}}$, where $M_{\mathrm{ext}}$ is the mass of an extremal RNAdS black hole with the same charge $Q$.[]{data-label="fig:mq"}](phase1.pdf){width="\textwidth"} The black holes with a non-zero scalar condensate first start to exist where the RNAdS black holes become superradiantly unstable. The RNAdS black holes are uniquely specified by the two parameters ($R$, $\mu$), and given the horizon radius we look for the value of the chemical potential $\mu$ at which a zero-mode of the scalar field first appears. We generate this one-parameter family of solutions separately by linearising the scalar field equation in the compact variable $y$ around the RNAdS black hole. Let $\delta q_5$ be an infinitesimal perturbation of $q_5$ defined in (\[eq:ansatz1gauge\]). Following [@Dias:2010ma; @Dias:2011tj] we numerically solve the resulting generalised eigenvalue problem $$L(y)\delta q_5(y)=\mu^2\Lambda(y)\delta q_5(y)$$ with boundary conditions $\delta q_5^\prime(0)=0$ and $2\mu^2\delta q_5(1)-R^2\delta q_5^\prime(1)=0$, which follow from imposing regularity at the horizon and solving the equation near asymptotic infinity. Here $L(y)$ and $\Lambda(y)$ are both second-order differential operators independent of $\mu$. The chemical potential of the corresponding marginally stable RNAdS black hole then follows from the generalised eigenvalue $\mu^2$. The line of solutions representing the onset of the condensation is also obtained by solving the full non-linear equations of motion with $q_5(0)=\epsilon_0$ set to a small value. These two methods to generate the merger line are found to be in very good agreement. Our numerical results are presented in Fig. \[fig:mq\]. We find that the hairy black holes exist from the instability curve all the way down to the BPS bound, and we verified this for a wide range of charges. Numerically we did not find any upper bound on the charge up to $Q\sim 100$, and from the structure of the phase diagram it would be natural to infer that the hairy black hole solutions exist between the merger line and the BPS bound for every charge.
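For illustration, once the operators and the two boundary conditions are discretised, the zero-mode computation described above reduces to a standard generalised eigenvalue problem. The sketch below uses second-order finite differences for brevity (the results in this paper were obtained with Chebyshev collocation) and stand-in coefficient functions; the true $L(y)$ and $\Lambda(y)$ follow from linearising the scalar equation about RNAdS and are not reproduced here.

```python
import numpy as np
from scipy.linalg import eig

n = 200
y = np.linspace(0.0, 1.0, n)
h = y[1] - y[0]

# First- and second-derivative matrices (central differences in the interior).
Dy, Dyy = np.zeros((n, n)), np.zeros((n, n))
for i in range(1, n - 1):
    Dy[i, i - 1], Dy[i, i + 1] = -0.5 / h, 0.5 / h
    Dyy[i, i - 1], Dyy[i, i], Dyy[i, i + 1] = 1 / h**2, -2 / h**2, 1 / h**2
Dy[0, :3] = np.array([-1.5, 2.0, -0.5]) / h     # one-sided derivative at y = 0
Dy[-1, -3:] = np.array([0.5, -2.0, 1.5]) / h    # one-sided derivative at y = 1

# STAND-IN coefficients of the second-order operators (not the physical ones).
c2, c1, c0 = -np.ones(n), -2.0 * y, 1.0 - y**2
lam = 1.0 + y**2
R = 0.3                                          # stand-in horizon radius

L = np.diag(c2) @ Dyy + np.diag(c1) @ Dy + np.diag(c0)
Lam = np.diag(lam)

# Boundary rows: dq5'(0) = 0, and 2 mu^2 dq5(1) - R^2 dq5'(1) = 0; the latter
# contributes to both sides of the generalised eigenvalue problem.
L[0, :], Lam[0, :] = Dy[0, :], 0.0
L[-1, :], Lam[-1, :] = R**2 * Dy[-1, :], 0.0
Lam[-1, -1] = 2.0

w, _ = eig(L, Lam)
w = w[np.isfinite(w)]
mu2 = np.min(w[(np.abs(w.imag) < 1e-8) & (w.real > 1e-8)].real)
print("marginal chemical potential mu ~", np.sqrt(mu2))
```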
At the lower bound the hairy black holes join the solitonic solution in the phase diagram, in particular, in the limit $T\rightarrow 0$, hairy black holes approach the smooth soliton, just as predicted in [@Bhattacharyya:2010yg]. In more detail, in Fig. \[fig:consteps\] we plot the charge $Q$ as a function of $\langle\mathcal{O}_\phi\rangle$ for constant values of $\epsilon_0$. In order to parametrise each of these constant $\epsilon_0$ curves we dial the temperature $T$. As we lower the temperature, we see that hairy solutions join smoothly to the smooth soliton curve. Furthermore, the higher value of $\epsilon_0$ we choose, the closer the hairy solutions get to $Q=Q_c$. In particular, as $\epsilon_0\to+\infty$ we see that the hairy black hole solution inherits the spiralling behaviour of the smooth solitonic branch (see right panel of Fig. \[fig:consteps\] where we can see two arms of the spiral). Note that exactly for $\epsilon=\langle O_\phi \rangle = 1.8710$ we expect even the hairy black hole to have infinite non-uniqueness as we approach $T\to0$ from above. ![*Left*: The hairy black hole charge $Q$ versus $\langle\mathcal{O}_\phi\rangle$ for constant central scalar field density $\epsilon_0$ curves. Red line is the smooth soliton and the green line is the singular soliton. *Right*: Mass difference versus charge $Q$. The constant parameter $\epsilon_0$ curves extend down to $T=0.055$. The inset is a zoomed in plot around $Q=Q_c$ for some value of $\epsilon_0$.[]{data-label="fig:consteps"}](constantccbend.pdf){width="\textwidth"} ![*Left*: The hairy black hole charge $Q$ versus $\langle\mathcal{O}_\phi\rangle$ for constant central scalar field density $\epsilon_0$ curves. Red line is the smooth soliton and the green line is the singular soliton. *Right*: Mass difference versus charge $Q$. The constant parameter $\epsilon_0$ curves extend down to $T=0.055$. The inset is a zoomed in plot around $Q=Q_c$ for some value of $\epsilon_0$.[]{data-label="fig:consteps"}](constantcc.pdf){width="\textwidth"} The behaviour of the isothermal curves changes as a function of the temperature. In particular, if we fix a temperature in the interval $T_1<T<T_2$ while increasing $\epsilon_0$, with $T_1 = 0.139^{+0.002}_{-0.002}$ and $T_2 = 0.23^{+0.01}_{0}$, we find two solutions for the same value of the charge $Q$ (corresponding to two different values of $\epsilon_0$). This can be seen for instance on the left panel of Fig. \[fig:zoom\]. We shall shortly see that this feature will give an intricate phase diagram in the canonical ensemble, where the temperature and charge are held fixed. ![*Left*: Zooming in around $Q=Q_c$, and observing the transition between $T<T_2$ and $T>T_2$. The color legend is the same as in Fig. \[fig:mq\]. *Right*: An even closer look for $Q$ near $Q_c = 0.261$. The hairy black hole isotherms terminate at charges above the special singular soliton.[]{data-label="fig:zoom"}](zoom1.pdf){width="\textwidth"} ![*Left*: Zooming in around $Q=Q_c$, and observing the transition between $T<T_2$ and $T>T_2$. The color legend is the same as in Fig. \[fig:mq\]. *Right*: An even closer look for $Q$ near $Q_c = 0.261$. The hairy black hole isotherms terminate at charges above the special singular soliton.[]{data-label="fig:zoom"}](zoom2.pdf){width="\textwidth"} One can finally ask what is the fate of the isothermal curves as we increase $\epsilon_0$. According to what we described above, these cannot be connected to the smooth soliton (except for the special isothermal with $T=0$). 
Indeed, we find numerical evidence that they connect to the singular soliton with $\alpha=1$; see for instance the right panel of Fig. \[fig:zoom\], where constant-temperature curves join the BPS bound at $M_\star=3Q_\star$ with $Q_\star>Q_c$, the limiting $Q_\star$ moving further away from $Q_c$ as we increase the temperature. This behaviour can also be seen on the left panel of Fig. \[fig:spiral\]. Finally, we note that as the hairy black hole isothermals approach the singular soliton, we find evidence for spiralling behaviour, which is depicted on the right panel of Fig. \[fig:spiral\]. ![*Left*: Charge versus the vacuum expectation value of the operator dual to the scalar field for constant temperature hairy black hole solutions. The black and red data points are the singular and smooth solitons, respectively. Dotted gridlines show the point where these two merge. *Right*: The charge of the hairy solutions exhibits damped oscillations as we approach the singular soliton with $\alpha=1$. This data was collected with $n=1000$ grid points.[]{data-label="fig:spiral"}](qe0zoom.pdf){width="\textwidth"} ![*Left*: Charge versus the vacuum expectation value of the operator dual to the scalar field for constant temperature hairy black hole solutions. The black and red data points are the singular and smooth solitons, respectively. Dotted gridlines show the point where these two merge. *Right*: The charge of the hairy solutions exhibits damped oscillations as we approach the singular soliton with $\alpha=1$. This data was collected with $n=1000$ grid points.[]{data-label="fig:spiral"}](spiral.pdf){width="\textwidth"} In order to support the claim that the $T\rightarrow 0$ hairy black holes do not tend to a configuration with irregular geometry, we compute the Kretschmann invariant $K^2=R_{abcd}R^{abcd}$ following [@Dias:2011tj]. Because $K^2\sim 1/R^4$ for RNAdS when $T\rightarrow 0$, we normalise $K^2$ by that of the corresponding RNAdS black hole with the same chemical potential and temperature. In Fig. \[fig:invariant\] (left) we show that the normalised curvature invariant remains bounded as we approach the smooth soliton. On the other hand, keeping the temperature fixed and increasing $\epsilon_0$, the normalised Kretschmann invariant appears to blow up (see Fig. \[fig:invariant\], right) as we approach the BPS bound. For solutions with $T>T_2$ we found the metric *ansatz* to be more numerically stable. ![ *Left*: Curvature invariants at the origin for a range of temperatures for constant $\epsilon_0$. The Kretschmann scalar remains finite. For lower values of $\epsilon_0$ it takes longer for the hairy black holes to approach the BPS bound; hence, for larger values of the parameter the curves flatten out more quickly. *Right*: Kretschmann invariant $K^2$ for the range of temperatures scaled by the $K^2$ of the corresponding RNAdS black hole in the grand-canonical ensemble. As $\epsilon_0$ increases the invariant increases without bound.[]{data-label="fig:invariant"}](klevel.pdf){width="\textwidth"} ![ *Left*: Curvature invariants at the origin for a range of temperatures for constant $\epsilon_0$. The Kretschmann scalar remains finite. For lower values of $\epsilon_0$ it takes longer for the hairy black holes to approach the BPS bound; hence, for larger values of the parameter the curves flatten out more quickly. *Right*: Kretschmann invariant $K^2$ for the range of temperatures scaled by the $K^2$ of the corresponding RNAdS black hole in the grand-canonical ensemble.
As $\epsilon_0$ increases the invariant increases without bound.[]{data-label="fig:invariant"}](kscaled1.pdf){width="\textwidth"} As we increase $\epsilon_0$, the hairy black hole isotherms approach the BPS bound. The chemical potential $\mu\rightarrow 1$ and the entropy $S\rightarrow 0$ (see Fig. \[fig:entropies\], Appendix \[sec:therm\]). This, together with the fact that the Kretschmann invariant blows up as $\epsilon_0\to +\infty$ even when normalised by the Reissner-Nordström solution, suggests that the isothermals will merge with the $\alpha=1$ soliton for any value of $T$. The planar limit {#subsec:planar} ---------------- In this subsection we consider the planar horizon limit of our global AdS$_5$ solutions. The resulting black brane solutions were first studied in great detail in [@Aprile:2011uq]. Our numerical approach is similar to the one we used for the spherical black holes, so here we just quote the final results. In the large-charge limit the singular soliton branch admits an exact analytical solution from which the planar limit solution can be recovered [@Bhattacharyya:2010yg] $$\label{eq:exact} \mathrm{d}s^2=-r^2\mathrm{d}t^2+\frac{r^2\mathrm{d}r^2}{b^2+r^4}+r^2\mathrm{d}x^2,\quad\phi(r)=\frac{2b}{r^2},\quad A(r)=0.$$ Note that the choice of the constant $b$ amounts to a coordinate transformation; therefore, this is a single asymptotically Poincaré-patch solution. The planar solution exhibits explicit conformal invariance, since the field theory is supposed to live on Minkowski spacetime. Thus, in order to have a well-defined planar limit, we always look at conformally invariant ratios, which should have a smooth limit as the black holes become infinitely large. For instance, to measure temperature we introduce $\tilde{T}\equiv\sqrt{\varepsilon_0} T$. The planar hairy solutions are thus a one-parameter family of solutions, with the singular soliton solution (\[eq:exact\]) being a point of this family. We choose to parametrise the hairy branes by $\tilde{T}$. We have constructed planar hairy black holes, *i.e.* hairy black branes, and checked that our spherical hairy black holes do approach the hairy branes in the limit $S\to+\infty$, *i.e.*, as the hairy black holes become infinitely large. Furthermore, the singular soliton solution (\[eq:exact\]) is the zero $M/T^4$ and $Q/T^3$ limit of the hairy black branes, see Fig. \[fig:pqm\], which is reached as we take $\tilde{T}$ to be large. In order to test this, we have plotted the gauge-invariant quantities $g_{tt}\phi$ and $g_{xx}\phi$ and checked that they approach the same value at large $\tilde{T}$, see Fig. \[fig:functions\], as predicted by the exact soliton solution (\[eq:exact\]). ![Gauge invariant quantities $\phi g_{xx}$ and $\phi g_{rr}$ for the hairy planar solutions, versus the compact coordinate $y$. According to the exact solution, both these curves should approach the same constant value when $\tilde{T}\to+\infty$.[]{data-label="fig:functions"}](pfunctions.png){width="100.00000%"} Thermodynamics {#subsec:therm} -------------- In this section we analyse phase diagrams arising in different thermodynamic ensembles. The planar limit of our results matches that of [@Aprile:2011uq], which we reproduced using our own code. For completeness, we present in Fig. \[fig:planartherm\] of Appendix \[sec:therm\] a complete analysis of the several ensembles in the planar limit.
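Returning briefly to the exact solution (\[eq:exact\]), a small symbolic check makes explicit why such products are the natural quantities to monitor: for the exact soliton both $\phi\,g_{xx}$ and $\phi\,(-g_{tt})$ are $r$-independent and equal to $2b$. The sketch below is purely illustrative.

```python
import sympy as sp

r, b = sp.symbols('r b', positive=True)

# Metric components and scalar profile of the exact planar soliton (eq:exact).
g_tt, g_rr, g_xx = -r**2, r**2 / (b**2 + r**4), r**2
phi = 2 * b / r**2

print(sp.simplify(phi * g_xx))      # -> 2*b, independent of r
print(sp.simplify(phi * (-g_tt)))   # -> 2*b, the same constant
```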
In particular, in the planar limit we find that the hairy black branes are only dominant in the microcanonical ensemble, and never in the canonical or grand-canonical ensembles. ### Grand-canonical ensemble In the grand-canonical ensemble the system is in equilibrium with a thermodynamic reservoir at temperature $T$ and chemical potential $\mu$, and is allowed to exchange energy and electric charge with it. The preferred phase of such a system minimises the Gibbs free energy $G=M-TS-3\mu Q$. The results are presented in the left panel of Fig. \[fig:gibbs\] as the difference between the hairy black hole and RNAdS potentials (absolute quantities for a few regions in moduli space are shown in Appendix \[sec:therm\], Fig. \[fig:gibbsfull\]). We find that in the grand-canonical ensemble RNAdS black holes have lower Gibbs free energy than the hairy black holes with the same chemical potential $\mu$ and temperature $T$. Note that our hairy solutions all have $G<0$ but still $G>G_{RN}$, and that the RNAdS and hairy black hole phases are identical at the merger points. For RNAdS, $G=\frac{1}{4}R^2(1-R^2-\mu^2)$, and as shown in Fig. \[fig:gibbsfull\] (Appendix \[sec:therm\]) the Gibbs free energy of the hairy black holes is always negative and approaches $0$ as we increase $\epsilon_0$. Note that it would only be exactly zero if $\mu$ could reach $1$, but that can only happen at infinite $\epsilon_0$. Finally, so far we have only considered the transition between the hairy black holes and the RNAdS black holes. However, we note that the RNAdS black holes can themselves become subdominant with respect to AdS [@Hawking:1982dh]. Since all our energies are measured with respect to pure AdS, the energy of AdS simply corresponds to $M=0$ and therefore to vanishing thermodynamic potentials; black holes with negative free energy are thus thermally favoured over pure AdS. The small RNAdS branch has $\mu\leq 1$ and thus these black holes never compete with the hairy solutions. We have also studied local thermodynamic stability of the hairy black holes in the grand-canonical ensemble. We find that the specific heat at constant chemical potential is always positive, but the isothermal capacitance, defined as $$C_T = \left(\frac{\partial Q}{\partial \mu}\right)_T\,,\nonumber$$ exhibits an interesting behaviour. For $T>T_2$ it is always positive and for $T<T_1$ we find $C_T<0$. In the interval $T_1<T<T_2$, each isothermal has two solutions at fixed electric charge. The most energetic of these solutions has $C_T>0$, whereas the least energetic has $C_T<0$. ![*Left*: The difference between the Gibbs free energies of hairy and RNAdS solutions with the same chemical potential and temperature. *Right*: Difference of the Helmholtz free energies of the Reissner-Nordström and the hairy solutions with the same temperature and charge.[]{data-label="fig:gibbs"}](gibbsd.pdf){width="\textwidth"} ![*Left*: The difference between the Gibbs free energies of hairy and RNAdS solutions with the same chemical potential and temperature. *Right*: Difference of the Helmholtz free energies of the Reissner-Nordström and the hairy solutions with the same temperature and charge.[]{data-label="fig:gibbs"}](helmzd.pdf){width="\textwidth"} ### Canonical ensemble In the canonical ensemble we restrict exchanges with the reservoir such that $\delta Q=0$, but $\delta M\neq0$, while keeping the temperature constant. The dominant phase minimises the Helmholtz free energy $F=M-TS$. The results are presented in the right panel of Fig. \[fig:gibbs\].
We see an interesting interplay between the RNAdS and the hairy solutions which shows a phase transition in the constant temperature family of hairy solutions, occurring at $T_1$ and ending at $T_2$. The higher $\epsilon_0$ branch has lower $F$ than the corresponding RNAdS black hole, see Fig. \[fig:helmfull\] in Appendix \[sec:therm\]. For $T>T_2$ RNAdS has lower free energy than the hairy black hole. Note however that in the region where the hairy solutions dominate over the RNAdS black hole, $F$ is positive indicating that thermal AdS is the dominant phase in this region of moduli space. We have also studied the local thermodynamic stability of the hairy solutions in the region where they dominate over the corresponding RNAdS black holes. Local thermodynamic stability in the canonical ensemble is controlled by the sign of the specific heat at constant charge, which turns out to be positive for this range of $T$ and $Q$. We summarise our results for these two ensembles in Fig. \[fig:parameters\]. ![\[fig:parameters\]*Left*: Canonical ensemble: RNAdS black holes exist below the extremality curve (the curve separating orange (top left) and light-orange (middle) regions) and the hairy black holes exist above the merger curve (purple data points) and for $\mu>1$. RNAdS dominate over pure AdS only in the yellow region (bottom right). The hairy black holes have a higher free energy than thermal AdS and thus are not the preferred phase in the ensemble. *Right*: Grand-canonical ensemble: when both solutions coexist (again above the merger curve), the hairy black holes have a higher free energy than the corresponding RNAdS with the same $\mu$ and $T$. The yellow region (middle) shows the parameter space in which the RNAdS dominates over thermal AdS. The orange region (top left) is the extremal RNAdS and light-orange region (bottom left) is the sector in which pure AdS is preferred over the RNAdS black holes.](regf.pdf){width="\textwidth"} ![\[fig:parameters\]*Left*: Canonical ensemble: RNAdS black holes exist below the extremality curve (the curve separating orange (top left) and light-orange (middle) regions) and the hairy black holes exist above the merger curve (purple data points) and for $\mu>1$. RNAdS dominate over pure AdS only in the yellow region (bottom right). The hairy black holes have a higher free energy than thermal AdS and thus are not the preferred phase in the ensemble. *Right*: Grand-canonical ensemble: when both solutions coexist (again above the merger curve), the hairy black holes have a higher free energy than the corresponding RNAdS with the same $\mu$ and $T$. The yellow region (middle) shows the parameter space in which the RNAdS dominates over thermal AdS. The orange region (top left) is the extremal RNAdS and light-orange region (bottom left) is the sector in which pure AdS is preferred over the RNAdS black holes.](regg.pdf){width="\textwidth"} ![The entropy difference $S-S_{\mathrm{RN}}$ of the hairy black holes and the corresponding RNAdS with the same values of $Q$ and $M$, as a function of the scaled mass $M/M_{\mathrm{merger}}$ plotted for a range of temperatures. Here $M_{\mathrm{merger}}$ is the mass at the onset of the superradiant instability, where by definition $S-S_{\mathrm{RN}}=0$.[]{data-label="fig:entropy"}](entropd1.pdf){width="\textwidth"} ![\[fig:expncomp\] Comparison of the data to the small charge perturbative expansion for the four main thermodynamic quantities. 
The black disks are the numerical data for the hairy black holes (with $\varepsilon=0.1$) and the red solid line shows the prediction of [@Bhattacharyya:2010yg]. As expected, we observe larger deviations in the temperature and chemical potential.](expncompm.pdf){width="\textwidth"} ![\[fig:expncomp\] Comparison of the data to the small charge perturbative expansion for the four main thermodynamic quantities. The black disks are the numerical data for the hairy black holes (with $\varepsilon=0.1$) and the red solid line shows the prediction of [@Bhattacharyya:2010yg]. As expected, we observe larger deviations in the temperature and chemical potential.](expncomps.pdf){width="\textwidth"} ![\[fig:expncomp\] Comparison of the data to the small charge perturbative expansion for the four main thermodynamic quantities. The black disks are the numerical data for the hairy black holes (with $\varepsilon=0.1$) and the red solid line shows the prediction of [@Bhattacharyya:2010yg]. As expected, we observe larger deviations in the temperature and chemical potential.](expncompt.pdf){width="\textwidth"} ![\[fig:expncomp\] Comparison of the data to the small charge perturbative expansion for the four main thermodynamic quantities. The black disks are the numerical data for the hairy black holes (with $\varepsilon=0.1$) and the red solid line shows the prediction of [@Bhattacharyya:2010yg]. As expected, we observe larger deviations in the temperature and chemical potential.](expncompmu.pdf){width="\textwidth"} ![The infinity norm of the DeTurck vector across a range temperatures with the number of gridpoints $n=600$. Lower temperature hairy global solutions have the highest norm.[]{data-label="fig:deturck"}](deturck.pdf){width=".45\textwidth"} ### Microcanonical ensemble Finally the system in which $\delta Q=0$ and $\delta M=0$ is described by the microcanonical ensemble. The preferred phase in this case maximises the entropy. We find that hairy black holes are *only* dominant in this ensemble, see Fig. \[fig:entropy\] (and Fig. \[fig:entropyfull\] in the Appendix \[sec:therm\]). Also in this ensemble $T_2$ plays an important role. In Fig. \[fig:entropy\] we plot $S-S_{RN}$ as a function of $M/M_{\mathrm{merger}}$. Here, $S_{RN}$ corresponds to the entropy of a RNAdS black hole with the same values of $Q$ and $M$ as the hairy solution we are considering, and $M_{\mathrm{merger}}$ to the mass of the RNAdS solution at the onset of the superradiant instability with the same $T$. We see that $S-S_{RN}$ has maximum slope at $M/M_{\mathrm{merger}}=1$, becoming the smallest at $T=T_2$, and increasing again for $T>T_2$. This is a simple consequence of the first law of thermodynamics. \[sec:comparison\]Comparison with perturbative results ====================================================== In this section we compare our numerical results with the perturbative expansion of the hairy black hole solutions of [@Bhattacharyya:2010yg], which are only valid at small asymptotic charges. In [@Bhattacharyya:2010yg] the mass and charge are given to sixth order in $\mathcal{O}(\varepsilon^6,R^6,\varepsilon^2 R^4,\varepsilon^4R^2)$, however, the chemical potential is only given to $\mathcal{O}(R^4,\varepsilon^2 R^2,\varepsilon^4)$ and the temperature to $\mathcal{O}(R^3,\varepsilon^2 R^3,\varepsilon^4R)$. In Fig. (\[fig:expncomp\]) we present a detailed comparison between our numerical solutions, represented by the black disks, and the expansion of [@Bhattacharyya:2010yg] represented by the red solid line. 
Since the chemical potential and temperature are only determined up to a lower order than the energy, we expect a worse agreement with the numerical data. This is indeed what we observe in Fig. \[fig:expncomp\]. Nevertheless, the observed agreement between the numerical data and the analytic expansion of [@Bhattacharyya:2010yg] is reassuring. \[sec:discussion\]Summary and outlook ===================================== In this paper we have studied charged hairy black hole solutions in global AdS$_5$ spacetime using numerical methods. The action that yields these new black hole solutions arises from a consistent truncation of IIB string theory on AdS$_5\times S^5$. We provided strong numerical evidence that the black hole solutions with the scalar condensate exist between the onset of the superradiant instability and the BPS limit for all values of the hairy black hole charge. We obtain the smooth horizonless soliton with $\alpha=0$ in the limit $T\rightarrow 0$, while the singular soliton with $\alpha=1$ is reached for any isothermal with $T\neq0$ in the limit where $\epsilon_0$ (the scalar field evaluated at the horizon) becomes infinitely large. The fact that these new solutions extend all the way to the BPS limit makes them interesting from the field theory perspective. In fact, from the field theory perspective it is natural that solutions with mass and charge arbitrarily close to the BPS bound should exist, and yet the RNAdS black hole does not saturate such a bound. It is thus reassuring that we did find solutions that saturate the BPS bound; that they turn out to be hairy solutions could not have been anticipated. We identify the temperature range $T_1<T<T_2$ in which $(\partial M/\partial Q)|_T$ diverges and which is marked by complex thermodynamic properties. Globally, we find that the hairy solutions, when they exist, are the preferred phase in the microcanonical ensemble; however, they are subdominant in the canonical and grand-canonical ensembles. In the canonical ensemble, the hairy black holes dominate over RNAdS black holes at low temperatures and become subdominant at high temperatures; however, the hairy solutions are never preferred over pure AdS. Finally, in the grand-canonical ensemble, the RNAdS black holes always dominate over the hairy solutions. These results are recovered in the planar limit. A natural extension of this work is the inclusion of rotation in our setup. Following [@Dias:2011at] we started with an equally-rotating Myers-Perry-AdS$_5$ [@1986AnPhy.172..304M; @Hawking:1998kw] *ansatz* and constructed a sample of rotating, charged hairy black hole solutions. In this case the hairy black hole moduli space is governed by three parameters, $\epsilon_0$, $y_+$ and $\omega$, where the last is the black hole angular velocity. For this system it is known that there exists a one-parameter family of supersymmetric asymptotically AdS$_5$ black holes [@Gutowski:2004ez] with zero scalar field. However, we were unable to obtain supersymmetric hairy black holes. We are exploring the hairy black hole and soliton solution moduli space in greater detail, and the results will be presented in a follow-up paper [@Markeviciute:2016]. This setup can also be used to analyse other consistent truncations, for instance, consistent truncations of the holographic dual on AdS$_4\times S^7$. In this case the existence of hairy supersymmetric solutions is known (e.g. [@Cacciatori:2009iz]) and it would be interesting to explore how non-extremal configurations approach these supersymmetric hairy solutions.
JM is supported by an STFC studentship. We would like to thank Toby Crisford and Óscar J. Dias for reading an earlier version of this manuscript. Numerical validity {#sec:numval} ================== We verify that our solutions satisfy $\xi^\mu=0$ to sufficient precision, *i.e.* that our Einstein-DeTurck solutions are also Einstein (see Fig. \[fig:deturck\]). We find that low-temperature hairy black holes have the highest $\xi$ norm. The pseudospectral methods guarantee exponential convergence with increasing grid size, and we check that all our physical quantities and the norm of the DeTurck vector have this property (Figs. \[fig:hairyconv\], \[fig:planarconv\]). ![Comparison of the two *ansätze*. $M_1$ is the mass obtained with the first *ansatz* (\[eq:metric1\]), and $M_2$ that obtained with the second *ansatz* (\[eq:metric2\]). *Left*: $n=400$ data. *Right*: $n=600$ data. The highest-temperature solutions agree best, and the agreement gets worse as we lower the temperature.[]{data-label="fig:ansatz"}](ansatz400.pdf){width="\textwidth"} ![Comparison of the two *ansätze*. $M_1$ is the mass obtained with the first *ansatz* (\[eq:metric1\]), and $M_2$ that obtained with the second *ansatz* (\[eq:metric2\]). *Left*: $n=400$ data. *Right*: $n=600$ data. The highest-temperature solutions agree best, and the agreement gets worse as we lower the temperature.[]{data-label="fig:ansatz"}](ansatz600.pdf){width="\textwidth"} ![Convergence for the hairy global solutions for different values of the central scalar field density $\epsilon_0$ at one particular temperature. Convergence for the first *ansatz* is at least a few orders of magnitude worse. *Left*: The norm of the DeTurck vector versus the grid size. *Right*: Hairy black hole mass error versus the grid size. As the mass involves second derivatives, the other thermodynamic quantities have at least two orders of magnitude better convergence. For $\epsilon_0>20$ the convergence deteriorates rapidly.[]{data-label="fig:hairyconv"}](deturckce.pdf){width="\textwidth"} ![Convergence for the hairy global solutions for different values of the central scalar field density $\epsilon_0$ at one particular temperature. Convergence for the first *ansatz* is at least a few orders of magnitude worse. *Left*: The norm of the DeTurck vector versus the grid size. *Right*: Hairy black hole mass error versus the grid size. As the mass involves second derivatives, the other thermodynamic quantities have at least two orders of magnitude better convergence. For $\epsilon_0>20$ the convergence deteriorates rapidly.[]{data-label="fig:hairyconv"}](massc.pdf){width="\textwidth"} ![Convergence for the hairy planar solutions for different values of the central scalar field density $\epsilon_0$. As expected, it is much better than for the global hairy solutions.[]{data-label="fig:planarconv"}](pdeturckce.pdf){width="\textwidth"} ![Convergence for the hairy planar solutions for different values of the central scalar field density $\epsilon_0$. As expected, it is much better than for the global hairy solutions.[]{data-label="fig:planarconv"}](pmassx.pdf){width="\textwidth"} Additional figures {#sec:therm} ================== ![\[fig:helmfull\]Canonical free energy versus charge for the hairy solutions for three different temperatures. The red solid line is the corresponding RNAdS solution. For $T<T_1$ (left panel), the hairy solutions dominate over the RNAdS black hole. For $T_1<T<T_2$, three solutions coexist: two hairy black holes and the RNAdS black hole. One of the hairy solutions dominates over the RNAdS, while the other is subdominant (middle panel).
For $T>T_2$, the RNAdS black hole is always dominant in the canonical ensemble.](helmz.pdf){width="100.00000%"} ![\[fig:gibbsfull\] Gibbs free energy versus chemical potential for the hairy global black holes. Red solid line is the corresponding RNAdS solution. In the Gibbs ensemble, the hairy solutions are always subdominant with respect to the RNAdS black hole with the same temperature and chemical potential.](gibbs.pdf){width="\textwidth"} ![\[fig:gibbsfull\] Gibbs free energy versus chemical potential for the hairy global black holes. Red solid line is the corresponding RNAdS solution. In the Gibbs ensemble, the hairy solutions are always subdominant with respect to the RNAdS black hole with the same temperature and chemical potential.](gibbs1.pdf){width="\textwidth"} ![\[fig:entropyfull\] Entropy versus mass for the hairy solutions for three different temperatures. Red diamonds correspond to the RNAdS black hole with the same charge. The hairy solutions have higher entropy than the corresponding RNAdS black hole.[]{data-label="fig:entropies"}](entrop1.pdf){width="100.00000%"} ![Microcanonical diagram for the hairy black branes. The red line is the family of planar RNAdS black holes: hairy solutions always dominate over RNAdS black holes.[]{data-label="fig:pqm"}](pqm1.pdf){width="\textwidth"} ![\[fig:planartherm\] Scaled thermodynamic potentials for the planar hairy black holes. The red line in each figure is the RNAdS solution in the corresponding ensemble (left to right: microcanonical, canonical and grand-canonical ensembles). The hairy black branes are only dominant in the microcanonical ensemble.](psm.pdf){width="\textwidth"} ![\[fig:planartherm\] Scaled thermodynamic potentials for the planar hairy black holes. The red line in each figure is the RNAdS solution in the corresponding ensemble (left to right: microcanonical, canonical and grand-canonical ensembles). The hairy black branes are only dominant in the microcanonical ensemble.](phelmz.pdf){width="\textwidth"} ![\[fig:planartherm\] Scaled thermodynamic potentials for the planar hairy black holes. The red line in each figure is the RNAdS solution in the corresponding ensemble (left to right: microcanonical, canonical and grand-canonical ensembles). The hairy black branes are only dominant in the microcanonical ensemble.](pgibbs.pdf){width="\textwidth"} [^1]: This has actually never been shown in full generality, partially because of the self dual condition imposed on the Ramond-Ramond $F_5$ form flux, even though interesting progress has been recently made in [@Ciceri:2014wya]. [^2]: For instance, large extremal black holes are known to be unstable against neutral scalar field perturbations, but the triggering mechanism for this instability is *not* superradiance. [^3]: Note that we have fixed the $U(1)$ gauge freedom by taking $\phi$ to be real. [^4]: Defined so that the entropy for the RNAdS BH is $S=\pi R^3$. [^5]: In this case we use the same numerical method as for the hairy black holes, instead of solving (\[eq:soliton\]) directly. We will detail the numerical method shortly.
--- abstract: 'Wireless energy harvesting constitutes an effective way to prolong the lifetime of wireless networks. In this paper, an opportunistic decode-and-forward cooperative communication system is investigated, where the energy-constrained relays harvest energy from the received information signal and the co-channel interference (CCI) signals and then use that harvested energy to forward the correctly decoded signal to the destination. Different from conventional relaying systems with a constant energy supply, the transmission power of the energy-constrained relay depends on the available harvested energy, which is a random process. Best relay selection, which takes into account all the factors affecting the received signal quality at the destination, is deployed. The exact closed-form expression of the outage probability is derived, and the optimal value of the energy harvesting ratio for achieving minimum outage is numerically investigated. In addition, the impacts of the CCI signals on the system’s outage and diversity performances are analyzed. It is shown that the proposed relaying scheme can achieve full diversity order equal to the number of relays without the need for a fixed power supply.' author: - Yanju Gu --- Introduction ============ Energy harvesting has become a strong solution for providing green energy and is widely adopted in green communications [@EnergyCooper; @EH2; @Gu2; @DingMultiUser]. The energy is usually harvested from solar and wind power and then transmitted to the power grid [@5357331] with the knowledge of power states [@powerconf; @powertsp]. In recent years, radio frequency (RF) based energy harvesting techniques [@Ada; @EH1] have received more and more attention, as RF energy is already pervasive in wireless communication networks, e.g., the ambient RF signals from TV and cellular transmissions. In cooperative communication networks, the intermediate relays are randomly deployed for forwarding signals to the destination. These devices, such as sensors, are typically equipped with batteries of limited operational life. Replacing the batteries of such devices is not an easy task, as they may be located in hostile environments. Therefore, equipping relays with the capability of RF energy harvesting can prolong the lifetime of the network [@wang2016smart]. The relays can harvest energy from the received RF signal, which is a superposition of the desired signal and the CCI signals. Cooperative decode-and-forward (DF) relaying systems with a single energy-constrained relay have been studied in [@GuICC14; @Gu2], where the co-channel interference (CCI) signals are taken into account as a useful energy source. In this paper, we take a step further by considering multi-relay cooperation, which can provide further spatial diversity. In conventional DF multi-relaying systems with a constant power supply for the relay [@IkkiBestRelay], once the relay decodes the information correctly, the outage performance at the destination only depends on the channel quality between the relay and the destination. However, the situation is much more complicated for the energy-harvesting-based relaying system, since the energy-constrained relay operates with the random energy harvested from the received information and interference signals. In this work, the exact closed-form expression of the outage probability for the energy-harvesting-based multi-relay DF cooperative system is derived, where best relay selection is deployed.
The optimal value of the energy harvesting ratio for achieving minimum outage is numerically investigated. In addition, the impacts of the CCI signals on the system’s outage and diversity performances are also analyzed. Energy-Harvesting Based Relaying ================================ System and Channel Models ------------------------- A cooperative DF relaying system is considered, where the source $S$ communicates with the destination $D$ with the help of multiple energy-constrained intermediate relaying nodes ($R_1, R_2, \cdots, R_L$). Each node is equipped with a single antenna and operates in the half-duplex mode, in which the node cannot simultaneously transmit and receive signals in the same frequency band. Both the first hop (source-to-relay) and the second hop (relay-to-destination) experience independent Rayleigh fading, with the complex channel fading gains given by $h_i \sim CN(0,\Omega_{h_i})$ and $g_i \sim CN(0,\Omega_{g_i})$, respectively. The channels follow the block-fading model in which the channel remains constant during the transmission of a block and varies independently from one block to another. The channel state information is only available at the receiver. As aforementioned, the system operates in the presence of external interferers. Specifically, we assume that there is an aggregate CCI signal affecting the $i^{\rm th}$ relay. The channel fading gain between the interferer and the $i^{\rm th}$ relay, denoted $\beta_{i}$, is modeled as $\beta_{i}\sim CN(0,\Omega_{\beta_{i}})$. The desired channels and the interference channels are assumed to be mutually independent. Wireless Energy Harvesting at the Relay --------------------------------------- The power-splitting based protocol, where a portion of the received power is utilized for energy harvesting and the remaining power is used for information processing, is adopted at the relay node. Let $P$ be the received signal power at the relay and $\theta$, with $0\leq\theta\leq1$, the fraction of power that the relay harvests from the received interference and information signals. The remaining power, $(1-\theta)P$, is used for information decoding at the relay. In the first-hop phase, the source $S$ transmits signal $s$ with power $P_{_S}$ to the relays. We consider a pessimistic case in which power splitting only reduces the signal power, but not the noise power, which provides a performance lower bound for practical relaying networks. Accordingly, the received signal at the $i^{\rm th}$ relay for information detection is given by $$\label{Signal.P} y_{_{SR_i}} = \sqrt {(1-\theta)P_{_S}}h_is + \sqrt{(1-\theta)P_{i}} \beta_{i}s_{i} + n_{_{R_i}},$$ where $s_i$ and $P_i$ denote the signal and the corresponding power, respectively, from the interferer at the $i^{\rm th}$ relay, and $n_{_{R_i}}$ is the additive white Gaussian noise (AWGN) at the $i^{\rm th}$ relay with zero mean and variance $\sigma^2_R$. According to (\[Signal.P\]), the received signal-to-interference-plus-noise ratio (SINR) at the relay is given by $$\begin{aligned} \label{SINR.P} \gamma _{_{SR_i}} = \frac{(1-\theta)P_{_S}|h_i|^2}{\sigma_{_R}^2 + (1-\theta)P_{i}|\beta_{i}|^2} = \displaystyle\frac{\gamma_{h_i}}{1 + I_i},\end{aligned}$$ where ${\gamma}_{h_i} \triangleq\frac{(1-\theta)P_{_S}}{\sigma_{_R}^2}|h_i|^2$ and ${I}_i \triangleq \frac{(1-\theta)P_i}{\sigma^2_R}|\beta_i|^2$.
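The first-hop SINR statistics are straightforward to reproduce by simulation. The following minimal Python sketch draws the Rayleigh-faded channel powers and evaluates the SINR expression above; the parameter values are illustrative and are not those used for the figures in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the values used in the paper's figures).
theta, P_S, P_I = 0.6, 1.0, 0.1
sigma2_R = 0.01
Omega_h, Omega_beta = 1.0, 1.0
gamma_th = 10 ** (5 / 10)       # 5 dB threshold
n_trials = 10**6

# Rayleigh fading: |h_i|^2 and |beta_i|^2 are exponential with means Omega_h, Omega_beta.
h2 = rng.exponential(Omega_h, n_trials)
b2 = rng.exponential(Omega_beta, n_trials)

# First-hop SINR, following the expression above.
gamma_SR = (1 - theta) * P_S * h2 / (sigma2_R + (1 - theta) * P_I * b2)
print("empirical Pr{gamma_SR >= gamma_th}:", np.mean(gamma_SR >= gamma_th))
```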
The relay harvests energy from the received information signal and the interference signal for a duration of $T/2$ in each block; thus, the harvested energy at the $i^{\rm th}$ relay is given by $$\label{Harvest.P} E_{_{H_i}} = \eta{\theta}\bigg(P_{_S} |h_i|^2 + P_{i}|\beta_{i}|^2\bigg)T/2,$$ where $\eta$ is the energy conversion efficiency of the relay, with value between $0$ and $1$ depending on the harvesting circuitry. It is assumed that there is no additional device, such as an energy buffer, to store the harvested energy (a Harvest-Use architecture), which decreases the complexity of the energy-harvesting nodes; hence, all the energy collected during the harvesting phase is consumed by the relay. Since the processing power required by the transmit/receive circuitry at the relay is generally negligible compared to the power used for signal transmission [@TSplit; @Neg], here we suppose that all the energy harvested from the received signal is consumed by the relay for forwarding the information to the destination node. The transmission power of the $i^{\rm th}$ relay is then given by $$\begin{aligned} \label{PR.P} P_{_{R_i}} = \frac{{E_{_{H_i}}}}{T/2} = \frac{{\eta \theta \sigma _R^2}}{1 - \theta }\left({\gamma}_{h_i}+{I}_i\right).\end{aligned}$$ Selective Transmission ---------------------- Let $\mathcal{L}=\left\{1,2,\ldots,L\right\}$ denote the set of all $L$ relays and let $\mathcal{S}$ denote the decoding subset consisting of those relays that are able to decode the source message. That is, $$\begin{aligned} \label{S set} \mathcal{S} &\triangleq\left\{i\in\mathcal{L}: \gamma_{_{SR_i}}\geq \gamma_{_{th}} \right\},\end{aligned}$$ where $\gamma_{_\mathrm{th}}$ is a pre-defined threshold. In the second-hop phase, only a single relay among the relays belonging to subset $\mathcal{S}$ is allowed to transmit the signal. More specifically, from that decoding subset, the relay with the maximum relay-to-destination signal-to-noise ratio (SNR) retransmits the information. That is, $\hat i =\arg\max_{i\in \mathcal{S}}\{\gamma_{_{R_iD}}\}$. Such a technique is referred to as opportunistic relay selection. Note that the relay selection process requires centralized processing, and how to extend it to distributed processing [@duJMLR; @pairwise; @informationmatrix] is still an open problem. Therefore, the received SNR at the destination, given set $\mathcal{S}$, can be written as $$\begin{aligned} \gamma_{_{R_{\hat i}D}} =\frac{P_{_{R_{\hat i}}}|g_{\hat i}|^2}{\sigma_{_D}^2} =\underbrace{\frac{\eta\theta }{( 1 - \theta) } \frac{\sigma_{_R}^2}{\sigma_{_D}^2 } {|g_{\hat i}|^2}}_{\triangleq W_{\hat i}}({\gamma}_{h_{\hat i}}+{I}_{\hat i}),\label{SNR_D_BS}\end{aligned}$$ where ${\sigma_{_D}^2}$ is the variance of the AWGN at the destination. The defined random variable $W_{\hat i}$ follows the same type of distribution as $|g_{\hat i}|^2$. Note that, in contrast to traditional DF relaying systems with no rechargeable nodes, the transmission power $P_{_{R_i}}$ of the relay in the energy harvesting system is not a constant but a random variable, which depends on the energy replenished from the interference and information signals. Therefore, the distribution of the received SNR at the destination is determined not only by the distribution of the relay-to-destination channel power gain $|g_{\hat i}|^2$, but also by the distributions of the information and interference signal powers, i.e., ${\gamma}_{h_{\hat i}}$ and ${I}_{\hat i}$.
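The complete chain, i.e. energy harvesting, formation of the decoding subset, opportunistic relay selection and the destination SNR, can be mirrored by a direct Monte Carlo simulation. The sketch below follows the expressions above with illustrative parameters; it is not the simulation code behind the figures reported later.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only (not those used for the paper's figures).
L_relays, theta, eta = 3, 0.6, 1.0
P_S, P_I = 1.0, 0.1
sigma2_R, sigma2_D = 0.01, 0.01
Omega_h, Omega_g, Omega_beta = 1.0, 1.0, 1.0
gamma_th = 10 ** (5 / 10)
n_trials = 200_000

outages = 0
for _ in range(n_trials):
    h2 = rng.exponential(Omega_h, L_relays)      # |h_i|^2
    g2 = rng.exponential(Omega_g, L_relays)      # |g_i|^2
    b2 = rng.exponential(Omega_beta, L_relays)   # |beta_i|^2

    gamma_h = (1 - theta) * P_S * h2 / sigma2_R
    I = (1 - theta) * P_I * b2 / sigma2_R
    gamma_SR = gamma_h / (1 + I)                 # first-hop SINR

    S = np.flatnonzero(gamma_SR >= gamma_th)     # decoding subset
    if S.size == 0:
        outages += 1
        continue

    W = eta * theta / (1 - theta) * sigma2_R / sigma2_D * g2
    gamma_RD = W * (gamma_h + I)                 # destination SNR per candidate relay
    if gamma_RD[S].max() < gamma_th:             # opportunistic (best) relay selection
        outages += 1

print("simulated outage probability:", outages / n_trials)
```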
Performance Evaluation ====================== Outage Analysis --------------- As an important performance measure of wireless systems, outage probability is defined as the probability that the instantaneous output SNR falls below a pre-defined threshold $\gamma_{_\mathrm{th}}$. Mathematically speaking, $P_{\rm out}(\gamma_{_\mathrm{th}})={\rm Pr}\left(\gamma < \gamma_{_\mathrm{th}}\right)$. This SNR threshold guarantees the minimum QoS requirement of the destination users. In the DF relaying system under study, the outage probability at the destination, given set $\mathcal{S}$, is expressed as $$\begin{aligned} &\Pr \{ \gamma_{_{R_{\hat i}D}} < \gamma_{_\mathrm{th}}|\left| \mathcal{S} \right| = l\} \nonumber \\ =& \left[ \Pr \{ \gamma_{_{R_iD}} < \gamma_{_\mathrm{th}}\left| \gamma_{_{S{R_i}}} \ge \gamma_{_\mathrm{th}} \right.\} \right]^l \label{c1} \\ =& \left[ \frac{\Pr \{ \gamma_{_{R_iD}} < \gamma_{_\mathrm{th}},\gamma_{_{S{R_i}}} \ge \gamma_{_\mathrm{th}}\}}{\Pr \{ \gamma_{_{S{R_i}}} \ge \gamma_{_\mathrm{th}}\} } \right]^l , \label{c2}\end{aligned}$$ where $\left| \mathcal{S} \right|$ denotes the number of relays in the decoding set $\mathcal{S}$. Note that we assume a homogeneous scenario where the desired channels as well as the interference channels are independent and identically distributed (i.i.d.), i.e., $\Omega_{h_i}=\Omega_{h}$, $\Omega_{g_i}=\Omega_{g}$, $\Omega_{\beta_i}=\Omega_{\beta}$, $P_i=P_I$ for $i = 1,2,\ldots,L$, and hence, the outage probability is independent of the combination of relays forming the decoding subset. Expression (\[c1\]) is obtained by using the first principle of order statistics [@du2013network]. Since $\gamma_{_{R_iD}}$ and $\gamma_{_{SR_i}}$ are not independent (both depend on ${\gamma}_{h_i}$ and ${I}_i$), this conditional probability is rewritten as (\[c2\]), in which the probability in the denominator is given by $$\begin{aligned} \label{FirstPro} \Pr \{ \gamma_{_{S{R_i}}} \ge \gamma_{_\mathrm{th}}\} =& 1 - \Pr \{ \gamma_{_{S{R_i}}} < \gamma_{_\mathrm{th}}\} \nonumber \\ =& \left( {1 + \frac{{\bar \gamma}_{_{\beta}}}{{{\bar \gamma }_{h}}}\gamma_{_\mathrm{th}}} \right)^{ - 1}\exp \left( { - \frac{\gamma_{_\mathrm{th}}}{{{\bar \gamma }_{h}}}} \right),\ \ \\end{aligned}$$ where ${\bar\gamma}_{h} = \frac{(1-\theta)P_{_S}}{\sigma_R^2}\Omega_{h}$ and ${\bar\gamma}_{\beta}= \frac{(1-\theta)P_I}{\sigma_R^2}\Omega_{\beta}$. The calculation of the probability in the numerator of (\[c2\]) involves three random variables, ${\gamma}_{h_{i}}$, ${\gamma}_{g_{i}}$ and ${I}_{i}$; therefore, it is hard to get the result directly. By taking into account the physical meaning of this joint probability, it can be divided into two parts to simplify the derivation, that is, $$\begin{aligned} \label{Joint} &\Pr \{ \gamma_{_{R_iD}} < \gamma_{_\mathrm{th}},\gamma_{_{S{R_i}}} \ge \gamma_{_\mathrm{th}}\}\nonumber\\ =& \Pr \big \{ \gamma_{_{R_iD}}\mathbbm{1}_{\mathcal{C}} < {\gamma_{_\mathrm{th}}}\big \}-\Pr \{ \gamma_{_{S{R_i}}} < \gamma_{_\mathrm{th}}\},\end{aligned}$$ where $\mathbbm{1}_{\mathcal{C}}$ is the indicator random variable for the set $\mathcal{C}=\{\gamma_{_{SR_i}} \geq \gamma_{_\mathrm{th}}\}$, i.e., $\mathbbm{1}_{\mathcal{C}}=1$ if $\gamma_{_{SR_i}} \geq \gamma_{_\mathrm{th}}$ and $\mathbbm{1}_{\mathcal{C}}=0$ otherwise.
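As a quick sanity check on (\[FirstPro\]), the closed form can be compared against a Monte Carlo estimate obtained by drawing $\gamma_{h_i}$ and $I_i$ from exponential distributions; the means used below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)

g_h, g_b = 20.0, 2.0            # illustrative means of gamma_h and I
gamma_th = 10 ** (5 / 10)       # 5 dB

closed_form = np.exp(-gamma_th / g_h) / (1.0 + (g_b / g_h) * gamma_th)

gamma_h = rng.exponential(g_h, 10**6)
I = rng.exponential(g_b, 10**6)
monte_carlo = np.mean(gamma_h / (1.0 + I) >= gamma_th)

print(f"closed form : {closed_form:.4f}")
print(f"Monte Carlo : {monte_carlo:.4f}")
```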
\[PrPower\] The first probability in (\[Joint\]) is given by $$\begin{aligned} \label{CDF_D} \Pr \big \{ \gamma_{_{R_iD}}\mathbbm{1}_{\mathcal{C}} < {\gamma_{_\mathrm{th}}}\big \} =& 1 - \frac{1}{{\bar \gamma }_h-{\bar \gamma }_{_\beta} } \bigg[{{\bar \gamma }_h}\Gamma\Big(1,\frac{\gamma_{_\mathrm{th}}}{\bar \gamma_h};\frac{\gamma_{_\mathrm{th}}}{{\bar \gamma }_h{\bar \gamma }_g}\Big)-b^{-1} \nonumber \\ & \times\exp \left( {\frac{{{a}\gamma_{_\mathrm{th}}}}{{1 + \gamma_{_\mathrm{th}}}}} \right) \!\Gamma\Big(1,b\gamma_{_\mathrm{th}};\frac{b\gamma_{_\mathrm{th}}}{\bar \gamma_g}\Big)\bigg],\end{aligned}$$ where $a \triangleq \frac{{\rm{1}}}{{\bar \gamma }_{_\beta}} - \frac{{\rm{1}}}{{{\bar \gamma }_h}}$, ${b} \triangleq \frac{1}{{{{\bar \gamma }_h}}} + \frac{{a}}{{1 + \gamma_{_\mathrm{th}}}}$ and ${\bar \gamma }_g = \frac{\eta \theta}{1 - \theta } \frac{\sigma_R^2}{\sigma_D^2}\Omega_g$. $\Gamma(a,x;b)$ is the generalized incomplete Gamma function defined by $\Gamma(a,x;b) \triangleq \int_{x}^{\infty}t^{a-1}\exp(-t-bt^{-1})d t$. From (\[SNR\_D\_BS\]), we get $\gamma_{_{R_iD}}\mathbbm{1}_{\mathcal{C}}= W_i({\gamma}_{h_i}+{I}_{i})\mathbbm{1}_{\mathcal{C}}$. Define $Z \triangleq (\gamma_{h_i} + I_i)\mathbbm{1}_{\mathcal{C}}$, then the cumulative distribution function (CDF) of $Z$ is given by $$\begin{aligned} \label{CDFZ} F_Z(z) = \Pr\{Z<z\} = \int\int_{{x,y}\in \mathcal{A}}f_{\gamma_{h_i},I_i}(x,y) \mathrm{d}x \mathrm{d}y,\end{aligned}$$ where the set $\mathcal{A}={\left\{ {x + y < z,{\kern 1pt} \frac{x}{{1 + y}} > \gamma_{_\mathrm{th}}}, x\geq0, y\geq0 \right\}}$. After some set manipulations, we have $\mathcal{A}\neq \emptyset $ if and only if $ z> \gamma_{_\mathrm{th}}$. Since $\gamma_{h_i}$ and $I_i$ are independent, we get the joint distribution $f_{\gamma_{h_i},I_i}(x,y) = f_{\gamma_{h_i}}(x)f_{I_i}(y)$. Then, after some straightforward algebraic derivations, we obtain $$\label{Fz} F_Z(z) = \mathbbm{1}_{\mathcal{Z}} \int_0^{ \frac{z - \gamma_{_\mathrm{th}}}{1 + \gamma_{_\mathrm{th}}}} \int_{(1+y)\gamma_{_\mathrm{th}}}^{z - y} {f_{\gamma_{h_i}}(x)f_{I_i}(y)} \mathrm{d}x \mathrm{d}y.$$ Both $\gamma_{h_i}$ and $I_i$ are of exponential distributions. Integrate with respect to $x$ and $y$ yielding the CDF of $Z$, where [@Gradshteyn Eq.(3.351.1)] was used. Then the probability density function (PDF) of $Z$, $f_Z(z)$, follows directly from differentiating $F_Z(z)$ with respect to $z$, and is given by $$\begin{aligned} f_Z(z) =\mathbbm{1}_{\mathcal{Z}} \frac{1}{{\bar \gamma }_h-{\bar \gamma }_{_\beta}}\exp (-\frac{z}{{\bar \gamma }_h}) \left[ 1\!-\! \exp \left( { - a\frac{z - \gamma_{_\mathrm{th}}}{1 + \gamma_{_\mathrm{th}}}} \right) \right], \label{fZ3}\end{aligned}$$ where $a \triangleq \frac{{\rm{1}}}{{\bar \gamma }_{_\beta}} - \frac{{\rm{1}}}{{{\bar \gamma }_h}}$ and $\mathbbm{1}_{\mathcal{Z}}$ is the indicator random variable for the set $\mathcal{Z}=\{z> \gamma_{_\mathrm{th}}\}$, i.e., $\mathbbm{1}_{\mathcal{Z}}=1$ if $ z > \gamma_{_\mathrm{th}}$, otherwise, $\mathbbm{1}_{\mathcal{Z}}=0$. 
Finally, we have $$\begin{aligned} \label{Frd} \Pr \big \{ \gamma_{_{R_iD}}\mathbbm{1}_{\mathcal{C}} < {\gamma_{_\mathrm{th}}}\big \} =& \mathbb{E}_Z \left\{ {1 - \exp \left( - \frac{\gamma_{_\mathrm{th}}}{{{{\bar \gamma }_g}Z}} \right)} \right\} \nonumber \\ =& 1 - \int_0^\infty {\exp \left( { - \frac{\gamma_{_\mathrm{th}}}{{{{\bar \gamma }_g}z}}} \right){f_Z}} \left( z \right)dz \nonumber \\ =& 1 - \frac{1}{{\bar \gamma }_h-{\bar \gamma }_{_\beta} } \bigg[{{\bar \gamma }_h}\Gamma\Big(1,\frac{\gamma_{_\mathrm{th}}}{\bar \gamma_h};\frac{\gamma_{_\mathrm{th}}}{{\bar \gamma }_h{\bar \gamma }_g}\Big)-b^{-1} \nonumber \\ & \times\exp \left( {\frac{{{a}\gamma_{_\mathrm{th}}}}{{1 + \gamma_{_\mathrm{th}}}}} \right) \Gamma\Big(1,b\gamma_{_\mathrm{th}};\frac{b\gamma_{_\mathrm{th}}}{\bar \gamma_g}\Big)\bigg],\end{aligned}$$ where ${b} \triangleq \frac{1}{{{{\bar \gamma }_h}}} + \frac{{a}}{{1 + \gamma_{_\mathrm{th}}}}$ and $\Gamma(a,x;b)$ is the generalized incomplete Gamma function defined by $\Gamma(a,x;b) \triangleq \int_{x}^{\infty}t^{a-1}\exp(-t-bt^{-1})d t$, which completes the proof. According to the theorem of total probability, the unconditional outage probability at the destination, $P_{\mathrm{out}}\left( \gamma_{_\mathrm{th}} \right)$, is expanded as $$\begin{aligned} \label{Outage} P_{\mathrm{out}}\left( \gamma_{_\mathrm{th}} \right) =&\sum_{l = 0}^{L}\Pr\{\left| \mathcal{S} \right| = l\} \Pr\{ \gamma_{_{R_{\hat i}D}} < \gamma_{_\mathrm{th}}|\left| \mathcal{S} \right| = l\} \nonumber\\ =&\left[ \Pr \{ \gamma_{_{S{R_i}}} < \gamma_{_\mathrm{th}} \} \right]^L \nonumber \\ & \times \sum_{l = 0}^{L}\binom{L}{l} \left[ \frac{\Pr \big \{ \gamma_{_{R_iD}}\mathbbm{1}_{\mathcal{C}} < {\gamma_{_\mathrm{th}}}\big \}}{\Pr \{ \gamma_{_{S{R_i}}} < \gamma_{_\mathrm{th}}\} } -1\right]^l,\end{aligned}$$ where the following expression is used to obtain (\[Outage\]), $$\begin{aligned} &\Pr\{\left| \mathcal{S} \right| = l\}\nonumber \\ =&\binom{L}{l} \left[ \Pr \{ \gamma_{_{S{R_i}}} < \gamma_{_\mathrm{th}} \} \right]^{L-l} \left[ \Pr \{ \gamma_{_{S{R_i}}} \geq \gamma_{_\mathrm{th}} \} \right]^l.\end{aligned}$$ Numerical Results and Discussion -------------------------------- Numerical examples are presented and corroborated by simulation results to examine the outage of the DF cooperative communication system, where the energy-constrained relays harvest energy from the received information signal and the CCI signals. Hereafter, and unless stated otherwise, the threshold $\gamma_{_\mathrm{th}}$ is set to $5\mathrm{dB}$ and the energy conversion efficiency $\eta$ is set to $1$. To better evaluate the effect of the interference on the system’s outage, we define $\frac{P_{_S}\Omega_h}{P_I\Omega_{\beta}}$ as the average signal-to-interference ratio (SIR) and $\frac{P_I\Omega_{\beta}}{\sigma_R^2}$ as the average interference-to-noise ratio (INR) at the first hop. Figure \[Fig2\] shows the outage probability versus the energy harvesting ratio $\theta$ for different number of relays, where the first-hop average SNR is $20\mathrm{dB}$, and there is no interference affecting the relay. It is observed that the analytical results of (\[Outage\]) match perfectly the simulation results. As the number of available relays increases, the outage performance of the energy harvesting based relaying system improves. 
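Evaluating the analytical outage expression requires the generalized incomplete Gamma function $\Gamma(a,x;b)$ defined above, which is not available in standard numerical libraries. A minimal quadrature-based implementation sketch, together with sanity checks against two known special cases and arbitrary test values, is as follows.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn, gammaincc, kv

def gen_inc_gamma(a, x, b):
    """Generalized incomplete Gamma function: int_x^inf t^(a-1) exp(-t - b/t) dt."""
    val, _ = quad(lambda t: t**(a - 1) * np.exp(-t - b / t), x, np.inf)
    return val

a, x, b = 1.0, 0.5, 0.3
# b = 0 reduces to the ordinary upper incomplete Gamma function.
print(gen_inc_gamma(a, x, 0.0), gammaincc(a, x) * gamma_fn(a))
# x = 0 is expressible via a modified Bessel function: Gamma(a, 0; b) = 2 b^(a/2) K_a(2 sqrt(b)).
print(gen_inc_gamma(a, 0.0, b), 2.0 * b ** (a / 2) * kv(a, 2.0 * np.sqrt(b)))
```

The $b\to 0$ limit recovering the ordinary incomplete Gamma function provides a convenient regression test for any such implementation.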
The convex feature of the curves is due to the fact that the energy harvested for the second-hop transmission increases with increasing $\theta$, which effectively decreases the outage of the second hop and, accordingly, improves the outage performance of the system. Meanwhile, as $\theta$ increases, more power is harvested for information transmission and less power is left for information decoding, which increases the outage of the first hop and reduces the number of reliable relays for relay selection; therefore, the outage probability first decreases until reaching a minimum and then increases. In order to clearly demonstrate the diversity gain of the system and the impact of CCI signals on the system’s performance, Figs. \[Fig3\] and \[Fig4\] illustrate the outage probability of the system versus the first-hop average SNR for different numbers of relays under two different situations, where the first-hop average INR is $10\mathrm{dB}$ and the first-hop average SIR is $20\mathrm{dB}$, respectively. From Fig. \[Fig2\], it is clearly seen that the minimum outage is achieved when the energy harvesting ratio $\theta$ is around $0.6$, so the ratio $\theta$ is set to $0.6$ here for performance evaluation. It is shown in each plot of Figs. \[Fig3\] and \[Fig4\] that, even though the CCI signals can be utilized as useful energy during the energy harvesting phase, the existence of interference at the relay during the information decoding phase still limits the outage performance. It is also shown that when there is no interference affecting the relay, the spatial diversity order, which indicates how fast the outage decreases with respect to the SNR, increases as the number of available relays increases. In Fig. \[Fig3\], it is seen that the diversity order remains the same when the interference is kept constant, while in Fig. \[Fig4\], the diversity order is reduced to zero and error floors appear in the high-SNR regime due to the constant interference-to-signal ratio. Conclusions =========== In this paper, an opportunistic decode-and-forward (DF) cooperative communication system was studied, where the relays are energy-constrained and need to replenish energy from the received information signal and the co-channel interference (CCI) signals, and then use that harvested energy to forward the correctly decoded signal to the destination. Different from a traditional DF relaying system with no rechargeable nodes, the transmission power of the energy-constrained relay is no longer a constant but a random variable depending on the variation of the available energy harvested from the received information and CCI signals at the relay. In order to better evaluate the system performance, the exact closed-form expression of the outage probability was derived. The optimal value of the energy harvesting ratio for achieving minimum outage was numerically investigated. The proposed relaying scheme can achieve a full diversity order equal to the number of relays without the need for a fixed power supply. By applying interference cancellation schemes at the information decoder, the system performance can be further improved. [10]{} B. Gurakan, O. Ozel, J. Yang, and S. Ulukus, “Energy cooperation in energy harvesting communications,” *[IEEE]{} Trans. Commun.*, vol. 61, no. 12, pp. 4884–4898, Dec. 2013. V. Raghunathan, S. Ganeriwal, and M. Srivastava, “Emerging techniques for long lived wireless sensor networks,” *[IEEE]{} Commun.
Mag.*, vol. 44, no. 4, pp. 108–114, Apr. 2006. Y. Gu and S. Aissa, “[RF]{}-based energy harvesting in decode-and-forward relaying systems: Ergodic and outage capacities,” *[IEEE]{} Trans. Wireless Commun.*, vol. 14, no. 11, pp. 6425–6434, Nov. 2015. Z. Ding and H. V. Poor, “Multi-user [SWIPT]{} cooperative networks: Is the max-min criterion still diversity-optimal?,” *IEEE Trans. Wireless Commun.*, vol. 15, no. 1, pp. 553–567, 2016. H. Farhangi, “The path of the smart grid,” *IEEE Power and Energy Magazine*, vol. 8, no. 1, pp. 18–28, 2010. J. Du, S. Ma, Y.-C. Wu, and H. V. Poor, “Distributed [B]{}ayesian hybrid power state estimation with [PMU]{} synchronization errors,” in *Global Communications Conference (GLOBECOM), 2014 IEEE*, 2014, pp. 3174–3179. ——, “Distributed hybrid power state estimation under [PMU]{} sampling phase errors,” *IEEE Transactions on Signal Processing*, vol. 62, no. 16, pp. 4052–4063, 2014. J. S. Ho, A. J. Yeh, E. Neofytou, S. Kim, T. Tanabe, B. Patlolla, B. Beygui, and A. Poon, “Wireless power transfer to deep-tissue microimplants,” *Proceedings of the National Academy of Sciences*, vol. 111, no. 22, pp. 7974–7979, June 2014. L. Mateu and F. Moll, “Review of energy harvesting techniques and applications for microelectronics,” in *Proc. SPIE Circuits and Syst. II*, 2005, pp. 359–373. Q. Wang, X. Liu, J. Du, and F. Kong, “Smart charging for electric vehicles: A survey from the algorithmic perspective,” *IEEE Communications Surveys & Tutorials*, vol. 18, no. 2, pp. 1500–1517, 2016. Y. Gu and S. Aissa, “Interference aided energy harvesting in decode-and-forward relaying systems,” in *Proc. 2014 IEEE Int. Conf. Commun.*, June 2014, pp. 5378–5382. S. S. Ikki and M. H. Ahmed, “Performance analysis of cooperative diversity with incremental-best-relay technique over [R]{}ayleigh fading channels,” *IEEE Transactions on Communications*, vol. 59, no. 8, pp. 2152–2161, 2011. X. Zhou, R. Zhang, and C. K. Ho, “Wireless information and power transfer: Architecture design and rate-energy tradeoff,” in *Proc. 2012 IEEE Global Commun. Conf.*, Dec. 2012, pp. 3982–3987. B. Medepally and N. Mehta, “Voluntary energy harvesting relays and selection in cooperative wireless networks,” *[IEEE]{} Trans. Wireless Commun.*, vol. 9, no. 11, pp. 3543–3553, Nov. 2010. J. Du, S. Ma, Y.-C. Wu, S. Kar, and J. M. Moura, “Convergence analysis of distributed inference with vector-valued [G]{}aussian belief propagation,” *arXiv preprint arXiv:1611.02010*, 2016. ——, “Convergence analysis of belief propagation for pairwise linear [G]{}aussian models,” *IEEE Global Conference on Signal and Information Processing (GlobalSIP), arXiv preprint arXiv:1706.04074*, 2017. ——, “Convergence analysis of the information matrix in [G]{}aussian belief propagation,” *IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), arXiv preprint arXiv:1704.03969*, 2017. J. Du and Y.-C. Wu, “Network-wide distributed carrier frequency offsets estimation and compensation via belief propagation,” *IEEE Transactions on Signal Processing*, vol. 61, no. 23, pp. 5868–5877, 2013. I. S. Gradshteyn and I. M. Ryzhik, *Table of Integrals, Series, and Products*, 7th ed. Academic Press, 2007.
--- abstract: 'Nonsingular estimation of high dimensional covariance matrices is an important step in many statistical procedures like classification, clustering, variable selection, and feature extraction. After a review of the essential background material, this paper introduces a technique we call slicing for obtaining a nonsingular covariance matrix of high dimensional data. Slicing essentially amounts to assuming that the data have a Kronecker delta covariance structure. Finally, we discuss the implications of the results in this paper and provide an example of classification for high dimensional gene expression data.' author: - | Deniz Akdemir\ Department of Statistics\ University of Central Florida\ Orlando, FL 32816 bibliography: - 'arrayref.bib' title: 'Slicing: Nonsingular Estimation of High Dimensional Covariance Matrices Using Multiway Kronecker Delta Covariance Structures' --- Introduction ============ The advances in data collection methods and the increase in data storage and processing capabilities have led to data sets that are not suitable for analysis with the classical statistical approaches. For example, through DNA microarray techniques, the expression levels of millions of genes can easily be obtained. However, usually, the number of observations (the sample size) is much less than the number of expression levels observed. This is characteristic of many recent data sets in bioinformatics, signal processing, and many other fields of science: the number of variables ($p$) is much higher than the number of observations ($N$), i.e., $N<<p$. It is well known that when $N<p$ the usual sample covariance matrix will be singular. Many methods in statistics, like clustering and classification, depend on estimating the inverse of the covariance matrix. For small samples, and especially when $N<<p$, this becomes a major problem whenever we need to obtain the inverse of the covariance matrix. The technique of slicing, which we will discuss in detail in this paper, essentially obtains estimates of the covariance matrix under the assumption that the $p$-dimensional observations are realizations from a multivariate distribution with a certain Kronecker delta structure. Slicing is appropriate when the number of observations in the sample is much less than the number of variables, because choosing a Kronecker structure for the covariance greatly reduces the number of parameters. By using $2$-way, $3$-way, and in general $i$-way Kronecker structures for the covariance matrix, we can obtain nonsingular estimates of the covariance matrix when $N<<p.$ While developing slicing, we have used the concept of an array variate normal variable with multiway Kronecker delta structure obtained by using the rules of multilinear algebra. In Section 2, we first review array algebra as it is discussed in [@rauhala1974array], [@rauhala1980introduction], and Blaha [@blaha1977few]. The array variate normal model with Kronecker delta structure and the estimation of its parameters are also discussed in Section 2. In Section 3, we describe slicing in detail, provide the results from various simulations and apply the technique to high dimensional gene expression data. Array Algebra and Array Variate Normal Random Variable ====================================================== Array Algebra ------------- In this paper we will only study arrays with real elements. We will write $\widetilde{X}$ to say that $\widetilde{X}$ is an array.
When it is necessary we can write the dimensions of the array as subindices, e.g., if $\widetilde{X}$ is a $m_1 \times m_2\times m_3 \times m_4$ dimensional array in $R^{m_1\times m_2 \times \ldots \times m_i}$, then we can write $\widetilde{X}_{m_1 \times m_2\times m_3 \times m_4}.$ To refer to an element of an array $\widetilde{X}_{m_1 \times m_2\times m_3 \times m_4},$ we write the position of the element as a subindex to the array name in parenthesis, $(\widetilde{X})_{r_1r_2r_3r_4}.$ We will now review some basic principles and techniques of multi linear algebra. These results and their proofs can be found in Rauhala [@rauhala1974array], [@rauhala1980introduction] and [@blaha1977few]. *Inverse Kronecker product* of two matrices $A$ and $B$ of dimensions $p\times q$ and $r \times s$ correspondingly is written as $A\otimes^i B$ and is defined as $A\otimes^i B=[A(B)_{jk}]_{pr\times qs}=B\otimes A,$ where $'\otimes'$ represents the ordinary Kronecker product. The following properties of the inverse Kronecker product are useful: - ${\boldsymbol{0}}\otimes^i A= A \otimes^i {\boldsymbol{0}}={\boldsymbol{0}}.$ - $(A_1+A_2)\otimes^i B=A_1\otimes^i B+ A_2 \otimes^i B.$ - $A \otimes^i (B_1+ B_2)=A \otimes^i B_1+ A \otimes^i B_2.$ - $\alpha A \otimes^i \beta B= \alpha \beta A \otimes^i B.$ - $(A_1 \otimes^i B_1)(A_2 \otimes^i B_2)= A_1A_2 \otimes^i B_1B_2.$ - $(A \otimes^i B)^{-1}=(A^{-1} \otimes^i B^{-1}).$ - $(A \otimes^i B)^{+}=(A^{+} \otimes^i B^{+}),$ where $A^{+}$ is the Moore-Penrose inverse of $A.$ - $(A \otimes^i B)^{-}=(A^{-} \otimes^i B^{-}),$ where $A^{-}$ is the $l$-inverse of $A$ defined as $A^{-}=(A'A)^{-1}A'.$ - If $\{\lambda_i\}$ and $\{\mu_j\}$ are the eigenvalues with the corresponding eigenvectors $\{{\boldsymbol{x}}_i\}$ and $\{{\boldsymbol{y}}_j\}$ for matrices $A$ and $B$ respectively, then $A\otimes^i B$ has eigenvalues $\{\lambda_i\mu_j\}$ with corresponding eigenvectors $\{{\boldsymbol{x}}_i\otimes^i{\boldsymbol{y}}_j\}.$ - Given two matrices $A_{n\times n}$ and $B_{m\times m}$ $|A\otimes^i B|=|A|^m|B|^n,$ $tr(A\otimes^i B)=tr(A)tr(B).$ - $A\otimes^i B=B\otimes A=U_1 A\otimes B U_2,$ for some permutation matrices $U_1$ and $U_2.$ It is well known that a matrix equation $$AXB'=C$$ can be rewritten in its mono linear form as $$\label{eqmnf}A\otimes^i B vec(X)=vec(C).$$ Furthermore, the matrix equality $$A\otimes^i B XC'=E$$ obtained by stacking equations of the form (\[eqmnf\]) can be written in its mono linear form as $$(A\otimes^i B \otimes^i C) vec(X)=vec(E).$$ This process of stacking equations could be continued and R-matrix multiplication operation introduced by Rauhala [@rauhala1974array] provides a compact way of representing these equations in array form: *R-Matrix Multiplication* is defined element wise: $$((A_1)^1 (A_2)^2 \ldots (A_i)^i\widetilde{X}_{m_1 \times m_2 \times \ldots \times m_i})_{q_1q_2\ldots q_i}$$ $$=\sum_{r_1=1}^{m_1}(A_1)_{q_1r_1}\sum_{r_2=1}^{m_2}(A_2)_{q_2r_2}\sum_{r_3=1}^{m_3}(A_3)_{q_3r_3}\ldots \sum_{r_i=1}^{m_i}(A_i)_{q_ir_i}(\widetilde{X})_{r_1r_2\ldots r_i}.$$ R-Matrix multiplication generalizes the matrix multiplication (array multiplication in two dimensions)to the case of $k$-dimensional arrays. 
The following useful properties of the R-Matrix multiplication are reviewed by Blaha [@blaha1977few]: - $(A)^1B=AB.$ - $(A_1)^1(A_2)^2C=A_1CA'_2.$ - $\widetilde{Y}=(I)^1(I)^2\ldots (I)^i \widetilde{Y}.$ - $((A_1)^1 (A_2)^2\ldots (A_i)^i)((B_1)^1(B_2)^2\ldots (B_i)^i)\widetilde{Y}= (A_1B_1)^1(A_2B_2)^2\ldots(A_iB_i)^i\widetilde{Y}.$ The operator $rvec$ describes the relationship between $\widetilde{X}_{m_1 \times m_2 \times \ldots m_i}$ and its mono linear form ${\boldsymbol{x}}_{m_1m_2\ldots m_i\times 1}.$ \[def:rvec\] $rvec( \widetilde{X}_{m_1 \times m_2 \times \ldots m_i})={\boldsymbol{x}}_{m_1m_2\ldots m_i\times 1}$ where ${\boldsymbol{x}}$ is the column vector obtained by stacking the elements of the array $\widetilde{X}$ in the order of its dimensions; i.e., $(\widetilde{X})_{j_1 j_2 \ldots j_i}=({\boldsymbol{x}})_j$ where $j=(j_i-1)n_{i-1}n_{i-2}\ldots n_1+(j_i-2)n_{i-2}n_{i-3}\ldots n_1+\ldots+(j_2-1)n_1+j_1.$ Let $\widetilde{L}_{m_1 \times m_2 \times\ldots m_i}=(A_1)^1(A_2)^2\ldots(A_i)^i\widetilde{X}$ where $(A_j)^j$ is an $m_j\times n_j$ matrix for $j=1,2,\ldots,i$ and $\widetilde{X}$ is an $n_1\times n_2\times\ldots\times n_i$ array. Write $\mathbf{l}=rvec(\widetilde{L})$ and ${\boldsymbol{x}}=rvec(\widetilde{X}).$ Then, $\mathbf{l}=A_1\otimes^iA_2\otimes^i\ldots\otimes^i A_i{\boldsymbol{x}}.$ Therefore, there is an equivalent expression of the array equation in mono linear form. The square norm of $\widetilde{X}_{m_1 \times m_2 \times\ldots m_i}$ is defined as $$\|\widetilde{X}\|^2=\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\ldots\sum_{j_i=1}^{m_i}((\widetilde{X})_{j_1j_2\ldots j_i})^2.$$ The distance of $\widetilde{X_1}_{m_1 \times m_2 \times\ldots m_i}$ from $\widetilde{X_2}_{m_1 \times m_2 \times\ldots m_i}$ is defined as $$\sqrt{\|\widetilde{X_1}-\widetilde{X_2}\|^2}.$$ Let $\widetilde{Y}=(A_1)^1 (A_2)^2\ldots (A_i)^i\widetilde{X}+\widetilde{E}.$ Then $\|\widetilde{E}\|^2$ is minimized for $\widehat{\widetilde{X}}=(A_1^{-})^1(A_2^{-})^2\ldots(A_i^{-})^i\widetilde{Y}.$ Array Variate Normal Distribution --------------------------------- ([@DenizGuptaJAS])\[modkroncov\] Let $A_1, A_2,\ldots,A_i$ are non singular matrices of orders $m_1, m_2,\ldots, m_i$ and let $\widetilde{M}$ be an $m_1 \times$ $m_2$ $\times \ldots$ $\times m_i$ dimensional constant array. Then the pdf of array normal random variable $\widetilde{X}$ with Kronecker delta covariance structure is given by $$\label{eq:densityarn}\phi(\widetilde{X}; \widetilde{M},A_1,A_2,\ldots A_i)=\frac{\exp{(-\frac{1}{2}\|{(A_1^{-1})^1 (A_2^{-1})^2 \ldots (A_i^{-1})^i(\widetilde{X}-\widetilde{M})}\|^2)}}{(2\pi)^{m_1m_2\ldots m_i/2}|A_1|^{\prod_{j\neq 1}{m_j}} |A_2|^{\prod_{j\neq 2}{m_j}} \ldots |A_i|^{\prod_{j\neq i}{m_j}}}.$$ Distributional properties of a array normal variable with density in the form of Theorem \[modkroncov\] can obtained by using the equivalent mono linear representation. The moments, the marginal and conditional distributions, independence of variates should be studied considering the equivalent mono linear form of the array variable and the well known properties of the multivariate normal random variable. For the $m_1 \times m_2 \times\ldots \times m_i$ dimensional array variate random variable $\widetilde{X},$ the principal components are defined as the principal components of the $d=m_1m_2\ldots m_i$-dimensional random vector $rvec(\widetilde{X}).$ The main statistical problem is the estimation of the covariance of $rvec(\widetilde{X}),$ its eigenvectors and eigenvalues for small sample sizes. 
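As a quick numerical illustration of the inverse Kronecker product, the R-matrix multiplication and the $rvec$ operator, and of the mono-linear equivalence stated above, one can use a small NumPy sketch such as the following; the dimensions, matrices and function names are ours and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = (2, 3, 4)                       # dimensions n1, n2, n3 of the array X
m = (3, 2, 5)                       # row dimensions of A1, A2, A3
A = [rng.standard_normal((m[k], n[k])) for k in range(3)]
X = rng.standard_normal(n)

def rvec(T):
    # rvec stacks the elements with the first index running fastest
    # (Definition of rvec), i.e. column-major (Fortran) order
    return T.flatten(order='F')

def inv_kron(*mats):
    # A1 (x)^i A2 (x)^i ... = ... (x) A2 (x) A1 (inverse Kronecker product)
    out = np.array([[1.0]])
    for M in mats:
        out = np.kron(M, out)
    return out

# R-matrix multiplication (A1)^1 (A2)^2 (A3)^3 X as a sum over each dimension
L = np.einsum('ia,jb,kc,abc->ijk', A[0], A[1], A[2], X)

# Mono-linear form: rvec(L) = (A1 (x)^i A2 (x)^i A3) rvec(X)
print(np.allclose(rvec(L), inv_kron(*A) @ rvec(X)))   # True
```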
Estimation ---------- In this section we provide a heuristic method for estimating the model parameters. The optimality of these estimators is not proven but merely checked by simulation studies. Inference about the parameters of the model in Theorem \[modkroncov\] for the matrix variate case has been considered in the statistical literature ([@roy2003tests], [@roy2008likelihood], [@lu2005likelihood], [@srivastava2008models], etc.). In these papers, the unique maximum likelihood estimators of the parameters of the model in Theorem \[modkroncov\] for the matrix variate case are obtained under different assumptions for the covariance parameters. Some classification rules based on matrix variate observations with Kronecker delta covariance structures have been studied in [@roy2009classification], and also in [@krzysko2009discriminant]. The model in Theorem \[modkroncov\], as stated, is unidentifiable. However, this problem can easily be resolved by putting restrictions on the covariance parameters. The approach we take is to assume that $j-1$ of the last diagonal elements of the matrices $A_jA'_j$ are equal to $1$ for $j=1,2,\ldots,i.$ The Flip-Flop Algorithm is proven to attain the maximum likelihood estimators of the parameters of the two-dimensional array variate normal distribution [@srivastava2008models]. The following is similar to the flip-flop algorithm. First, assume $\{\widetilde{X}_1,$ $\widetilde{X}_2,$ $\ldots,$ $\widetilde{X}_N\}$ is a random sample from a $N(\widetilde{M},A_1,A_2,\ldots A_i)$ distribution with $j-1$ of the last diagonal elements of the matrices $A_jA'_j$ equal to $1$ for $j=1,2,\ldots,i.$ Further, we assume that all $A_j$’s are square positive definite matrices of rank at least $j.$ Finally, assume that we have $N\prod_{j=1}^{i}m_j>m_r^2$ for all $r=1,2, \ldots,i.$ Algorithm for estimation: 1. Estimate $\widetilde{M}$ by $\widehat{\widetilde{M}}=\frac{1}{N}\sum_{l=1}^N \widetilde{X}_l,$ and obtain the centered array observations $\widetilde{X}_l^c=\widetilde{X}_l-\widehat{\widetilde{M}}$ for $l=1,2,\ldots, N.$ 2. Start with initial estimates of $A_2, A_3, \ldots, A_i.$ 3. On the basis of the estimates of $A_2, A_3, \ldots, A_i$ calculate an estimate of $A_1$ by first scaling the array observations using $$\widetilde{Z}_l=(I)^1(A_2^{-1})^2 (A_3^{-1})^3 \ldots (A_i^{-1})^i\widetilde{X}_l^c,$$ and then calculating the square root of the covariance along the $1$st dimension of the arrays $\widetilde{Z}_l,$ $l=1,2,\ldots, N.$ 4. On the basis of the most recent estimates of the model parameters, estimate $A_j,$ $j=2,\ldots,i,$ by first scaling the array observations using $$\widetilde{Z}_l=(A_1^{-1})^1(A_2^{-1})^2 \ldots (A_{j-1}^{-1})^{j-1} (I)^{j} (A_{j+1}^{-1})^{j+1}\ldots(A_i^{-1})^i\widetilde{X}_l^c,$$ and then calculating the square root of the covariance along the $j$th dimension of the arrays $\widetilde{Z}_l$ for $j=2,\ldots, i.$ Scale the estimate of $A_jA'_j$ so that the last $j-1$ diagonal elements are equal to $1.$ 5. Repeat steps 3 and 4 until convergence is attained. Let $\widetilde{X}_l,$ $l=1,2,\ldots,N$ be a random sample of the array variate random variable $\widetilde{X}.$ Let $p=m_1 m_2 \ldots m_i.$ When $N<p,$ it is well known that the usual covariance estimator for $rvec(\widetilde{X})$ will be singular with probability one. Therefore, when $N<p,$ there is no consistent estimator of the covariance of $rvec(\widetilde{X})$ under the unstructured covariance assumption.
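A minimal sketch of this iterative scheme, specialized to the two-way ($i=2$) case for readability, is given below (Python/NumPy/SciPy). The initialization, the fixed number of iterations and the normalizing constants $1/(Nm_2)$ and $1/(Nm_1)$ are choices of ours in the spirit of the matrix-variate flip-flop algorithm, not prescriptions of the text; the final lines assemble the resulting nonsingular covariance estimate as a Kronecker product of the component estimates.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def msqrt(S):
    # symmetric square root (sqrtm may return a tiny imaginary part for PD input)
    return np.real(sqrtm(S))

def slice_fit(X, n_iter=50):
    """Heuristic flip-flop-style fit of the two-way (i = 2) model of Section 2.3.

    X : array of shape (N, m1, m2) holding the (sliced, real-valued) observations.
    Returns (M_hat, A1_hat, A2_hat), with A2 A2' scaled so its last diagonal entry is 1.
    """
    N, m1, m2 = X.shape
    M = X.mean(axis=0)                              # step 1: estimate M and center
    Xc = X - M
    S2 = np.eye(m2)                                 # step 2: initial estimate of A2 A2'
    for _ in range(n_iter):
        Z = Xc @ inv(msqrt(S2))                     # step 3: scale as (I)^1 (A2^{-1})^2 X^c
        S1 = sum(z @ z.T for z in Z) / (N * m2)     # covariance along the 1st dimension
        Z = inv(msqrt(S1)) @ Xc                     # step 4: scale as (A1^{-1})^1 (I)^2 X^c
        S2 = sum(z.T @ z for z in Z) / (N * m1)     # covariance along the 2nd dimension
        S2 = S2 / S2[-1, -1]                        # identifiability: last diagonal entry = 1
    return M, msqrt(S1), msqrt(S2)

# Example: slice p = 12 dimensional vectors into 3 x 4 arrays (consistent with rvec)
rng = np.random.default_rng(2)
Y = rng.standard_normal((10, 12))                   # N = 10 < p = 12 observations
X = np.stack([y.reshape(3, 4, order='F') for y in Y])
M, A1, A2 = slice_fit(X)
Lam_hat = np.kron(A2 @ A2.T, A1 @ A1.T)             # (A1 (x)^i A2)(A1 (x)^i A2)'
print(np.linalg.matrix_rank(Lam_hat))               # 12: nonsingular although N < p
```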
On the other hand, if we assume that the covariance matrix has Kronecker delta structure, we can obtain a nonsingular estimate of the covariance structure with the methods developed in this section. The condition on the sample size is relaxed considerably. If we have $pN>m_r^2$ for $r=1,2, \ldots,i$ and the assumptions stated before the algorithm for estimation of the parameters of this model hold, then the estimator of the covariance matrix is nonsingular. When the covariance does not have Kronecker structure, the estimate obtained here could be used as regularized nonsingular estimate of the covariance. \[exsim1\]Let\ $A_1= \left( \begin{array}{cc} 4 & 1 \\ 1 & 2 \end{array} \right)^{1/2}, $ $A_2= \left( \begin{array}{ccc} 3 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 1 \end{array} \right)^{1/2},$\ and $A_3=\left( \begin{array}{ccc} 4 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{array} \right)^{1/2}. $ Also, let $\widetilde{M}$ be the $0$ array of dimensions $2 \times 3 \times 3.$ The following are the estimates of $A_1$, $A_2$ and $A_3$ based on a random sample of size $100$ from the $N(A_1, A_2,A_3, \widetilde{M}).$\ $\widehat{A}_1= \left( \begin{array}{cc} 2.76 & 0.48 \\ 0.48 & 1.13 \end{array} \right)^{1/2}, $ $\widehat{A}_2= \left( \begin{array}{ccc} 4.69 & 0.25 & -0.52 \\ 0.25 & 2.70 & -0.04 \\ -0.52 & -0.04 & 1 \end{array} \right)^{1/2},$\ and\ $\widehat{A}_3=\left( \begin{array}{ccc} 4.27 & -0.13 & 0.55 \\ -0.13 & 1 & 0.02 \\ 0.55 & 0.02 & 1 \end{array} \right)^{1/2}.$\ The left plot in Figure \[figuresim\] compares the estimated eigenvalues to the true eigenvalues for this example. Slicing ======= A vector ${\boldsymbol{x}}$ of dimension $p$ can be sliced into $p/m_1=m_2$ pieces and organized into a matrix of order $p=m_1\times m_2$ for some natural numbers $m_1$ and $m_2.$ Or, in general, the same vector can be organized in an array of dimension $p=m_1\times m_2\times \ldots \times m_i$ for some natural numbers $m_1,$ $m_2, \ldots,$ $m_i.$ Once we slice the data and reorganize it in array form, we can pretend that this array data was generated from the model in Theorem \[modkroncov\]. We require that the additional assumptions stated before the algorithm for estimation of the parameters of this model hold. A nonsingular estimate of the covariance matrix $\Lambda$ of the $p-$dimensional vector variate random variable can be obtained by using the estimators from this algorithm and using $\hat{\Lambda}=(\hat{A}_1\otimes^i \hat{A}_2\otimes^i\hat{A}_i)(\hat{A}_1\otimes^i \hat{A}_2\otimes^i\hat{A}_i)'$ That we do not have to assume any covariance components are zero is the main difference and advantage of this regularization method to the usual shrinkage methods like lasso [@friedman2008sparse]. If $\{\lambda(A_r)_{r_j}\}$ are the $m_j$ eigenvalues of $A_rA'_r$ with the corresponding eigenvectors $\{({\boldsymbol{x}}_r)_{r_j}\}$ for $r=1,2,\ldots,i$ and $r_j=1,2,\ldots,m_r,$ then $(A_1\otimes^i A_2\otimes^iA_i)(A_1\otimes^i A_2\otimes^iA_i)'$ will have eigenvalues $\{\lambda(A_1)_{r_1}\lambda(A_2)_{r_2}\ldots\lambda(A_i)_{r_i}\}$ with corresponding eigenvectors $\{({\boldsymbol{x}}_1)_{r_1}\otimes^i({\boldsymbol{x}}_2)_{r_2}\otimes^i \ldots \otimes^i({\boldsymbol{x}}_i)_{r_i} \}.$ By replacing $A_r$ by their estimators, we estimate the eigenvalues and eigenvectors of the covariance of $rvec(\widetilde{X})$ using this relationship. 
Since each eigenvector is a Kronecker product of smaller components, the reduction in dimension obtained by this approach is larger than the one that could be obtained using ordinary principal components on the ordinary sample covariance matrix. Let ${\boldsymbol{x}}\sim N_{12}({\boldsymbol{\mu}}=0, \Lambda).$ We illustrate slicing for $i=2,$ $m_1=3$ and $m_2=4.$ $N=5,10, 15,20,\ldots,45,50$ sets of $N$ observations were generated and $\Lambda$ was estimated using $\widehat{\Lambda}=\widehat{A}_1\otimes^i \widehat{A}_2$ assuming the model in Theorem \[modkroncov\]. We repeated the whole experiment $5$ times. The results are summarized in Figure \[fig:slicingN\]. The covariance matrix in the left figure is the identity matrix. In the center figure we have a $\Lambda$ that has the same order Kronecker delta covariance structure as the slicing; the components of $\Lambda$ are unstructured and generated randomly. The right figure is the case where $\Lambda$ is a randomly generated unstructured covariance matrix. Slicing has a regularization effect that shrinks the eigenvalues towards each other. ![The true covariance structure is represented with the black line. The estimated covariance structure is denoted by different colors according to the sample size. Yellow colors are for small sample sizes, red for moderate sample sizes and green for the larger samples. The covariance matrix in the left figure is the identity matrix. As $N$ gets larger the estimators of the eigenvalues approach the true values; in this case the estimator seems to be consistent. In the center figure $\Lambda$ has the same order Kronecker delta covariance structure as the slicing, and each of the components of $\Lambda$ is unstructured and generated randomly. The right figure is the case where $\Lambda$ is a randomly generated unstructured covariance matrix. In these last two cases the estimator has a bias that does not decrease on average with increasing sample size; however, the variance of the estimator decreases as the sample size increases.[]{data-label="fig:slicingN"}](slicingN2.jpg){width="100.00000%"} \[alon1\] The Alon colon data set [@alon1999broad] has expression measurements on 2000 genes for $n_1=40$ tumor tissue and $n_2=22$ normal tissue samples. We will compare the means of the normal and tumor tissue samples. We assume first that normal and tumor tissues have the same covariance $\Lambda,$ a $2000\times 2000$ positive definite matrix. We slice each of the $n=62$ observations into a $40\times 50$ matrix and estimate $\Lambda$ with $\widehat{\Lambda}=\widehat{A}_1\otimes^i \widehat{A}_2,$ assuming the model in Theorem \[modkroncov\] holds. For testing the equality of the means, we calculate the $F^+$ statistic proposed in [@kubokawa2008estimation], replacing their estimator of the inverse of the covariance matrix $\Lambda$ with the inverse of $\widehat{\Lambda}$: $$F^{+}=\frac{2000-(62-1)+1}{(62-1)^2}\left(\frac{1}{40}+\frac{1}{22}\right)^{-1}(\bar{{\boldsymbol{x}}}_1-\bar{{\boldsymbol{x}}}_2)'\widehat{\Lambda}^{-1}(\bar{{\boldsymbol{x}}}_1-\bar{{\boldsymbol{x}}}_2)=3023.273.$$ Using the sampling distribution $F_{r,n-r}$ proposed in [@kubokawa2008estimation], assuming that the rank $r$ of $\widehat{\Lambda}$ is $62-1,$ the p-value is calculated as $0.01445.$ Thus, the hypothesis of equality of the means is rejected. \[exsim2\] $N=10$ i.i.d. observations from a $N_{12}({\boldsymbol{\mu}}=0, \Lambda)$ distribution are generated for a randomly generated unstructured nonsingular covariance matrix $\Lambda$.
The right plot in Figure \[figuresim\] compares the estimated eigenvalues obtained by slicing this data into a $2 \times 3 \times 3$ array with the ordinary sample covariance. ![The left plot in Figure \[figuresim\] compares the estimated eigenvalues to the true eigenvalues (Example \[exsim1\]). The right plot in Figure \[figuresim\] compares the estimated eigenvalues obtained under different assumptions to the true eigenvalues (Example \[exsim2\]). The red $*$’s represent the true values, black $\circ$’s are for estimates under Kronecker delta covariance assumption and the blue $+$’s are for estimates under unrestricted covariance assumption.[]{data-label="figuresim"}](figuresim1.jpg "fig:"){width=".4\textwidth"} ![The left plot in Figure \[figuresim\] compares the estimated eigenvalues to the true eigenvalues (Example \[exsim1\]). The right plot in Figure \[figuresim\] compares the estimated eigenvalues obtained under different assumptions to the true eigenvalues (Example \[exsim2\]). The red $*$’s represent the true values, black $\circ$’s are for estimates under Kronecker delta covariance assumption and the blue $+$’s are for estimates under unrestricted covariance assumption.[]{data-label="figuresim"}](figuresim2.jpg "fig:"){width=".4\textwidth"} In this example, we will use the heatmap of the true and estimated covariance matrices under different scenarios to see that slicing gives a reasonable description of the variable variances and covariances. In Figure \[fig:heatmap120var15t8identity\] the true covariance matrix is a $120\times 120$ identity matrix, we estimate this covariance matrix for $N=10, 50,$ and $100$ independent sets of random samples by using $15\times 8$ slicing. In Figure \[fig:heatmap120var15t8kron\] the true covariance matrix is a $120\times 120$ block diagonal matrix with Kronecker delta structure. Finally, in Figure \[fig:heatmap120var15t8unstruct\] the true covariance is a matrix with 4 way Kronecker structure. Convergence of the estimators is observed even when $p>>N.$ ![The true covariance matrix is a $120\times 120$ identity matrix with Kronecker delta structure. We estimate this covariance matrix for $N=10, 50,$ and $100$ independent sets of random samples by using $15\times 8$ slicing.[]{data-label="fig:heatmap120var15t8identity"}](image002.jpg){width=".5\textwidth"} ![The true covariance matrix is a $120\times 120$ block diagonal matrix with Kronecker delta structure. We estimate this covariance matrix for $N=10, 50,$ and $100$ independent sets of random samples by using $15\times 8$ slicing.[]{data-label="fig:heatmap120var15t8kron"}](image003.jpg){width=".5\textwidth"} ![The true covariance matrix is $120\times 120$ 4-way Kronecker delta structured matrix. We estimate this covariance matrix for $N=10, 50,$ and $100$ independent sets of random samples by using $15\times 8$ slicing.[]{data-label="fig:heatmap120var15t8unstruct"}](image001.jpg){width=".5\textwidth"} ![Linear discriminant analysis for the Alon colon data set [@alon1999broad]. An observation ${\boldsymbol{x}}$ was classified as ”normal” if ${\boldsymbol{x}}'\mathbf{w}>0,$ otherwise as ”tumor”. Misclassification rate is $\%11.3.$[]{data-label="fig:colonclassification"}](colonclassification2.jpg){width=".5\textwidth"} We have used the Fisher’s linear discriminant analysis for the Alon colon data set [@alon1999broad]. 
The linear discriminant function was calculated using $\mathbf{w}=\widehat{\Lambda}^{-1}(\bar{{\boldsymbol{x}}}_1-\bar{{\boldsymbol{x}}}_2),$ where $\widehat{\Lambda}$ is the covariance estimate from Example \[alon1\]. An observation ${\boldsymbol{x}}$ was classified as “normal” if ${\boldsymbol{x}}'\mathbf{w}>0,$ otherwise as “tumor”. Figure \[fig:colonclassification\] summarizes our findings. The misclassification rate is $11.3\%.$ In practice, how slicing is done matters. For example, a $24$ dimensional vector could be sliced as $2\times 12,$ $3 \times 8,$ $4 \times 6,$ or $2\times 3 \times 4,$ etc. In addition, the permutation of the variables will affect the estimators. As was discussed earlier, slicing obtains dimension reduction by writing the covariance matrix in terms of separable components, and we anticipate that more parsimonious models can be obtained by, for example, proposing a reduced rank mean for the array variable obtained after slicing. Yet another direction would be estimating each component of the covariance structure sparsely by using a penalty approach like the one used in [@friedman2008sparse]. These issues and improvements are important and will be dealt with in detail in a different article. In the following, we will use the GLASSO [@kim2005glasso] package, which implements the shrinkage estimator of covariance matrices of [@friedman2008sparse], in conjunction with the flip-flop algorithm. In practice, each of the components of the covariance structure could be penalized individually to obtain very sparse nonsingular covariance estimates. This is important for variable selection. (Sparse Slicing with GLASSO:) ![Slicing with GLASSO: The expression levels in this dataset were ordered with respect to their variances. For the samples of expression levels from normal and tumor tissues, high correlation values (lighter colors in the heatmap) are only observed for the expression levels that have high variance. Low variance components have little correlation among each other but they might be mildly correlated with the high variance expression levels.[]{data-label="fig:lastcolon"}](lastheatmap.jpg){width="100.00000%"} We insert the GLASSO algorithm of [@friedman2008sparse] at the 4th step of the estimation algorithm from Section 2.3, just before the scaling of the matrix. The heatmap of the estimated correlation matrix for the first 500 components of the Alon colon data set, obtained by using two-way slicing ($20\times 25$) and applying GLASSO to the components, is given in Figure \[fig:lastcolon\]. The shrinkage parameters for GLASSO should be selected with the aid of a model selection technique; here, the values of these parameters are chosen tentatively. The expression levels in this dataset were ordered with respect to their variances. For the samples of expression levels from normal and tumor tissues, high correlation values (lighter colors in the heatmap) are only observed for the expression levels that have high variance. Low variance components have little correlation among each other but they might be mildly correlated with the high variance expression levels. The linear discrimination of the groups on the 500 high variance expression levels results in a $12.9\%$ misclassification rate.
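For concreteness, the two-sample test and the linear discriminant analysis described above can be sketched as follows, reusing the `slice_fit` sketch from Section 2.3. The data loading is not shown, the group labelling and the sign convention follow the text verbatim, and, since the preprocessing of the original analysis is not reproduced here, the numerical outputs need not match the values $F^{+}=3023.273$ and $11.3\%$ reported above.

```python
import numpy as np

def slice_columns(Y, m1, m2):
    # slice each p-dimensional row of Y into an m1 x m2 matrix, consistent with rvec
    return np.stack([y.reshape(m1, m2, order='F') for y in Y])

def colon_analysis(expr, is_normal, m1=40, m2=50):
    """expr: (62, 2000) expression matrix; is_normal: boolean vector (22 True, 40 False)."""
    _, A1, A2 = slice_fit(slice_columns(expr, m1, m2))        # flip-flop sketch of Section 2.3
    Lam_inv = np.kron(np.linalg.inv(A2 @ A2.T), np.linalg.inv(A1 @ A1.T))
    xbar1 = expr[~is_normal].mean(axis=0)                      # tumor mean
    xbar2 = expr[is_normal].mean(axis=0)                       # normal mean
    d = xbar1 - xbar2
    n, p = expr.shape
    n1, n2 = int((~is_normal).sum()), int(is_normal.sum())
    # F+ statistic of the displayed formula, with the slicing-based covariance estimate
    f_plus = (p - (n - 1) + 1) / (n - 1) ** 2 / (1 / n1 + 1 / n2) * (d @ Lam_inv @ d)
    # Fisher linear discriminant: the text classifies x as "normal" when x' w > 0
    w = Lam_inv @ d
    error_rate = np.mean((expr @ w > 0) != is_normal)
    return f_plus, error_rate
```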
--- abstract: 'We introduce the vacillating voter model in which each voter consults two neighbors to decide its state, and changes opinion if it disagrees with either neighbor. This irresolution leads to a global bias toward zero magnetization. In spatial dimension $d>1$, anti-coarsening arises in which the linear dimension $L$ of minority domains grows as $t^{1/(d+1)}$. One consequence is that the time to reach consensus scales exponentially with the number of voters.' author: - 'R. Lambiotte$^{1,2}$' - 'S. Redner$^{3}$' title: Dynamics of Vacillating Voters --- The voter model [@L99] gives an appealing, albeit idealized, description of the opinion dynamics of a socially interacting population. In this model, each node of a graph is occupied by a voter that has one of two opinions, $\uparrow$ or $\downarrow$. The population evolves by: (i) picking a random voter; (ii) the selected voter adopts the state of a randomly-chosen neighbor; (iii) repeating these steps [*ad infinitum*]{} or until a finite system necessarily reaches consensus. Descriptively, each voter has no self-confidence and follows one of its neighbors. With this dynamics, a voter chooses a state with a probability equal to the fraction of neighbors in that state, a feature that renders the voter model soluble in all dimensions [@L99; @K02]. In this work, we investigate a variation that we term the [*vacillating*]{} voter model. By vacillating, we mean that a voter very much lacks confidence in its state. In an update, if a voter happens to select a random neighbor of the same persuasion, the voter is still not convinced that this state is right. Thus the voter selects another random neighbor and adopts this state. This vacillation causes a voter to change state with a larger probability than the fraction of disagreeing neighbors, and leads to a bias toward the zero-magnetization state in which there are equal densities of voters of each type. ![Illustration of an update for the vacillating voter on the square lattice (left and middle). For the configuration on the right, the central voter flips with probability 5/6 because out of the 6 ways of selecting two neighbors, only one choice leads to both neighbors agreeing (dashed).[]{data-label="model"}](model){width="40.00000%"} Thus vacillation inhibits consensus, but due to a different mechanism than that in the prototypical Axelrod model [@A], the bounded compromise model [@W] and its variants [@VR]. For these latter models, consensus is hindered because of the absence of interaction whenever two agents become sufficiently incompatible. For vacillating voters, it is individual uncertainty that forestalls consensus. The vacillating voter model also differs from models that incorporate “contrarians” [@galam] because voters still try to imitate their neighbors. The update steps in the vacillating voter model are: 1. Pick a random voter. 2. The voter picks a random neighbor. If the neighbor disagrees with the voter, the voter changes state. 3. If the neighbor and the voter agree, the voter picks another random neighbor and adopts its state. 4. Repeat steps 1–3 [*ad infinitum*]{} or until consensus is reached. For example, the probability that a vacillating voter on the square lattice flips is $0,\frac{1}{2},\frac{5}{6}$, and 1, respectively, when the number of anti-aligned neighbors is 0, 1, 2, and $\geq 3$ (Fig. \[model\]). In contrast, for the classic voter model, the flip probability is $\frac{k}{4}$, where $k$ is the number of neighbors of the opposite opinion.
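A direct simulation of these update steps is straightforward. The sketch below (Python/NumPy; lattice size, seed and run length are illustrative) implements one elementary update on the square lattice with periodic boundaries, drawing two distinct neighbors, which reproduces the flip probabilities $0,\frac{1}{2},\frac{5}{6},1$ quoted above.

```python
import numpy as np

def vacillating_step(spins, rng):
    """One update of the vacillating voter model on an L x L periodic lattice.

    spins : L x L array of +/-1 opinions (modified in place).
    """
    L = spins.shape[0]
    i, j = rng.integers(L, size=2)                         # step 1: pick a random voter
    nbrs = [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]
    first, second = rng.choice(4, size=2, replace=False)   # two distinct random neighbors
    if spins[nbrs[first]] != spins[i, j]:
        spins[i, j] *= -1                                  # step 2: disagreeing neighbor -> flip
    else:
        spins[i, j] = spins[nbrs[second]]                  # step 3: otherwise adopt a second neighbor's state

# Example: N = L^2 voters, one time step = N updates, random zero-magnetization start
rng = np.random.default_rng(3)
L = 50
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(100 * L * L):                               # 100 time steps
    vacillating_step(spins, rng)
print(spins.mean())                                        # magnetization stays near zero
```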
We now explore the consequences of this vacillation on voter dynamics. Consider first the mean-field limit. Here the density $x$ of $\uparrow$ voters obeys the rate equation $$\begin{aligned} \dot x &=& -x \left[1-x^2\right] + (1-x)\left[1-(1-x)^2\right]\nonumber\\ &=& x(1-x)(1-2x).\end{aligned}$$ The first term on the right accounts for the loss of $\uparrow$ voters in which a $\uparrow$ voter is first picked (factor $x$), and then the neighborhood cannot consist of two $\uparrow$ voters (factor $1-x^2$). Similarly, in the second (gain) term, a $\downarrow$ voter is first picked, and then the neighborhood must contain at least one $\uparrow$ voter. The factorized form shows that there are unstable fixed points at $x=0,1$ and a stable fixed point at $x=1/2$. Thus a population is driven to the zero-magnetization state. However, because consensus is the only absorbing state of the stochastic dynamics, a finite population ultimately reaches consensus. To characterize the evolution to this state, we first study the exit probability $\mathcal{E}_n$, defined as the probability that a population of $N$ voters ultimately reaches $\uparrow$ consensus when there are initially $n$ $\uparrow$ voters. Then $\mathcal{E}_n$ obeys the backward equation [@fpp] $$\label{E} \mathcal{E}_n=w_{n\to n+1} \,\mathcal{E}_{n+1} + w_{n\to n-1} \,\mathcal{E}_{n-1} + w_{n\to n} \,\mathcal{E}_{n},$$ where $w_{n\to m}$ is the probability for the transition from the state with $n$ $\uparrow$ voters to $m$ $\uparrow$ voters in an update. This equation expresses the probability to exit from $n$ as the probability to take one step (the factors $w$) times the probability to exit from the point reached after one step. In the large-$n$ limit, we write $x=n/N$, and the transition probabilities become $$\begin{aligned} w_{n\to n+1}&=& (1-x)\left[1-(1-x)^2\right]\\ w_{n\to n-1}&=& x(1-x^2)\\ w_{n\to n}&= &x^3+ (1-x)^3.\\\end{aligned}$$ Substituting these in , writing $\mathcal{E}_{n\pm 1}\to\mathcal{E}(x\pm \delta x)$, and expanding to second order in $\delta x$, gives $$\frac{3 x(1-x)}{2N} \frac{\partial^2\mathcal{E}}{\partial x^2} + x(1-x)(1-2x) \frac{\partial\mathcal{E}}{\partial x}=0,$$ with solution $$\label{exitMF} \mathcal{E}(x) = \int_{-1/2}^{x-1/2} e^{2Ny^2/3}\, dy\Bigg/ \int_{-1/2}^{1/2} e^{2Ny^2/3}\, dy.$$ Notice that $\mathcal{E}(x)$ approaches the constant value $1/2$ for increasing $N$ (Fig. \[exit\]), reflecting the bias towards the zero-magnetization state. Almost all initial states are driven to the potential well at $x=1/2$, so that the exit probability becomes independent of the initial density of $\uparrow$ voters. ![Exit probability $\mathcal{E}(x)$ versus the density of $\uparrow$ voters $x$ for the case $N=16$, $N=25$ and $N=100$.[]{data-label="exit"}](exitMF "fig:"){width="45.00000%"} -.3in Similarly, we study the time to reach consensus as a function of the initial composition of voters. Let $t_n$ denote the time to reach consensus (either all $\uparrow$ or all $\downarrow$) when starting with $n$ $\uparrow$ voters in a population of $N$ voters. Similar to , $t_n$ obeys the backward equation [@fpp] $$\label{T} t_n= \delta t+ w_{n\to n+1} \,t_{n+1} +w_{n\to n-1} \,t_{n-1}+ w_{n\to n} \,t_{n},$$ where $\delta t=1/N$ is the time elapsed in an update. In the large-$n$ limit, this equation becomes $$\frac{3 x(1-x)}{2N} \frac{\partial^2 t}{\partial x^2} + x(1-x)(1-2x) \frac{\partial t}{\partial x}=-1.$$ The formal solution is again elementary, but the result can no longer be expressed in closed form. 
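(For the exit probability, by contrast, the quadratures in (\[exitMF\]) are immediate to evaluate numerically; a minimal sketch, with system sizes matching Fig. \[exit\], is the following.)

```python
import numpy as np
from scipy.integrate import quad

def exit_prob(x, N):
    # mean-field exit probability of Eq. (exitMF)
    f = lambda y: np.exp(2.0 * N * y ** 2 / 3.0)
    num, _ = quad(f, -0.5, x - 0.5)
    den, _ = quad(f, -0.5, 0.5)
    return num / den

for N in (16, 25, 100):
    print(N, [round(exit_prob(x, N), 3) for x in (0.1, 0.3, 0.5, 0.7, 0.9)])
# E(x) flattens toward 1/2 over a widening range of x as N grows
```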
The main result is that the consensus time scales as $e^{a N}$, with $a$ a constant of order 1. In contrast to the classical voter model, the global bias drives the system into a potential well that must be surmounted to reach consensus. Thus the consensus time is anomalously long. In one dimension, a voter changes its opinion if at least one of its neighbors is in disagreement. For example, a $\uparrow$ voter flips with rate 1 if the neighborhood configurations are $\uparrow \uparrow \downarrow$, $\downarrow \uparrow \uparrow$, and $\downarrow \uparrow \downarrow$. As an amusing side-note, this dynamics is equivalent to rule 178 of the one-dimensional cellular automaton [@wolfram], except that this rule is implemented asynchronously in the vacillating voter model. In the framework of the Ising-Glauber model [@IG], the flip rate of a voter at site $i$, whose states are now represented by $\sigma_i=\pm 1$, is $$\begin{aligned} \label{w} w(\{\sigma\}\!\!\rightarrow\!\!\{\sigma'\}_i) \!=\! -\frac{\left[\sigma_{i}(\sigma_{i+1}\!+\! \sigma_{i-1}) \!+\! \sigma_{i-1} \sigma_{i+1} \! -\! 3\right]}{4}, \end{aligned}$$ with $\{\sigma\}$ denoting the state of all voters and $\{\sigma'\}_i$ the state where the $i^{\rm th}$ voter flips. The first two terms correspond to conventional Glauber kinetics, but as mentioned parenthetically in Ref. [@IG], the presence of the $\sigma_{i-1}\sigma_{i+1}$ term couples the rate equation for the mean spin to 3-body terms and the model is not exactly soluble. The mean spin, $s_j\equiv\langle\sigma_j\rangle=\sum_{\{\sigma\}} \sigma_j P(\{\sigma\};t)$ evolves according to $$\begin{aligned} \label{st} \frac{\partial s_j}{\partial t}&=& \sum_{\{\sigma\}}\sigma_j\Big[\sum_i w(\{\sigma'\}_i \rightarrow \{\sigma\})\, P(\{\sigma'\}_i;t) \nonumber \\ &~&~~~~~~~~-w(\{\sigma\} \rightarrow \{\sigma'\}_i) \, P(\{\sigma\};t)\Big] ,\end{aligned}$$ which reduces to, after straightforward but tedious steps, $$\begin{aligned} \label{eqS} \frac{\partial s_j}{\partial t} = \frac{1}{2}\left(s_{j+1} + s_{j-1} + \langle\sigma_{j-1}\sigma_j\sigma_{j+1}\rangle - 3s_j\right).\end{aligned}$$ In a similar spirit, the rate equation for the nearest-neighbor correlation function, $\langle\sigma_j \sigma_{j+1}\rangle$, is $$\begin{aligned} \label{ct} \frac{\partial \langle\sigma_j \sigma_{j\!+\!1}\rangle}{\partial t} &=& \frac{1}{2}\left[\langle \sigma_{j\!-\!1}(\sigma_j\!+\!\sigma_{j\!+\!1})\rangle+ \langle (\sigma_j\!+\!\sigma_{j\!+\!1})\sigma_{j\!+\!2}\rangle\right]~~~~~\nonumber \\ &~&~~~~~~~~ +1 -3\langle\sigma_j\sigma_{j+1}\rangle\end{aligned}$$ We can simplify Eq.  by considering domain walls—nearest-neighbor anti-aligned voters—whose density is given by $\rho=(1-\langle \sigma_i\sigma_{i+1}\rangle)/2$. According to the flip rate in Eq. , an isolated domain wall diffuses freely, just as in the pure voter model. However, when two domain walls are adjacent, they annihilate with probability 1/3 or one hops away from the other with probability 2/3. This process is isomorphic to single-species annihilation, $A+A\to 0$, but with a reduced reaction rate compared to freely diffusing reactants because of the nearest-neighbor repulsion. The domain wall density still asymptotically decays as $t^{-1/2}$ with an amplitude that depends on the magnitude of the repulsion. 
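The predicted $t^{-1/2}$ decay of the domain wall density is easy to check directly. The sketch below simulates the one-dimensional rule (a voter flips whenever at least one neighbor disagrees) on a ring and prints $\rho\sqrt{t}$, which should approach a constant; the system size, recording times and seed are illustrative, and the plain Python loop is adequate only for this modest size.

```python
import numpy as np

rng = np.random.default_rng(4)
L = 2000
s = rng.choice([-1, 1], size=L)              # random zero-magnetization initial state
record = (10, 100, 1000)

for t in range(1, max(record) + 1):          # one time step = L single-site updates
    for i in rng.integers(L, size=L):
        if s[(i - 1) % L] != s[i] or s[(i + 1) % L] != s[i]:
            s[i] = -s[i]                     # flip if at least one neighbor disagrees
    if t in record:
        rho = np.mean(s != np.roll(s, -1))   # domain-wall density
        print(t, rho, rho * np.sqrt(t))      # rho * sqrt(t) should level off
```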
Because domain walls are widely separated at long times, the second-neighbor correlation function is $$\begin{aligned} \langle\sigma_{j} \sigma_{j+2}\rangle&=& +\text{prob(0 or 2 walls between\ }j\ \text{and\ } j\!+\!2)\\ &~&- \text{prob(1 wall between\ }j\ \text{and\ } j\!+\!2)\\ &\approx& 1-2\rho.\end{aligned}$$ Using the approximation of widely separated domain walls, $\langle\sigma_{j} \sigma_{j+2}\rangle \approx \langle\sigma_{j} \sigma_{j+1}\rangle\equiv m_2$, and the rate equation for nearest-neighbor correlation function $m_2$ becomes $\frac{\partial m_2}{\partial t} = 1 - m_2$, with solution $$\begin{aligned} m_2(t) = 1 + \left[m(0)^2-1\right]\, e^{-t} .\end{aligned}$$ Here we chose the uncorrelated initial condition, so that $m_2(0)=m(0)^2$, where $m(0)\equiv\langle s_j(0)\rangle $ is the average magnetization at $t=0$. ![Exit probability $\mathcal{E}(x)$ as a function of the initial density of $\uparrow$ voters $x$ for a one dimensional system composed of 25, 36 and 1000 voters respectively The voter model result, $\mathcal{E}(x)=x$, that follows from magnetization conservation is shown for comparison.[]{data-label="fig3"}](exit1d "fig:"){width="45.00000%"} -0.3in Let us now return to the rate equation for the mean spin. For a spatially homogeneous system, $\langle s_j\rangle$ are all identical and the magnetization is $m\equiv \langle s_j\rangle$. Also, we follow Ref.  [@MR] and decouple the 3-spin correlation function as $\langle \sigma_{j-1} \sigma_j \sigma_{j+1}\rangle \approx m m_2$. Then by averaging over all sites, the rate equation equation becomes $$\begin{aligned} \frac{\partial m}{\partial t} &=& \frac{1}{2}(m\, m_2 -m) = \frac{m}{2} e^{-t} (m(0)^2-1)\,,\end{aligned}$$ whose solution, for the initial condition $m(0)$, is $$\begin{aligned} m(t) &=& m(0)\, e^{\frac{1}{2} (1-e^{-t}) (m(0)^2-1)}~.\end{aligned}$$ Thus we obtain a non-trivial relation between final magnetization $m(\infty)$ and $m(0)$ $$\begin{aligned} \label{mInf} m(\infty) = m(0)\, e^{\frac{1}{2} (m(0)^2-1)}~.\end{aligned}$$ Since the density of $\uparrow$ voters is $x=(1+m)/2$, while $m(\infty)=2\mathcal{E}(x)-1$, the exit probability $\mathcal{E}(x)$ becomes $$\begin{aligned} \label{exit1D} \mathcal{E}(x) = \frac{1}{2} \left[(2x-1) e^{2x(x-1)} +1 \right].\end{aligned}$$ This result is in excellent agreement with our simulation results (Fig. \[fig3\]). For small systems ($N=25$ and $36$), we directly measure the probability $\mathcal{E}(n)$ that the population ultimately reaches a $\uparrow$ consensus when there are initially $n$ $\uparrow$ voters and averaged over 5000 realizations of the dynamics. We also verified Eq. (\[exit1D\]) for large systems ($N=1000$ nodes) by a different approach that avoids the need to measure $\mathcal{E}(n)$ directly by simulating until ultimate consensus. Instead, we run the dynamics up to $1000$ time steps and measure the magnetization at this time. We then average over 200 realizations of the process to obtain $m(\infty)$ and finally obtain $\mathcal{E}(x)$ from $\mathcal{E}(x)=(1+m(\infty))/2$. We again find excellent agreement with our prediction . ![Double logarithmic plot of the number of $\uparrow$ voters versus time on the square lattice starting from a $4\times 4$ square of $\uparrow$ voters in a background of $\downarrow$ voters. []{data-label="a-vs-t"}](a-vs-t){width="35.00000%"} The vacillating voter model in greater than one dimension has the new qualitative feature that small minority domains tend to grow. 
This anti-coarsening is a manifestation of the bias toward the zero-magnetization state. To appreciate how this anti-coarsening arises, consider a circular two-dimensional island domain of $\uparrow$ voters of linear dimension $L$ and area $A$ in a sea of $\downarrow$ voters. For large $L$, each voter at the interface has the same local environment, so that there is no environmental bias. However, there are slightly more $\downarrow$ voters just outside the circle that $\uparrow$ voters just inside. In a time of the order of $\delta t\sim L$ each interface voter is updated once, on average, so that the island area increases by an amount $\delta A$ that is of the order of the difference in the number of $\uparrow$ and $\downarrow$ voters at the interface. Thus $\frac{\delta A}{\delta t}\sim \frac{1}{L}$, which gives $L\sim t^{1/3}$. In $d$ dimensions, this same reasoning gives $L\sim t^{1/(d+1)}$. We probed for this anti-coarsening by simulating the evolution of an initial small square domain of $\uparrow$ voters in a $\downarrow$ background in two dimensions (Fig. \[a-vs-t\]). Although such domains do not remain contiguous, the data suggest that the number, or occupied area, of $\uparrow$ voters grows as $t^{\alpha}$, with $\alpha$ around 0.73, in reasonable agreement with our expectation $\alpha=2/3$. ![Exit probability $\mathcal{E}(x)$ as a function of the initial density of $\uparrow$ voters $x$ for a square lattice of 16, 25, 36 and 49 voters, respectively, with periodic boundary conditions. []{data-label="fig5"}](exit2d "fig:"){width="45.00000%"} -0.3in ![Snapshots of the vacillating (left) and pure (right) voter model on a $50 \times 50$ lattice starting with a random zero-magnetization state after $100$ time steps. The correlation function $C_1$ equals $0.31$ (left) and $0.59$ (right) respectively. []{data-label="fig6"}](comparison){width="50.00000%"} A system with non-zero initial magnetization is therefore again drawn to the attractor where the density $x$ of $\uparrow$ voters equals $1/2$ before final consensus is eventually reached. It is only for $x$ initially very close to 0 or 1 that the system achieves consensus without first being drawn to this attractor. Thus the exit probability $\mathcal{E}(x)$ should be nearly independent of $x$ for almost all $x$, just as in the mean-field limit. Simulations of the vacillating voter model on the square lattice (Fig. \[fig5\]) confirm that $\mathcal{E}(x)$ approaches 1/2 for a progressively wider range of $x$ as $L$ increases. Simulations also show that the correlation function $C_1\equiv \langle \sigma_{i,j} \sigma_{i,j+1}\rangle$ does not approach 1 in the long-time limit, as in one dimension or in the pure voter model in two dimensions. Rather, $C_1$ reaches the stationary value $0.31$, so that domains of opposite opinions coexist (Fig. \[fig6\]), and only a rare macroscopic fluctuation allows consensus to be reached. In summary, when vacillation is incorporated into the voter model, consensus is inhibited but not prevented. In the mean-field limit, the vacillation drives a population away from consensus and toward the zero-magnetization state. A finite system ultimately achieves consensus only via a macroscopic fluctuation that allows the system to escape this bias-induced potential well. Because of the bias, the probability to reach $\uparrow$ consensus is essentially independent of the initial composition of the population. 
In one dimension, the system coarsens, albeit more slowly than in the pure voter model because of the repulsion of neighboring domain walls, and the probability to reach the final state of $\uparrow$ consensus has a non-trivial initial state dependence. In two and higher dimensions, domains slowly anti-coarsen to drive the system to the zero-magnetization state. The overall behavior is qualitatively similar to that of the mean-field vacillating voter model, and very different from the pure voter model. We gratefully acknowledge the support of the European Commission Project CREEN FP6-2003-NEST-Path-012864 (RL), the ARC “Large Graphs and Networks” (RL) and NSF grant DMR0535503 (SR), and the hospitality of the Ettore Majorana Center where this project was initiated. [10]{} T. M. Liggett, [*Stochastic interacting systems: contact, voter, and exclusion processes*]{}, (Springer-Verlag, New York, 1999). P. L. Krapivsky, Phys. Rev. A [**45**]{}, 1067 (1992). R. Axelrod, J. Conflict Res. [**41**]{}, 203 (1977); R. Axtell, R. Axelrod, J. Epstein, and M. D. Cohen, Comput. Math. Organiz. Theory, [**1**]{}, 123 (1996). G. Weisbuch, G. Deffuant, F. Amblard, and J. P. Nadal, Complexity [**7**]{}, 55 (2002); R.  Hegselmann and U. Krause, J. Artificial Societies and Social Simulation [**5**]{}, no. 3; E. Ben-Naim, P. L. Krapivsky, and S. Redner, Physica D [**183**]{}, 190 (2003). F. Vazquez and S. Redner, J. Phys. A [**37**]{}, 8479 (2004). S. Galam, [*Physica A*]{} [**333**]{} 453 (2004) See [*e.g.*]{}, S. Redner, [*A Guide to First-Passage Processes*]{}, (Cambridge University Press, New York, 2001), Chap. 2. S. Wolfram, [*A New Kind of Science*]{}, (Wolfram Media, Champaign, IL, 2002). R. J. Glauber, J. Math. Phys. [**4**]{}, 294 (1963). M. Mobilia and S. Redner, Phys. Rev. E [**68**]{} 046106, (2003).
--- abstract: 'We provide the detailed calculation of a general form of the Maxwell and London equations that takes into account gravitational corrections in linear approximation. We determine the possible alteration of a static gravitational field in a superconductor making use of the time-dependent Ginzburg-Landau equations, providing also an analytic solution in the weak-field condition. Finally, we compare the behavior of a superconductor with a classical low-$T_c$ superconductor, analyzing the values of the parameters that can enhance the reduction of the gravitational field.' author: - Giovanni Alberto Ummarino - Antonio Gallerati bibliography: - 'bibliografia.bib' title: '**Superconductor in a weak static gravitational field**' --- Introduction {#sec:Intro} ============ There is no doubt that the interplay between the theory of the gravitational field and superconductivity is a very intriguing field of research, whose theoretical study has involved many researchers for a long time [@DeWitt:1966yi; @papini1967detection; @Felch:1985pre; @anandan1994relgra; @anandan1977nuovo; @anandan1984relthe; @ross1983london; @hirakawa1975super; @rystephanick1973london; @peng1991interaction; @ciubotariu1996absence; @agop1996gravitational; @dinariev1987relativistic; @minasyan1976londons; @rothen1968application; @li1991effects; @peng1991electrodynamics; @li1992gravitational; @torr1993gravitoelectric]. Podkletnov and Nieminen claimed experimental evidence for gravitational shielding in a high-$T_c$ superconductor (HTCS) [@podkletnov1992possibility; @podkletnov1997weak]. After their announcement, other groups tried to repeat the experiment, obtaining controversial results [@li1997static; @de1995alternative; @unnikrishnan1996does], so that the question is still open. In 1996, G. Modanese interpreted the results of Podkletnov and Nieminen in the framework of the quantum theory of General Relativity [@modanese1996theoretical; @modanese1996role], but the complexity of the formalism makes it very difficult to extract quantitative predictions. Afterwards, Agop et al. wrote generalized Maxwell equations that simultaneously treat weak gravitational and electromagnetic fields [@agop2000local; @agop2000some]. #### Superfluid coupled to gravity. It is well known that, in general, the gravitational force is not influenced by any dielectric-type effect involving the medium. In the classical case, this is due to the absence of a relevant number of charges having opposite sign which, by redistributing inside the medium, might counteract the applied field. On the other hand, if we regard the medium as a quantum system, the probability of a graviton excitation of a medium particle is suppressed, due to the smallness of the gravitational coupling. This means that any kind of shielding due to the presence of the medium can only be the result of an interaction with a different state of matter, like a Bose condensate or a more general superfluid. The nature of the involved field is also relevant for the physical process. If the gravitational field itself is considered classical, it is readily realized that no experimental device – like the massive superconducting disk of the Podkletnov experiment [@podkletnov1992possibility; @podkletnov1997weak] – can influence the local geometry so much as to modify the measured sample weight. This means that the hypothetical shielding effect should consist of some kind of modification (or “absorption”) of the field in the superconducting disk.
Since the classical picture is excluded, we need a quantum field description for the gravitational interaction [@modanese1996theoretical; @modanese1996role]. In perturbation theory the metric $\gmetr(x)$ is expanded in the standard way [@Wald:1984rg] $$\label{eq:metricweak} \gmetr(x)\=\emetr+\hmetr(x)$$ as the sum of the flat background $\emetr$ plus small fluctuations encoded in the $h_{\mu\nu}(x)$ component. The Cooper pairs inside the superconducting sample compose the Bose condensate, described by a bosonic field $\phi$ with non-vanishing vacuum expectation value $\phi_0=\langle0|\phi|0\rangle$. The Einstein-Hilbert Lagrangian has the standard form[^1]$$\LL_\textsc{eh} \= \frac{1}{8\pi\GN}\,\left(R-2\,\Lambda\right)\;,$$ where $R$ is the Ricci scalar and $\Lambda$ is the cosmological constant. The part of the Lagrangian describing the bosonic field $\phi$ coupled to gravity has the form: $$\LL_\ms{\!\phi} \= -\frac12\,\invgmetr\,\dm{\phi}^\ast\,\dm[\nu]\phi + \frac12\,m^2\,\phi^{\ast}\phi$$ where $m$ is the mass of the Cooper pair [@modanese1996theoretical]. If we expand the bosonic field as $\phi=\phi_0+\bar{\phi}$, one can consider the v.e.v. $\phi_0$ as an external source, related to the structure of the sample and external electromagnetic fields, while the $\bar{\phi}$ component can be included in the integration variables. The terms including the $\bar\phi$ components are related to graviton emission-absorption processes (which we know to be irrelevant) and can safely be neglected in $\LL_\ms{\!\phi}$. Perturbatively, the interaction processes involving the metric and the condensate are of the form $$\LL_\tts{int} ~\propto~ h^{\mu\nu}\,\dm{\phi_0}^\ast\,\dm[\nu]\phi_0 \label{eq:Lint}$$ and give rise to (gravitational) propagator corrections, which are again irrelevant. The total Lagrangian $\LL=\LL_\textsc{eh}+\LL_\ms{\!\phi}$ contains a further coupling between $\gmetr$ and $\phi_0$, which turns out to be a contribution to the so-called intrinsic cosmological term given by $\Lambda$. Explicitly, the total Lagrangian can in fact be rewritten as $$\LL\=\LL_\textsc{eh}+\LL_\ms{\!\phi}\= \frac{1}{8\pi\GN}\,\left(R-2\,\Lambda\right)+\LL_\tts{int} +\LL_0+\bar{\LL}_\ms{\bar{\phi}}\;,$$ where $\bar{\LL}_\ms{\bar{\phi}}$ are the negligible contributions having at least one field $\bar{\phi}$ and where $$\LL_0\= -\frac12\,\dm{\phi_0}^\ast\,\dmup[\mu]\phi_0 + \frac12\,m^2\left|\phi_0\right|^2\;,$$ that is, a Bose condensate contribution to the total effective cosmological term. This may produce slightly localized “instabilities” and thus an observable effect, in spite of the smallness of the gravitational coupling . The above instabilities can be found in the superconductor regions where the condensate density is larger: in these regions, the gravitational field would tend to assume fixed values due to some physical cutoff, that prevents arbitrary growth. The mechanism is similar to classical electrostatics in perfect conductors, where the electric field is constrained to be globally zero within the sample. In the latter case, the physical constraint’s origin is different (and is due to a charge redistribution), but in both cases the effect on field propagation and on static potential turns out to be a kind of partial shielding. 
In accordance with the framework previously exposed, the superfluid density $\phi_0(x)$ is determined not only by the internal microscopic structure of the sample, but also by the same magnetic fields responsible for the Meissner effect and the currents in the superconductor. The high-frequency components of the magnetic field can also provide energy for the above gravitational field modification [@modanese1996theoretical]. The previous calculation shows how Modanese was able to demonstrate, in principle, how a superfluid can determine a gravitational shielding effect. In Sect. \[sec: Isotropic SC\] we will quantify this effect by following a different approach, as the Ginzburg-Landau theory for a superfluid in an external gravitational field. Weak field approximation {#sec: Weak field} ======================== Now we consider a nearly flat spacetime configuration, i.e. an approximation where the gravitational field is weak and where we shall assume eq. , that is, the metric $\gmetr$ can be expanded as: $$\label{eq:gmetr} \gmetr~\simeq~\emetr+\hmetr\;,$$ where the symmetric tensor $\hmetr$ is a small perturbation of the flat Minkowski metric in the mostly plus convention[^2], [$\emetr=\mathrm{diag}(-1,+1,+1,+1)$]{}. The inverse metric in the linear approximation is given by $$\invgmetr~\simeq~\invemetr-\invhmetr\;.$$ Generalizing Maxwell equations ------------------------------ If we consider an inertial coordinate system, to linear order in $\hmetr$ the connection is written as $$\label{eq:Gam} \Gam[\lambda][\mu][\nu]~\simeq~\frac12\,\invemetr[\lambda][\rho]\, \left(\dm[\mu]\hmetr[\nu][\rho]+\dm[\nu]\hmetr[\rho][\mu]-\dm[\rho]\hmetr[\mu][\nu]\right)\;,$$ The Ricci tensor (Appendix \[app:signconv\]) is given by the contraction of the Riemann tensor $$\Ricci\=\Ruddd[\sigma][\mu][\sigma][\nu]\,.$$ and, to linear order in $\hmetr$, it reads $$\label{eq:Ricci} \begin{split} \Ricci&~\simeq~\dm[\lambda]\Gam[\lambda][\mu][\nu]\+\dm[\mu]\Gam[\lambda][\lambda][\nu]\+\cancel{\Gamma\,\Gamma}\-\cancel{\Gamma\,\Gamma}\=\\ &\=\frac12\,\left(\dm\dmup[\rho]\hmetr[\nu][\rho]+\dm[\nu]\dmup[\rho]\hmetr[\mu][\rho]\right) -\frac12\,\dm[\rho]\dmup[\rho]\hmetr-\frac12\,\dm\dm[\nu]h\=\\ &\=\dmup[\rho]\dm[{(}\mu]\hmetr[\nu{)}][\rho]-\frac12\,\dd^2\hmetr-\frac12\,\dm\dm[\nu]h\;, \end{split}$$ where we have used eq.  and where $h=\hud[\sigma][\sigma]$. The Einstein equations have the form [@Wald:1984rg; @misner1973gravitation]: $$\GEinst\=\Ricci-\dfrac12\,\gmetr\,R \=8\pi\GN\;T_{\mu\nu}\;,$$ and the term with the Ricci scalar $R=\invgmetr\Ricci$ can be rewritten, in first-order approximation and using eq. , as $$\frac12\,\gmetr\,R~\simeq~\frac12\,\emetr\,\invemetr[\rho][\sigma]\Ricci[\rho][\sigma] \=\frac12\,\emetr\,\left(\dmup[\rho]\dmup[\sigma]\hmetr[\rho][\sigma]-\dd^2h\right)\;,$$ so that the l.h.s. of the Einstein equations in weak field approximation reads $$\begin{split} \GEinst&\=\Ricci-\dfrac12\,\gmetr\,R ~\simeq~\\ &~\simeq~\dmup[\rho]\dm[{(}\mu]\hmetr[\nu{)}][\rho]-\frac12\,\dd^2\hmetr-\frac12\,\dm\dm[\nu]h -\frac12\,\emetr\left(\dmup[\rho]\dmup[\sigma]\hmetr[\rho][\sigma]-\dd^2h\right)\;. 
\end{split}$$ If one introduces the symmetric tensor $$\bhmetr\=\hmetr-\frac12\,\emetr\,h\;,$$ the above expression can be rewritten as $$\begin{split} \GEinst &~\simeq~ \dmup[\rho]\dm[{(}\mu]\bhmetr[\nu{)}][\rho]-\frac12\,\dd^2\bhmetr -\frac12\,\emetr\,\dmup[\rho]\dmup[\sigma]\bhmetr[\rho][\sigma]\=\\ %&\=\frac12\left(\dmup[\rho]\dm\bhmetr[\nu][\rho] % +\dmup[\rho]\dm[\nu]\bhmetr[\mu][\rho] % -\dmup[\rho]\dm[\rho]\bhmetr[\mu][\nu] % -\emetr\,\dmup[\rho]\dmup[\sigma]\bhmetr[\rho][\sigma]\right)\=\\ &\=\dmup[\rho]\dm[{[}\nu]\bhmetr[\rho{]}][\mu]+\dmup[\rho]\dmup[\sigma]\emetr[\mu][{[}\sigma]\,\bhmetr[\nu{]}][\rho]\=\\ &\=\dmup[\rho]\left( %\underbrace{ \dm[{[}\nu]\bhmetr[\rho{]}][\mu]+\dmup[\sigma]\emetr[\mu][{[}\rho]\,\bhmetr[\nu{]}][\sigma] % }_{\mathclap{\Gscr}} \right)\;,%~\equiv~\\ %&~\equiv~\dmup[\rho]\,\Gscr\;, \end{split}$$ where we have relabeled dummy indices in the last term of the second line. If we now define the tensor $$\label{eq:Gscr} \Gscr~\equiv~ \dm[{[}\nu]\bhmetr[\rho{]}][\mu]+\dmup[\sigma]\emetr[\mu][{[}\rho]\,\bhmetr[\nu{]}][\sigma]\;,$$ whose structure implies the property $$\Gscr\=-\Gscr[\mu][\rho][\nu]\;,$$ the Einstein equations can be rewritten in the compact form: $$\label{eq:Einst} \boxed{\;\GEinst\=\dmup[\rho]\Gscr\=8\pi\GN\;T_{\mu\nu}\,}\quad.$$ We can impose a gauge fixing by making use of the *harmonic coordinate condition*, expressed by the relation [@Wald:1984rg]: $$\dm\left(\sqrt{-g}\,\invgmetr\right)=0 \;\quad\Leftrightarrow\quad\; \Box x^\mu=0\;,$$ where $g\equiv\mathrm{det}\left[\gmetr\right]$; this condition can be rewritten in the form $$\invgmetr\,\Gam\,=\,0\;,$$ also known as the *De Donder gauge*. Imposing the above condition and using eqs.  and , in first-order approximation we find: $$0~\simeq~ \frac12\,\invemetr\,\invemetr[\lambda][\rho]\left(\dm[\mu]\hmetr[\nu][\rho]+\dm[\nu]\hmetr[\rho][\mu]-\dm[\rho]\hmetr[\mu][\nu]\right)\= \dm\invhmetr-\frac12\,\dmup[\nu]h\;,$$ that is, we have the condition $$\label{eq:gaugecond0} \dm\invhmetr\simeq\frac12\,\dmup[\nu]h \;\quad\Leftrightarrow\;\quad \dmup\hmetr\simeq\frac12\,\dm[\nu]h\;\;.$$ Now, one also has $$\dmup\hmetr\=\dmup\left(\bhmetr+\frac12\,\emetr h\right) \=\dmup\bhmetr+\frac12\,\dm[\nu]h\;,$$ and, using eq. , we find the so-called *Lorenz gauge condition*: $$\dmup\bhmetr~\simeq~0\;.$$ The above relation further simplifies the expression for $\Gscr$, which takes the very simple form $$\boxed{\;\Gscr~\simeq~\dm[{[}\nu]\bhmetr[\rho{]}][\mu]\,}\;\;,$$ and also satisfies the relation $$\dm[{[}\lambda{|}]\Gscr[0][{|}\mu][\nu{]}]\=0 \;\qRq\; \Gscr[0][\mu][\nu] \propto \dm\mathcal{A}_\nu-\dm[\nu]\mathcal{A}_\mu\;,$$ which implies the existence of a potential. #### Gravito-Maxwell equations. Now, let us define the fields[^3] [@agop2000local] \[eq:fields0\] $$\begin{aligned} \Eg&~\equiv~E_i~=\,-\,\frac12\,\Gscr[0][0][i]~=\,-\,\frac12\,\dm[{[}0]\bhmetr[i{]}][0]\;,\\[\jot] \Ag&~\equiv~A_i\=\frac14\,\bhmetr[0][i]\;,\\[\jot] \Bg&~\equiv~B_i%\=\frac12\left(\Gscr[0][2][3],\,\Gscr[0][3][1],\,\Gscr[0][1][2]\right) \=\frac14\,{\varepsilon_i}^{jk}\,\Gscr[0][j][k]\;,\end{aligned}$$ where obviously $i=1,2,3$ and $$\Gscr[0][i][j]\=\dm[{[}i]\bhmetr[j{]}][0] \=\frac12\left(\dm[i]\bhmetr[j][0]-\dm[j]\bhmetr[i][0]\right)\=4\,\dm[{[}i]A_{j{]}}\;.$$ One can immediately see that $$\begin{split} \Bg&\=\frac14\,{\varepsilon_i}^{jk}\,4\,\dm[{[}j]A_{k{]}} \={\varepsilon_i}^{jk}\,\dm[j]A_k=\nabla\times\Ag\;,\\[3\jot] %&\Longrightarrow\quad \Bg=\nabla\times\Ag\;,%\\[3\jot] &~\Longrightarrow\quad \nabla\cdot\Bg\=0\;.
\end{split}$$ Then one also has $$%\begin{split} \nabla\cdot\Eg\=\dmup[i]E_i\=-\dmup[i]\frac{\Gscr[0][0][i]}{2} \=-8\pi\GN\;\frac{T_{00}}{2} \=4\pi\GN\;\rhog\;, %\end{split}$$ using eq.  and having defined $\rhog\equiv-T_{00}$. If we consider the curl of $\Eg$, we obtain $$\begin{split} \nabla\times\Eg&\={\varepsilon_i}^{jk}\,\dm[j]E_k \=-{\varepsilon_i}^{jk}\,\dm[j]\frac{\Gscr[0][0][k]}{2} \=-\frac12\,{\varepsilon_i}^{jk}\,\dm[j]\dm[{[}0]\bhmetr[k{]}][0]\=\\[2\jot] &\=-\frac14\,4\;\dm[0]\,{\varepsilon_i}^{jk}\,\dm[j]A_k \=-\dm[0]B_i\=-\frac{\dd\Bg}{\dd t}\;. \end{split}$$ Finally, one finds for the curl of $\Bg$ $$\label{eq:gravMaxwell4} \begin{split} \nabla\times\Bg&\={\varepsilon_i}^{jk}\,\dm[j]B_k \=\frac14\,{\varepsilon_i}^{jk}\, {\varepsilon_k}^{\ell m}\,\dm[j]\Gscr[0][\ell][m]\=\\[3\jot] &\=\frac14\left({\delta_i}^\ell\delta^{jm}-{\delta_i}^m\delta^{j\ell}\right)\dm[j]\Gscr[0][\ell][m] \=\frac12\,\dmup[j]\Gscr[0][i][j]\=\\[3\jot] &\=\frac12\left(\dmup\Gscr[0][i][\mu]+\dm[0]\Gscr[0][i][0]\right) \=\frac12\left(\dmup\Gscr[0][i][\mu]-\dm[0]\Gscr[0][0][i]\right)\=\\[3\jot] &\=\frac12\left(8\pi\GN\;T_{0i}-\dm[0]\Gscr[0][0][i]\right) \=4\pi\GN\;j_i+\frac{\dd E_i}{\dd t}\=\\[3\jot] &\=4\pi\GN\;\jg\+\frac{\dd\Eg}{\dd t}\;, \end{split}$$ using again eq.  and having defined $\jg \equiv j_i \equiv T_{0i}$. Summarizing, once defined the fields of and having restored physical units, one gets the field equations: $$\label{eq:gravMaxwell} \begin{split} \nabla\cdot\Eg&\=4\pi\GN\,\frac{m^2}{e^2}\;\rhog\= \frac{\rhog}{\epsg}\;;\\[2\jot] \nabla\cdot\Bg&\=0 \;;\\[2\jot] \nabla\times\Eg&~=-\dfrac{\dd\Bg}{\dd t} \;;\\[2\jot] \nabla\times\Bg&\=4\pi\GN\,\frac{m^2}{c^2\,e^2}\,\jg \+\frac{1}{c^2}\,\frac{\dd\Eg}{\dd t} \= \mug\;\jg\+\frac{1}{c^2}\,\frac{\dd\Eg}{\dd t}\;,\qquad \end{split}$$ formally equivalent to Maxwell equations, where $\Eg$ and $\Bg$ are the gravitoelectric and gravitomagnetic field, respectively, and where we have defined the vacuum gravitational permittivity $$\epsg=\frac{1}{4\pi\GN}\,\frac{e^2}{m^2}\;,$$ and the vacuum gravitational permeability $$\mug=4\pi\GN\,\frac{m^2}{c^2\,e^2}\;.$$ For example, on the Earth surface, $\Eg$ is simply the Newtonian gravitational acceleration and the $\Bg$ field is related to angular momentum interactions [@agop2000local; @agop2000some; @braginsky1977laboratory; @huei1983calculation; @peng1990new]. The mass current density vector $\jg$ can also be expressed as: $$\jg \= \rhog\,\mathbf{v} \;,$$ where $\mathbf{v}$ is the velocity and $\rhog$ is the mass density. #### Gravito-Lorentz force. Let us consider the geodesic equation for a particle in the field of a weakly gravitating object: $$\frac{d^2x^\lambda}{ds^2}\+\Gam\,\frac{dx^\mu}{ds}\,\frac{dx^\nu}{ds}\=0\;.$$ If we consider a particle in non-relativistic motion, the velocity of the particle becomes $\frac{v_i}{c}\simeq\frac{dx^i}{dt}$. If we also neglect terms in the form $\frac{v_i\,v^j}{c^2}$ and limit ourselves to static fields $\left(\dm[t]\gmetr=0\right)$, it can easily be verified that a geodesic equation for a particle in non-relativistic motion can be written as:[@ruggiero2002gravitomagnetic; @Mashhoon:2003ax] $$\frac{d\mathbf{v}}{dt} \= \Eg+\mathbf{v}\times\Bg\;,$$ which shows that the free fall of the particle is driven by the analogous of a Lorentz force produced by the gravito-Maxwell fields. #### Generalized Maxwell equations. 
It is possible to define generalized electric/magnetic fields, scalar and vector potentials containing both electromagnetic and gravitational terms as $$\label{eq:genpot} \E=\Ee+\frac{m}{e}\,\Eg\,;\quad\; \B=\Be+\frac{m}{e}\,\Bg\,;\quad\; \phi=\phi_\textrm{e}+\frac{m}{e}\,\phig\,;\quad\; \A=\Ae+\frac{m}{e}\,\Ag\,,$$ where $m$ and $e$ are the mass and electronic charge, respectively, and the subscripts identify the electromagnetic and gravitational contributions. The generalized Maxwell equations then become $$\label{eq:genMaxwell} \begin{split} \nabla\cdot\E&\=\left(\frac1\epsg+\frac{1}{\epsz}\right)\,\rho \;;\\[2\jot] \nabla\cdot\B&\=0 \;;\\[2\jot] \nabla\times\E&~=-\dfrac{\dd\B}{\dd t} \;;\\[2\jot] \nabla\times\B&\=\left(\mug+\muz\right)\,\jj \+\frac{1}{c^2}\,\dfrac{\dd\E}{\dd t} \;, \end{split}$$ where we have set $$\begin{split} \rhog&\=\frac{e}{m}\,\rho\;,\qquad\qquad\qquad\\[\jot] \jg&\=\frac{e}{m}\,\jj\;, \end{split}$$ and where $\epsz$ and $\muz$ are the electric permittivity and magnetic permeability of the vacuum. Generalizing London equations ----------------------------- The London equations for a superfluid in the stationary state read [@tinkham1996introduction; @ketterson1999superconductivity; @degennes1989superconductivity]: \[eq:London\]$$\begin{aligned} \Ee&\=\frac{m}{\ns\,e^2}\;\dfrac{\dd\jj}{\dd t}\;;\label{eq:London1} \\[2\jot] \Be&~=-\frac{m}{\ns\,e^2}\;\nabla\times\jj \;,\label{eq:London2}\end{aligned}$$ where $\jj=\ns\,e\,\vs$ is the supercurrent and $\ns$ is the superelectron density. If we also consider Ampère’s law for a superconductor in the stationary state (no displacement current) $$\label{eq:Ampere} \nabla\times\Be\=\muz\,\jj\;,$$ from  and using vector calculus identities, we obtain $$\nabla\times\nabla\times\Be\=\nabla\left(\cancel{\nabla\cdot\Be}\right)-\nabla^2\Be \=\muz\,\nabla\times\jj~=-\muz\,\frac{\ns\,e^2}{m}\;\Be\;,$$ that is, $$\nabla^2\Be\=\frac{1}{\lambdae^2}\;\Be\;,$$ where we have introduced the penetration depth $$\label{eq:lambdae} \lambdae\=\sqrt{\frac{m}{\muz\,\ns\,e^2}}\;.$$ Using the vector potential $\Ae$, the two London equations can be summarized in the (not gauge-invariant) form $$\label{eq:Londonsumm} \qquad \jj~=-\frac{1}{\muz\,\lambdae^2}\;\Ae \qquad\qquad \big(\,\Be\,=\,\nabla\times\Ae\,\big)\quad.$$ #### Generalized London equations. If we now take into account gravitational corrections, we should consider, for the fields and the vector potential, the generalized form of definition : $$\B=\Be+\frac{m}{e}\,\Bg\;,\qquad \A=\Ae+\frac{m}{e}\,\Ag\;,\qquad \B=\nabla\times\A\;.$$ If $\A$ is minimally coupled to the wave function $$\label{eq:psi} \qquad \psi\=\psi_0\,\e^{i\varphi}\;, \qquad\qquad \psi_0^2\equiv\abs{\psi}^2=n_s\;,$$ the second London equation can be derived from the quantum mechanical current density $$\jj~=\,-\frac{i}{2m}\left(\psi^\ast\,\tilde{\nabla}\psi-\psi\,\tilde{\nabla}^\ast\psi^\ast\right)\;,$$ where $\tilde{\nabla}$ is the covariant derivative for the minimal coupling: $$\tilde{\nabla}\=\nabla-i\,\tilde{g}\,\A\;,$$ so that one has for the current $$\jj ~= \,-\frac{i}{2m}\left(\psi^\ast\nabla\psi-\psi\nabla\psi^\ast\right)-\frac{\tilde{g}}{m}\,\A\,\abs{\psi}^2 \= \frac{1}{m}\,\abs{\psi}^2\left(\nabla\varphi-\tilde{g}\,\A\right)\;.$$ If we now take the curl of the previous equation, we find $$\B~=\,-\,\frac{m}{\tilde{g}\,\abs{\psi}^2}\;\nabla\times\jj \= -\,\frac1\zeta\;\nabla\times\jj\;,$$ which is the generalized form of the second London equation .
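As a quick consistency check, the minimally coupled current above can be verified symbolically: for $\psi=\psi_0\,\e^{i\varphi}$ with real $\psi_0$ and $\varphi$, it reduces to $\frac{1}{m}\,\abs{\psi}^2\left(\nabla\varphi-\tilde{g}\,\A\right)$. The following is a minimal one-dimensional SymPy sketch of that check; the symbol names (e.g. `g_tilde` for the coupling) are illustrative.

```python
import sympy as sp

x = sp.symbols('x', real=True)
m, g = sp.symbols('m g_tilde', positive=True)
psi0 = sp.Function('psi_0', real=True)(x)   # real amplitude
phi  = sp.Function('varphi', real=True)(x)  # real phase
A    = sp.Function('A', real=True)(x)       # vector potential (1D component)

psi   = psi0 * sp.exp(sp.I * phi)
psi_c = psi0 * sp.exp(-sp.I * phi)          # complex conjugate, written explicitly

# covariant derivative of the minimal coupling, nabla -> d/dx - i*g*A (and its c.c.)
cov_psi   = sp.diff(psi, x)   - sp.I * g * A * psi
cov_psi_c = sp.diff(psi_c, x) + sp.I * g * A * psi_c

j = -sp.I / (2 * m) * (psi_c * cov_psi - psi * cov_psi_c)
j_expected = psi0**2 / m * (sp.diff(phi, x) - g * A)

print(sp.simplify(j - j_expected))   # prints 0: both forms of the supercurrent agree
```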
To find an explicit expression for $\zeta$, we consider the case $\Bg=0$, obtaining $$\B \= \Be+\frac{m}{e}\,\xcancel{\Bg} ~= -\,\frac1\zeta\;\nabla\times\jj\;,$$ and, using , and , we find $$\tilde{g}\,=\,e^2\;, \qquad\quad \frac1\zeta\=\muz\,\lambdae^2\;.\qquad$$ Then we consider the case $\Be=0$, so that we have $$\B \= \xcancel{\Be}+\frac{m}{e}\,\Bg ~= \,-\muz\,\lambdae^2\;\nabla\times\jj ~= \,-\muz\,\lambdae^2\;\frac{m}{e}\,\nabla\times\jg\;,$$ together with gravito-Ampère’s law in the stationary state, $$\nabla\times\Bg\=\mug\;\jg\;,$$ so that, taking the curl of the above equation, we find $$\nabla\times\nabla\times\Bg~=\,-\nabla^2\Bg \=\mug\,\nabla\times\jg~=-\,\mug\,\frac{1}{\muz\,\lambdae^2}\;\Bg ~=\,-\,\frac{1}{\lambdag^2}\;\Bg\;,$$ where we have introduced the penetration depth $$\label{eq:lambdag} \lambdag\=\sqrt{\frac{\muz\,\lambdae^2}{\mug}} \=\sqrt{\frac{c^2}{4\pi\GN\,m\,n_s}}\;.$$ Finally, using the stationary generalized Ampère’s law from  together with eq. , we find $$\label{eq:genAmpere} \nabla\times\B\=\left(\muz+\mug\right)\,\jj\=\muz\left(1+\frac{\lambdae^2}{\lambdag^2}\right)\jj\;,$$ and taking the curl we obtain the general form $$\begin{split} \nabla^2\B&~=\,-\muz\left(1+\frac{\lambdae^2}{\lambdag^2}\right)\,\nabla\times\jj \=\muz\,\frac{1}{\muz\,\lambdae^2}\,\left(1+\frac{\lambdae^2}{\lambdag^2}\right)\,\B\=\\[\jot] &\=\left(\frac{1}{\lambdae^2}+\frac{1}{\lambdag^2}\right)\,\B \=\frac{1}{\lambda^2}\;\B\;, \end{split}$$ where we have defined a *generalized penetration depth* $\lambda$: $$\lambda\=\frac{\lambdag\,\lambdae}{\sqrt{\lambdag^2+\lambdae^2}} ~\simeq~ \lambdae \qquad\quad \left(\frac{\lambdag}{\lambdae}\simeq{10}^{21}\right)\;\;.$$ The general form of eq.  is $$\qquad \jj~=-\,\zeta\,\A \qquad\qquad \big(\,\B\,=\,\nabla\times\A\,\big)\;\;,$$ and, since charge conservation requires the condition ${\nabla\cdot\jj=0}$, we obtain for the vector potential $$\nabla\cdot\A=0 \;,$$ that is, the so-called *Coulomb gauge* (or *London gauge*). Isotropic superconductor {#sec: Isotropic SC} ======================== In Sect. \[sec:Intro\] we have shown how Modanese was able to theoretically describe the gravitational shielding effect due to the presence of a superfluid. Now we are going to study the same problem with a different approach. Modanese solved the gravitational field equations, in which the contribution of the superfluid was encoded in the energy-momentum tensor. In the following, we are going to solve the Ginzburg-Landau equation for the superfluid order parameter in an external gravitational field. Let us restrict ourselves to the case of an isotropic superconductor in the gravitational field of the Earth and in the absence of an electromagnetic field, so that we can take $\Ee=0$ and $\Be=0$. Moreover, $\Bg$ in the solar system is very small [@mashhoon1989detection; @ljubivcic1992proposed], therefore $\E=\frac{m}{e}\,\Eg$ and $\B=0$. Finally, we also have the relations $\phi=\frac{m}{e}\,\phig$ and $\A=\frac{m}{e}\,\Ag$, so we can write down our set of conditions: \[eq:fields\] $$\begin{gathered} \Ee=0\;, \;\quad \Be=0\;, \;\quad \Bg=0 \;\quad\Longrightarrow\quad\; \E=\frac{m}{e}\;\Eg\;, \;\quad \B=0\;;\;\; % \intertext{together with} % \phi=\frac{m}{e}\,\phig \;,\qquad\quad \A=\frac{m}{e}\,\Ag\;.\end{gathered}$$ The situation is therefore not that of the Meissner effect but, rather, that of a superconductor in an electric field.
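For orientation on the scales involved in what follows, the ratio $\lambdag/\lambdae\simeq10^{21}$ quoted above can be reproduced with a short numerical estimate. The sketch below assumes a Cooper-pair-like carrier mass $m\simeq 2\,m_e$ and a representative carrier density $n_s\sim10^{28}\,\mathrm{m^{-3}}$; both values are illustrative assumptions, not parameters taken from this paper.

```python
import math

# SI constants
G    = 6.674e-11            # Newton constant [m^3 kg^-1 s^-2]
c    = 2.998e8              # speed of light [m/s]
mu_0 = 4 * math.pi * 1e-7   # vacuum magnetic permeability [H/m]
m_e  = 9.109e-31            # electron mass [kg]
q    = 1.602e-19            # elementary charge [C]

# Assumed illustrative parameters (not from the text)
m_carrier = 2 * m_e         # Cooper-pair-like carrier mass
n_s       = 1e28            # superelectron density [m^-3]

lambda_e = math.sqrt(m_carrier / (mu_0 * n_s * q**2))               # electromagnetic depth
lambda_g = math.sqrt(c**2 / (4 * math.pi * G * m_carrier * n_s))    # gravitational depth

print(f"lambda_e ~ {lambda_e:.2e} m")
print(f"lambda_g ~ {lambda_g:.2e} m")
print(f"lambda_g / lambda_e ~ {lambda_g / lambda_e:.1e}")  # of order 10^21
```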
Time-dependent Ginzburg-Landau equations ---------------------------------------- Since the gravitoelectric field is formally analogous to an electric field, we can use the time-dependent Ginzburg-Landau equations (TDGL) which, in the Coulomb gauge $\nabla\cdot\A=0$, are written in the form [@tang1995time; @lin1997ginzburg; @ullah1991effect; @ghinovker1999explosive; @kopnin1999time; @fleckinger1998dynamics; @du1996high]: \[eq:TDGL\] $$\begin{gathered} \frac{\hbar^2}{2\,m\,\DD}\left(\frac{\dd}{\dd t} +\frac{2\,i\,e}{\hbar}\,\phi\right)\,\psi \-a\,\psi\+b\,\abs{\psi}^2\psi\+\frac{1}{2\,m}\left(i\hbar\nabla +\frac{2\,e}{c}\,\A\right)^2 \psi\=0 \;,\label{eq:TDGLn1}\\[2\jot] % \nabla\times\nabla\times\A-\nabla\times\HH~=-\frac{4\pi}{c}\, \left(\frac{\sigma}{c}\,\frac{\dd\A}{\dd t}+\sigma\,\nabla\phi +\frac{i\hbar\,e}{m}\left(\psi^*\nabla\psi-\psi\nabla\psi^*\right) +\frac{4\,e^2}{mc}\,\abs{\psi}^2\A\right)\,,\end{gathered}$$ where $\DD$ is the diffusion coefficient, $\sigma$ is the conductivity in the normal phase, $\HH$ is the applied field, and the vector field $\A$ is minimally coupled to $\psi$. The above TDGL equations for the variables $\psi$, $\A$ are derived by minimizing the total Gibbs free energy of the system [@tinkham1996introduction; @ketterson1999superconductivity; @degennes1989superconductivity]. The coefficients $a$ and $b$ in have the following form: $$\begin{split} a&\=a(T)\=a_\ms{0}\,(T-\Tc)\;,\\[\jot] b&\=b(T)~\equiv~b(\Tc)\;, \end{split}$$ $a_\ms{0\,}$ and $b$ being positive constants and $\Tc$ the critical temperature of the superconductor. The boundary and initial conditions are $$\begin{aligned} \left. \begin{aligned} \left(i\hbar\,\nabla\psi+\dfrac{2\,e}{c}\,\A\,\psi\right)\cdot\n=0& \cr \hfill\nabla\times\A\cdot\n=\HH\cdot\n& \\[1.5\jot] \hfill\A\cdot\n=0& \end{aligned} \;\;\right\} \; \text{on }\dd\Omega\times(0,t)\;; \qquad \left. \begin{aligned} \psi(x,0)&\=\psi_0(x) \cr \A(x,0)&\=\A_0(x) \end{aligned} \!\!\!\!\right\} \; \text{on }\Omega\qquad \label{eq:boundary}\end{aligned}$$ where $\dd\Omega$ is the boundary of a smooth and simply connected domain in $\mathbb{R}^\ms{\textrm{N}}$. #### Dimensionless TDGL. In order to write eqs.  in a dimensionless form, the following quantities can be introduced: \[eq:param\] $$\begin{gathered} \Psi^2(T)\=\frac{\abs{a(T)}}{b}\;,\qquad \xi(T)\=\frac{h}{\sqrt{2\,m\,|a(T)|}}\;,\qquad \lambda(T)\=\sqrt{\frac{b\,m\,c^2}{4\pi\,|a(T)|\,e^2}}\;,\\[1.2\jot] % \Hc(T)\=\sqrt{\frac{4\pi\,\muz\,\abs{a(T)}^2}{b}}\= \frac{h}{4\,e\,\sqrt{2\pi}\,\lambda(T)\,\xi(T)}\;,\\[5\jot] % \kappa\=\frac{\lambda(T)}{\xi(T)}\;,\qquad \tau(T)\=\frac{\lambda^2(T)}{\DD}\;,\qquad \eta\=\frac{4\pi\,\sigma\,\DD}{\epsz\,c^2}\;,\end{gathered}$$ where $\lambda(T)$, $\xi(T)$ and $\Hc(T)$ are the penetration depth, the coherence length and the thermodynamic critical field, respectively. The dimensionless quantities are then defined as: \[eq:dimlessfields\] $$\begin{gathered} x'=~\frac{x}{\lambda}\;,\qquad t'=~\frac{t}{\tau}\;,\qquad \psi'=~\frac{\psi}{\Psi}\;, % \intertext{and the dimensionless fields are written \bigskip} % \A'\=\frac{\A\,\kappa}{\sqrt{2}\,\Hc\,\lambda}\;,\qquad \phi'\=\frac{\phi\,\kappa}{\sqrt{2}\,\Hc\,\DD}\;,\qquad \HH'\=\frac{\HH\,\kappa}{\sqrt{2}\,\Hc}\;.\end{gathered}$$ Inserting eqs. (\[eq:dimlessfields\]) in eqs.
and dropping the prime gives the dimensionless TDGL equations in a bounded, smooth and simply connected domain in $\mathbb{R}^\ms{\textrm{N}}$ [@tang1995time; @lin1997ginzburg]: \[eq:dimlessTDGL\] $$\begin{gathered} \frac{\dd\psi}{\dd t}\+i\,\phi\,\psi \+\kappa^2\left(\abs{\psi}^2-1\right)\,\psi \+\left(i\,\nabla+\A\right)^2 \psi\=0 \;,\label{subeq:dimlessTDGLn1} \\[2\jot] % \nabla\times\nabla\times\A-\nabla\times\HH~= -\eta\,\left(\frac{\dd\A}{\dd t}+\nabla\phi\right) -\frac{i}{2}\left(\psi^*\nabla\psi-\psi\nabla\psi^*\right) -\abs{\psi}^2\A\,,\label{subeq:dimlessTDGLn2}\end{gathered}$$ and the boundary and initial conditions  become, in the dimensionless form $$\begin{aligned} \left. \begin{aligned} \left(i\,\nabla\psi+\A\,\psi\right)\cdot\n=0&\cr \nabla\times\A\cdot\n=\HH\cdot\n&\cr \A\cdot\n=0& \end{aligned} \!\!\!\right\} \; \text{on }\dd\Omega\times(0,t)\;; \qquad \left. \begin{aligned} \psi(x,0)&\=\psi_0(x)\cr \A(x,0)&\=\A_0(x) \end{aligned} \!\!\!\!\right\} \; \text{on }\Omega\;.\qquad \label{eq:dimlessboundary}\end{aligned}$$ Solving dimensionless TDGL -------------------------- If the superconductor is on the Earth’s surface, the gravitational field is very weak and approximately constant. This means that one can write $$\phi=-\gstar\,x\;,$$ with $$\gstar\=\frac{\lambda(T)\,\kappa\,m\,g}{\sqrt{2}\,e\,\Hc(T)\,\DD}~\ll~1\;,$$ $g$ being the acceleration of gravity. The corrections to $\phi$ in the superconductor are of second order in $\gstar$ and therefore they are not considered here. Now we search for a solution of the form $$\label{eq:condfield} \begin{split} \psi(x,t)&\=\psi_0(x,t)+\gstar\,\gamma(x,t)\;,\\ A(x,t)&\=\gstar\,\beta(x,t)\;,\\ \phi(x)&~=-\gstar\,x\;. \end{split}$$ At order zero in $\gstar$, eq. (\[subeq:dimlessTDGLn1\]) gives $$\label{eq:dimlessTDGL0} \frac{\dd\psi_0(x,t)}{\dd t} \+ \kappa^2\left(\abs{\psi_0(x,t)}^2-1\right)\,\psi_0(x,t) \- \frac{\dd^2\psi_0(x,t)}{\dd x^2} \=0 \;,$$ with the conditions $$\begin{split} \psi_0(x,0)&\=0\;,\\ \psi_0(0,t)&\=0\;,\\ \psi_0(L,t)&\=0\;, \end{split}$$ where $L$ is the length of the superconductor, here in units of $\lambda$, and $t=0$ is the instant in which the material undergoes the transition to the superconducting state. The static classical solution of eq.  is $$\label{eq:psi0} \psi_0(x,t)~\equiv~\psi_0(x)\=\tanh\left(\frac{\kappa\,x}{\sqrt{2}}\right)\,\tanh\left(\frac{\kappa\,\left(x-L\right)}{\sqrt{2}}\right)\;,$$ and, from , one obtains $$\label{eq:gamma} \frac{\dd\gamma(x,t)}{\dd t} \- \frac{\dd^2\gamma(x,t)}{\dd x^2}\+ \kappa^2\left(3\,\abs{\psi_0(x)}^2-1\right)\,\gamma(x,t)\= i\,x\,\psi_0(x)$$ at first-order in $\gstar$, with the conditions $$\begin{split} \gamma(x,0)&\=0\;,\\ \gamma(0,t)&\=0\;,\\ \gamma(L,t)&\=0\;. \end{split}$$ The first-order equation for the vector potential is written $$\label{eq:vectpot} \eta\,\frac{\dd\beta(x,t)}{\dd t} \+ \abs{\psi_0(x)}^2\,\beta(x,t) \+ J(x,t) \- \eta \= 0 \;,$$ with the constraint $$\label{eq:betanull} \beta(x,0) \= 0 \;.$$ The second-order spatial derivative of $\beta$ does not appear in eq. : this is due to the fact that, in one dimension, one has $$\nabla^2 A \= \frac{\dd}{\dd x}\,\nabla\cdot\A \;,$$ and therefore, in the Coulomb gauge $$\nabla\times\nabla\times\A \= \nabla\left(\nabla\cdot\A\right) - \nabla^2 A \= 0 \;.$$ The quantity $J(x,t)$ that appears in eq.  
is given by $$\label{eq:J} J(x,t) \= \frac12 \left( \psi_0(x)\,\frac{\dd}{\dd x}\,\textrm{Im}\left[\gamma(x,t)\right] -\textrm{Im}\left[\gamma(x,t)\right]\,\,\frac{\dd}{\dd x}\psi_0\right)\;,$$ and the solution of eq.  is $$\begin{aligned} \label{eq:beta} \beta(x,t) &\= \frac1\PP \left(1-\e^{-\PP\,t}\right) \- \frac{\e^{-\PP\,t}}{\eta}\,\int^{t}_\ms{0} dt\,J(x,t)\;\e^{\PP\,t} \;\;, % \intertext{with} % &~ \PP \= \frac{\abs{\psi_0(x)}^2}{\eta}\;.\end{aligned}$$ Now, we have the form  for $\psi_0(x,t)$ and also the above  for $\beta(x,t)$ as a function of $\gamma(x,t)$ through the definition of $J(x,t)$: the latter can be used in to obtain both $\psi(x,t)$ and $\A(x,t)$ as functions of $\gamma(x,t)$. The gravitoelectric field can be found using the relation $$\label{eq:EgphiA} \Eg~= -\,\nabla\phi - \frac{\dd\A}{\dd t}\;,$$ and its explicit form reads $$\label{eq:Eg} \frac1{\gstar}\;{\Eg(x,t)} \= 1 \- \e^{-\PP\,t} \- \frac{\dd}{\dd t} \left( \frac{\e^{-\PP\,t}}{\eta}\,\int^{t}_\ms{0} dt\,J(x,t)\;\e^{\PP\,t}\right)\;.$$ The above formula shows that, in order to maximize the reduction of the gravitational field in a superconductor, it is necessary to reduce $\eta$ and to have large spatial derivatives of $\psi_0(x)$ and $\gamma(x,t)$. The condition for a small value of $\eta$ is a large normal-state resistivity for the superconductor and a small diffusion coefficient $$\DD ~\sim~ \frac{\vF\,\ell}{3}\;,$$ where $\vF$ is the Fermi velocity (which is small in HTCS) and $\ell$ is the mean free path: this means that the effect is enhanced in “bad” samples with impurities, not in single crystals. If we consider the case $J(x,t)=0$, given by the condition $$\psi_0(x)\=\textrm{Im}\left[\gamma(x,t)\right]~\equiv~\textrm{Im}\left[\gamma(x)\right]\;,$$ we obtain the simplified equation $$\eta\,\frac{\dd\beta(x,t)}{\dd t} \+\abs{\psi_0(x)}^2\,\beta(x,t)\-\eta\=0\;,$$ which is solved, together with the constraint , by the function $$\beta(x,t)\=\frac{\eta}{\abs{\psi_0(x)}^2}\,\left(1-\e^\mathlarger{-\frac{\abs{\psi_0(x)}^2}{\eta}\,t}\right)\;.$$ Then, using eqs.  and , we find $$\frac{\Eg}{\gstar}\=1-\e^\mathlarger{-\frac{\abs{\psi_0(x)}^2}{\eta}\,t}\;.$$ The above equation shows that, unlike the general case, in the absence of the contribution of $J(x,t)$ the effect is bigger than in the case of a single-crystal low-$T_c$ superconductor, where $\eta$ is large. ### Approximate solution From the experimental viewpoint, the greater the length and time scales over which $\Eg$ varies, the easier the observation of this effect. Actually, we started from dimensionless equations and therefore the length and time scales are determined by $\lambda(T)$ and $\tau(T)$ of eqs. , which should therefore be as large as possible. In this sense, materials having very large $\lambda(T)$ could be interesting for the study of this effect [@blackstead2000magnetism]. Moreover, eq.  shows the dependence of the relaxation on $\abs{\psi_0(x)}^2$ through the definition of $\PP$: one can see that $\abs{\psi_0(x)}$ must be as small as possible, and this implies that $\kappa$ must also be small, see eq. . This also means that $\lambda(T)$ and $\xi(T)$ must both be large. Up to now we have dealt with the expression of $\beta(x,t)$ as a function of $\gamma(x,t)$. If we want to obtain an explicit expression for $\Eg$, we have to solve the equation  for $\gamma(x,t)$: this is a difficult task which can be undertaken only in a numerical way.
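A minimal numerical sketch of such a computation is given below: it integrates the first-order equation for $\gamma(x,t)$ with an explicit finite-difference scheme, using the static profile $\psi_0(x)$ and purely illustrative dimensionless parameters (a small $\kappa$ and a coarse grid chosen only so that the demonstration runs quickly). It is a sketch of the numerical task, not the scheme used to produce the figures below.

```python
import numpy as np

# Illustrative dimensionless parameters (assumed for the demo, not from the text)
kappa = 2.0          # GL parameter, kept small so the explicit scheme stays cheap
L     = 10.0         # sample length in units of lambda
nx    = 201
x     = np.linspace(0.0, L, nx)
dx    = x[1] - x[0]
dt    = 0.2 * dx**2  # explicit Euler stability requires dt <~ dx^2/2
nt    = 4000

# Static zeroth-order solution psi_0(x)
psi0 = np.tanh(kappa * x / np.sqrt(2)) * np.tanh(kappa * (x - L) / np.sqrt(2))

# gamma obeys: d(gamma)/dt = d2(gamma)/dx2 - kappa^2 (3|psi0|^2 - 1) gamma + i x psi0
gamma = np.zeros(nx, dtype=complex)
for _ in range(nt):
    lap = np.zeros(nx, dtype=complex)
    lap[1:-1] = (gamma[2:] - 2 * gamma[1:-1] + gamma[:-2]) / dx**2
    gamma += dt * (lap - kappa**2 * (3 * np.abs(psi0)**2 - 1) * gamma
                   + 1j * x * psi0)
    gamma[0] = gamma[-1] = 0.0   # boundary conditions gamma(0,t) = gamma(L,t) = 0

# the source is purely imaginary, so gamma stays (essentially) imaginary, as in gamma ~ i*gamma_0(x)
print("max |Im gamma| at the final time:", np.abs(gamma.imag).max())
```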
Nevertheless, if one puts $\psi_0(x)\approx1$, which is a good approximation in the case of YBa${}_2$Cu${}_3$O${}_7$ (YBCO), in which $\kappa=94.4$, one can find the simple approximate solution: $$\label{eq:gammasol} \gamma(x,t) \= i\,\gamma_0(x) \+ i\,\sum^\infty_{n=1} \Qn \, \sin\left(\wn x\right) \, \e^{-\Cnq\,t}\;,$$ with $$\begin{aligned} \gamma_0(x) &\= \frac{x}{2\,\kappa^2}\,\Bigg(1-\ch[\frac{\alpha}{2}-\frac{\alpha}{L}\,x]\,\sech[\frac{\alpha}{2}]\Bigg)\;, \\[4\jot] % \Qn &\= \frac1L\, \int^{\ms{L}}_\ms{0} dx\:\gamma_0(x)\,\sin(\wn x) \= \frac{(-1)^n}{2\,\kappa^2}\,\Biggl( \frac{1}{\wn}\-\wn\,\frac{Q_{n}^{{}^{(1)}}+Q_{n}^{{}^{(2)}}}{\Cnq}\Biggr)\;,\\[-5\jot] % \intertext{and} % \Cnq&\=\wn^2+2\,\kappa^2\;,\qquad\wn\=n\,\pi/L\;,\qquad\alpha\=\sqrt{2}\,\kappa\,L\;,\\[3\jot] % Q_{n}^{{}^{(1)}} &\= (-1)^n-\cosh\alpha \+\frac{2\,\alpha\,\wn^2}{L\,\Cnq}\,\sinh\alpha\;,\\[2\jot] % Q_{n}^{{}^{(2)}} &\= \left(\cosh\alpha-1\right)\,\Bigg(1 \+ \frac{2\,\alpha}{L^2\,\Cnq}\, \frac{(-1)^n-\cosh\alpha}{\sinh\alpha}\Bigg)\;.\end{aligned}$$ Taking into account eq.  and inserting eq.  in eq.  and then in eq. , we can find a new expression for the gravitoelectric field $\Eg$: $$\label{eq:Egsol} \frac1{\gstar}\;{\Eg(x,t)} \= 1 \- \e^{-\PP\,t}\,\left(1-\frac{J_0(x)}{\eta}\right) \+ \frac{1}{\eta}\:\sum^\infty_{n=1}\,\Qn\:\Rn(x)\:\Sn(x,t) \;,$$ where $$\begin{aligned} J_0(x) &\= \frac{1}{2\,\kappa^2}\,\left( \psi_0(x)\,\frac{\dd}{\dd x}\gamma_0(x) -\gamma_0(x)\,\frac{\dd}{\dd x}\psi_0(x)\right)\;, \\[4\jot] % \Rn(x) &\= \wn\,\psi_0(x)\,\cos(\wn x) \- \sin(\wn x)\frac{\dd}{\dd x}\psi_0(x)\;,\\[4\jot] % \Sn(x,t)&\=\frac{\Cnq\;\e^{-\Cnq\,t}\-\PP\,\e^{-\PP\,t}}{\PP-\Cnq}\;.\end{aligned}$$ By making the approximation $$\label{eq:drasticappr} \gamma(x)~\simeq~\frac{i\,x}{2\,\kappa^2}\;,$$ one finds the result $$\label{eq:Egappr} \frac1{\gstar}\;{\Eg(x,t)} \= 1 \- \e^{-\PP\,t}\,\left(1-\frac{J_{00}(x)}{\eta}\right)\;,$$ where $$J_{00}(x) \= \frac{1}{2\,\kappa^2}\, \left(\psi_0(x)-x\,\frac{\dd}{\dd x}\psi_0(x)\right)\;.$$ In spite of its crudeness, in the case of YBCO the above approximate solution  gives the same results as the solution . Moreover, nothing changes significantly if one neglects the finite size of the superconductor and uses $$\psi_0(x)\=\tanh\left(\kappa x/\sqrt2\right)$$ instead of eq. . YBCO vs. Pb ----------- In the case of YBCO, the variation of the gravitoelectric field $\Eg$ in time and space is shown in Figs. \[fig:YBCO\] and \[fig:YBCOx\]. It is easily seen that this effect is almost independent of the spatial coordinate. ![The gravitational field $\Eg/\gstar$ as a function of the normalized time and space for YBCO at $T=77\,\K$[]{data-label="fig:YBCO"}](Fig1A.pdf){width="\textwidth"} ![The gravitational field as a function of the normalized time for increasing values of the $x$ variable for YBCO[]{data-label="fig:YBCOx"}](Fig1B.pdf){width="\textwidth"} The results in the case of Pb are reported in Figs. \[fig:Pb\] and \[fig:Pbx\], which clearly show that, due to the very small value of $\kappa$, the reduction is greater near the surface. Moreover, in this particular case, some approximations made in the case of YBCO are no longer allowed: for example, the simplified relation  is not valid for small values of $L$. In fact, when $\kappa$ is small, the length $L$ plays an important role and, in particular, if $L$ is small the effect is remarkably enhanced, as shown in Fig. \[fig:PbL\].
Under the same conditions, a maximum of the effect (and therefore a minimum of $\Eg$) can occur at $t\neq0$, as can be seen in the same figure. In the extreme case $L=6\,\lambda$, we found that the system returns to the unperturbed value after a time $t_0\simeq10^5\,\tau$. Table \[tab:YBCOvsPb1\] reports the values of the parameters of YBCO and Pb, calculated at a temperature $T_\star$ such that the quantity $\frac{T_\star-\Tc}{\Tc}$ is the same in the two materials. Tables  and  show the calculated values of $\lambda$, $\tau$ and $\gstar$ at different temperatures.

![The gravitational field $\Eg/\gstar$ as a function of the normalized time and space for Pb at $T=6.3\,\K$[]{data-label="fig:Pb"}](Fig2A.pdf){width="\textwidth"}

![The gravitational field as a function of the normalized time for increasing values of the $x$ variable for Pb[]{data-label="fig:Pbx"}](Fig2B.pdf){width="\textwidth"}

![The gravitational field $\Eg/\gstar$ as a function of the normalized time in the case of Pb, for different values of $L$ and $x=4\,\lambda$. The maximum of the shielding effect is evident.[]{data-label="fig:PbL"}](Fig3.pdf){width="\textwidth"}

|                         | YBCO                 | Pb                   |
|-------------------------|----------------------|----------------------|
| $\Tc$ (K)               | 89                   | 7.2                  |
| $T_\star$ (K)           | 77                   | 6.3                  |
| $\xi(T_\star)$ (m)      | $3.6\cdot10^{-9}$    | $1.7\cdot10^{-7}$    |
| $\lambda(T_\star)$ (m)  | $3.3\cdot10^{-7}$    | $7.8\cdot10^{-8}$    |
| $\Hc(T_\star)$ (T)      | $0.2$                | $0.018$              |
| $\kappa$                | $94.4$               | $0.48$               |
| $\tau(T_\star)$ (s)     | $3.4\cdot10^{-10}$   | $6.1\cdot10^{-15}$   |
| $\eta$                  | $1.27\cdot10^{-2}$   | $6.6\cdot10^{3}$     |
| $\DD$ (m$^2$/s)         | $3.2\cdot10^{-4}$    | $1$                  |
| $\ell$ (m)              | $6\cdot10^{-9}$      | $1.7\cdot10^{-6}$    |
| $\vF$ (m/s)             | $1.6\cdot10^{5}$     | $1.83\cdot10^{6}$    |

\[tab:YBCOvsPb1\]

**YBCO**

| $T$ (K) | $\lambda(T)$ (m)    | $\tau(T)$ (s)         | $\gstar$             |
|---------|---------------------|-----------------------|----------------------|
| 0       | $1.7\cdot10^{-7}$   | $9.03\cdot10^{-11}$   | $2.6\cdot10^{-12}$   |
| 70      | $2.6\cdot10^{-7}$   | $2.1\cdot10^{-10}$    | $9.8\cdot10^{-12}$   |
| 77      | $3.3\cdot10^{-7}$   | $3.4\cdot10^{-10}$    | $2\cdot10^{-11}$     |
| 87      | $8\cdot10^{-7}$     | $2\cdot10^{-9}$       | $2.8\cdot10^{-7}$    |

**Pb**

| $T$ (K) | $\lambda(T)$ (m)    | $\tau(T)$ (s)         | $\gstar$             |
|---------|---------------------|-----------------------|----------------------|
| 0       | $3.90\cdot10^{-8}$  | $1.5\cdot10^{-15}$    | $1\cdot10^{-17}$     |
| 4.20    | $4.3\cdot10^{-8}$   | $1.8\cdot10^{-15}$    | $1.4\cdot10^{-17}$   |
| 6.26    | $7.8\cdot10^{-8}$   | $6.1\cdot10^{-15}$    | $8.2\cdot10^{-17}$   |
| 7.10    | $2.3\cdot10^{-7}$   | $5.3\cdot10^{-14}$    | $2.2\cdot10^{-15}$   |

\[tab:YBCOvsPb2\]

Conclusions =========== It is clearly seen that $\lambda$ and $\tau$ grow with the temperature, so that one could think that the effect is maximum when the temperature is very close to the critical temperature $T_\textrm{c\,}$. However, this is true only for low-$T_c$ superconductors (LTSC) because in high-$T_c$ superconductors fluctuations are of primary importance within a few kelvin of $T_\textrm{c\,}$. The presence of these opposite contributions makes it possible that a temperature $\subt[T][max]\leq\Tc$ exists, at which the effect is maximum. In all cases, the time constant $\subt[t][int]$ is very small, and this makes the experimental observation rather difficult. Here we suggest using pulsed magnetic fields to destroy and restore the superconductivity within a time interval of the order of $\subt[t][int]$. The main conclusion of this work is that the reduction of the gravitational field in a superconductor, if it exists, is a transient phenomenon and depends strongly on the parameters that characterize the superconductor.
Sign convention {#app:signconv} =============== We work in the “mostly plus” convention, where $$\eta\=\mathrm{diag}(-1,+1,+1,+1)\;.$$ We define the Riemann tensor as: $$\begin{split} \Ruddd&\=\dm[\lambda]\Gam[\sigma][\mu][\nu]\-\dm[\nu]\Gam[\sigma][\mu][\lambda] \+\Gam[\sigma][\rho][\lambda]\,\Gam[\rho][\nu][\mu] \-\Gam[\sigma][\rho][\nu]\,\Gam[\rho][\lambda][\mu]\=\\ &\= 2\,\dm[{[}\lambda]\Gam[\sigma][\nu{]}][\mu] \+2\,\Gam[\sigma][\rho][{[}\lambda]\,\Gam[\rho][\nu{]}][\mu]\;, \end{split}$$ where $$\begin{split} \Gam[\lambda][\nu][\rho]&\=\invgmetr[\lambda][\mu]\,\Gamd[\mu][\nu][\rho]\;,\\ \Gamd[\mu][\nu][\rho]&\=\frac12\,\left(\dm[\rho]\gmetr[\mu][\nu] +\dm[\nu]\gmetr[\mu][\rho] -\dm[\mu]\gmetr[\nu][\rho]\right)\;. \end{split}$$ The Ricci tensor is defined as a contraction of the Riemann tensor $$\Ricci\=\Ruddd[\sigma][\mu][\sigma][\nu]\;,$$ the Ricci scalar is given by $$R\=\invgmetr\Ricci\;,$$ and the so-called Einstein tensor $\GEinst[][]$ has the form $$\GEinst~\equiv~\Ricci-\dfrac12\,\gmetr\,R\;.$$ The Einstein equations are written $$\GEinst~\equiv~\Ricci-\dfrac12\,\gmetr\,R \=8\pi\GN\;T_{\mu\nu}\;,$$ where $T_{\mu\nu}$ is the total energy-momentum tensor. The cosmological constant contribution can be made explicit by splitting the $T_{\mu\nu}$ tensor into its matter and $\Lambda$ components: $$T_{\mu\nu}\=T^{{}^\tts{(M)}}_{\mu\nu}+T^{{}^\ms{(\Lambda)}}_{\mu\nu} \=T^{{}^\tts{(M)}}_{\mu\nu}-\frac{\Lambda}{8\pi\GN}\,\gmetr\;,$$ so that the Einstein equations can be rewritten as $$\Ricci-\dfrac12\,\gmetr\,R\=8\pi\GN\,\left(T^{{}^\tts{(M)}}_{\mu\nu}+T^{{}^\ms{(\Lambda)}}_{\mu\nu}\right)\;,$$ or, equivalently, $$\Ricci-\dfrac12\,\gmetr\,R+\Lambda\,\gmetr\=8\pi\GN\;T^{{}^\tts{(M)}}_{\mu\nu}\;.$$ Acknowledgements {#acknowledgements .unnumbered} ================ The contribution of Prof. G. A. Ummarino was supported by the Competitiveness Program of the National Research Nuclear University MEPhI of Moscow. We also thank Fondazione CRT, which partially supported the work of Dr. A. Gallerati. [^1]: we work in the “mostly plus” framework, $\eta=\mathrm{diag}(-1,+1,+1,+1)$, and set $c=\hbar=1$ [^2]: see Appendix \[app:signconv\] for definitions and sign conventions [^3]: for the sake of simplicity, we initially set the physical charge $e=m=1$
--- abstract: 'We theoretically investigate the quantum interference of entangled two-photon states generated in a nonlinear crystal pumped by femtosecond optical pulses. Interference patterns generated by the polarization analog of the Hong-Ou-Mandel interferometer are studied. Attention is devoted to the effects of the pump-pulse profile (pulse duration and chirp) and the second-order dispersion in both the nonlinear crystal and the interferometer’s optical elements. Dispersion causes the interference pattern to have an asymmetric shape. Dispersion cancellation occurs in some cases.' author: - | Jan Peřina, Jr.[^1][^2], Alexander V. Sergienko,\ Bradley M. Jost, Bahaa E. A. Saleh, Malvin C. Teich[^3]\ Quantum Imaging Laboratory[^4]\ Department of Electrical and Computer Engineering\ Boston University\ 8 Saint Mary’s Street, Boston, MA 02215, USA title: 'Dispersion in Femtosecond Entangled Two-Photon Interference' --- (short title: Entangled Two-Photon Interference) Keywords: down-conversion, entangled two-photon interference, spontaneous processes, ultrafast nonlinear optics. Introduction ============ Significant consideration has recently been given to the process of spontaneous parametric down-conversion in nonlinear crystals pumped by cw lasers [@MaWo; @coinc; @Klysh; @Teich]. The nonclassical properties of entangled two-photon light generated by this process have been used in many experimental schemes to elucidate distinctions between the predictions of classical and quantum physics [@quant]. Coincidence-count measurements with entangled two-photon states have revealed violations of Bell’s inequalities [@bell], and have been considered for use in nonclassical imaging [@Serg] and quantum cryptography [@crypt]. A new frontier in these efforts is the generation of quantum states with three correlated particles (GHZ states) [@Zeil; @three], which would be most useful for further tests of the predictions of quantum mechanics. One way to create such states is to make use of pairs of two-photon entangled states that are synchronized in time, i.e., generated within a sharp time window [@three-two]. This can be achieved by using femtosecond pump beams. Also, successful quantum teleportation has already been observed using femtosecond pumping [@teleport]. For these reasons, the theoretical and experimental properties of pulsed spontaneous parametric down-conversion have been scrutinized [@Se; @Ru; @Gr; @Ou]. It has been shown that ultrashort pumping leads to a loss of visibility of the coincidence-count interference pattern in type-II parametric down-conversion [@Se; @Ru; @Gr], and narrowband frequency filters are required to restore the visibility [@three-two; @Se; @Gr]. This paper is devoted to a theoretical investigation of dispersion effects in femtosecond-pulsed spontaneous parametric down-conversion. Particular attention is given to the effects of pump-pulse chirp and second-order dispersion (in both the pump and down-converted beams) on the visibility and shape of the photon-coincidence pattern generated by the polarization analog of the Hong-Ou-Mandel interferometer [@Mad]. Dispersion cancellation, which has been extensively studied in the case of cw pumping [@Lar], is also predicted to occur under certain conditions for femtosecond down-converted pairs. Spontaneous parametric down-conversion with an ultrashort pump pulse ==================================================================== We consider a nonlinear crystal pumped by a strong coherent-state field. 
Nonlinear interaction then leads to the spontaneous generation of two down-converted fields (the signal and the idler) which are mutually strongly correlated [@MaWo]. Such a correlation can be conveniently described in terms of the two-photon amplitude $ {\cal A}_{12} $ which is defined as a matrix element of the product of electric-field operators $ \hat{E}^{(+)}_1(z_1,t_1) $ and $ \hat{E}^{(+)}_2(z_2,t_2) $ sandwiched between the entangled two-photon state $ |\psi^{(2)}\rangle $ (for details, see Appendix A) and the vacuum state $ |{\rm vac} \rangle $: $$% 1 {\cal A}_{12}(z_1,t_1,z_2,t_2) = \langle {\rm vac} | \hat{E}^{(+)}_1(z_1,t_1) \hat{E}^{(+)}_2(z_2,t_2) |\psi^{(2)} (0,t)\rangle .$$ The positive-frequency part $ \hat{E}^{(+)}_j $ of the electric-field operator of the $ j $th beam is defined as $$% 2 \hat{E}^{(+)}_j(z_j,t_j) = \sum_{k_j} e_j(k_j) f_j(\omega_{k_j}) \hat{a}_j(k_j) \exp(ik^v_j z_j - i \omega_{k_j} t_j ) , \hspace{1cm} j=1,2,$$ where $ \hat{a}_{k_j} $ stands for the annihilation operator of the mode with wave vector $ k_j $, $ e_j(k_j) $ denotes the normalization amplitude of the mode $ k_j $, and $ f_j(\omega_{k_j}) $ characterizes an external frequency filter placed in the $ j $th beam. The symbols $ k^v_1 $ and $ k^v_2 $ denote wave vectors in vacuum. At the termination of the nonlinear interaction in the crystal, the down-converted fields evolve according to free-field evolution and thus the two-photon amplitude $ {\cal A}_{12} $ depends only on the differences $ t_1 - t $ and $ t_2 - t $. When the down-converted beams propagate through a dispersive material of the length $ l $, the entangled two-photon state $ |\psi^{(2)} \rangle $ given in Eq. (A4) in Appendix A provides the expression for $ {\cal A}_{12,l} $: $$\begin{aligned} % 3 {\cal A}_{12,l}(\tau_1,\tau_2) &=& C \int_{-L}^{0} dz \, \sum_{k_p} \sum_{k_1} f_1(\omega_{k_1}) \sum_{k_2} f_2(\omega_{k_2}) {\cal E}_p^{(+)}(0,\omega_{k_p}-\omega^0_p) \exp \left[ i (k_p-k_1-k_2) z \right] \nonumber \\ & & \mbox{} \times \exp\left[ i(\tilde{k}_1 + \tilde{k}_2) l \right] \delta( \omega_{k_p} - \omega_{k_1} - \omega_{k_2} ) \exp [ -i\omega_{k_1}\tau_1 ] \exp [ -i\omega_{k_2}\tau_2 ] .\end{aligned}$$ The times $ \tau_1 $ and $ \tau_2 $ are given as follows: $$% 4 -i \omega_{k_j}\tau_j = ik^v_j z_j - i \omega_{k_j} (t_j - t) , \hspace{1cm} j=1,2 .$$ The symbol $ {\cal E}_p^{(+)}(0,\omega_{k_p}-\omega^0_p) $ denotes the positive-frequency part of the envelope of the pump-beam electric-field amplitude at the output plane of the crystal and $ \omega^0_p $ stands for the central frequency of the pump beam; the wave vectors $ k_p $, $ k_1 $, and $ k_2 $ ($ \tilde{k}_1 $ and $ \tilde{k}_2 $) are appropriate for the nonlinear crystal (dispersive material). The symbol $ L $ means the length of the crystal. The amplitudes $ e_1(k_1) $ and $ e_2(k_2) $ from Eq. (2) are absorbed into the constant $ C $. A typical experimental setup for coincidence-count measurement is shown in Fig. 1. We consider type-II parametric down-conversion for this exposition. In this case two mutually perpendicularly polarized photons are provided at the output plane of the crystal. They propagate through a birefringent material of a variable length $ l $ and then impinge on a 50/50 beamsplitter. Finally they are detected at the detectors $ {\rm D}_A $ and $ {\rm D}_B $. The coincidence-count rate $ R_c $ is measured by a coincidence device C. 
The beams might be filtered by the frequency filters $ {\rm F}_A $ and $ {\rm F}_B $ which can be placed in front of the detectors. Analyzers rotated by 45 degrees with respect to the ordinary and extraordinary axes of the nonlinear crystal enable quantum interference between two paths to be observed; either a photon from beam 1 is detected by the detector $ {\rm D}_A $ and a photon from beam 2 by the detector $ {\rm D}_B $, or vice versa. Including the effects of the beamsplitter and analyzers, the coincidence-count rate $ R_c $ can be determined as follows [@Se; @Ru]: $$% 5 R_c(l) = \frac{1}{4} \int_{-\infty}^{\infty} dt_A \, \int_{-\infty}^{\infty} dt_B \, \left| {\cal A}_{12,l}(t_A,t_B) - {\cal A}_{12,l}(t_B,t_A) \right|^2 ,$$ where the two-photon amplitude $ {\cal A}_{12,l} $ is given in Eq. (3). The normalized coincidence-count rate $ R_n $ is then expressed in the form: $$% 6 R_n(l) = 1 - \rho(l) ,$$ where $$% 7 \rho(l) = \frac{1}{2R_0} \int_{-\infty}^{\infty} dt_A \, \int_{-\infty}^{\infty} dt_B \, {\rm Re} \left[ {\cal A}_{12,l}(t_A,t_B) {\cal A}^*_{12,l}(t_B,t_A) \right] ,$$ and $$% 8 R_0 = \frac{1}{2} \int_{-\infty}^{\infty} dt_A \, \int_{-\infty}^{\infty} dt_B \, \left| {\cal A}_{12,l}(t_A,t_B) \right|^2 .$$ The symbol $ {\rm Re} $ denotes the real part of its argument. Specific models including second-order dispersion ================================================= Let us assume that the nonlinear crystal and the optical material in the path of the down-converted photons are both dispersive. We proceed to generalize the models provided in Refs. [@Se; @Ru; @Gr] by including the effects of second-order dispersion. The wave vectors $ k_p(\omega_{k_p}) $, $ k_1(\omega_{k_1}) $, and $ k_2(\omega_{k_2}) $ of the beams in the nonlinear crystal can be expressed in the following form, when the effects of material dispersion up to the second order are included [@SaTe]: $$% 9 k_j(\omega_{k_j}) = k^0_j + \frac{1}{v_j} (\omega_{k_j} - \omega^0_j) + \frac{D_j}{4\pi} (\omega_{k_j} - \omega^0_j)^2 , \hspace{1cm} j=p,1,2 .$$ The inverse of group velocity $ 1/v_j $, and the second-order dispersion coefficient $ D_j $, are given by $$\begin{aligned} % 10-11 \frac{1}{v_j} &=& \frac{dk_j}{d\omega_{k_j}} \left. \right|_{\omega_{k_j}=\omega^0_j} , \\ D_j &=& 2\pi \frac{d^2k_j}{d\omega_{k_j}^2} \left. \right|_{\omega_{k_j}=\omega^0_j} , \hspace{1cm} j=p,1,2 .\end{aligned}$$ The symbol $ \omega^0_j $ denotes the central frequency of beam $ j $. The wave vector $ k^0_j $ is defined by the relation $ k^0_j = k_j(\omega^0_j) $. Similarly, the wave vectors $ \tilde{k}_1(\omega_{k_1}) $ and $ \tilde{k}_2(\omega_{k_2}) $ of the down-converted beams in a dispersive material outside the crystal can be expressed as: $$% 12 \tilde{k}_j(\omega_{k_j}) = \tilde{k}^0_j + \frac{1}{g_j} (\omega_{k_j} - \omega^0_j) + \frac{d_j}{4\pi} (\omega_{k_j} - \omega^0_j)^2 , \hspace{1cm} j=1,2 ,$$ where $$\begin{aligned} % 13-14 \frac{1}{g_j} &=& \frac{d\tilde{k}_j}{d\omega_{k_j}} \left. \right|_{\omega_{k_j}=\omega^0_j} , \\ d_j &=& 2\pi \frac{d^2\tilde{k}_j}{d\omega_{k_j}^2} \left. \right|_{\omega_{k_j}=\omega^0_j} , \hspace{1cm} j=1,2,\end{aligned}$$ and $ \tilde{k}^0_j = \tilde{k}_j(\omega^0_j) $. 
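The dispersion parameters defined in Eqs. (10), (11), (13), and (14) are straightforward to evaluate numerically once a refractive index $ n(\omega) $ is specified. The sketch below differentiates $ k(\omega)=n(\omega)\,\omega/c $ for a hypothetical smooth index model introduced purely for illustration; it does not represent the dispersion data of any crystal considered in this paper.

```python
import numpy as np

c = 2.998e8  # speed of light [m/s]

def n_model(omega):
    """Hypothetical smooth refractive index; a placeholder, not a real material."""
    return 1.65 + 1.0e-32 * omega**2

def k(omega):
    return n_model(omega) * omega / c

# Central frequency of a down-converted photon at 800 nm (illustrative choice)
omega0  = 2 * np.pi * c / 800e-9
d_omega = 1e11   # step for numerical differentiation [rad/s]

# Eqs. (10)/(13): inverse group velocity; Eqs. (11)/(14): second-order dispersion
inv_v = (k(omega0 + d_omega) - k(omega0 - d_omega)) / (2 * d_omega)
D = 2 * np.pi * (k(omega0 + d_omega) - 2 * k(omega0) + k(omega0 - d_omega)) / d_omega**2

print(f"1/v = {inv_v:.3e} s/m")
print(f"D   = {D:.3e} s^2/m")
```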
We further assume that frequency filters with a Gaussian profile, and centered around the central frequencies $ \omega^0_1 $ and $ \omega^0_2 $, are incorporated: $$% 15 f_j(\omega_{k_j}) = \exp\left[ - \frac{ (\omega_{k_j} - \omega^0_j)^2 }{ \sigma_j^2 } \right] , \hspace{1cm} j=1,2,$$ where $ \sigma_j $ is the frequency width of the $ j $th filter. Assuming frequency- and wave-vector phase matching for the central frequencies ($ \omega^0_p = \omega^0_1 + \omega^0_2 $) and central wave vectors ($ k^0_p = k^0_1 + k^0_2 $), respectively, the two-photon amplitude $ {\cal A}_{12,l}(\tau_1,\tau_2) $ defined in Eq. (3) can be expressed in the form: $$\begin{aligned} % 16 {\cal A}_{12,l}(\tau_1,\tau_2) &=& C_{\cal A} \exp(-i\omega^0_1\tau_1) \exp(-i\omega^0_2\tau_2) \int_{-L}^{0} dz \, \int d\Omega_p \, {\cal E}^{(+)}_p(0,\Omega_p) \nonumber \\ & & \mbox{} \times \int d\Omega_1 \, \exp \left[ -\left( \frac{1}{\sigma_1^2} - i \frac{d_1 l}{4\pi} \right) \Omega_1^2 \right] \int d\Omega_2 \, \exp \left[ -\left( \frac{1}{\sigma_2^2} - i \frac{d_2 l}{4\pi} \right) \Omega_2^2 \right] \delta(\Omega_p-\Omega_1-\Omega_2) \nonumber \\ & & \mbox{} \times \exp\left[ i \left( \frac{\Omega_p}{v_p} - \frac{\Omega_1}{v_1} - \frac{\Omega_2}{v_2} \right) z \right] \exp\left[ i \left( \frac{D_p}{4\pi}\Omega_p^2 - \frac{D_1}{4\pi}\Omega_1^2 - \frac{D_2}{4\pi}\Omega_2^2 \right) z \right] \nonumber \\ & & \mbox{} \times \exp\left[ -i\left( \tau_1 - \frac{l}{g_1} \right) \Omega_1 \right] \exp\left[ -i\left( \tau_2 - \frac{l}{g_2} \right) \Omega_2 \right] .\end{aligned}$$ The frequencies $ \Omega_j $, $ \Omega_j = \omega_{k_j} - \omega^0_j $, for $ j=1,2,p $ have been introduced in Eq. (16); $ C_{\cal A} $ denotes a constant. We proceed to devote further attention to special cases. We first consider an ultrashort pump pulse with a Gaussian profile: the envelope $ {\cal E}_p^{(+)}(0,t) $ of the pump pulse at the output plane of the crystal then assumes the form [@Rudolph]: $$% 17 {\cal E}_p^{(+)}(0,t) = \xi_{p0} \exp \left( - \frac{1+ia }{\tau_D^2} t^2 \right),$$ where $ \xi_{p0} $ is the amplitude, $ \tau_D $ is the pulse duration, and the parameter $ a $ describes the chirp of the pulse. The complex spectrum $ {\cal E}_p^{(+)}(z,\Omega_p) $ of the envelope $ {\cal E}_p^{(+)}(z,t) $ is defined by $$% 18 {\cal E}_p^{(+)}(z,\Omega_p) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dt \, {\cal E}_p^{(+)}(z,t) \exp( i \Omega_p t) .$$ For a pulse of the form given in Eq. (17) we obtain: $$% 19 {\cal E}_p^{(+)}(0,\Omega_p) = \xi_p \frac{\tau_D}{2\sqrt{\pi} \sqrt[4]{1+a^2} } \exp \left[ - \frac{ \tau_D^2}{ 4(1+a^2) } ( 1 - ia) \Omega_p^2 \right] ,$$ where $ \xi_{p} = \xi_{p0} \exp[-i \arctan(a)/2] $. Substituting Eq. (19) into Eq. 
(16) and using the identity $$\begin{aligned} % 20 \int_{-\infty}^{\infty} d\Omega_1 \, \int_{-\infty}^{\infty} d\Omega_2 \, \exp \left[ -\alpha_1 \Omega_1^2 - \alpha_2 \Omega_2^2 -2\alpha_{12}\Omega_1\Omega_2 + i a_1\Omega_1 - i a_2\Omega_2 \right] &=& \nonumber \\ & & \hskip -10cm \frac{\pi}{ \sqrt{ \alpha_1\alpha_2 - \alpha_{12}^2 } } \exp \left[ - \frac{ a_1^2\alpha_2 + a_2^2\alpha_1 + 2\alpha_{12} a_1a_2}{ 4( \alpha_1\alpha_2 - \alpha_{12}^2 ) } \right] ,\end{aligned}$$ we arrive at the following expression for the two-photon amplitude $ {\cal A}_{12,l}(\tau_1,\tau_2) $: $$\begin{aligned} % 21, 22 {\cal A}_{12,l}(\tau_1,\tau_2) &=& C_{\cal A} \frac{\xi_p \tau_D}{ 2 \sqrt{\pi} \sqrt[4]{1+a^2} } \exp(-i\omega^0_1 \tau_1) \exp(-i\omega^0_2 \tau_2) A_{12,l}(\tau_1,\tau_2) , \\ A_{12,l}(\tau_1,\tau_2) &=& \int_{-L}^{0} dz \, \frac{ 1}{ \sqrt{ \beta_1\beta_2 - \gamma^2 } } \exp \left[ - \frac{ c_1^2\beta_2 + c_2^2\beta_1 + 2\gamma c_1 c_2 }{ 4( \beta_1\beta_2 - \gamma^2 ) } \right] .\end{aligned}$$ The functions $ \beta_j(z) $, $ c_j(z) $, and $ \gamma(z) $ are defined as follows: $$\begin{aligned} % 23 \beta_j(z) &=& \frac{1}{\sigma_j^2} + b (1-ia) - i \frac{d_j}{4\pi} l - i \frac{D_p - D_j}{4\pi} z, \hspace{1cm} j=1,2 \nonumber \\ c_j(z) &=& (-1)^{(j-1)} \left[ \left( \frac{1}{v_p} - \frac{1}{v_j} \right) z + \frac{l}{g_j} - \tau_j \right] , \hspace{1cm} j=1,2 \nonumber \\ \gamma(z) &=& b (1-ia) - i \frac{D_p}{4\pi} z .\end{aligned}$$ The parameter $ b $ is a characteristic parameter of the pump pulse: $$% 24 b = \frac{\tau_D^2}{4(1+a^2)} .$$ The quantities $ \rho(l) $ and $ R_0 $ are then determined in accordance with their definitions in Eqs. (7) and (8), respectively. The quantity $ \rho(l) $ as a function of the length $ l $ of the birefringent material then takes the form ($ \omega^0_1 = \omega^0_2 $ is assumed): $$% 25 \rho(l) = \frac{ \pi^2 |C_{\cal A}|^2 |\xi_p|^2 \tau_D^2 }{ 2 \sqrt{1+a^2} R_0 } {\rm Re} \left\{ \int_{-L}^{0} dz_1 \, \int_{-L}^{0} dz_2 \, \frac{ 1}{ \sqrt{ \bar{\beta}_1\bar{\beta}_2 - \bar{\gamma}^2 } } \exp \left[ - \frac{ \bar{c}_1^2\bar{\beta}_2 + \bar{c}_2^2 \bar{\beta}_1 + 2\bar{\gamma} \bar{c}_1 \bar{c}_2 }{ 4( \bar{\beta}_1\bar{\beta}_2 - \bar{\gamma}^2 ) } \right] \right\} .$$ The functions $ \bar{\beta}_j(z_1,z_2) $, $ \bar{c}_j(z_1,z_2) $, and $ \bar{\gamma}(z_1,z_2) $ are expressed as follows: $$\begin{aligned} % 26 \bar{\beta}_j(z_1,z_2) &=& \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2} - i \frac{d_j - d_{3-j}}{4\pi} l + 2 b - i \frac{D_p - D_j}{4\pi} z_1 + i \frac{D_p - D_{3-j}}{4\pi} z_2, \hspace{1cm} j=1,2 \nonumber \\ \bar{c}_j(z_1,z_2) &=& \left( \frac{1}{v_p} - \frac{1}{v_1} \right) z_{j} - \left( \frac{1}{v_p} - \frac{1}{v_2} \right) z_{3-j} + \left( \frac{1}{g_1} - \frac{1}{g_2} \right) l, \hspace{1cm} j=1,2 \nonumber \\ \bar{\gamma}(z_1,z_2) &=& 2 b - i \frac{D_p}{4\pi} (z_1-z_2) .\end{aligned}$$ Similarly, the normalization constant $ R_0 $ is given by the expression: $$% 27 R_0 = \frac{ \pi^2 |C_{\cal A}|^2 |\xi_p|^2 \tau_D^2 }{ 2 \sqrt{1+a^2} } \int_{-L}^{0} dz_1 \, \int_{-L}^{0} dz_2 \, \frac{ 1}{ \sqrt{ \tilde{\beta}_1\tilde{\beta}_2 - \tilde{\gamma}^2 } } \exp \left[ - \frac{ \tilde{c}_1^2\tilde{\beta}_2 + \tilde{c}_2^2\tilde{\beta}_1 + 2\tilde{\gamma} \tilde{c}_1 \tilde{c}_2 }{ 4( \tilde{\beta}_1\tilde{\beta}_2 - \tilde{\gamma}^2 ) } \right] ,$$ where $$\begin{aligned} % 28 \tilde{\beta}_j(z_1,z_2) &=& \frac{2}{\sigma_j^2} + 2 b - i \frac{D_p - D_j}{4\pi} (z_1 - z_2) , \hspace{1cm} j=1,2 \nonumber \\ \tilde{c}_j(z_1,z_2) &=& \left( 
\frac{1}{v_p} - \frac{1}{v_j} \right) (z_{j} - z_{3-j}) , \hspace{1cm} j=1,2 \nonumber \\ \tilde{\gamma}(z_1,z_2) &=& 2 b - i \frac{D_p}{4\pi} (z_1-z_2) .\end{aligned}$$ It is convenient to consider the pump pulse characteristics at the output plane of the crystal, i.e., to use the parameters $ \tau_D $ and $ a $. They can be expressed in terms of the parameters $ \tau_{Di} $ and $ a_i $ appropriate for the input plane of the crystal: $$\begin{aligned} % 29 a &=& \left( \frac{\tau_{Di}^2 a_i}{4( 1+ a_i^2)} + \frac{D_pL}{4\pi} \right) \left( \frac{\tau_{Di}^2}{4(1+a_i^2)} \right)^{-1} , \nonumber \\ \tau_D &=& \tau_{Di} \sqrt{ \frac{1+a^2}{1+ a_i^2} } .\end{aligned}$$ In this case, the parameter $ b_i $ $$% 30 b_i = \frac{\tau_{Di}^2}{4(1+a_i^2)}$$ has the same value as the parameter $ b $ defined in Eq. (24). Ignoring second-order dispersion in all modes ($ D_p = D_1 = D_2 = 0 $), Eq. (25) reduces to the following analytical expression for the quantity $ \rho $: $$% 31 \rho(\Delta\tau_l) = \sqrt{\frac{\pi}{2}} \frac{1}{|\Lambda| L} \frac{\tau_{Di}}{\sqrt{1+a_i^2}} {\rm erf} \left[ \frac{\sqrt{2}|\Lambda|}{D} \frac{\sqrt{1+a_i^2}}{\tau_{Di}} \left( \frac{DL}{2} - |\Delta\tau_l| \right) \right] ,$$ in which $$\begin{aligned} % 32 D &=& \frac{1}{v_1} - \frac{1}{v_2} , \nonumber \\ \Lambda &=& \frac{1}{v_p} - \frac{1}{2} \left( \frac{1}{v_1} + \frac{1}{v_2} \right) ,\end{aligned}$$ and $$% 33 \Delta \tau_l = \tau_l - DL/2 .$$ The symbol $ {\rm erf} $ denotes the error function. When deriving Eq. (31) the condition $ D > 0 $ was assumed. In Eq. (33), $ \tau_l $ denotes the relative time delay of the down-converted beams in a birefringent material of length $ l $ and is defined as follows: $$% 34 \tau_l = \left( \frac{1}{g_2} - \frac{1}{g_1} \right) l .$$ When second-order dispersion in the down-converted fields is omitted, the interference pattern can be determined for an arbitrary pump-pulse profile in terms of the autocorrelation function of the pump pulse. For details, see Appendix B. Discussion ========== We now proceed to examine the behavior of the normalized coincidence-count rate $ R_n $ on various parameters, from both analytical and numerical points of view. The profile of the interference dip in the coincidence-count rate [@Mad] (described by $ \rho $ as a function of $ l $), formed by the overlap of a pair of two-photon amplitudes, can be understood as follows. The expression in Eq. (7) for $ \rho(l) $ can be rewritten in the form: $$% 35 \rho(l) = \frac{1}{2R_0} \int_{-\infty}^{\infty} dt \, \int_{-\infty}^{\infty} d\tau \, \left[ {\cal A}^r_{12,l}(t,\tau) {\cal A}^{r}_{12,l}(t,-\tau) + {\cal A}^i_{12,l}(t,\tau) {\cal A}^{i}_{12,l}(t,-\tau) \right] ,$$ where $$% 36 t = \frac{t_A + t_B}{2} , \hspace{1cm} \tau = t_A - t_B ,$$ and $ {\cal A}^r_{12,l} = \mbox{Re} [ {\cal A}_{12,l}] $; $ {\cal A}^i_{12,l} = \mbox{Im} [ {\cal A}_{12,l}] $. The symbol $ \mbox{Im} $ denotes the imaginary part of the argument. Hence, according to Eq. (35), the overlaps of the real and imaginary parts of the two-photon amplitudes $ {\cal A}_{12,l}(t,\tau) $ and $ {\cal A}_{12,l}(t,-\tau) $ determine the values of the interference term $ \rho $. The amplitude $ {\cal A}_{12,l}(t,-\tau) $ can be considered as a mirror image of the amplitude $ {\cal A}_{12,l}(t,\tau) $ with respect to the plane $ \tau = 0 $. 
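To illustrate how Eq. (31) shapes the coincidence-count dip, the expression can be evaluated directly. The sketch below uses representative (assumed) values of $ D $, $ \Lambda $, $ L $, and the pump-pulse parameters rather than those of a specific experiment, and it applies Eq. (31) only inside the dip region $ |\Delta\tau_l| \le DL/2 $, taking the overlap (and hence $ \rho $) to be negligible outside.

```python
import numpy as np
from scipy.special import erf

# Representative (assumed) parameters, not from a specific experiment
L      = 1.0      # crystal length [mm]
D      = 200.0    # 1/v1 - 1/v2 [fs/mm]
Lam    = 100.0    # 1/vp - (1/v1 + 1/v2)/2 [fs/mm]
tau_Di = 100.0    # pump-pulse duration [fs]
a_i    = 0.0      # chirp parameter of the input pulse

def rho(dtau_fs):
    """Interference term of Eq. (31), evaluated inside the dip |dtau| <= D*L/2."""
    arg = (np.sqrt(2.0) * abs(Lam) / D) * (np.sqrt(1.0 + a_i**2) / tau_Di) \
          * (D * L / 2.0 - np.abs(dtau_fs))
    r = np.sqrt(np.pi / 2.0) * tau_Di / (abs(Lam) * L * np.sqrt(1.0 + a_i**2)) * erf(arg)
    # outside the dip the two-photon amplitudes no longer overlap, so rho ~ 0
    return np.where(np.abs(dtau_fs) <= D * L / 2.0, r, 0.0)

dtau = np.linspace(-200.0, 200.0, 401)   # delay Delta tau_l [fs]
R_n  = 1.0 - rho(dtau)                   # normalized coincidence rate, Eq. (6)
print(f"dip width ~ {D * L:.0f} fs")
print(f"dip depth ~ {1.0 - R_n.min():.2f}")   # visibility for these parameters
```

For these illustrative numbers the visibility at the dip center is roughly $ 0.86 $; increasing $ \tau_{Di} $ recovers the cw triangular dip of width $ DL $ with unit visibility.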
When only first-order dispersion in the optical material is taken into account, the shape of the two-photon amplitude $ {\cal A}_{12,l}(t,\tau) $ does not depend on the length $ l $; as $ l $ increases, the amplitude $ {\cal A}_{12,l}(t,\tau) $ moves only in the $ t$-$\tau$ plane. The shift in the $ \tau $-direction is important, because it changes the degree of overlap of the amplitudes. This reveals the origin of the shape of the dip. The overlap of the two-photon amplitudes can be interpreted from the point-of-view of distinguishability of two paths leading to coincidence detection [@Se]. When the overlap is complete, the two paths cannot be distinguished and the interference pattern has maximum visibility. Incomplete overlap means that the paths can be “partially distinguished” and thus the visibility is reduced. We consider, in turn, the role played by pump-pulse duration and chirp, second-order dispersion in the nonlinear down-converting medium, second-order dispersion in the optical elements of the interferometer, and dispersion cancellation. Pump-pulse duration and chirp ----------------------------- In the absence of second-order dispersion and frequency filters, a useful analytical expression for the two-photon amplitude $ A_{12,l=0}(t,\tau) $ can be obtained: $$% 37 A_{12,l=0}(t,\tau) = \frac{ 4 \pi \sqrt{\pi} \sqrt[4]{1+a_i^2} }{ \tau_{Di} |D| } \mbox{rect}\left( \frac{\tau}{DL} \right) \exp \left[ - \frac{ 1 + ia_i}{\tau_{Di}^2} \left( t + \frac{\Lambda}{D} \tau \right)^2 \right] .$$ The coefficients $ D $ and $ \Lambda $ are defined in Eq. (32). Equation (37) elucidates the role of pump-pulse parameters as discussed below. It is well known that for a cw-pump field the coincidence-count rate $ R_n(\tau_l) $ forms a triangular dip of width $ DL $ [@MaWo]. The visibility is 100%, indicating maximum interference. An ultrashort pump pulse of duration $ \tau_{Di} $ leads to a loss of visibility (see Fig.2) but the width of the dip remains unchanged [@Se]. This can be understood from the shape of the two-photon amplitude $ A_{12,l=0}(t,\tau) $ given in Eq.(37). In the $ \tau $-direction the two-photon amplitude is confined to the region $ 0 < \tau < DL $ for either cw or an ultrashort pump pulse; this confinement is responsible for the width of the dip. The two-photon amplitude is confined in the $ t $-direction by the ultrashort pump-pulse duration \[see Eq. (37)\]. The tilt (given by the ratio $ \Lambda / D $, see Eq. (37)) of the amplitude in the $ t$-$\tau $ plane leads to a loss of visibility since the overlap of the amplitudes $ A_{12,l}(t,\tau) $ and $ A_{12,l}(t,- \tau) $ for a given optimum value of $ l $ cannot be complete for a nonzero tilt. The shorter the pump-pulse duration, the smaller the overlap, and the lower values of visibility that result. However, when values of the first-order dispersion parameters are chosen such that $ \Lambda = 0 $, the tilt is zero \[see Eq.(37)\] and no loss of visibility occurs as the pump-pulse duration shortens (for details, see [@Ru]). As indicated by the Eq. (37) for the amplitude $ A_{12,l=0} $, pump-pulse chirp (characterized by $ a_i $) introduces a phase modulation of the two-photon amplitude in the $ t $-direction. This modulation decreases the overall overlap of the corresponding two-photon amplitudes, given as a sum of the overlaps of their real and imaginary parts. Increasing values of the chirp parameter $ a_i $ thus lead to a reduction of visibility. However, the width of the dip does not change. 
In fact, it is the parameter $ b_i $ given in Eq. (30), combining both the pulse duration $ \tau_{Di} $ and the chirp parameter $ a_{i} $, that determines the visibility in the case of a Gaussian pump pulse. To be more specific, the parameter $ b_i $ is determined by the bandwidth $ \Delta\Omega_p $ \[$ \Delta\Omega_p = \sqrt{2} \sqrt{1+a_i^2} / \tau_{Di} $, see Eq. (19)\] of the pump pulse according to the relation $ b_i = 1/ [2 (\Delta\Omega_p)^2] $. Thus, more generally, it is the bandwidth of the pump pulse that determines the interference pattern. As a consequence, dispersion of the pump beam between the pump-pulse source and the nonlinear crystal does not influence the interference pattern. Examination of Eqs. (B3) and (B4) in Appendix B shows that the dip remains symmetric since the function $ \rho(\Delta\tau_l) $ in Eq. (B3) is an even function of $ \Delta\tau_l $ for an arbitrary pump-pulse profile. Frequency filters inserted into the down-converted beams serve to broaden the two-photon amplitude $ A_{12,l}(t,\tau) $ both in the $ t $- and $ \tau $-direction. Broadening in the $ \tau $-direction leads to wider dips, whereas that in the $ t $-direction smooths out the effect of tilt discussed above and thereby results in a higher visibility. The narrower the spectrum of frequency filters, the wider the dip, and the higher the observed visibility. The effect of chirp is suppressed by the presence of frequency filters, because they effectively make the complex pump-pulse spectrum narrower and hence diminish relative phase changes across such a narrowed complex spectrum. Second-order dispersion in the nonlinear crystal ------------------------------------------------ Second-order dispersion in the [*pump beam*]{} causes changes in the pulse phase (chirp) as the pulse propagates and this leads to broadening of the pulse. The effect of such pump-pulse broadening is transferred to the down-converted beams, as is clearly shown by the behavior of the two-photon amplitude $ A_{12,l}(t,\tau) $ illustrated in Fig. 3. In this figure, the amplitude in the region near $ \tau = 0 $ s has its origin near the output plane of the crystal where the pump pulse is already broadened as a result of its having propagated through the dispersive crystal. At the other edge, near $ \tau \approx 6 \times 10^{-13} $ s the down-converted light arises from the beginning of the crystal where the pump pulse has not yet suffered dispersive broadening. The profile of the interference dip is modified as follows: An increase in the second-order dispersion parameter $ D_p $ leads to an increase of visibility, but no change in the width of the dip, as illustrated in Fig. 4(a). For appropriately chosen values of $ D_p $, a small local peak emerges at the bottom of the dip \[see Fig. 4(a)\]. Nonzero initial chirp ($ a_i $) of the pump beam can provide a higher central peak but, on the other hand, it reduces the visibility \[see Fig. 4(b)\]. The peak remains, but is suppressed, in the presence of narrow frequency filters. Now we turn to second-order dispersion in the [*down-converted beams*]{} (nonzero $ D_1 $, $ D_2 $), which broadens the two-photon amplitude $ A_{12,l}(t,\tau) $ in the $ \tau $-, as well as in the $t$-direction. As demonstrated in Fig. 5, this leads to a broadening of the dip, as well as asymmetry and oscillations at its borders. When values of $ D_1 $ increase, the visibility decreases at first and then increases.
Nonzero chirp leads to a lower visibility, but tends to suppress oscillations at the borders of the dip. Frequency filters, which behave as discussed above, suppress asymmetry. When second-order dispersion occurs in all three modes, the two-photon amplitude $ A_{12,l}(t,\tau) $ is broadened for smaller values of $ \tau $ (mainly owing to dispersion in the pump beam) as well as for greater values of $ \tau $ (mainly owing to dispersion in the down-converted beams). As a result, the interference pattern comprises all of the features discussed above: a local peak may emerge at the bottom of the dip, the dip is broadened and asymmetric, and there occur oscillations at the borders of the dip. To observe the above-mentioned effects caused by dispersion in a nonlinear crystal, relatively large values of the dispersion parameters $ D_p $, $ D_1 $, and $ D_2 $ are required. For example, our simulations make use of parameter values that are approximately an order of magnitude higher than those of the BBO crystals commonly used in type-II down-conversion-based interferometric experiments. Second-order dispersion in the interferometer’s optical elements ---------------------------------------------------------------- Second-order dispersion in an optical material ($ d_1 $, $ d_2 $) through which down-converted photons propagate leads to asymmetry of the dip. The dip is particularly stretched to larger values of $ l $ (see Fig. 6) as a consequence of the deformation and lengthening of the two-photon amplitude $ A_{12,l} $ in a dispersive material. The higher the difference $ d_1 - d_2 $ of the dispersion parameters, the higher the asymmetry and the wider the dip; moreover, its minimum is shifted further to smaller values of $ l $ (see Fig. 6). Asymmetry of the dip is also preserved when relatively narrow frequency filters are used, though the narrowest filters remove it. Chirp decreases the visibility, but the shape of the dip remains unchanged. Dispersion cancellation ----------------------- Asymmetry of the dip caused by second-order dispersion in an optical material through which down-converted photons propagate can be suppressed in two cases. In the first case, for a pump pulse of arbitrary duration, dispersion cancellation occurs when the magnitude of second-order dispersion in the path of the first photon (given by $ d_1 l $) equals that of the second photon (given by $ d_2 l $). This observation immediately follows from Eqs. (25) and (26), in which the effect of second-order dispersion is prescribed by the parameter $ (d_1-d_2)l $. Dispersion cancellation is a result of completely destructive interference between the amplitudes $ A_{12,l}(t,\tau) $ and $ A_{12,l}(t,-\tau) $ for which there is nonzero overlap. This is demonstrated in Fig. 7 for $ l=25 $ mm, i.e., for which $ \rho = 0 $. When the pulse duration is sufficiently long (in the cw regime), dispersion cancellation occurs for arbitrary magnitudes of second-order dispersion (given by $ d_1 l $ and $ d_2 l $) present in the paths of the down-converted photons. The gradual suppression of the asymmetry of the dip as the pump-pulse duration increases is shown in Fig. 8. Dispersion cancellation has its origin in the entanglement of the photons, i.e., in the fact that the permitted values of the frequency $ \omega_1 $ and the frequency $ \omega_2 $ are governed by the relation $ \delta(\omega_p - \omega_1 - \omega_2) $, where $ \omega_p $ lies within the pump-pulse spectrum.
Conclusion ========== We have developed a description of two-photon type-II spontaneous parametric down-conversion produced when ultrashort pulses from a femtosecond laser are used to pump an appropriate nonlinear medium, as well as the associated two-photon interference effects. The model includes frequency modulation of the pump pulse (chirp) and dispersion in both the nonlinear crystal and the interferometer’s optical elements. The influence of these features on the depth and asymmetry characteristics of a photon-coincidence interference dip has been established. We showed that the interference pattern is determined by the bandwidth of the pump pulse; the larger the bandwidth, the lower the interference-pattern visibility. This implies that dispersion of the pump beam before the nonlinear crystal does not influence the interference pattern. Second-order dispersion of the pump beam in the nonlinear crystal can result in the occurrence of a local peak at the bottom of the interference dip. Second-order dispersion of the down-converted photons in the crystal can result in oscillations at the borders of the dip, whereas dispersion of the down-converted photons in the interferometer’s optical materials (e.g., the delay line) can produce an asymmetry in the dip. These effects can be used to measure the dispersion parameters of both a nonlinear crystal and an arbitrary optical material. Dispersion cancellation has been revealed for pump pulses of arbitrary duration when the amount of dispersion in the two down-converted beams is identical, and in general for sufficiently long pump pulses. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank J. Peřina and M. Atature for valuable discussions. This work was supported by the National Science Foundation under Grant Nos. ECS-9800300 and ECS-9810355. J. P. acknowledges support from Grant No. VS96028 of the Czech Ministry of Education. Determination of an entangled two-photon state ============================================== The interaction Hamiltonian of the process of spontaneous parametric down-conversion can be written in the form [@MaWo]: $$% A1 \hat{H}_{\rm int}(t) = \int_{-L}^{0} dz \, \chi^{(2)} E^{(+)}_p(z,t) \hat{E}^{(-)}_1(z,t) \hat{E}^{(-)}_2(z,t) + \mbox{h.c.} ,$$ where $ \chi^{(2)} $ is the second-order susceptibility, $ E^{(+)}_p $ denotes the positive-frequency part of the electric-field amplitude of the pump field, and $ E^{(-)}_1 $ ($ E^{(-)}_2 $) is the negative-frequency part of the electric-field operator of down-converted field 1 (2). The nonlinear crystal extends from $ z=-L $ to $ z=0 $. The symbol $ \mbox{h.c.} $ means Hermitian conjugate. Expanding the interacting fields into harmonic plane waves, the interaction Hamiltonian $ \hat{H}_{\rm int} $ in Eq. (A1) can be recast into the form: $$\begin{aligned} % A2 \hat{H}_{\rm int}(t) &=& C_{\rm int} \int_{-L}^{0} dz \, \sum_{k_p} \sum_{k_1} \sum_{k_2} \chi^{(2)} {\cal E}_p^{(+)}(0,\omega_{k_p}-\omega^0_p) \hat{a}_1^{\dagger}(k_1) \hat{a}_2^{\dagger}(k_2) \nonumber \\ & & \mbox{} \times \exp \left[ i (k_p-k_1-k_2)z - i (\omega_{k_p}-\omega_{k_1}- \omega_{k_2}) t \right] + \mbox{h.c.} ,\end{aligned}$$ where $ C_{\rm int} $ is a constant.
The symbol $ {\cal E}_p^{(+)}(0,\omega_{k_p}-\omega^0_p) $ denotes the positive-frequency part of the envelope of the pump-beam electric-field amplitude at the output plane of the crystal; $ k_p $ stands for the wave vector of a mode in the pump beam, and $ \omega^0_p $ stands for the central frequency of the pump beam. The symbol $ \hat{a}_1^{\dagger}(k_1) $ ($ \hat{a}_2^{\dagger}(k_2) $) represents the creation operator of the mode with wave vector $ k_1 $ ($ k_2 $) and frequency $ \omega_{k_1} $ ($ \omega_{k_2} $) in the down-converted field 1 (2). We note that the phases of all three interacting fields in space are chosen in such a way that they are zero at the output plane of the crystal. The wave function $ |\psi^{(2)}(0,t)\rangle $ describing an entangled two-photon state whose phases are set equal to 0 at $ z=0 $ is given by: $$% A3 |\psi^{(2)} (0,t)\rangle = \frac{-i}{\hbar} \int_{-\infty}^{t} dt' \hat{H}_{\rm int} (t') |{\rm vac} \rangle ,$$ where $ |{\rm vac} \rangle $ denotes a multimode vacuum state. For times $ t $ sufficiently long so that the nonlinear interaction is complete, the entangled two-photon state $ |\psi^{(2)}(0,t) \rangle $ can be obtained in the form: $$\begin{aligned} % A4 |\psi^{(2)}(0,t) \rangle &=& C_{\psi} \int_{-L}^{0} dz \, \sum_{k_p} \sum_{k_1} \sum_{k_2} {\cal E}_p^{(+)}(0,\omega_{k_p}-\omega^0_p) \hat{a}_1^{\dagger}(k_1) \hat{a}_2^{\dagger}(k_2) \exp \left[ i(k_p-k_1- k_2) z \right] \nonumber \\ & & \mbox{} \times \delta( \omega_{k_p} - \omega_{k_1} - \omega_{k_2} ) \exp\left[ i(\omega_{k_1} + \omega_{k_2}) t \right] |{\rm vac} \rangle .\end{aligned}$$ The susceptibility $ \chi^{(2)} $ is included in the constant $ C_{\psi} $. We note that for times during which the down-converted fields are being created in the crystal, the appropriate wave function differs from that in Eq. (A4). However, detectors are placed at a sufficiently large distance from the output plane of the crystal to assure that such “partially evolved” states cannot be detected. Interference pattern for an arbitrary pump-pulse profile ======================================================== We assume an arbitrary complex spectrum $ {\cal E}^{(+)}_p(-L,\Omega_p) $ for the envelope of the pump pulse at the input plane of the crystal. We further take into account the effect of second-order dispersion only in the pump beam and assume frequency filters of the same width ($ \sigma_1 = \sigma_2 $). Under these conditions, the normalized coincidence-count rate $ R_n $ in Eq. (6) can be expressed in terms of the autocorrelation function of the pump field. Let us introduce the field $ {\cal E}^{(+)}_{p\sigma}(z, t) $ according to the definition: $$\begin{aligned} % B1 {\cal E}^{(+)}_{p\sigma}(z, t) = \int_{-\infty}^{\infty} d\Omega_p \, {\cal E}^{(+)}_p(-L,\Omega_p) \exp\left[ i \frac{D_p(z+L)}{ 4\pi} \Omega_p^2 \right] \exp \left[ - \frac{\Omega_p^2}{\sigma^2} \right] \exp (-i\Omega_p t) ,\end{aligned}$$ where $ \sigma = \sqrt{2} \sigma_1 $. The above expression describes the propagation of the pump beam through a dispersive material (a multiplicative term describing first-order dispersion is not explicitly included here). Equation (B1) also includes frequency filtering having its origin in the filtering of the down-converted beams and their entanglement with the pump beam. The two-photon amplitude $ {\cal A}_{12,\tau_l}(\tau_1,\tau_2) $ can then be derived from the expression in Eq. 
(16): $$\begin{aligned} % B2 {\cal A}_{12,\tau_l}(\tau_1,\tau_2) &=& \frac{C_{\cal A}}{2} \exp(-i\omega^0_1 \tau_1 ) \exp(-i\omega^0_2 \tau_2) \nonumber \\ & & \hskip -1.5cm \mbox{} \times \sqrt{\pi}\sigma \int_{-L}^{0} dz \, {\cal E}^{(+)}_{p\sigma} (z, (\tau_1 + \tau_l + \tau_2)/2 - \Lambda z) \exp \left[ - \frac{\sigma^2}{16} \left( \tau_1 + \tau_l - \tau_2 + D z \right)^2 \right] , \nonumber \\\end{aligned}$$ where the parameters $ D $ and $ \Lambda $ are defined in Eq. (32) and the relative time delay $ \tau_l $ of the down-converted beams is introduced in Eq. (34). The quantity $ \rho $ given in Eq. (7) then has the form (again it is assumed that $ \omega^0_1 = \omega^0_2 $): $$\begin{aligned} % B3 \rho(\Delta\tau_l) &=& \frac{ |C_{\cal A}|^2 \sqrt{2\pi} \pi \sigma}{4 R_0 } \nonumber \\ & & \hskip -.5cm \mbox{} \times {\rm Re} \left\{ \int_{-L/2}^{L/2} dz_1 \, \int_{-L/2}^{L/2} dz_2 \, \gamma_{\sigma}(z_1,z_2,\Lambda(z_1-z_2)) \exp \left[ - \frac{\sigma^2}{8} \left( \Delta\tau_l + \frac{D}{2}(z_1 + z_2) \right)^2 \right] \right\} , \nonumber \\ & &\end{aligned}$$ where $ \Delta \tau_l $ is defined in Eq. (33). The correlation function $ \gamma_{\sigma}(z_1,z_2,x) $ of two pulsed fields at positions $ z_1 $ and $ z_2 $ is written as $$% B4 \gamma_{\sigma}(z_1,z_2,x) = \int_{-\infty}^{\infty} dt \, {\cal E}^{(+)}_{p\sigma}(z_1 - L/2,t) {\cal E}^{(-)}_{p\sigma}(z_2 - L/2,t + x) .$$ The constant $ R_0 $ occurring in Eq. (B3) is expressed as follows: $$\begin{aligned} % B5 R_0 = \frac{ |C_{\cal A}|^2 \sqrt{2\pi} \pi \sigma}{4} \int_{-L/2}^{L/2} dz_1 \, \int_{-L/2}^{L/2} dz_2 \, \gamma_{\sigma}(z_1,z_2,\Lambda(z_1-z_2)) \exp \left[ - \frac{\sigma^2 D^2}{32} (z_1 - z_2)^2 \right] .\end{aligned}$$ For a Gaussian pulse with the complex spectrum as given in Eq. (19), the correlation function $ \gamma_{\sigma} $ becomes $$\begin{aligned} % B6 \gamma_{\sigma}(z_1,z_2,\Lambda(z_1-z_2)) &=& \frac{\sqrt{\pi} \tau_{Di}^2}{ 2\sqrt{1+a_i^2} } \frac{|\xi_{p}|^2}{\sqrt{ \psi(z_1,z_2)}} \exp \left[ - \frac{\Lambda^2 (z_1-z_2)^2 }{4\psi(z_1,z_2)} \right], \nonumber \\ \psi(z_1,z_2) &=& 2b_i + \frac{2}{\sigma^2} - i \frac{D_p}{4\pi} (z_1-z_2) ,\end{aligned}$$ which, together with Eqs. (B3) and (B5), leads to expressions which agree with those derived from Eqs. (25) and (27). The parameter $ b_i $ is defined in Eq. (30). The experimental setup without frequency filters ($ \sigma \rightarrow \infty $) is of particular interest. In this case, using the identity $ \sqrt{\pi} \sigma \exp( - \sigma^2 y^2 / 4 ) \rightarrow 2\pi \delta(y) $ for $ \sigma \rightarrow \infty $, Eqs. (B3) and (B5) provide a useful expression for the function $ \rho(\Delta\tau_l) $: $$\begin{aligned} % B7 \rho(\Delta\tau_l) &=& \frac{ 1 }{ \gamma_{\infty}(0,0,0) L} {\rm Re} \left\{ \int_{-L/2}^{L/2} dz \, {\rm rect} \left( z/L + 1/2 + 2\Delta\tau_l/(DL) \right) \right. \nonumber \\ & & \mbox{} \times \left. \gamma_{\infty}(z, -z - 2\Delta\tau_l/D, 2z + 2\Delta\tau_l/D ) \right\} ,\end{aligned}$$ where $ {\rm rect}(x) $ is the rectangular function ($ {\rm rect} (x) = 1 $ for $ 0<x<1 $ and $ {\rm rect} (x) = 0 $ otherwise). [12]{} L. Mandel and E. Wolf, [*Optical Coherence and Quantum Optics*]{} (Cambridge Univ. Press, Cambridge, 1995). J. Peřina, Z. Hradil, and B. Jurčo, [*Quantum Optics and Fundamentals of Physics*]{} (Kluwer, Dordrecht, 1994). D. N. Klyshko, [*Photons and Nonlinear Optics*]{} (Gordon and Breach Science Publishers, New York, 1988). A. Joobeur, B. E. A. Saleh, T. S. Larchuk, and M. C. Teich, Phys. Rev. 
A [**53**]{}, 4360 (1996); B. E. A. Saleh, A. Joobeur, and M. C. Teich, Phys. Rev. A [**57**]{}, 3991 (1998). Z. Y. Ou and L. Mandel, Phys. Rev. Lett. [**61**]{}, 50 (1988); Y. H. Shih and C. O. Alley, Phys. Rev. Lett. [**61**]{}, 2921 (1988); Z. Y. Ou and L. Mandel, Phys. Rev. Lett. [**61**]{}, 54 (1988); J. Brendel, E. Mohler, and W. Martienssen, Phys. Rev. Lett. [**66**]{}, 1142 (1991); T. S. Larchuk, R. A. Campos, J. G. Rarity, P. R. Tapster, E. Jakeman, B. E. A. Saleh, and M. C.  Teich, Phys. Rev. Lett. [**70**]{}, 1603 (1993); A. M.  Steinberg, P. G. Kwiat, and R. Y. Chiao, Phys. Rev. Lett. [**71**]{}, 708 (1993); C. K. Hong., Z. Y. Ou, and L. Mandel, Phys. Rev. Lett. [**59**]{}, 1903 (1987); J. G. Rarity and P. R. Tapster, J. Opt. Soc. Am. B 6, 1221 (1989); R. Ghosh, C. K. Hong, Z. Y. Ou, and L. Mandel, Phys. Rev. A [**34**]{}, 3962 (1986); P. G. Kwiat, K. Mattle, H. Weinfurter, A. Zeilinger, A. V. Sergienko, and Y. Shih, Phys. Rev. Lett. [**75**]{}, 4337 (1995); P. W. Milloni, H. Fearn, and A. Zeilinger, Phys. Rev. A [**53**]{}, 4556 (1996). M. D. Reid and D. F. Walls, Phys. Rev. A [**34**]{}, 1260 (1986); P.R. Tapster, J. G. Rarity, and P. C. M. Owens, Phys. Rev. Lett. [**73**]{}, 1923 (1994); T. B. Pittman, Y. H. Shih, A. V. Sergienko, and M. H. Rubin, Phys. Rev. A [**51**]{}, 3495 (1995); P. R. Tapster, J. G. Rarity, and P. C. M. Owens, Phys. Rev. Lett. [**73**]{}, 1923 (1994). T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, Phys. Rev. A [**52**]{}, R3429 (1995); T. B. Pittman, D. V. Strekalov, D. N. Klyshko, M. H. Rubin, A. V. Sergienko, and Y. H. Shih, Phys. Rev. A [**53**]{}, 2804 (1996); D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, Phys. Rev. Lett. [**74**]{}, 3600 (1995); B. M. Jost, A. V. Sergienko, A. F. Abouraddy, B. E. A. Saleh, and M. C. Teich, Opt. Express [**3,**]{} 81 (1998); M. C. Teich and B. E. A. Saleh, Čs. čas. fyz. [**47**]{}, 3 (1997), in Czech. J. D. Franson, Phys. Rev. A [**44**]{}, 4552 (1991); J. G. Rarity and P. R. Tapster, Phys. Rev. A [**45**]{}, 2052 (1992); A. V. Sergienko, M. Atature, B. M. Jost, J. Peřina, Jr., B. E. A. Saleh, and M. C. Teich, Quantum Cryptography with Femtosecond Parametric Down Conversion, 1998 OSA Ann. Meeting Program (Baltimore, MD), Oct. 4-9, 1998, p. 104. D. M. Greenberger, M. A. Horne, A. Shimony, and A. Zeilinger, Am. J. Phys. [**58**]{}, 1131 (1990). D.M. Greenberger, M. A. Horne, and A. Zeilinger, Phys. Today [**46**]{} (8), 22 (1993); S. P. Tewari and P. Hariharan, J. Mod. Opt. [**44**]{} 543 (1997); D. A. Rice, C. F. Osborne, and P. Lloyd, Phys. Lett. A [**186**]{}, 21 (1994); M. D. Reid and W. J. Munro, Phys. Rev. Lett. [**69**]{}, 997 (1992); D. N. Klyshko, Phys. Lett. A [**172**]{}, 399 (1993); J. A. Bergou and M. Hillery, Phys. Rev. A [**55**]{}, 4585 (1997); C. C. Gerry, Phys. Rev. A [**53**]{}, 4591 (1996). M. Zukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert, Phys. Rev. Lett. [**71**]{}, 4287 (1993); A. Zeilinger, M. A. Horne, H. Weinfurter, and M. Zukowski, Phys. Rev. Lett. [**78**]{}, 3031 (1997); J.-W. Pan and A. Zeilinger, Phys. Rev. A [**57**]{}, 2208 (1998). D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, Nature [**390**]{}, 575 (1997). G. Di Guiseppe, L. Haiberger, F. De Martini, and A. V. Sergienko, Phys. Rev. A [**56**]{}, R21 (1997). T. E. Keller and M. H. Rubin, Phys. Rev. A [**56**]{}, 1534 (1997). W. P. Grice and I. A. Walmsley, Phys. Rev. A [**56**]{}, 1627 (1997); W. P. Grice, R. Erdmann, I. A. Walmsley, and D. Branning, Phys. 
Rev. A [**57**]{}, R2289 (1998). Z. Y. Ou, Quant. and Semiclass. Optics [**9**]{}, 599 (1997). C. K. Hong, Z. Y. Ou, and L. Mandel, Phys. Rev. Lett. [**59**]{}, 2044 (1987). T. S. Larchuk, M. C. Teich, and B. E. A. Saleh, Phys. Rev. A [**52**]{}, 4145 (1995). B. E. A. Saleh and M. C. Teich, [*Fundamentals of Photonics*]{} (Wiley, New York, 1991). J.-C. Diels and W. Rudolph, [*Ultrashort Laser Pulse Phenomena*]{} (Academic Press, San Diego, 1996). [^1]: On leave from the Joint Laboratory of Optics of Palacký University and Institute of Physics of Academy of Sciences of the Czech Republic, 17. listopadu 50, 772 07 Olomouc, Czech Republic. [^2]: email: [email protected] [^3]: email: [email protected] [^4]: URL: http://photon.bu.edu/teich/qil/QImaging.html
--- author: - 'Chuan-Tsung Chan$^\dagger$' - 'Hsiao-Fan Liu$^{\ddagger}$' date: - - title: Graphic Enumerations and Discrete Painlevé Equations via Random Matrix Models --- Introduction and motivation =========================== Graphic enumerations, random matrix models and discrete Painlevé equations ========================================================================== Wick theorem for random matrix integrals {#subsec1} ---------------------------------------- Diagrammatic expansion of the partition function of the matrix model -------------------------------------------------------------------- Orthogonal polynomial technique for random matrix integrals ----------------------------------------------------------- Normalization constants and recursive coefficients of the monic orthogonal polynomial ------------------------------------------------------------------------------------- Perturbative expansion of the free energy of the quartic model ============================================================== Topological expansion of the free energy of the quartic model ============================================================= Perturbative expansion of the free energy of the cubic model ============================================================ Topological expansion of the free energy of the cubic model =========================================================== Summary and Conclusion ====================== Acknowledgement {#acknowledgement .unnumbered} =============== Useful Information about the Catalan Numbers {#app:a} ============================================ Useful information for solving the cubic model {#app:b} ==============================================
--- abstract: 'In this article we develop a three-dimensional (3D) analytical model of wireless networks. We establish an analytical expression of the SINR (Signal to Interference plus Noise Ratio) of user equipments (UE), by using a 3D fluid model approach of the network. This model makes it possible to evaluate, in a simple way, the cumulative distribution function of the SINR, and therefore the performance, the quality of service and the coverage of wireless networks, with a high accuracy. The use of this 3D wireless network model, instead of a standard two-dimensional one, in order to analyze wireless networks, is particularly interesting. Indeed, this 3D model makes it possible to establish more accurate performance and quality of service results than a 2D one.' author: - nocite: '[@*]' title: A 3D Spatial Fluid Model for Wireless Networks --- Introduction ============ Most analyses of wireless networks consider a 2D analytical model of the network. However, since the needs are increasing in terms of QoS and performance, and because of the scarcity of the frequency bandwidth, performance forecasts need to be more and more accurate. This is the reason why more accurate analytical models are developed, in particular 3D models of wireless networks, which should be closer to a real network. A large literature has been developed about the modeling of wireless networks [@Bon03] [@Vit95] [@Lagr05] [@KeA05] [@KelCoEurasip10] [@GuK2000] [@LiP11] [@BB09] [@NBK07], with the aim of analyzing the capacity, throughput, coverage and, more generally, the performance of different types of wireless systems such as sensor, cellular and ad-hoc ones. These models of networks are based on i) stochastic geometry: the transmitters are distributed according to a spatial Poisson Process, ii) hexagonal pattern: the transmitting base stations constitute a regular infinite hexagonal grid, iii) a ’fluid’ approach: the interfering transmitters are replaced by a continuum. The hexagonal wireless network model is the most used one ([@Bon03]-[@Vit95]-[@Lagr05]). This model seems rather ’reasonable’ for regular deployments of base stations. However, it is a two-dimensional one: it does not take into account the height of antennas, and the propagation in the three dimensions. Moreover, from an analytical point of view, it is intractable. Therefore, extensive computations are needed to establish performance. Among the techniques developed to perform such computations, Monte Carlo simulations are widely used in conjunction with this model [@Gil91; @Ela05] or numerical computations in hexagonal networks [@Vit94; @Vit95]. More generally, most of the works focus on 2D models of wireless networks. Nevertheless, 3D models of wireless networks were developed and analyzed in terms of capacity [@GuK2000] [@LiP11]. The authors of [@AlGo14] present a 3D geometrical propagation mathematical model. Based on this model, a 3D reference model for MIMO mobile-to-mobile fading channels is proposed. In [@AMo15], the authors propose an exact form of the coverage probability, in two cases: i) interferers form a 3D Poisson Point Process and ii) interferers form a 3D Modified Matern Process. The results established are then compared with a 2D case. The authors of [@KAC15] develop a three-dimensional channel model between radar and cellular base stations, in which the radar uses a two-dimensional antenna array and the base station uses a one-dimensional antenna array. Their approach allows them to evaluate the interference impact.
The 3D wireless model for sensor networks developed in [@XuLyH15] allows the authors to develop an analysis in which the coverage can be improved. The tilt angle is the cornerstone of an algorithm which uses the 3D model they propose. In this paper, we develop a three-dimensional analytical model of wireless networks. We establish a closed-form formula for the SINR. We show that the formula allows analyzing wireless networks with more accuracy than a 2D wireless approach. Comparisons of performance, coverage and QoS results, established with this 3D spatial fluid model and with a classical 2D model, show the wide interest of using this approach instead of a 2D one. The paper is organized as follows. In Section \[model\], we develop the 3D network model. In Section \[3DFluid\], the analytical expression of the SINR by using this 3D analytical fluid model is established. In Section \[valid3D\], the validation of this analytical 3D model is done by comparison with Monte Carlo simulations. Section \[conclusion\] concludes the paper. System Model {#model} ============ We consider a wireless network consisting of $S$ geographical sites, each one composed of 3 base stations. Each antenna covers a sectored cell. We focus our analysis on the downlink, in the context of an OFDMA-based wireless network, with frequency reuse 1. Let us consider: -  ${\mathcal S}=\{1,\ldots,S\}$ the set of geographic sites, uniformly and regularly distributed over a two-dimensional plane. -  ${\mathcal N}=\{1,\ldots,N\}$ the set of base stations, uniformly and regularly distributed over the two-dimensional plane. The base stations are equipped with directional antennas: $N = 3S$. - the antenna height, denoted $h$. - $F$ sub-carriers $f\in\mathcal{F}=\{1,\ldots,F\}$, where $W$ denotes the bandwidth of each sub-carrier. - $P_{f}^{(j)}(u)$ the transmitted power assigned by the base station $j$ to sub-carrier $f$ towards user $u$. - $g_{f}^{(j)}(u)$ the propagation gain between transmitter $j$ and user $u$ in sub-carrier $f$. We assume that time is divided into slots. Each slot consists of a given sequence of OFDMA symbols. As usual at network level, we assume that there is no Inter-Carrier Interference (ICI) so that there is no intra-cell interference. The total amount of power received by a UE $u$ connected to the base station $i$, on sub-carrier $f$, is given by the sum of: a useful signal $P_{f}^{(i)}(u) g_{f}^{(i)}(u)$, an interference power due to the other transmitters $\sum\limits_{j\in\mathcal{N},j\neq i}P_{f}^{(j)}(u) g_{f}^{(j)}(u)$ and thermal noise power $N_{th}$. We consider the SINR $\gamma_{f}(u)$ defined by: $$\label{SINR} \gamma_{f}(u)=\frac{P_{f}^{(i)}(u) g_{f}^{(i)}(u) }{ \sum\limits_{j\in\mathcal{N},j\neq i}P_{f}^{(j)}(u) g_{f}^{(j)}(u) + N_{th}}$$ as the criterion of radio quality. We investigate the quality of service and performance issues of a network composed of sites equipped with 3D directional transmitting antennas. The analyzed scenarios consider that all the subcarriers are allocated to UEs (full load scenario). Consequently, each sub-carrier $f$ of any base station is used and can interfere with the ones of other sites. Since all sub-carriers are independent, we can thus focus on a generic one and drop the index $f$. Expression of the SINR {#SINRexpression} ---------------------- Let us consider the path-gain model $g(R) = KR^{-\eta} A$, where $K$ is a constant, $R$ is the distance between a transmitter $t$ and a receiver $u$, and $\eta > 2$ is the path-loss exponent.
The parameter $A$ is the antenna gain (assuming that receivers have a 0 dBi antenna gain). Therefore, for a user $u$ located at distance $R_i$ from its serving base station $i$, the SINR (\[SINR\]) can be expressed, for each sub-carrier (dropping the index $f$): $$\label{SINRdirect} \gamma(R_i,\theta_i, \phi_i)=\frac{G_0 P KR_i^{-\eta}A(\theta_i,\phi_i) }{ G_0\sum\limits_{j\in\mathcal{N},j\neq i}P KR_j^{-\eta}A(\theta_j,\phi_j) + N_{th}},$$ where: - $P$ is the transmitted power, - $A(\theta_i, \phi_i)$ is the pattern of the 3D transmitting antenna of the base station $i$, and $G_0$ is the maximum antenna gain. - $\theta_j$ is the horizontal angle between the UE and the principal direction of the antenna $j$, - $\phi_j$ is the vertical angle between the UE and the antenna $j$ (see Fig. \[antennes3D-2\]), - $R_i = \sqrt{r_i^2+h^2}$, where $r_i$ represents the projection of $R_i$ on the ground. The gain $G(\theta, \phi)$ of an antenna in a direction $(\theta, \phi)$ is defined as the ratio between the power radiated in that direction and the power that a lossless isotropic antenna would radiate. This property characterizes the ability of an antenna to focus the radiated power in one direction. The parameter $G_0$ in (\[SINRdirect\]) is particularly important for a beamforming analysis. Let us notice that it is determined by considering that the power, which would be transmitted in all directions for a non-directive antenna (with a solid angle of $4\pi$), is transmitted in a solid angle given by the horizontal and the vertical apertures of the antenna. In the ideal case where the antenna emits in a cone defined by $0 \leq \theta \leq \theta_{3dB}$ and $0 \leq \phi \leq \phi_{3dB}$, the gain is given by $\frac{4 \pi}{\int \int A(\theta, \phi) \sin\theta \, d\theta d\phi}$. BS Antenna Pattern {#sub:bsantennapattern} ------------------ In our analysis, we conform to the model of [@ITUR2009] for the antenna pattern (gain, side-lobe level). The antenna pattern applied to our scheme is computed as: $$\begin{aligned} \label{eq:hv_pattern} A_{dB}(\theta, \phi) = -\min \left [ -(A_{h_{dB}}(\theta)+A_{v_{dB}}(\phi)), A_m \right ],\end{aligned}$$ where $A_h(\theta)$ and $A_v(\phi)$ correspond respectively to the horizontal and the vertical antenna patterns. The horizontal antenna pattern used for each base station is given by: $$\begin{aligned} \label{eq:h_pattern} A_{h_{dB}}(\theta)= -\min \left[ 12 \left( \frac{\theta}{\theta_{3dB}} \right)^{2}, A_m \right ],\end{aligned}$$ where: - $\theta_{3dB}$ is the half-power beamwidth (3 dB beamwidth); - $A_m$ is the maximum attenuation. The vertical antenna pattern is given by: $$\begin{aligned} \label{eq:v_pattern} A_{v_{dB}}(\phi)= -\min \left [ 12 \left( \frac{\phi-\phi_{tilt}}{\phi_{3dB}} \right)^{2}, A_m \right ],\end{aligned}$$ where: - $\phi_{tilt}$ is the downtilt angle; - $\phi_{3dB}$ is the 3 dB beamwidth. ![User equipment located at $(r_i, \theta_i)$. It receives a useful power from antenna $i$ and interference power from antenna $j$. []{data-label="antennes3D-2"}](antennes3D-2.png) ### Antenna Pattern in the Network {#antennapatternnetwork} Each site consists of 3 antennas (3 sectors).
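As a reference for what follows, the composite pattern of Eqs. (\[eq:hv\_pattern\])–(\[eq:v\_pattern\]) can be sketched numerically as below. This is a minimal illustration only; the beamwidths, downtilt and maximum attenuation used here are assumptions, not values prescribed by the model.

```python
import numpy as np

# Illustrative parameters (assumptions, not values fixed by the model):
THETA_3DB = 20.0   # horizontal half-power beamwidth, degrees
PHI_3DB   = 10.0   # vertical 3 dB beamwidth, degrees
PHI_TILT  = 30.0   # downtilt angle, degrees
A_M       = 21.0   # maximum attenuation, dB

def a_h_db(theta_deg):
    """Horizontal pattern, Eq. (eq:h_pattern)."""
    return -np.minimum(12.0 * (theta_deg / THETA_3DB) ** 2, A_M)

def a_v_db(phi_deg):
    """Vertical pattern, Eq. (eq:v_pattern)."""
    return -np.minimum(12.0 * ((phi_deg - PHI_TILT) / PHI_3DB) ** 2, A_M)

def a_db(theta_deg, phi_deg):
    """Composite pattern, Eq. (eq:hv_pattern)."""
    return -np.minimum(-(a_h_db(theta_deg) + a_v_db(phi_deg)), A_M)

if __name__ == "__main__":
    for theta, phi in [(0.0, 30.0), (20.0, 30.0), (60.0, 5.0)]:
        print(f"theta={theta:5.1f} deg, phi={phi:5.1f} deg -> A = {a_db(theta, phi):6.2f} dB")
```

Note that, by construction, the total attenuation is capped at $A_m$ whatever the combination of horizontal and vertical offsets.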
For any site $s$ of the network, the three sector antennas satisfy: $$\begin{aligned} \left\{ \begin{array}{ll} A_h(\theta_s^2) = A_h(\theta_s^1 + 2\frac{\pi}{3}) \\ A_h(\theta_s^3) = A_h(\theta_s^1 - 2\frac{\pi}{3}) \\ A_v(\phi_s^1) = A_v(\phi_s^2) = A_v(\phi_s^3), \end{array} \right.\end{aligned}$$ where $\theta_s^a$ and $\phi_s^a$ represent the angles relative to the antenna $a \in \{1,2,3\}$ for the site $s$. For the sake of simplicity, in expression (\[SINRdirect\]) we sum over the base stations (not over the sites) and denote by $\theta_j$ and $\phi_j$ the angles relative to the antenna $j$. ### Vertical Antenna Gain in the Network {#verticalantennagain} For a UE at the distance $r_j$ from the antenna $j$, the vertical angle can be expressed as: $$\label{phij} \phi_j = \arctan \left(\frac{h}{r_j}\right).$$ For interfering antennas, it can be noticed that since $r_j \gg h$, we have $\phi_j = \arctan \left(\frac{h}{r_j}\right) \rightarrow 0$, and $\left( \frac{\phi_j-\phi_{tilt}}{\phi_{3dB}} \right)^{2} \rightarrow \left( \frac{\phi_{tilt}}{\phi_{3dB}} \right)^{2}$. Therefore the vertical antenna pattern (\[eq:v\_pattern\]) can be written as: $$\begin{aligned} \label{eq:v_pattern2} A_{v_{dB}}(\phi)&=& -\min \left [ 12 \left( \frac{\phi-\phi_{tilt}}{\phi_{3dB}} \right)^{2}, A_m \right ] \nonumber \\ &\approx& -\min \left [ 12 \left( \frac{\phi_{tilt}}{\phi_{3dB}} \right)^{2}, A_m \right ] \nonumber \\ &=& G_{v_{dB}},\end{aligned}$$ where $G_{v_{dB}}= -\min \left [ 12 \left( \frac{\phi_{tilt}}{\phi_{3dB}} \right)^{2}, A_m \right ]$ (i.e., a constant). The antenna gain can then be expressed as: $$\begin{aligned} \label{eq:hv_pattern2} A_{dB}(\theta, \phi) &=& -\min \left [ -(A_{h_{dB}}(\theta)+A_{v_{dB}}(\phi)), A_m \right ] \nonumber \\ &=& -\min \left [ -A_{h_{dB}}(\theta) - G_{v_{dB}} , A_m \right ] \nonumber \\ &=& -\min \left [ -A_{h_{dB}}(\theta), A_m + G_{v_{dB}} \right ] + G_{v_{dB}} \nonumber \\ &=& B_{dB}(\theta)+ G_{v_{dB}},\end{aligned}$$ where $B_{dB}(\theta) = -\min \left [ -A_{h_{dB}}(\theta), A_m + G_{v_{dB}} \right ]$. So we have: $$\begin{aligned} \left\{ \begin{array}{ll} B_{dB}(\theta) = -\min \left [ -A_{h_{dB}}(\theta), A_m + G_{v_{dB}} \right ] \\ G_{v_{dB}} = - \min\left[12\left(\frac{\phi_{tilt}}{\phi_{3dB}}\right)^2,A_m\right] \end{array} \right.\end{aligned}$$ Therefore, we establish that, in this case, the vertical antenna gain reduces to the constant $G_{v_{dB}}$, so that the gain of an interfering antenna only depends on the angle $\theta$. 3D Wireless Network Spatial Fluid Model {#3DFluid} ======================================= The main assumption of the fluid network modeling [@KeA05] [@KelCoEurasip10] [@KeCoGo07] consists of replacing a fixed finite number of eNBs by an equivalent continuum of eNBs spatially distributed in the network. Therefore, the transmitting power of the set of interfering eNBs of the network is considered as a continuum field all over the network. Considering a density $\rho_{S}$ of sites and following the approach developed in [@KeA05] [@KelCoEurasip10] [@KeCoGo07], let us consider a UE located at $(R_i, \theta_i, \phi_i)$ in the area covered by the eNB $i$.
Since each site is equipped with 3 antennas, we can express the denominator of (\[SINRdirect\]) as: $$\begin{aligned} \label{Interference} I &=& \int 3 \times \rho_{S} K P R^{-\eta}A(\theta, \phi)t dt d\theta \nonumber \\ &+& P KR_i^{-\eta}\sum_{a=2}^3 A(\theta_i^a, \phi_i^a)+ N_{th},\end{aligned}$$ where the integral represents the interference due to all the other sites of the network, and the discrete sum represents the interference due to the 2 antennas (eNB) co-localized with the eNB $i$. The index $a$ holds for these 2 antennas. This can be further written as: $$\begin{aligned} I &=& \int P \rho_{S} K (t^2+h^2)^{-\frac{\eta}{2}} t dt \times 3 \int A (\theta, \phi)d\theta \nonumber \\ &+& P K (r_i^2+h^2)^{-\frac{\eta}{2}} \sum_{a=2}^3 A(\theta_i^a, \phi_i^a)+ N_{th}\end{aligned}$$ Since, for the other eNBs of the network, the distance $ r \gg h $, we have $(t^2+h^2)^{-\frac{\eta}{2}} = t^{-\eta} (1+h^2/t^2)^{-\frac{\eta}{2}} \approx t^{-\eta}$, and the interference can be approximated by using (\[eq:hv\_pattern2\]): $$\begin{aligned} \label{Interference2} I &=& \int P \rho_{S} K t^{-\eta} t dt G_v \times 3 \int_0^{2 \pi} B (\theta) d\theta \nonumber \\ &+& P K (r_i^2+h^2)^{-\frac{\eta}{2}} \sum_{a=2}^3 A(\theta_i^a, \phi_i^a)+ N_{th}\end{aligned}$$ 3D Analytical Fluid SINR Expression {#3DFluid2} ----------------------------------- The approach developed in [@KeCoGo07] [@KeCoGo10] makes it possible to express $\int P \rho_{S} Kr^{-\eta} t dt$ as $\frac{\rho_{S} P K(2R_c-r_i)^{2-\eta}}{\eta-2}$, where $2 R_c$ represents the intersite distance (ISD). We refer the reader to [@KelCoEurasip10] [@KeCoGo07] [@KeCoGo10] for the detailed explanation and validation through Monte Carlo simulations. Therefore, (\[Interference2\]) can be expressed as: $$\begin{aligned} \label{interferencefluid} I &=& \frac{3 G_v P K (2R_c-r_i)^{2-\eta}}{\eta-2} \rho_{S} \int_0^{2 \pi} B (\theta) d\theta \nonumber \\ &+& P K (r_i^2+h^2)^{-\frac{\eta}{2}} \sum_{a=2}^3 A(\theta_i^a, \phi_i^a)+ N_{th}\end{aligned}$$ For a UE located at $(r,\theta,\phi)$ (dropping the index $i$), relative to its serving eNB, the inverse of the SINR (\[SINRdirect\]) is finally given by the expression: $$\begin{aligned} \label{SINRfluid2} \frac{1}{\gamma(r,\theta, \phi)}&=& \frac{3 G_v\rho_{S} (2R_c-r)^{2-\eta}}{(\eta-2)(r^2 +h^2)^{-\eta/2}} \frac{\int_0^{2 \pi} B (\theta) d\theta}{A(\theta, \phi)} \nonumber \\ &+& \frac{\sum_{a=2}^3 A(\theta^a, \phi^a)}{A(\theta, \phi)} \nonumber \\ &+& \frac{N_{th}}{G_0 P K (r^2+h^2)^{-\eta/2} A(\theta, \phi)},\end{aligned}$$ where the index $a$ holds for the 2 antennas (eNB) co-localized with the serving antenna (eNB). Interest of the SINR Analytical Formula --------------------------------------- The closed-form formula (\[SINRfluid2\]) allows the calculation of the SINR in an easy way. First of all, it only depends on the distance of a UE to its serving eNB (not on the distances to every eNB, as in the SINR expression (\[SINRdirect\])). This formula highlights the network characteristic parameters which have an impact on the SINR (path loss parameter, inter-site distance, antenna gain). Owing to the tractability of this formula, only a simple numerical calculation is needed.
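To illustrate this tractability, a minimal numerical sketch of Eq. (\[SINRfluid2\]) is given below. It evaluates the closed-form inverse SINR for a UE served by one sector of a site; the site density model, transmit power, path-loss constant, noise level, maximum antenna gain and antenna settings are illustrative assumptions rather than values prescribed by the model, and the azimuth integral of $B(\theta)$ is computed numerically over the offset from boresight.

```python
import numpy as np

# --- Illustrative parameters (assumptions, not values fixed by the model) ---
ETA   = 3.5              # path-loss exponent
K     = 1e-2             # path-loss constant
P_TX  = 1.0              # transmitted power per sub-carrier (arbitrary units)
N_TH  = 1e-13            # thermal noise power (arbitrary units)
H     = 30.0             # antenna height (m)
ISD   = 750.0            # inter-site distance 2*Rc (m)
R_C   = ISD / 2.0
RHO_S = 2.0 / (np.sqrt(3.0) * ISD ** 2)   # site density for a triangular site layout (assumption)
THETA_3DB, PHI_3DB = np.deg2rad(20.0), np.deg2rad(10.0)
PHI_TILT, A_M      = np.deg2rad(30.0), 21.0
G0 = 10.0 ** (18.0 / 10.0)                # maximum antenna gain, assumed 18 dBi

def db(x):
    return 10.0 ** (x / 10.0)

def a_h_db(theta):
    return -np.minimum(12.0 * (theta / THETA_3DB) ** 2, A_M)

def a_v_db(phi):
    return -np.minimum(12.0 * ((phi - PHI_TILT) / PHI_3DB) ** 2, A_M)

def a_lin(theta, phi):
    """Composite pattern of Eq. (eq:hv_pattern), linear scale."""
    return db(-np.minimum(-(a_h_db(theta) + a_v_db(phi)), A_M))

G_V_DB = -min(12.0 * (PHI_TILT / PHI_3DB) ** 2, A_M)            # Eq. (eq:v_pattern2)

def b_lin(theta):
    """B(theta) of Eq. (eq:hv_pattern2), linear scale."""
    return db(-np.minimum(-a_h_db(theta), A_M + G_V_DB))

# numerical value of the azimuth integral of B(theta), offset wrapped to (-pi, pi]
_th = np.linspace(-np.pi, np.pi, 4001)
INT_B = np.mean(b_lin(_th)) * 2.0 * np.pi

def inv_sinr(r, theta):
    """Inverse SINR of Eq. (SINRfluid2) for a UE at (r, theta) from its serving sector."""
    phi = np.arctan(H / r)                                      # Eq. (phij)
    a_serv = a_lin(theta, phi)
    # the two co-sited sectors point at +/- 2*pi/3 from the serving one
    a_cosited = a_lin(theta - 2 * np.pi / 3, phi) + a_lin(theta + 2 * np.pi / 3, phi)
    fluid = 3.0 * db(G_V_DB) * RHO_S * (2 * R_C - r) ** (2 - ETA) \
            / ((ETA - 2.0) * (r ** 2 + H ** 2) ** (-ETA / 2.0)) * INT_B / a_serv
    own_site = a_cosited / a_serv
    noise = N_TH / (G0 * P_TX * K * (r ** 2 + H ** 2) ** (-ETA / 2.0) * a_serv)
    return fluid + own_site + noise

if __name__ == "__main__":
    for r in (50.0, 150.0, 300.0):
        g = 1.0 / inv_sinr(r, theta=0.0)
        print(f"r = {r:6.1f} m -> SINR = {10 * np.log10(g):6.2f} dB")
```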
Throughput Calculation ---------------------- The SINR allows the maximum theoretical achievable throughput $D_u$ of a UE $u$ to be calculated by using the Shannon expression. For a bandwidth $W$, it can be written: $$\label{Dugamma} D_u = W \log_2(1+\gamma_u)$$ **Remarks:** In the case of realistic wireless network systems, it can be noticed that the mapping between the SINR and the achievable throughput is established by means of *level curves*.\ Validation of the analytical formula {#valid3D} ==================================== The validation of the analytical formula (\[SINRfluid2\]) consists of comparing the results established by this formula with the ones obtained by Monte Carlo simulations. Assumptions {#assumptionsantanna2D} ----------- Let us consider: - A hexagonal network composed of sectored sites; - Three base stations per site; - The 2D model: the antenna gain of a transmitting base station is given in dB by: $$\label{GTheta} G_T(\theta) = - \min\left[12\left(\frac{\theta}{\theta_{3dB}}\right)^2, A_m \right],$$ where $\theta_{3dB}$ = $70^\circ$ and $A_m =21$ dB; - The 3D model: the antenna gain of a transmitting base station is given by expressions (\[eq:hv\_pattern\]) (\[eq:h\_pattern\]) (\[eq:v\_pattern\]); - Analyzed scenarios corresponding to realistic situations in a network: - Urban environment: Inter-Site Distance ISD = 200 m, 500 m and 750 m, - Antenna tilts: $20^\circ$, $30^\circ$, $40^\circ$. Simulations vs 3D Analytical Model ---------------------------------- User equipments are randomly distributed in a cell of a 2D hexagonal network (Fig. \[hexagonaljpg\]). This hexagonal network is equipped with antennas of a given height (30 m and 50 m in our analysis) in the third dimension. Monte Carlo simulations are done to calculate the SINR for each UE. We focus our analysis on a typical hexagonal site. The cumulative distribution function (CDF) of the SINR can be established by using these simulations. These curves are compared to the ones established by using the analytical formula (\[SINRfluid2\]) to calculate the SINR values. Moreover, the SINR values obtained by the two approaches are drawn on figures representing a site with three antennas. We present two types of comparisons. We first establish the CDF of the SINR. Indeed, the CDF of SINR provides a lot of information about the network characteristics: the coverage and the outage probability, the performance distribution, and the quality of service that can be reached by the system. As an example, Fig. 3 shows that for an outage probability target of 10%, the SINR reaches -8 dB, which corresponds to a given throughput. A second comparison, focused on the values of the SINR at each location of the cell, establishes a map of SINR over the cell. ![Hexagonal network: location of the 3 sectors base stations in the plan. The $X$ and $Y$ axes represent the coordinates, in meters. The intersite distance in this example is 750 m.[]{data-label="hexagonaljpg"}](hexagonalnetwork.png) Results of the Validation ------------------------- For the validation, we compare the two methods by considering realistic values of network parameters. An urban environment with realistic propagation parameters is simulated [@ITUR2009]. Different tilts and apertures are considered.
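As a usage illustration of the procedure just described, the short sketch below reuses the `inv_sinr` helper and parameters from the previous listing to drop UEs uniformly at random around the serving sector, build the empirical CDF of the SINR, and read off the SINR at a 10% outage target together with the corresponding Shannon throughput of Eq. (\[Dugamma\]). The drop region and the sub-carrier bandwidth are assumptions made only for this illustration.

```python
import numpy as np
# assumes inv_sinr() and the parameters of the previous sketch are in scope

rng = np.random.default_rng(0)
N_UE = 20000
W = 15e3                                     # sub-carrier bandwidth in Hz (assumption)

# Uniform drop over a 120-degree sector, radii between 10 m and R_C (assumption)
theta = rng.uniform(-np.pi / 3, np.pi / 3, N_UE)
r = np.sqrt(rng.uniform(10.0 ** 2, R_C ** 2, N_UE))     # uniform in area

gamma_db = 10.0 * np.log10([1.0 / inv_sinr(ri, ti) for ri, ti in zip(r, theta)])

# Empirical CDF points (x sorted SINR values, y cumulative probability)
cdf_x = np.sort(gamma_db)
cdf_y = np.arange(1, N_UE + 1) / N_UE

sinr_at_10pct = np.percentile(gamma_db, 10.0)
throughput = W * np.log2(1.0 + 10.0 ** (sinr_at_10pct / 10.0))   # Eq. (Dugamma)
print(f"SINR at 10% outage: {sinr_at_10pct:.1f} dB -> {throughput / 1e3:.1f} kbit/s per sub-carrier")
```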
The scenarios, summarized in Tab. \[tab\_scenariofigures\], show that the 3D fluid analytical model and the simulations provide very close values of SINR:

  **Scenario**   **$\phi_{tilt}^{(\circ)}$**   $\phi_{3dB}^{(\circ)}$   $\theta_{3dB}^{(\circ)}$   ISD (m)   h (m)   **Figures**
  -------------- ----------------------------- ------------------------ -------------------------- --------- ------- -------------
  Scenario 1     30                            10                       10                         500       50      3-4
  Scenario 2     30                            10                       20                         750       30      5-6-7
  Scenario 3     20                            10                       10                         750       30      8-9
  Scenario 4     20                            10                       40                         750       50      10-11
  Scenario 5     40                            30                       20                         750       30      12-13
  Scenario 6     40                            10                       20                         200       50      14-15

  : Scenarios and Figures[]{data-label="tab_scenariofigures"}

### CDF of SINR The figures of scenario 1 (Fig. 3), scenario 2 (Fig. 5 and 6), scenario 3 (Fig. 8), scenario 4 (Fig. 10), scenario 5 (Fig. 12) and scenario 6 (Fig. 14) show that the analytical model (blue curves) and the simulations (red curves) provide very close CDF of SINR curves. ### Map of SINR The figures of scenario 1 (Fig. 4), scenario 2 (Fig. 7), scenario 3 (Fig. 9), scenario 4 (Fig. 11), scenario 5 (Fig. 13) and scenario 6 (Fig. 15) represent the values of SINR in each location of a cell, where the $X$ and $Y$ axes represent the coordinates (in meters). These figures show that the analytical model (right side) and the simulations (left side) provide very close maps of SINR. ![Comparison of CDF of SINR for $\phi_{tilt} = 30^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 10^\circ$.[]{data-label="tilt30phi3dB10theta3dB10"}](cdfsinr30-10-10FluidSimuISD500mh50m.png) ![Simulation (left) and Analytical (right) Map of the SINR for $\phi_{tilt} = 30^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 10^\circ$.[]{data-label="AnalyticMaptilt30phi3dB10theta3dB10"}](figSimuAnalyti30-10-10.png) ![Comparison of CDF of SINR for $\phi_{tilt} = 30^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} =20^\circ$.[]{data-label="tilt30phi3dB10theta3dB20"}](cdfsinr30-10-20ComparSimuFluid3D.png) ![Zoom on the upper part of the CDF (Fig \[tilt30phi3dB10theta3dB20\]) where $\phi_{tilt} = 30^\circ$, $\phi_{3dB} = 10^\circ$ and $\theta_{3dB} = 20^\circ$.[]{data-label="Focustilt30phi3dB10theta3dB20"}](cdfFocusSinr30-10-20ComparSimuFluid3D.png) ![Simulation (left) and Analytical (right) Map of the SINR for $\phi_{tilt} = 30^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 20^\circ$.[]{data-label="AnalyticMaptilt30phi3dB10theta3dB20"}](figSimuAnalyti30-10-20.png) ![Comparison of CDF of SINR for $\phi_{tilt} = 20^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 10^\circ$.[]{data-label="tilt20phi3dB10theta3dB10"}](cdfsinr20-10-10ComparSimuFluid3D.png) ![Simulation (left) and Analytical (right) Map of the SINR for $\phi_{tilt} = 20^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 10^\circ$.[]{data-label="SimuAnalyticMaptilt20phi3dB10theta3dB10"}](figSimuAnalyti20-10-10.png) ![Comparison of CDF of SINR for $\phi_{tilt} = 20^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 40^\circ$.[]{data-label="tilt20phi3dB10theta3dB40"}](cdfsinr20-10-40FluidSimuISD750mh50m.png) ![Simulation (left) Analytical (right) Map of the SINR for $\phi_{tilt} = 20^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} =
40^\circ$.[]{data-label="AnalyticMaptilt20phi3dB10theta3dB40"}](figSimuAnalyti20-10-40.png) ![Comparison of CDF of SINR for $\phi_{tilt} = 40^\circ$, a vertical aperture $\phi_{3dB} = 30^\circ$ and an horizontal aperture $\theta_{3dB} = 20^\circ$.[]{data-label="tilt40phi3dB30theta3dB20ISD750H30"}](cdfsinr40-30-20FluidSimuISD750mh30m.png) ![Simulation (left) and Analytical (right) Map of the SINR for $\phi_{tilt} = 40^\circ$, a vertical aperture $\phi_{3dB} = 30^\circ$ and an horizontal aperture $\theta_{3dB} = 20^\circ$.[]{data-label="AnalyticMaptilt40phi3dB30theta3dB20ISD750H30"}](figSimuAnalyti40-30-20.png) ![Comparison of CDF of SINR for $\phi_{tilt} = 40^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 20^\circ$.[]{data-label="tilt40phi3dB10theta3dB20ISD200H50"}](cdfsinr40-10-20ComparSimuFluid_ISD200mh50m.png) ![Simulation (left) and Analytical (right) Map of the SINR for $\phi_{tilt} = 40^\circ$, a vertical aperture $\phi_{3dB} = 10^\circ$ and an horizontal aperture $\theta_{3dB} = 20^\circ$.[]{data-label="SimuAnalyticMaptilt40phi3dB10theta3dB20ISD200H50"}](figSimuAnalyti40-10-20.png) Limitation of the 3D Fluid Model of Wireless Network {#LimitModel} ---------------------------------------------------- The aim of our analysis is to propose a model allowing the evaluation of the performance, quality of service and coverage reachable in a cell whose standard antennas are replaced by 3D antennas, taking into account the height of the antenna and its tilt. Therefore, the whole antenna energy is focused on the cell. This implies that the angle $\phi_{3dB}$ has to be lower than $\phi_{tilt}$, otherwise UEs belonging to other cells could be served by this antenna. The validation process was done according to this constraint. However, the analytical closed-form formula (\[SINRfluid2\]) makes it possible to establish CDFs of SINR very close to the simulated ones, for the different values of $\phi_{tilt}$, vertical apertures $\phi_{3dB}$ and horizontal apertures $\theta_{3dB}$, as soon as $\phi_{tilt}\geq \phi_{3dB}$. Moreover, the SINR maps given by simulations and by the formula are also very close. Therefore, the formula is particularly well adapted for 3D wireless network analysis. Conclusion ========== We develop, and validate, a three-dimensional analytical wireless network model. This model allows us to establish a closed-form formula for the SINR reached by a UE at any location of a cell, for a 3D wireless network. The validation of this model, by comparison with Monte Carlo simulation results, shows that the two approaches establish very close results, in terms of CDF of SINR, and also in terms of SINR map of the cell. This model may be used to analyze wireless networks, in a simple way, with a higher accuracy than a classical 2D approach. [1]{} A. G. Zajic and G. L. Stiiber 3-D MIMO Mobile-to-Mobile Channel Simulation, 2014 T. Bonald and A. Proutiere, Wireless Downlink Data Channels: User Performance and Cell Dimensioning, ACM Mobicom 2003 A. J. Viterbi, CDMA - Principles of Spread Spectrum Communication, Addison-Wesley, 1995. X. Lagrange, Principes et évolutions de l’UMTS, Hermes, 2005. K. S. Gilhousen, I. M. Jacobs, R. Padovani, A. J. Viterbi, L. A. Weaver, and C. E. Wheatley, On the Capacity of Cellular CDMA System, IEEE Trans. on Vehicular Technology, Vol. 40, No. 2, May 1991. S. E. Elayoubi and T. Chahed, Admission Control in the Downlink of WCDMA/UMTS, Lect. notes comput. sci., Springer, 2005. J-M. Kelif and E.
Altman, Downlink Fluid Model of CDMA Networks, Proc. of IEEE VTC Spring, May 2005. Jean-Marc Kelif, Marceau Coupechoux, Philippe Godlewski, A Fluid Model for Performance Analysis in Cellular Networks, EURASIP Journal on Wireless Communications and Networking, Vol. 2010, Article ID 435189, doi:10.1155/2010/435189 A. J. Viterbi, A. M. Viterbi, and E. Zehavi, Other-Cell Interference in Cellular Power-Controlled CDMA, IEEE Trans. on Communications, Vol. 42, No. 2/3/4, Freb/Mar/Apr. 1994. S. Kaiser, Spatial Transmit Diversity Techniques for Broadband OFDM Systems, Proc. of Globecom, 2000. J.-M. Kelif, M. Coupechoux and P. Godlewski, Spatial Outage Probability for Cellular Networks, Proc. of GLOBECOM, 2007. Report ITU-R M.2135-1, Guidelines for evaluation of radio interface technologies for IMT-Advanced, 12/2009 3GPP TSG-RAN1 WG1, LTE Downlink Performance, Conference Call, Apr 24th 2007. R1-071978. J-M. Kelif, M. Coupechoux and P. Godlewski, On the Dimensioning of Cellular OFDMA Networks, Physical Communication Journal, Ref : PHYCOM118, online October 2011, DOI : 10.1016/j.phycom.2011.09.008. P. Gupta and P. Kumar, “Internets in the sky: capacity of 3d wireless networks,” ser. IEEE CDC 2000, 2000, pp. 2290–2295. P. Li, M. Pan, and Y. Fang, “The capacity of three-dimensional wireless ad hoc networks,” ser. IEEE INFOCOM 2011, 2011, pp. 1485–1493. F. Baccelli and B. Blaszczyszyn, Stochastic Geometry and Wireless Networks. NOW publishers, 2009. H. Q. Nguyen, F. Baccelli, and D. Kofman, “A stochastic geometry analysis of dense IEEE 802.11 networks,” in IEEE INFOCOM, 2007, pp. 1199–1207 A. Mouradian, Modeling Dense Urban Networks with 3D Stochastic Geometry, arXiv:1509.03075 A. Khawar, A. Abdelhadi, and T. C. Clancy, A Three-dimensional (3D) Channel Modeling between Seaborne MIMO Radar and MIMO Cellular System, arXiv:1504.04333 S. Xu, W. Lyu, H.Li, Optimizing coverage of 3D Wireless Multimedia Sensor Networks by means of deploying redundant sensors, arXiv:1509.04823
--- abstract: 'In pursuit of sketching the effective magnetized QCD phase diagram, we find conditions on the critical coupling for chiral symmetry breaking in the Nambu–Jona-Lasinio model in a nontrivial thermo-magnetic environment. Critical values of the plasma parameters, namely, the temperature and magnetic field strength for this to happen, are hence found in the mean field limit. The magnetized phase diagram is drawn from the criticality condition for different models of the effective coupling describing the inverse magnetic catalysis effect.' author: - Angelo Martínez and Alfredo Raya title: Critical chiral hypersurface of the magnetized NJL model --- Introduction ============ Understanding the behavior of quantum chromodynamics (QCD) in a nontrivial thermo-magnetic environment of quarks and gluons is both a very hot topic and a hard nut to crack (a recent review on the sketch of the magnetized QCD phase diagram can be found in Ref. [@review]). These extreme conditions are met, for instance, in compact stars and in peripheral relativistic heavy ion collisions, which give rise to magnetic fields of around $m_\pi^2$ at RHIC and $15m_\pi^2$ at LHC [@skokov09]. Although these fields are short-lived [@skokov2], in order to study their effect on the chiral transition, a common starting point is to regard them as uniform in space and time, in such a way that the scenario for the said transition is impacted by the magnetic field strength in a non-trivial manner as the heat bath temperature is increased. In this view, in an attempt to sketch the magnetized QCD phase diagram, after neglecting density effects, lattice QCD simulations available in Refs. [@bali:2012a; @bali:2012b; @bali:2014] reveal, on the one hand, that the chiral condensate grows with the magnetic field strength for temperatures below $T_0$, the pseudocritical temperature for the chiral transition in the absence of the magnetic field, in accordance with the universal phenomenon of magnetic catalysis (MC) (see Ref. [@shovkovy] for a review). On the other hand, for $T\simeq T_0$, a turnover behavior sets in and the pseudocritical transition temperature decreases as the strength of the magnetic field increases. This phenomenon has been dubbed the Inverse Magnetic Catalysis (IMC) effect. A plausible explanation for this phenomenon is that the strong QCD coupling exhibits a non-trivial thermomagnetic behavior such that the competition between the magnetic field strength and temperature causes the coupling to reach its asymptotically free limit in an accelerated manner. Establishing the detailed properties of the QCD coupling as a function of the magnetic field strength is a formidable task, mostly because the effects of the field belong to the non-perturbative domain, where the strong coupling is less known (see, for instance, Refs. [@review; @miranskyreview; @shov]). Deriving the properties of the coupling constant from vertex corrections in a weak magnetic field has been recently done in Ref. [@ayala-vertex], for instance. Nevertheless, effective models might still be able to capture general features of the running of the coupling with $B$. Within the Linear Sigma models (LSM), the full thermomagnetic dependence of the self-coupling [@ayalaetal:LSM1] as well as the quadratic and quartic couplings [@ayalaetal:LSM2] have been obtained including the medium screening effects properly.
These findings capture the basic traits of the IMC effect, namely, decreasing of the couplings with increasing magnetic field for $T>T_0$ as well as the critical temperatures for chiral symmetry restoration. Without incorporating those effects, it is not possible to reproduce the growth and decreasing of the chiral condensate with the magnetic field strength for temperatures smaller and larger than $T_0$, respectively. Nambu–Jona-Lasinio (NJL) model [@NJL] (see Refs. [@klevansky; @buballa] for reviews), its extensions [@PNJL] and non-local variants [@nNJL] have widely been used to capture some non-perturbative features of QCD. The local model is non-renormalizable and the coupling needs to exceed a critical value in order to be able to describe chiral symmetry breaking. Furthermore, in the mean field limit, a medium-independent coupling constant fails to incorporate important dynamics to be able to reproduce the traits of IMC. A non-trivial dependence of the plasma parameters for the coupling is required to explain the said phenomenon. Based on lattice simulations, there have been attempts to describe a nontrivial thermomagnetic behavior of the coupling in the local NJL model [@pade; @aftab; @allofus1; @allofus2; @gastao; @costa]. For the non-local extensions, early works [@marcelonnjl] establish that for weak magnetic fields, IMC effects are not observed when the coupling is considered in its mean field limit. For strong magnetic fields, however, this type of models have recently been observed to offer a natural explanation to the IMC effect [@norberto] in the sense that the way form factors depend on the magnetic field correspond to a backreaction effect of the sea quarks on the gluon fields that makes the interaction strength decrease as the magnetic field strength increases. Furthermore, these models provide clues for the simultaneity of the chiral and confinement/deconfinement transitions when models are coupled to a Polyakov loop potential. In the present article we study the boundary of the critical chiral hypersurface in parameter space of the magetized NJL model to determine the conditions on the coupling constant for which chiral symmetry breaking takes place in terms of the plasma parameters, namely, temperature and magnetic field strength. In the limiting case of a medium independent coupling constant, we are able to identify the critical values of the temperature and magnetic field strength enough to break chiral symmetry. Some indirect hints of IMC are explicitly observed even in this regime insofar as an entirely different analytic behavior of the coupling as a function of the magnetic field strength for temperatures above and below $T_0$ develops. We further explore the magnetized phase diagram along this critical chiral hypersurface assuming a dressing of the coupling by the plasma as is modeled to describe IMC in Refs. [@pade; @aftab; @allofus1; @allofus2; @gastao; @costa]. For this purpose, we have organized the remaining of the article as follows: In Sec. \[sec:gap\] we derive the details of the gap equation for the magnetized NJL model within the proper-time regularization scheme. Section \[sec:surf\] is devoted to obtain the critical hypersurface for chiral symmetry breaking of the model in parameter space. A criticality condition among the parameters of the medium is established that divides symmetric from asymmetric domains in parameter space according to chiral symmetry. The phase diagram obtained from this criticality condition is sketched in Sec. 
\[sec:comp\] for several proposals of the coupling which are dressed by the medium in accordance with the IMC effect. Concluding remarks are presented in Sect. \[sec:concl\]. Gap Equation {#sec:gap} ============ ![\[fig:Schwinger-Dyson-equation\]Schwinger-Dyson equation corresponding to the Hartree-Fock approximation.]("ecuaciondysongap".pdf){width="\linewidth"} One of the first successful attempts to describe strong interaction is the model put forward by Nambu and Jona-Lasinio in analogy with superconductivity [@NJL]. Such a model is described through the Lagrangian $${\cal L}=\bar\psi \left( i {\not \! \partial} -m_q\right) \psi + G\left[ (\bar\psi\psi)^2+(\bar\psi i\gamma_5\vec\tau\psi)^2\right]\;,$$ where the fields $\psi$ are in modern literature regarded as quark fields with current mass $m_q$ and $G$ is the (dimensionful) coupling of the model. Here, $\vec\tau$ correspond to the Pauli matrices acting on isospin space. Within the Hartree-Fock approximation, the corresponding Schwinger-Dyson (gap) equation for the quark propagator is pictorially described in Fig. \[fig:Schwinger-Dyson-equation\], which can be expressed as $$m=m_{q}-2G\braket{\bar{\psi}\psi},\label{eq:gapgral}$$ where $m$ is the dynamically generated mass and $-\braket{\overline{\psi}\psi}$ is the chiral condensate, defined as, $$-\braket{\bar{\psi}\psi}=\int\frac{d^4p}{(2\pi)^4}{\rm Tr}\left[iS\left(p\right)\right].\label{eq:condensate}$$ Here, $$S(p)=\frac{1}{{\not \! p}-m}$$ is the dressed quark propagator with $m$ the dynamically generated mass, which is momentum independent. Even though we consider isospin symmetric light quark flavors, in what follows we consider the gap equation for a single light quark flavor and set $N_f=1$. The trace, however, runs over Dirac and color spaces. As mentioned before, the model is non-renormalizable, and therefore, integrals must be regulated. We adopt the proper-time regularization scheme [@schwinger], which allows to incorporate magnetic field effects straightforward. We follow the procedure first used in Refs. [@allofus1; @allofus2], which we briefly describe in here. Considering a uniform magnetic field of strength $B$ aligned with the third spatial axis, the translationally invariant part of the quark propagator adopts its Schwinger representation [@schwinger] $$\begin{aligned} S(p) & =-i\int_{\Lambda}^{\infty}\frac{ds}{\cos(q_{f}Bs)}e^{is\left(p_{\parallel}^{2}-p_{\perp}^{2}\frac{\tan(q_{f}Bs)}{q_{f}Bs}-m^{2}\right)}\nonumber \\ & \bigg\{(\cos(q_{f}Bs)+\gamma_{1}\gamma_{2}\sin(q_{f}Bs))(m+\slashed p_{\parallel})\nonumber \\ & -\frac{\slashed p_{\perp}}{\cos(q_{f}Bs)}\bigg\},\label{eq:prop}\end{aligned}$$ were $q_{f}$ is the absolute value of the quark charge and $p_{\parallel}$ and $p_{\perp}$ are the parallel and perpendicular components of the four-momentum, defined as: $$\begin{aligned} p_{\parallel}^{\mu} & =&\left(p_{0},0,0,p_{3}\right)\;,\nonumber \\ p_{\perp}^{\mu} & =&\left(0,p_{1},p_{2},0\right),\end{aligned}$$ and $\Lambda$ is the proper time cut-off, which has canonical dimensions of $[{\rm Mass}]^{-2}$. Notice that the Schwinger phase cancels in obtaining the gap equation and therefore we omit it hereafter. Plugging Eq. (\[eq:prop\]) into Eq. 
(\[eq:condensate\]) and taking the trace, we obtain the chiral condensate for a single quark flavor, $$\begin{aligned} -\braket{\bar{\psi}\psi}&=&\nonumber\\ &&\hspace{-15mm}4N_{c}m\int\frac{d^{4}p}{(2\pi)^{4}}\int_{\Lambda}^{\infty}ds\,e^{is\left(p_{\parallel}^{2}-p_{\perp}^{2}\frac{\tan(q_{f}Bs)}{q_{f}Bs}-m^{2}\right)},\label{eq:condensateM}\end{aligned}$$ where $N_c=3$ is the number of colors. Performing the Gaussian integrals over the perpendicular components of momentum and upon Wick rotating to Euclidean space through the replacement $p_{0}\longrightarrow ip_{0}$, we obtain $$-\braket{\bar{\psi}\psi}=\frac{N_{c}mq_{f}B}{\pi}\int\frac{d^{2}p_{\parallel}}{(2\pi)^{2}}\int_{\Lambda}^{\infty}ds\,\frac{e^{-is\left(p_{\parallel}^{2}+m^{2}\right)}}{\tan(q_{f}Bs)}.$$ Now, in order to take into account thermal effects, we use the Matsubara formalism [@kapusta] which requires the replacements $$\int_{-\infty}^\infty\frac{dp_{0}}{2\pi} f(p_0)\quad\longrightarrow \quad T\,\sum_{n=-\infty}^{\infty} f(\omega_n),$$ where $\omega_{n}=2(n+\nicefrac{1}{2})\pi T$ are the Matsubara frequencies. On carrying out the remaining Gaussian momentum integral and performing the change of variable $s\to-is$, we obtain $$\begin{aligned} -\braket{\bar{\psi}\psi} & =&\frac{N_{c}mq_{f}B}{2\pi^{\nicefrac{3}{2}}}\int_\Lambda^\infty\frac{ds}{s^{\nicefrac{1}{2}}}\frac{e^{-sm^{2}}}{\tanh(q_{f}Bs)} \nonumber \\&& \times T\sum_{n=-\infty}^{\infty}e^{-s\omega_{n}^{2}}.\label{eq:fullcondensate}\end{aligned}$$ To simplify Eq. (\[eq:fullcondensate\]) further, we recall the definition of the third Jacobi Elliptic theta function $$\Theta_{3}(z,\tau)=1+2\sum_{n=1}^{\infty}q^{n^{2}}\cos(2nz)\;,$$ with $q=e^{i\pi\tau}$. In this way, $$\begin{aligned} \sum_{n=-\infty}^{\infty}e^{-s\omega_{n}^{2}}&=&e^{-\pi^2T^2s}\Theta_{3}(i2\pi^2T^2s,i4\pi T^{2}s),\nonumber\\ &=&(4\pi T^{2}s)^{-\frac{1}{2}}\Theta_{3}\left(-\frac{\pi}{2},\frac{i}{4\pi T^{2}s}\right),\end{aligned}$$ where we have used the inversion formula $$\Theta_{3}(\tau,z)=(-i\tau)^{-\frac{1}{2}}e^{-\frac{iz^{2}}{\pi\tau}}\Theta_{3}\left(-\frac{z}{\tau},-\frac{1}{\tau}\right).$$ Thus, the chiral condensate simplifies to $$\begin{aligned} -\braket{\bar{\psi}\psi} &=&\frac{N_{c}mq_{f}B}{2\pi^{\nicefrac{3}{2}}}\frac{1}{(4\pi)^{\frac{1}{2}}}\int_\Lambda^\infty\frac{ds}{s}\frac{e^{-sm^{2}}}{\tanh(q_{f}Bs)}\nonumber \\ &&\times\Theta_{3}\left(-\frac{\pi}{2},\frac{i}{4\pi T^{2}s}\right),\label{eq:condensatewtheta}\end{aligned}$$ and correspondingly, the gap Eq. 
(\[eq:gapgral\]) becomes $$\begin{aligned} m & =& m_{q}+GN_{c}\frac{mq_{f}B}{2\pi^{2}}\int_{\Lambda}^{\infty}\frac{ds}{s}\frac{e^{-sm^{2}}}{\tanh(q_{f}Bs)}\nonumber \\ && \times\Theta_{3}\left(-\frac{\pi}{2},\frac{i}{4\pi T^{2}s}\right).\label{eq:esgapcompleta}\end{aligned}$$ Next, to explicitly isolate the vacuum from the medium contributions, we split off the first term of the sum $$\Theta_{3}\left(-\frac{\pi}{2},\frac{i}{4\pi T^{2}s}\right)=1+2\sum_{n=1}^{\infty}(-1)^{n}e^{-\frac{n^{2}}{4T^{2}s}},$$ and hence, adding and subtracting a factor 1 to account for the would-be divergent term to cancel the ultraviolet contribution of the medium [@allofus1; @allofus2], we arrive at $$\begin{aligned} m & =&m_{q}+GN_{c}\frac{m}{2\pi^{2}}\Bigg\{ \int_{\Lambda}^{\infty}\frac{ds}{s^{2}}e^{-sm^{2}}\nonumber \\ &&\hspace{-8mm}+ \int_{0}^{\infty}\frac{ds}{s^{2}}e^{-sm^{2}}\left(\frac{q_{f}Bs}{\tanh(q_{f}Bs)}-1\right)\nonumber \\ &&\hspace{-8mm}+ 2q_{f}B\int_{0}^{\infty}\frac{ds}{s}\frac{e^{-sm^{2}}}{\tanh(q_{f}Bs)}\sum_{n=1}^{\infty}(-1)^{n}e^{-\frac{n^{2}}{4T^{2}s}}\Bigg\} . \label{eq:gapsplit}\end{aligned}$$ The first integral in Eq. (\[eq:gapsplit\]) is the vacuum term, whereas the second and third terms are the thermomagnetic contribution. The purely thermal contribution is straightforwardly obtained by taking the limit $B\to 0$, whereas the purely magnetic contribution comes from the limit $T\to 0$. Notice that the regularization parameter $\Lambda$ remains in the vacuum integral only. On physical grounds, we expect this to be the case as the medium contribution is strongly suppressed in the ultraviolet. Below, we analyze the above equation to explore the parameter space domains where chiral symmetry breaking is possible in the model. Critical curves {#sec:surf} =============== ![\[fig:crit\]Function $f(m)$ in Eq. (\[rhs\]), corresponding to the r.h.s. of the gap equation, as a function of the dynamical mass $m$ for various values of the coupling (in black). Intersections with the (red) line $y=m$ give the solutions. There may be no solution other than the trivial one $m=0$ (upper panel), a single nontrivial solution with $m\ne 0$ close to zero (mid panel), or both trivial and nontrivial solutions (lower panel) for $G<G_c$, $G=G_c$ or $G>G_c$, respectively.]("critcond".pdf){width="0.9\columnwidth"} In this section we are interested in deriving the critical curves, in parameter space, that separate the domains in which chiral symmetry can be broken from those in which it cannot. We start by considering the vacuum term alone. From Eq. (\[eq:gapsplit\]), setting $T=B=0$ we have $$\begin{aligned} m & =&m_{q}+GN_{c}\frac{m}{2\pi^{2}} \int_{\Lambda}^{\infty}\frac{ds}{s^{2}}e^{-sm^{2}}.\label{gap}\end{aligned}$$ We look for nontrivial solutions $m\ne 0$ of the above expression. In the chiral limit, this is equivalent to finding the intersections of the curve $$f(m)=GN_{c}\frac{m}{2\pi^{2}} \int_{\Lambda}^{\infty}\frac{ds}{s^{2}}e^{-sm^{2}}\label{rhs}$$ (regarded as a function of $m$ with all other parameters fixed) with the line $y=m$. Depending upon the strength of the coupling constant (see Fig. \[fig:crit\]), there could be no intersection whatsoever if the coupling is weak, except for the trivial one at $m=0$. There exists, however, a critical value $G_c$ at which one more intersection appears and, correspondingly, a nontrivial solution $m\ne 0$ bifurcates away from the trivial one $m=0$; for $G>G_c$ at least these two intersections are observed.
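The intersection picture of Fig. \[fig:crit\] is straightforward to reproduce numerically. The following minimal Python sketch (the coupling values and the choice $\Lambda=1\,{\rm GeV}^{-2}$ are illustrative assumptions) evaluates $f(m)$ of Eq. (\[rhs\]) in the chiral limit and locates the nontrivial intersection with $y=m$ by bracketing; for sufficiently weak coupling only the trivial solution $m=0$ survives.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Lam = 1.0                             # proper-time cutoff Lambda in GeV^-2 (illustrative)
Nc = 3
Gc = 2.0 * np.pi**2 * Lam / 3.0       # critical coupling G_c = 2*pi^2*Lambda/3 (anticipating the result derived below)

def f(m, G):
    """Right-hand side f(m) of the vacuum gap equation, Eq. (rhs), in the chiral limit m_q = 0."""
    integral, _ = quad(lambda s: np.exp(-s * m**2) / s**2, Lam, np.inf)
    return G * Nc * m / (2.0 * np.pi**2) * integral

def dynamical_mass(G):
    """Nontrivial intersection of f(m) with the line y = m; zero if only the trivial root exists."""
    if G <= Gc:
        return 0.0
    return brentq(lambda m: f(m, G) - m, 1e-6, 10.0)

for G in (0.8 * Gc, 1.0 * Gc, 1.2 * Gc, 1.5 * Gc):
    print(f"G/G_c = {G / Gc:.2f}  ->  m = {dynamical_mass(G):.4f} GeV")
```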
Then, to find the critical coupling $G_{c}$ that allows chiral symmetry breaking, we derive the gap equation (\[gap\]) with respect to the mass $m$ and evaluate at $m=0$. This procedure indicates exactly where the trivial and nontrivial solutions bifurcate from one another and specifies the value of the coupling $G_c$ required for that purpose. In our case, we have $$\begin{aligned} 1&=&G_{c}\frac{3}{2\pi^{2}}\int_{\Lambda}^{\infty}\frac{ds}{s^{2}}%\nonumber\\&=& \equiv\frac{G_c}{\tilde{\Lambda}},\end{aligned}$$ where $\tilde{\Lambda}= 2\pi^2\Lambda/3$. Therefore $G_{c}=\tilde{\Lambda}$ is the critical value of the coupling above which chiral symmetry is broken in the model. ![\[fig:Critical-coupling-constant T\]Coupling constant $G$ in a thermal bath as a function of $T$. The solid curve corresponds to $G_c^T$ in Eq. (\[eq:G\_cT\]). Notice that for $T=T_c\equiv 1/\sqrt{G_c}$, $G_{c}^T$ is divergent. This means that, no matter how strong the coupling constant is, there is no chiral symmetry breaking. Also, for any value of the coupling $G>G_{c}^T$, quark masses are dynamically generated. The scale of the plot is set by $\Lambda=1{\rm GeV}^{-2}$.]("puntoscriticostemp".pdf){width="\linewidth"} Next, we consider the effect of a heat bath at temperature $T$. Taking the limit $B\to 0$ in Eq. (\[eq:gapsplit\]), on deriving with respect to $m$ and setting $m=0$, we reach at the critical relation $$1= \frac{3}{2\pi^{2}}G_{c}^T\left(\frac{1}{\Lambda}-\frac{2}{3}\pi^{2}T^{2}\right), \label{eq:0.3}$$ where $G_c^T$ stands for the critical coupling required to break chiral symmetry in a heat bath at temperature $T$. Observe that at this point, $G_c^T$ could have a non-trivial dependence of the plasma parameters and thus Eq. (\[eq:0.3\]) becomes a self-consistent relation for the critical temperature for chiral symmetry restoration. Assuming that the coupling constant is independent of the temperature, by writing in the form $$G_{c}^T=\frac{G_c}{1-G_cT^2}, %\frac{\pi^{2}}{3}\frac{1}{(\frac{1}{\Lambda}-\frac{2}{3}\pi^{2}T^{2})}, \label{eq:G_cT}$$ we notice that at $T=0$ we recover the vacuum limit explicitly. Furthermore, there exist a critical temperature $T_{c}=1/\sqrt{\tilde{\Lambda}}=1/\sqrt{G_c}$ where $G_c^T$ diverges, which means that no matter how strong the coupling is, it is not enough to break chiral symmetry. A plot of $G$ as a function of $T$ for a fixed value of $G_c$ is shown in Fig. \[fig:Critical-coupling-constant T\]. The critical curve $G_{c}^T$ limits the values of $G$ that generate mass from those that do not. Any value of $G$ above the curve of $G_{c}^T$ suffices to generate a dynamical quark mass $m\ne 0$. Thus, the shaded region corresponds to the chirally broken region in parameter space. The vertical line corresponds to the position of $T_c$ and to the right of such a line, chiral symmetry can never be broken. For the general case of a $T$-dependent coupling, of course, Eq. (\[eq:G\_cT\]) becomes a transcendental, self-consistent equation to find the critical temperatures. The shape of the boundary is expected to be refined accordingly The influence of solely a magnetic field can be considered from the gap Eq. (\[eq:gapsplit\]) in the limit $T\to 0$. 
Again, differentiating with respect to the mass and setting afterward $m=0$, we obtain $$1= \frac{3}{2\pi^{2}}G_{c}^M\left[\frac{1}{\Lambda}+\int_{0}^{\infty}\frac{ds}{s^{2}}\left(\frac{q_fBs}{\tanh(q_fBs)}-1\right)\right],\label{eq:0.4}$$ where $G_c^M$ represents the critical coupling needed to generate masses in a magnetized medium for a single quark flavor of electric charge $q_f$. For a magnetic field of arbitrary strength, the integral on the r.h.s. of Eq. (\[eq:0.4\]) diverges and thus $G_{c}^M$ cannot be defined if we assume the coupling is not dressed by the magnetic field in a nontrivial manner. This circumstance implies that there is generation of masses for any finite value of $G_c^M$ regardless of the magnetic field strength in the mean field limit. Even for weak magnetic fields, a given value of $q_fB$ makes any value of the weak coupling strong enough to form the condensate. This behavior is reminiscent of the catalytic effect of a magnetic field at zero temperature that promotes the generation of mass through the formation of the chiral condensate. An important observation is that in this regime, the generated mass is extremely small, as shown in Fig. \[fig:m de G magnetico\]. Nevertheless, by demanding the generated mass to be larger than the current quark masses, namely of ${\cal O}(10^{-4}~{\rm GeV})$, we can define a pseudo-critical coupling $\tilde{G}_{c}^M$ as the coupling needed to generate such a mass. In the weak field regime, we sketch the $G-q_fB$ plane for dynamical generation of quark mass. It is shown as a function $q_fB$ in Fig. \[fig:puntocriticosuave\]. The shaded region correspond to masses larger than the current quark masses and the red (solid) line is the pseudo-critical curve. For a fully dressed coupling, of course, Eq. (\[eq:0.4\]) becomes a self consistent relation for the magnetic field needed to break chiral symmetry. ![\[fig:m de G magnetico\]Dynamically generated mass $m$ as a function of the coupling $G$ for various values of the magnetic field at fixed $\tilde{\Lambda}=1\,{\rm GeV}^{-2}$. ]("mdeGmagnetico".pdf){width="\linewidth"} ![\[fig:puntocriticosuave\] Pseudo-critical coupling as a function of the magnetic field strength. Under the critical curve $\tilde{G}_{c}^M$, shown as a red, solid curve, the generated mass is smaller than $10^{-4}~{\rm GeV}$. Notice that $\tilde{G}_{c}^M$ gets smaller as the magnetic field strength increases, in accordance with the phenomenon of MC. ]("puntocriticosuave".pdf){width="\linewidth"} The last case at hand is the full gap equation in a thermomagnetic plasma. In this case, the condition for criticality reads $$\begin{aligned} 1&= & \frac{3}{2\pi^{2}}G_{c}^{TM}\Bigg[\frac{1}{\Lambda}+\int_{0}^{\infty}\frac{ds}{s^{2}}\left( \frac{q_fBs}{\tanh(q_fBs)}-1\right) \nonumber \\ & & +2q_fB\sum_{n=1}^{\infty}\intop_{0}^{\infty}\frac{ds}{s}\frac{(-1)^{n}\exp(-sn^{2})}{\tanh(\frac{q_fB}{4T^{2}s})}\Bigg],\label{eq:0.5}\end{aligned}$$ where $G_c^{TM}$ represents the critical coupling in the thermomagnetic medium for a single quark flavor of charge $q_f$ to obtain a mass $m\ne 0$. The first thing we can readily verify is that the integrals in the r.h.s. of Eq. (\[eq:0.5\]) are convergent. This is so because, unlike the pure magnetic field case, the leading term of the second integral ($n=1$ in the sum) in the limit $B\to 0$ cancels the divergence of the second term (pure magnetic contribution) in this limit. Physically, we expect such a cancellation because the temperature tries to dissolve the condensate. 
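Although the separate pieces entering the criticality condition are delicate (the purely magnetic integral by itself diverges, and the convergence of Eq. (\[eq:0.5\]) relies on the cancellation discussed above), the $m\to 0$ limit of the bracket in Eq. (\[eq:gapsplit\]) can be evaluated numerically once the magnetic and thermal integrands are combined before integrating. The following rough Python sketch (the truncation of the sum, the finite upper integration limit, and the GeV-unit parameter values are assumptions made only for illustration) returns the critical coupling $G_{c}^{TM}(T,q_fB)$, with a non-positive bracket signalling that chiral symmetry cannot be broken for any finite coupling.

```python
import numpy as np
from scipy.integrate import quad

Lam = 1.0        # proper-time cutoff in GeV^-2 (illustrative)
NMAX = 400       # truncation of the alternating thermal sum

def thermal_sum(s, T):
    """sum_{n>=1} (-1)^n exp(-n^2/(4 T^2 s)), truncated at NMAX terms."""
    n = np.arange(1, NMAX + 1)
    return np.sum((-1.0) ** n * np.exp(-n**2 / (4.0 * T**2 * s)))

def integrand(s, T, qfB):
    """Combined (finite) magnetic + thermal integrand of Eq. (gapsplit) in the limit m -> 0."""
    mag = (qfB * s / np.tanh(qfB * s) - 1.0) / s**2
    th = 2.0 * qfB / (s * np.tanh(qfB * s)) * thermal_sum(s, T)
    return mag + th

def Gc_TM(T, qfB, smax=2.0e3):
    """Critical coupling G_c^{TM}(T, q_f B); np.inf means no symmetry breaking is possible."""
    bracket = 1.0 / Lam + quad(integrand, 1e-8, smax, args=(T, qfB), limit=400)[0]
    return np.inf if bracket <= 0.0 else 2.0 * np.pi**2 / (3.0 * bracket)

for T in (0.1, 0.3, 0.5):                                 # temperatures in GeV (illustrative)
    row = [Gc_TM(T, qfB) for qfB in (0.05, 0.2, 0.5)]     # q_f B in GeV^2 (illustrative)
    print(f"T = {T:.1f} GeV :", ["%.2f" % g for g in row])
```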
Thus, there is a competition between the magnetic field and the temperature to promote and inhibit, respectively, the generation of a quark mass, which leads to the existence of a critical coupling constant $G_{c}^{TM}$. In Fig. \[fig:Gc critica B y T\] we plot the coupling constant as a function of the temperature for different values of $q_fB$. Each line corresponds to the critical curve $G_{c}^{TM}$ for the indicated field strength. Shaded regions are the chirally asymmetric domains. For low temperatures, we observe that the largest value of $G_c^{TM}$ that hits the vertical axis corresponds to the zero magnetic field case, and that such a height diminishes as the strength of the magnetic field increases. This is expected in view of the MC phenomenon. For larger values of $T$, vertical lines in the plot correspond to the values of $T$ where each $G_{c}^{TM}$ diverges. In other words, these correspond to the critical temperature above which chiral symmetry can no longer be broken. As the magnetic field increases in strength, the critical temperatures move toward larger values. In Fig. \[fig:critico mag temp\] we plot the same coupling as a function of the magnetic field for various values of the temperature. $G_{c}^{TM}$ curves are also shown and the shaded regions correspond to chirally asymmetric domains. We notice that for temperatures $T<1/\sqrt{G_c}$, masses are generated for arbitrary values of $q_fB$. Nevertheless, for $T=1/\sqrt{G_c}$ a turnover behavior develops such that for $T>1/\sqrt{G_c}$ a critical strength $q_fB_c$ is required in order to break chiral symmetry. The critical strength $q_fB_c$ appears as a vertical asymptote of the corresponding $G_c^{TM}$ curve. Thus, we observe two entirely different behaviors of the coupling which separate at $T=T_0$: on the one hand ($T<T_0$), chiral symmetry breaking is always possible for arbitrary magnetic field strength, while on the other hand ($T>T_0$), a critical magnetic field strength is needed for that purpose. It is therefore naturally expected that for a coupling that is fully dressed by the plasma parameters, this transition between the two regimes is smooth, as observed in the IMC effect. In this direction, below we examine the implications of the criticality condition (\[eq:0.5\]) on the magnetized phase diagram. ![\[fig:Gc critica B y T\]Coupling constant as a function of the temperature $T$ for various values of the magnetic field $q_fB$. The corresponding critical curves $G_{c}^{TM}$ are shown. These separate the chirally asymmetric from the symmetric domains. We can see that, even in the presence of the magnetic field, we have a critical temperature $T_{c}$. Also notice that as the magnetic field increases, the critical temperature $T_{c}$ tends to increase. ]("GcriticaMyT".pdf){width="\linewidth"} ![\[fig:critico mag temp\] Coupling constant as a function of the magnetic field $q_fB$ for various values of the temperature $T$. Critical curves $G_c^{TM}$ are also shown. For temperatures $T<T_c=1/\sqrt{G_c}$, the domains of chiral symmetry breaking include any value of $q_fB$. Nevertheless, for $T>T_c$, there is a critical value of the magnetic field strength $q_fB_{c}$ such that chiral symmetry is broken only when $q_fB$ exceeds it.]("puntoscriticomag".pdf){width="\linewidth"} Inverse magnetic catalysis models of the coupling constant {#sec:comp} ========================================================== In this section we address the issue of the critical hypersurface and its shape according to restrictions arising from IMC.
For this purpose, we derive the magnetized phase diagram with several proposals for the coupling from Eq. (\[eq:0.5\]). In particular, we test models in which $G$ either does or does not carry a specific dependence on the plasma parameters in accordance with the IMC phenomenon. Explicitly, we consider the following examples: - Mean field coupling, $G^{TM}\equiv G^0$, namely, the coupling is independent of the plasma parameters. - Running coupling of QCD in a background magnetic field [@shov] $$G^{TM}=\frac{G^{0}}{\ln{\bigg{(}e+\frac{|q_fB|}{\Lambda_{QCD}^2}\bigg{)}}},\label{modelolog}$$ where $\Lambda_{QCD}=300\thinspace {\rm MeV}$ and $G^{0}$ is the value of the coupling constant in vacuum. Here, no explicit dependence on the temperature is considered. - A Padé fit [@pade; @aftab; @costa] $$G^{TM}(q_fB)=G^0\bigg{(}\frac{1+a\zeta^2+b\zeta^3}{1+c\zeta^2+d\zeta^4}\bigg{)},\label{modeloaftad}$$ where $a=0.0108805$, $b=-1.0133\times10^{-4}$, $c=0.02228$, $d=1.84558\times 10^{-4}$, $\zeta=q_fB/\Lambda_{QCD}^2$ and $\Lambda_{QCD}=300\thinspace {\rm MeV}$. This model is temperature independent and is known to reproduce the behavior of the critical temperatures as a function of $q_fB$ in agreement with lattice results for IMC. - A nontrivial fit of the form [@gastao] $$G^{TM}=c(B)\Bigg[ 1-\frac{1}{1+\exp{[\beta(B)(T_a(B)-T)]}}\Bigg]+s(B)\;, \label{eq:coupgastao}$$ where the parameters $c(B)$, $\beta(B)$, $T_a(B)$ and $s(B)$ are tabulated in Table 1 of Ref. [@gastao]. Assuming the dynamically generated mass to be independent of the plasma parameters, this [*ansatz*]{} reproduces the behavior of the critical temperature $T$ vs. $q_fB$ in accordance with lattice simulations for the IMC. - A numerical fit suggested in Ref. [@allofus1], based on reverse engineering of lattice results to derive the behavior of the chiral condensate. The mass function and the coupling in this case have a non-trivial dependence on the plasma parameters. In Fig. \[fig:coupB\] we draw the corresponding phase diagrams. The vertical axis is normalized to the critical temperature in absence of the magnetic field, $T_0$. We observe that the MF coupling is consistent with MC, namely, the critical temperature grows monotonically with the magnetic field strength. For the case of the running coupling, critical temperatures first diminish and then increase monotonically. Such a behavior comes solely from comparing the magnetic field strength to $\Lambda^2_{QCD}$. The remaining three proposals already include the traits of the IMC phenomenon and show the expected decrease of the critical temperature as the magnetic field increases. The Padé fit shows a very smooth signal of IMC. The apparent decrease of $T$ with increasing $q_fB$ for the nontrivial model [@gastao] is due to numerical accuracy and is not related to the behavior of the logarithmic running coupling. The numerical fit, though less pronounced, shows an increasing and then decreasing behavior of $T$ as the magnetic field increases. Let us stress that in the models of Refs. [@allofus1] and [@gastao], the coupling was obtained for an average of the lattice condensates [@bali:2012a; @bali:2012b; @bali:2014] and hence we multiply it by 2 to put all couplings on the same footing for a single quark species. ![\[fig:coupB\] Magnetized phase diagram from the criticality condition (\[eq:0.5\]) with different proposals of the coupling constant as explained in the text.
The temperature axis is normalized to $T_0$, the critical temperature in absence of the magnetic field.]("acoplamientosfuncionB".pdf){width="\linewidth"} Concluding Remarks {#sec:concl} ================== In this article we have studied the critical hypersurface in parameter space that separates the domains where chiral symmetry breaking is possible/forbidden in the magnetized NJL model. In vacuum, it is well known that the coupling must exceed a critical value in order to allow for a non-trivial solution of the gap equation. Adding a heat bath, and assuming the coupling to keep its mean-field character, there exists a critical temperature above which, no matter how strong the coupling is, it is not possible to break chiral symmetry. For a pure magnetic background, under the same assumptions, chiral symmetry can be broken for any value of the coupling, in accordance with the universal phenomenon of Magnetic Catalysis. Criticality appears only when we demand the generated mass to be larger than the current quark mass. Finally, in a non-trivial thermomagnetic medium, there exists a competition between the temperature and the magnetic field for masses to be dynamically generated. For temperatures lower than the critical temperature in absence of the magnetic field, it is seen that the magnetic field strength always promotes the breaking of chiral symmetry. Nevertheless, above this critical temperature, the hypersurface develops hard walls such that a critical magnetic field strength is required to strengthen the coupling and generate masses. This might be seen as a seed of the IMC effect. From the critical relation in Eq. (\[eq:0.5\]), we have derived the corresponding phase diagram in the $T-q_fB$ plane assuming the coupling is dressed by the plasma. The mean field coupling and the coupling running with the magnetic field are compatible with MC only. When the coupling is non-trivially dressed with the plasma parameters, the critical temperature is no longer a monotonically increasing function of the magnetic field strength, but exhibits a turn-over effect characteristic of IMC. Extensions of the present reasoning, including a non-trivial dependence of the dynamical mass as in Ref. [@allofus1] as well as non-local models, are under study. Findings will be reported elsewhere. We acknowledge valuable discussions with A. Ayala, A. Bashir, M. Loewe, A. J. Mizher and C. Villavicencio. AR acknowledges support from Consejo Nacional de Ciencia y Tecnología under grant 256494. [99]{} J. O. Andersen, W. R. Naylor and A. Tranberg, [*Rev. Mod. Phys.*]{} [**88**]{}, 025001 (2016). V. Skokov, A.Y. Illarionov, V. Toneev, [*Int. J. Mod. Phys. A*]{} [**24**]{}, 5925 (2009). L. McLerran and V. Skokov, [*Nucl. Phys. A*]{} [**929**]{}, 184 (2014). G. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. Katz, [*et al.*]{}, [*JHEP*]{} [**1202**]{}, 044 (2012). G. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. Katz, [*et al.*]{}, [*Phys. Rev. D*]{} [**86**]{}, 071502 (2012). G. Bali, F. Bruckmann, G. Endrodi, S. Katz, and A. Shafer, [*JHEP*]{} [**1408**]{}, 177 (2014). I. A. Shovkovy, [*Lect. Notes Phys.*]{} [**871**]{}, 13-49 (2013). V. A. Miransky, I. A. Shovkovy, [*Phys. Rept.*]{} [**576**]{}, 1 (2015). V. A. Miransky and I. A. Shovkovy, [*Phys. Rev. D*]{} [**66**]{}, 045006 (2002). A. Ayala, C. A. Dominguez, L. A. Hernandez, M. Loewe and R. Zamora, [*Phys. Lett. B*]{} [**759**]{}, 99 (2016). A. Ayala, M. Loewe, Ana Julia Mizher, R. Zamora, [*Phys. Rev. D*]{} [**90**]{} (2014) no.3, 036001; A. Ayala, M. Loewe, R. Zamora, [*Phys.Rev.
D*]{} [**91**]{} no.1, 016002 (2015); Y. Nambu and G. Jona-Lasinio, [*Phys. Rev.*]{} [**122**]{}, 345, (1961);\ Y. Nambu and G. Jona-Lasinio, [*Phys. Rev.*]{} [**124**]{}, 246, (1961). S. Klevansky, [*Rev. Mod. Phys.*]{} [**64**]{}, 649 (1992). M. Buballa, [*Phys. Rept.*]{} [**407**]{}, 2015 (2005) K. Fukushima, [*Phys. Lett. B*]{}[**591**]{}, 277 (2004);\ E. Megias, E. Ruiz Arriola, and L.L. Salcedo, [*Phys. Rev. D*]{} [**74**]{}, 065005 (2006);\ E. Megias, E. Ruiz Arriola, and L.L. Salcedo, [*Phys. Rev. D*]{} [**74**]{}, 114014 (2006);\ M. Ciminale, R. Gatto, N. D. Ippolito, G. Nardulli, and M. Ruggieri, [*Phys. Rev. D*]{}[**77**]{}, 054023 (2008);\ C. Ratti, M. A. Thaler, W. Weise, [*Phys. Rev. D*]{} [**73**]{}, 014019 (2006);\ H.-M. Tsai and B. Müller, [*J. Phys. G: Nucl. Part. Phys.*]{} [**36**]{}, 075101 (2009) . M. Buballa, S. Krewald, [*Phys. Lett. B*]{} [**294**]{}, 19 (1992);\ R. D. Bowler and M. C. Birse, [*Nucl. Phys. A*]{}[**582**]{}, 655 (1995);\ R. S. Plant, M. C. Birse, [*Nucl. Phys. A*]{}[**628**]{}, 607 (1998);\ I. General, D. Gomez Dumm, N. N. Scoccola, [*Phys. Lett. B*]{} [**506**]{}, 267 (2001). M. Ferreira, P. Costa, O. Lourenço, T Frederico and C. Providência, [*Phys. Rev. D*]{} [**89**]{}, 116011 (2014). A Ahmad and A Raya [*J. Phys. G*]{} [**43**]{} (2016) no.6, 065002. A. Ayala , C. A. Dominguez, L. A. Hernandez, M. Loewe, A. Raya, J. C. Rojas, C. Villavicencio, [*Phys. Rev. D*]{} [**94**]{}, 054019 (2016). A. Ayala, L. A. Hernandez, M. Loewe, A. Raya, J. C. Rojas, R. Zamora, [*Phys. Rev. D*]{} [**96**]{} (2017) no.3, 034007. R. L. S. Farias, V. S. Timoteo, S. S. Avancini, M. B. Pinto, and Gastao Krein, [*Euro. Phys. Jour. A*]{} [**53**]{}(5):101, (2017). M. Ferreira, P. Costa and C. Providência, [*Phys.Rev. D*]{} [**97**]{} (2018) no.1, 014014 M. Loewe, F. Marquez, C. Villavicencio and R. Zamora, [*Int. J. Mod. Phys. A*]{} [**30**]{}, no.21, 1550123 (2015). D. Gómez Dumm, M. F. Izzo Villafañe, S. Noguera, V. P. Pagura and N. N. Scoccola, [*Phys. Rev. D*]{} [**96**]{} no.11, 114012 (2017). J. Schwinger, Phys. Rev. 82, 664 (1951). J. I. Kapusta and C. Gale, [*Finite-temperature field theory. Principles and applications,*]{} 2nd edition (2011). Cambridge Monographs on Mathematical Physics. ISBN: 9780521173223.
--- abstract: '   Some quantum-gravity theories suggest that the absorbing horizon of a classical black hole should be replaced by a reflective surface which is located a microscopic distance above the would-be classical horizon. Instead of an absorbing black hole, the resulting horizonless spacetime describes a reflective exotic compact object. Motivated by this intriguing prediction, in the present paper we explore the physical properties of exotic compact objects which are linearly coupled to stationary bound-state massive scalar field configurations. In particular, solving the Klein-Gordon wave equation for a stationary scalar field of proper mass $\mu$ and spheroidal harmonic indices $(l,m)$ in the background of a rapidly-rotating exotic compact object of mass $M$ and angular momentum $J=Ma$, we derive a compact analytical formula for the [*discrete*]{} radii $\{r_{\text{c}}(\mu,l,m,M,a;n)\}$ of the exotic compact objects which can support the stationary bound-state massive scalar field configurations. We confirm our analytical results by direct numerical computations.' author: - Shahar Hod title: 'Stationary bound-state scalar configurations supported by rapidly-spinning exotic compact objects' --- Introduction ============ Black holes in classical theories of gravity describe compact spacetime regions which are bounded by event horizons with absorbing boundary conditions. Interestingly, however, some candidate quantum-gravity models [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12; @Pan] have recently suggested that quantum effects may prevent the formation of stable black-hole horizons. These models have put forward the intriguing idea that, within the framework of a quantum theory of gravity, horizonless exotic compact objects may serve as alternatives to the familiar classical black-hole spacetimes [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12; @Pan]. In a very interesting work, Maggio, Pani, and Ferrari [@Pan] have recently studied numerically the physical properties of spinning exotic compact objects which are characterized by spacetime geometries that modify the familiar Kerr metric only at some microscopic scale around the would-be classical horizon. In particular, the physical model analyzed in [@Pan] assumes that the absorbing horizon of the classical Kerr black-hole spacetime is replaced by a slightly larger quantum membrane with reflective boundary conditions. The interplay between compact astrophysical objects and fundamental matter fields has attracted much attention over the years from both physicists and mathematicians. In particular, recent analytical [@Hodrc] and numerical [@Herkr] studies of the Einstein-Klein-Gordon field equations have revealed the fact that, within the framework of classical general relativity, spinning Kerr black holes can support stationary spatially regular bound-state matter configurations which are made of massive scalar fields. This fact naturally raises the following physically interesting question: Can the exotic compact objects of the suggested quantum-gravity models [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12; @Pan], like the more familiar classical black holes [@Hodrc; @Herkr], support stationary bound-state massive scalar field configurations in their exterior regions? 
In the present paper we shall address this physically intriguing question by solving the Klein-Gordon wave equation for massive scalar fields in the background of a rapidly-rotating exotic compact object. In particular, motivated by the suggested quantum-gravity model recently studied in [@Pan] (see also [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12]), we shall use [*analytical*]{} techniques in order to study the physical properties of exotic compact objects with reflective boundary conditions which are linearly coupled to stationary bound-state massive scalar field configurations. Interestingly, as we shall explicitly show below, one can derive a remarkably compact analytical formula for the [*discrete*]{} radii $\{r_{\text{c}}(\mu;n)\}$ [@Notennn] of the horizonless rapidly-spinning exotic compact objects which, for a given proper mass $\mu$ of the external field, can support the stationary bound-state massive scalar field configurations. It is important to note that the horizonless spinning configurations that we shall study in the present paper are characterized by the dimensionless relation $M\mu=O(m)$, where $M$ is the mass of the central exotic compact object and $m$ is the azimuthal harmonic index of the massive scalar field mode. This characteristic relation implies, in particular, that for astrophysically realistic black-hole candidates, the relevant massive scalar fields are ultralight: $\mu\sim10^{-10}-10^{-19}$eV. In this context, it is worth noting that the physical motivation to consider such ultralight massive scalar fields is manifold and ranges from possible dark matter candidate fields to new fundamental bosonic fields which might appear in suggested extensions of the Standard Model and in fundamental string theories. In particular, such ultralight fields naturally appear in the suggested string axiverse scenario, in theories of exotic dark photons, and in hidden $U(1)$ sectors [@Arv1; @Arv2; @Feng; @Hui; @Wik]. Before proceeding, it is worth emphasizing that the composed spinning-exotic-compact-object-massive-scalar-field configurations that we shall analyze in the present paper, like the composed spinning-black-hole-massive-scalar-field configurations studied in [@Hodrc; @Herkr], owe their existence to the intriguing physical phenomenon of superradiant scattering in rotating spacetimes [@Frid; @Bri]. In particular, the stationary field configurations, whose physical properties will be analyzed below, are characterized by the critical (marginal) frequency $\omega_{\text{c}}$ for the superradiant scattering phenomenon of bosonic fields in spinning spacetimes. Description of the system ========================= We shall study analytically the physical properties of massive scalar field configurations which are linearly coupled to a rapidly-spinning exotic compact object. Motivated by the suggested quantum-gravity models [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12; @Pan], we shall assume that the spacetime of the exotic compact object is described by a curved geometry that modifies the familiar Kerr metric only at some microscopic scale around the would-be classical horizon. 
In particular, following the interesting work of Maggio, Pani, and Ferrari [@Pan], we shall consider a reflective spinning compact object of radius $r_{\text{c}}$, mass $M$, and angular momentum $J=Ma$ whose exterior spacetime metric is described by the curved Kerr line element [@Chan; @Noteman] (we shall use natural units in which $G=c=\hbar=1$) $$\begin{aligned} \label{Eq1} ds^2=-{{\Delta}\over{\rho^2}}(dt-a\sin^2\theta d\phi)^2+{{\rho^2}\over{\Delta}}dr^2+\rho^2 d\theta^2+{{\sin^2\theta}\over{\rho^2}}\big[a dt-(r^2+a^2)d\phi\big]^2\ \ \ \text{for}\ \ \ \ r>r_{\text{c}}\\end{aligned}$$ with $\Delta\equiv r^2-2Mr+a^2$ and $\rho^2\equiv r^2+a^2\cos^2\theta$. Here $(t,r,\theta,\phi)$ are the familiar Boyer-Lindquist spacetime coordinates. The radius of the would-be classical horizon is given by $$\label{Eq2} r_+=M+(M^2-a^2)^{1/2}\ .$$ Following the intriguing quantum-gravity models [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12; @Pan], which predict the occurrence of quantum corrections to the curved spacetime only at some microscopic scale around the would-be classical horizon, we shall assume that the radius $r_{\text{c}}$ of the reflective quantum membrane is characterized by the strong inequality $$\label{Eq3} x_{\text{c}}\equiv {{r_{\text{c}}-r_+}\over{r_+}}\ll1\ .$$ It is important to emphasize that the assumption made in [@Pan], that the exterior spacetime region of the spinning exotic compact object is described by the Kerr metric (\[Eq1\]), is a non-trivial one. As emphasized in [@Pan] (see also Refs. [@kr1; @kr2; @kr3]), this assumption is expected to be valid in the physically interesting regime $x_{\text{c}}\ll1$ of small quantum corrections. The Klein-Gordon wave equation [@Teuk; @Stro] $$\label{Eq4} (\nabla^\nu\nabla_{\nu}-\mu^2)\Psi=0\$$ governs the dynamics of the scalar field in the curved spacetime of the exotic compact object, where $\mu$ is the proper mass of the linearized field [@Noteunm]. Substituting into (\[Eq4\]) the mathematical decomposition [@Teuk; @Stro; @Notedec] $$\label{Eq5} \Psi(t,r,\theta,\phi)=\sum_{l,m}e^{im\phi}{S_{lm}}(\theta;a\sqrt{\mu^2-\omega^2}){R_{lm}}(r;M,a,\mu,\omega)e^{-i\omega t}\$$ for the eigenfunction $\Psi$ of the massive scalar field, and using the metric components (\[Eq1\]) which characterize the exterior curved spacetime of the exotic compact object, one finds that the radial eigenfunction of the massive scalar field satisfies the ordinary differential equation [@Teuk; @Stro] $$\label{Eq6} \Delta{{d} \over{dr}}\Big(\Delta{{dR_{lm}}\over{dr}}\Big)+\Big\{[\omega(r^2+a^2)-ma]^2 +\Delta[2ma\omega-\mu^2(r^2+a^2)-K_{lm}]\Big\}R_{lm}=0\ ,$$ where the angular eigenvalues $\{K_{lm}(a\sqrt{\mu^2-\omega^2})\}$ are determined by the characteristic angular differential equation [@Heun; @Fiz1; @Teuk; @Abram; @Stro; @Hodasy; @Hodpp] $$\begin{aligned} \label{Eq7} {1\over {\sin\theta}}{{d}\over{\theta}}\Big(\sin\theta {{d S_{lm}}\over{d\theta}}\Big) +\Big[K_{lm}+a^2(\mu^2-\omega^2) -a^2(\mu^2-\omega^2)\cos^2\theta-{{m^2}\over{\sin^2\theta}}\Big]S_{lm}=0\\end{aligned}$$ with the physically motivated boundary conditions of regularity at the two angular poles $\theta=0$ and $\theta=\pi$. It is worth noting that the characteristic eigenvalues of the spheroidal angular equation (\[Eq7\]) can be expanded in the form $K_{lm}+a^2(\mu^2-\omega^2)=l(l+1)+\sum_{k=1}^{\infty}c_k a^{2k}(\mu^2-\omega^2)^k$, where the expansion coefficients $\{c_k(l,m)\}$ are given in [@Abram]. 
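For concrete parameter values the angular eigenvalues $K_{lm}$ can also be obtained numerically, since Eq. (\[Eq7\]) is the standard spheroidal wave equation with spheroidicity parameter $c=a\sqrt{\mu^2-\omega^2}$. The short Python sketch below is a minimal illustration; it assumes that `scipy.special.pro_cv` returns the eigenvalue $\lambda_{lm}(c)$ in the Abramowitz-Stegun convention, in which case $K_{lm}=\lambda_{lm}(c)-c^{2}$, and the parameter values (those used later in Table \[Table1\], with $a\simeq M$ for a rapidly-spinning object) are quoted purely for illustration.

```python
import numpy as np
from scipy.special import pro_cv

# Illustrative parameters: l = m = 10, M*omega = 5, M*mu = 7.5, and a ~ M,
# so that c = a*sqrt(mu^2 - omega^2) ~ sqrt((M*mu)^2 - (M*omega)^2) in units of 1/M.
l, m = 10, 10
Momega, Mmu = 5.0, 7.5
c = np.sqrt(Mmu**2 - Momega**2)     # ~ 5.59

lam = pro_cv(m, l, c)               # prolate spheroidal eigenvalue lambda_{lm}(c) (assumed convention)
K = lam - c**2                      # angular constant K_{lm} of Eq. (7)
print(f"c = {c:.3f},  lambda = {lam:.3f},  K = {K:.3f}")   # K -> l(l+1) = 110 as c -> 0
```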
The stationary bound-state configurations of the spatially regular massive scalar fields in the curved spacetime of the spinning exotic compact object are characterized by exponentially decaying (normalizable) radial eigenfunctions at spatial infinity [@Notemas] \[for brevity, we shall henceforth omit the angular harmonic indices $(l,m)$ which characterize the spatially regular massive scalar field mode\]: $$\label{Eq8} R(r\to\infty)\sim {{1}\over{r}}e^{-\sqrt{\mu^2-\omega^2}r}\ \ \ \ \text{with}\ \ \ \ \omega^2<\mu^2\ .$$ In addition, following the quantum-gravity model studied in [@Pan], we shall assume that the exotic compact object is characterized by a reflecting surface which is located a microscopic distance \[see Eq. (\[Eq3\])\] above the would-be classical horizon. In particular, we shall consider two types of inner boundary conditions [@Pan]: $$\label{Eq9} \begin{cases} R(r=r_{\text{c}})=0 &\ \ \ \ \text{Dirichlet B. C.}\ ; \\ dR(r=r_{\text{c}})/dr=0 &\ \ \ \ \text{Neumann B. C.}\ \ . \end{cases}$$ The set of equations (\[Eq6\])-(\[Eq9\]) determines the [*discrete*]{} spectrum of radii $\{r_{\text{c}}(\mu,l,m,M,a;n)\}$ of the spinning exotic compact objects which can support the spatially regular stationary bound-state massive scalar field configurations. Interestingly, in the next section we shall explicitly prove that, for rapidly-rotating horizonless compact objects, this characteristic discrete spectrum of supporting radii can be determined [*analytically*]{}. The resonance conditions of the composed spinning-exotic-compact-object-massive-scalar-field configurations =========================================================================================================== In the present section we shall analyze the set of equations (\[Eq6\])-(\[Eq9\]) which determine the bound-state resonances of the massive scalar fields in the curved background of the spinning horizonless exotic compact object. In particular, below we shall use [*analytical*]{} techniques in order to derive remarkably compact resonance conditions for the [*discrete*]{} radii $\{r^{\text{Dirichlet}}_{\text{c}}(\mu,l,m,M,a;n)\}$ and $\{r^{\text{Neumann}}_{\text{c}}(\mu,l,m,M,a;n)\}$ of the rapidly-spinning exotic compact objects which can support the spatially regular stationary bound-state massive scalar field configurations. 
It is convenient to use the dimensionless physical parameters [@Teuk; @Stro] $$\label{Eq10} x\equiv {{r-r_+}\over {r_+}}\ \ \ ;\ \ \ \tau\equiv{{r_+-r_-}\over{r_+}}\ \ \ ;\ \ \ k\equiv 2\omega r_+\ \ \ ;\ \ \ \bar\mu\equiv M\mu\ \ \ ;\ \ \ \omega_{\text{c}}r_+\equiv {{ma}\over{2M}}\ \ \ ;\ \ \ \varpi\equiv{{2M(\omega-\omega_{\text{c}})}\over{\tau}}\ ,$$ in terms of which the scalar radial equation (\[Eq6\]) takes the form [@Hodcm; @Stro] $$\label{Eq11} x(x+\tau){{d^2R}\over{dx^2}}+(2x+\tau){{dR}\over{dx}}+UR=0\ ,$$ where $$\label{Eq12} U(x)={{(\omega r_+x^2+kx+\varpi\tau)^2}\over{x(x+\tau)}}-K+2ma\omega-\mu^2[r^2_+(1+x)^2+a^2]\ .$$ Interestingly, and most importantly for our analysis, the radial differential equation (\[Eq11\]) is amenable to an analytical treatment in the spatial region $$\label{Eq13} x\geq x_{\text{c}}\gg \tau\times\text{max}(1,\varpi)\ .$$ Note that the strong inequalities (\[Eq3\]) and (\[Eq13\]) can be satisfied simultaneously in the regime $$\label{Eq14} \tau\times\text{max}(1,\varpi)\ll1\ ,$$ in which case one may approximate the massive scalar equation (\[Eq11\]) by [@Hodcm; @Stro] $$\label{Eq15} x^2{{d^2R}\over{dx^2}}+2x{{dR}\over{dx}}+\bar U R=0\ ,$$ where $$\label{Eq16} \bar U(x)=[(m^2/4-{\bar\mu}^2)x^2+(m^2-2{\bar\mu}^2)x+(-K+2m^2-2{\bar\mu}^2)]\cdot[1+O(\tau,\tau\varpi)]\ . %$\bar U=(\omega r_+x+k)^2-K+2ma\omega-\mu^2[r^2_+(1+x)^2+a^2]$.$$ It should be emphasized that the $\tau\ll1$ regime (\[Eq14\]) corresponds to rapidly-spinning exotic compact objects. In addition, it is worth noting that the physical significance of the field frequency $\omega=\omega_{\text{c}}$ \[which corresponds to $\varpi=0$, see Eq. (\[Eq10\])\] stems from the fact that, for classical spinning black-hole spacetimes, this unique resonant frequency characterizes the composed stationary black-hole-massive-scalar-field configurations studied in [@Hodrc; @Herkr]. Note, in particular, that in (\[Eq16\]) we have used the dimensionless relation $M\omega={1\over 2}m\cdot[1+O(\tau,\tau\varpi)]$ in the regime (\[Eq14\]) of rapidly-spinning exotic compact objects and field frequencies that lie in the vicinity of the critical superradiant frequency $\omega=\omega_{\text{c}}$ (or equivalently, in the vicinity of $\varpi=0$). The general mathematical solution of the radial differential equation (\[Eq15\]) can be expressed in terms of the familiar Whittaker functions [@Abram; @Morse]. In particular, defining the dimensionless variables $$\label{Eq17} \epsilon\equiv \sqrt{{\bar\mu}^2-m^2/4}\ \ \ \ ; \ \ \ \ \kappa\equiv {{m^2/2-{\bar\mu}^2}\over{\epsilon}}\ \ \ \ ; \ \ \ \ \delta^2\equiv -K-{1\over 4}+2(m^2-{\bar\mu}^2)\ ,$$ one finds [@Abram; @Morse; @Hodcm; @Notehc] $$\label{Eq18} R(x)=x^{-1}\big[A\cdot W_{\kappa,i\delta}(2\epsilon x)+B\cdot M_{\kappa,i\delta}(2\epsilon x)\big]\ , %R(x)=x^{-{1\over 2}+i\delta}e^{-\epsilon x}\big[A\cdot U({1\over %2}+i\delta-\kappa,1+2i\delta,2\epsilon x)+B\cdot M({1\over %2}+i\delta-\kappa,1+2i\delta,2\epsilon x)\big]\ ,$$ where $\{A,B\}$ are normalization constants. We shall henceforth assume that [@Notedp] $$\label{Eq19} \{\delta,\kappa\}\in \mathbb{R}\ .$$ Note that the assumption $\kappa\in \mathbb{R}$ corresponds to bound-state resonances of the massive scalar fields with $\omega^2<\mu^2$ \[see Eqs. (\[Eq8\]) and (\[Eq17\])\]. 
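As a concrete illustration of the dimensionless parameters (\[Eq17\]), consider the field parameters that will be used in Table \[Table1\] below ($l=m=10$, $M\omega=5$, $\mu=1.5\omega$) together with an angular eigenvalue $K\simeq 80$ of the kind computed in the previous section. The short sketch below (the numerical value of $K$ is an assumption taken from that estimate) evaluates $\epsilon$, $\kappa$ and $\delta$, and its last line anticipates the upper bound on $x_{\text{c}}$ derived in the next section.

```python
import numpy as np

m_az = 10                 # azimuthal harmonic index m
Momega, Mmu = 5.0, 7.5    # M*omega and M*mu for the Table 1 parameter set (illustrative)
K = 80.2                  # angular eigenvalue for c ~ 5.59 (assumed; see the spheroidal sketch above)

eps = np.sqrt(Mmu**2 - m_az**2 / 4.0)                    # epsilon of Eq. (17)
kappa = (m_az**2 / 2.0 - Mmu**2) / eps                   # kappa of Eq. (17)
delta = np.sqrt(-K - 0.25 + 2.0 * (m_az**2 - Mmu**2))    # delta of Eq. (17)

print(f"epsilon = {eps:.2f}, kappa = {kappa:.2f}, delta = {delta:.2f}")
# Expected to be close to the values quoted in Tables 1 and 2: epsilon ~ 5.59, kappa ~ -1.12, delta ~ 2.65.
print("upper bound on x_c (next section):", (kappa + np.sqrt(kappa**2 + delta**2)) / eps)
```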
The asymptotic spatial behavior of the radial scalar function (\[Eq18\]) is given by [@Noteasa] $$\label{Eq20} R(x\to\infty)=A\cdot x^{-1+\kappa}e^{-\epsilon x}+B\cdot {{\Gamma(1+2i\delta)}\over{\Gamma({1\over 2}+i\delta-\kappa)}}x^{-1-\kappa}e^{\epsilon x}\ .$$ Taking cognizance of the asymptotic boundary condition (\[Eq8\]), which characterizes the spatially regular bound-state (normalizable) configurations of the massive scalar fields, one concludes that the coefficient of the exploding exponent in the asymptotic radial expression (\[Eq20\]) should vanish: $$\label{Eq21} B=0\ .$$ We therefore find that the stationary bound-state resonances of the massive scalar fields in the horizonless curved spacetime of the rapidly-spinning exotic compact object are characterized by the spatially regular radial eigenfunction $$\label{Eq22} R(x)=A\cdot x^{-1} W_{\kappa,i\delta}(2\epsilon x)\ , %R(x)=A\cdot x^{-{1\over 2}+i\delta}e^{-\epsilon x} U({1\over %2}+i\delta-\kappa,1+2i\delta,2\epsilon x)\ ,$$ where $W_{\kappa,\beta}(z)$ is the familiar Whittaker function of the second kind [@Abram]. Taking cognizance of the inner boundary conditions (\[Eq9\]) at the reflective surface $x=x_{\text{c}}$ of the exotic compact object, together with the characteristic radial eigenfunction (\[Eq22\]) of the stationary bound-state massive scalar field configurations, one obtains the remarkably compact resonance conditions $$\label{Eq23} %U({1\over 2}+i\delta-\kappa,1+2i\delta,2\epsilon x_{\text{c}})=0 W_{\kappa,i\delta}(2\epsilon x_{\text{c}})=0\ \ \ \ \text{for}\ \ \ \ \text{Dirichlet B. C.}\$$ and $$\label{Eq24} {{d}\over{dx}}[x^{-1}W_{\kappa,i\delta}(2\epsilon x)]_{x=x_{\text{c}}}=0\ \ \ \ \text{for}\ \ \ \ \text{Neumann B. C.}\$$ for the composed exotic-compact-object-linearized-massive-scalar-field configurations. The analytically derived resonance conditions (\[Eq23\]) and (\[Eq24\]) determine the dimensionless [*discrete*]{} radii $\{x_{\text{c}}(\mu,l,m,M,a;n)\}$ of the rapidly-spinning exotic compact objects which can support the stationary bound-state spatially regular massive scalar field configurations. In the next section we shall use analytical techniques in order to prove that the resonance conditions (\[Eq23\]) and (\[Eq24\]) for the characteristic dimensionless radii of the central exotic compact objects can only be satisfied in the bounded radial regime $$\label{Eq25} \epsilon x_{\text{c}}<\kappa+\sqrt{\kappa^2+\delta^2}\ .$$ Upper bound on the radii of the supporting exotic compact objects ================================================================= In the present section we shall use a simple analytical argument in order to derive a remarkably compact upper bound on the characteristic dimensionless radii $\{x_{\text{c}}(\mu,l,m,M,a;n)\}$ of the rapidly-spinning exotic compact objects which can support the stationary bound-state massive scalar field configurations. 
It proves useful to define the new radial function $$\label{Eq26} R=x^{\gamma}\Phi\ \ \ \text{with}\ \ \ \gamma\geq -1\ ,$$ in terms of which the radial differential equation (\[Eq15\]) can be written in the form $$\label{Eq27} x^2{{d^2\Phi}\over{dx^2}}+2(1+\gamma)x{{d\Phi}\over{dx}}+[{\bar U}(x)+\gamma(\gamma+1)]\Phi=0\ .$$ Taking cognizance of the boundary conditions (\[Eq8\]) and (\[Eq9\]), one concludes that the eigenfunction $\Phi(x)$, which characterizes the spatially regular stationary bound-state configurations of the massive scalar fields in the background of the exotic compact object, must have (at least) one inflection point, $x=x_{\text{in}}$, in the radial interval $$\label{Eq28} x_{\text{in}}\in (x_{\text{c}},\infty)\$$ which is characterized by the functional relations $$\label{Eq29} \{\Phi\cdot{{d\Phi}\over{dx}}<0\ \ \ \text{and}\ \ \ {{d^2\Phi}\over{dx^2}}=0\}\ \ \ \ \text{for}\ \ \ \ x=x_{\text{in}}\ .$$ Substituting the characteristic relations (\[Eq29\]) into the radial scalar equation (\[Eq27\]), one concludes that the composed exotic-compact-object-linearized-massive-scalar-field configurations are characterized by the inequality $$\label{Eq30} {\bar U}(x_{\text{in}})+\gamma(\gamma+1)>0\ ,$$ which implies \[see Eq. (\[Eq16\])\] $$\label{Eq31} \epsilon^2\cdot x^2_{\text{in}}-2\epsilon\kappa\cdot x_{\text{in}}-[\delta^2+{1\over4}+\gamma(\gamma+1)]<0\ .$$ Taking cognizance of (\[Eq28\]) and (\[Eq31\]), one finds the upper bound $$\label{Eq32} x_{\text{c}}<{{\kappa+\sqrt{\kappa^2+\delta^2+{1\over4}+\gamma(\gamma+1)}}\over{\epsilon}}\ .$$ The strongest upper bound on the dimensionless radius $x_{\text{c}}$ can be obtained by minimizing the r.h.s of (\[Eq32\]). In particular, the term $\gamma(\gamma+1)$ is minimized for $\gamma=-1/2$, in which case one finds from (\[Eq32\]) the characteristic upper bound $$\label{Eq33} x_{\text{c}}<{{\kappa+\sqrt{\kappa^2+\delta^2}}\over{\epsilon}}\ .$$ on the dimensionless radii of the rapidly-spinning exotic compact objects which can support the stationary bound-state massive scalar field configurations. The characteristic resonance spectra of the stationary composed exotic-compact-object-linearized-massive-scalar-field configurations ==================================================================================================================================== Interestingly, as we shall now prove explicitly, the resonance equations (\[Eq23\]) and (\[Eq24\]), which determine the discrete radii $\{x_{\text{c}}(\mu,l,m,M,a;n)\}$ of the supporting exotic compact objects, can be solved [*analytically*]{} in the physically interesting regime $$\label{Eq34} x_{\text{c}}\ll1\ .$$ It is worth emphasizing again that the strong inequality (\[Eq34\]) characterizes exotic compact objects whose quantum reflective surfaces are located in the radial vicinity of the would-be classical black-hole horizons \[see Eqs. (\[Eq2\]) and (\[Eq3\])\]. In particular, the small-$x_{\text{c}}$ regime (\[Eq34\]) corresponds to the physically interesting model of spinning exotic compact objects recently studied numerically in [@Pan] (see also [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12]). Using Eqs. 
13.1.3, 13.1.33, and 13.5.5 of [@Abram], one can express the (Dirichlet) resonance condition (\[Eq23\]) in the form $$\label{Eq35} (2\epsilon x_{\text{c}})^{2i\delta}={{\Gamma(1+2i\delta)\Gamma({1\over 2}-i\delta-\kappa)}\over{\Gamma(1-2i\delta)\Gamma({1\over 2}+i\delta-\kappa)}}\ \ \ \ \ \text{for}\ \ \ \ x_{\text{c}}\ll1\ .$$ Likewise, using Eqs. 13.1.3, 13.1.33, 13.4.33, and 13.5.5 of [@Abram], one can express the (Neumann) resonance condition (\[Eq24\]) in the form $$\label{Eq36} (2\epsilon x_{\text{c}})^{2i\delta}={{\Gamma(2+2i\delta)\Gamma({1\over 2}-i\delta-\kappa)}\over{\Gamma(2-2i\delta)\Gamma({1\over 2}+i\delta-\kappa)}}\ \ \ \ \ \text{for}\ \ \ \ x_{\text{c}}\ll1\ .$$ From Eqs. (\[Eq35\]) and (\[Eq36\]) one obtains respectively the analytical formulas [@Notenn] $$\label{Eq37} x^{\text{Dirichlet}}_{\text{c}}(n)={{e^{-\pi n/\delta}}\over{2\epsilon}}\Big[{{\Gamma(1+2i\delta)\Gamma({1\over 2}-i\delta-\kappa)}\over{\Gamma(1-2i\delta)\Gamma({1\over 2}+i\delta-\kappa)}}\Big]^{1/2i\delta}\ \ \ ; \ \ \ n\in\mathbb{Z}$$ and $$\label{Eq38} x^{\text{Neumann}}_{\text{c}}(n)={{e^{-\pi n/\delta}}\over{2\epsilon}}\Big[{{\Gamma(2+2i\delta)\Gamma({1\over 2}-i\delta-\kappa)}\over{\Gamma(2-2i\delta)\Gamma({1\over 2}+i\delta-\kappa)}}\Big]^{1/2i\delta}\ \ \ ; \ \ \ n\in\mathbb{Z}\$$ for the dimensionless [*discrete*]{} radii of the horizonless rapidly-spinning exotic compact objects which can support the stationary bound-state massive scalar field configurations [@Notekp0]. It is worth noting that, taking cognizance of Eq. 6.1.23 of [@Abram], one deduces that $\Gamma(1+2i\delta)/\Gamma(1-2i\delta)=e^{i\phi_1}$, $\Gamma(2+2i\delta)/\Gamma(2-2i\delta)=e^{i\phi_2}$, and $\Gamma(1/2-i\delta-\kappa)/\Gamma(1/2+i\delta-\kappa)=e^{i\phi_3}$ for $\{\delta,\kappa\}\in \mathbb{R}$ \[see Eq. (\[Eq19\])\], where $\{\phi_1,\phi_2,\phi_3\}\in\mathbb{R}$. These relations imply that $\{x^{\text{Dirichlet}}_{\text{c}}(n),x^{\text{Neumann}}_{\text{c}}(n)\}\in\mathbb{R}$. The eikonal large-mass $M\mu\gg1$ regime ======================================== In the present section we shall show that the analytically derived resonance spectra (\[Eq37\]) and (\[Eq38\]), which characterize the composed exotic-compact-object-linearized-massive-scalar-field configurations, can be further simplified in the asymptotic eikonal regime $$\label{Eq39} m\gg1\$$ of large field masses [@Notelmlm]. In the regime (\[Eq39\]) one finds the compact asymptotic expression [@Hodpp; @Notewlm] $$\label{Eq40} K_{mm}={5\over 4}m^2-{\bar\mu}^2+\sqrt{{3\over4}m^2+{\bar\mu}^2}+O(1)\ ,$$ which implies \[see Eq. 
(\[Eq17\])\] $$\label{Eq41} \delta=\sqrt{{3\over 4}m^2-{\bar\mu}^2-\sqrt{{3\over4}m^2+{\bar\mu}^2}}\cdot[1+O(\tau,\tau\varpi)]\ .$$ In addition, in the large-$m$ regime one may use the asymptotic approximations [@Abram] $$\label{Eq42} {{\Gamma(2i\delta)}\over{\Gamma(-2i\delta)}}=i\Big({{2\delta}\over{e}}\Big)^{4i\delta}\cdot[1+O(m^{-1})]\$$ and $$\label{Eq43} {{\Gamma({1\over 2}-i\delta-\kappa)}\over{\Gamma({1\over 2}+i\delta-\kappa)}}=e^{2i\delta}(\delta^2+\kappa^2)^{-i\delta}e^{2i\pi\kappa(1-\theta)}[1+O(m^{-1})]\ \ \ \text{where}\ \ \ \theta\equiv \pi^{-1}\arctan(\delta/\kappa)\ .$$ Substituting (\[Eq42\]) and (\[Eq43\]) into the resonant spectra (\[Eq37\]) and (\[Eq38\]), one finds respectively the simplified expressions [@Notegmf] $$\label{Eq44} x^{\text{Dirichlet}}_{\text{c}}(n)={{2e^{\pi[\kappa(1-\theta)-1/4-n]/\delta}} \over{e\sqrt{\delta^2+\kappa^2}\epsilon}}\ \ \ ; \ \ \ n\in\mathbb{Z}$$ and $$\label{Eq45} x^{\text{Neumann}}_{\text{c}}(n)={{2e^{\pi[\kappa(1-\theta)+1/4-n]/\delta}} \over{e\sqrt{\delta^2+\kappa^2}\epsilon}}\ \ \ ; \ \ \ n\in\mathbb{Z}$$ for the dimensionless discrete radii of the rapidly-spinning exotic compact objects which can support the stationary bound-state configurations of the spatially regular massive scalar fields.

Numerical confirmation
======================

It is of physical interest to confirm the validity of the compact analytical formulas (\[Eq37\]) and (\[Eq38\]) for the discrete radii of the exotic compact objects which can support the stationary bound-state massive scalar field configurations. In Table \[Table1\] we present the dimensionless radii $x^{\text{analytical}}_{\text{c}}(n)$ of the horizonless exotic compact objects with reflecting Dirichlet boundary conditions as calculated from the analytically derived formula (\[Eq37\]). We also present the corresponding dimensionless radii $x^{\text{numerical}}_{\text{c}}(n)$ of the rapidly-spinning exotic compact objects as obtained from a direct numerical solution of the compact resonance condition (\[Eq23\]). The numerically computed roots of the Whittaker function $W_{\kappa,i\delta}(2\epsilon x)$ and of its spatial derivative \[see the analytically derived resonance equations (\[Eq23\]) and (\[Eq24\])\] were obtained with the WolframAlpha computational knowledge engine. In the physically interesting $x_{\text{c}}\ll1$ regime [@Notephy] \[see Eq. (\[Eq3\])\], one finds a remarkably good agreement between the approximated discrete radii of the horizonless exotic compact objects \[as calculated from the analytical formula (\[Eq37\])\] and the corresponding exact radii of the rapidly-spinning exotic compact objects \[as obtained numerically from the characteristic resonance equation (\[Eq23\])\]. It is worth emphasizing the fact that the characteristic dimensionless radii $x^{\text{Dirichlet}}_{\text{c}}(n)$ of the supporting exotic compact objects, as presented in Table \[Table1\], conform to the analytically derived upper bound (\[Eq33\]).

|                               | $x^{\text{Dir}}_{\text{c}}(n=0)$ | $x^{\text{Dir}}_{\text{c}}(n=1)$ | $x^{\text{Dir}}_{\text{c}}(n=2)$ | $x^{\text{Dir}}_{\text{c}}(n=3)$ | $x^{\text{Dir}}_{\text{c}}(n=4)$ | $x^{\text{Dir}}_{\text{c}}(n=5)$ |
|-------------------------------|---------|---------|---------|---------|---------|---------|
| Analytical \[Eq. (\[Eq37\])\] | 0.07998 | 0.02449 | 0.00750 | 0.00229 | 0.00070 | 0.00022 |
| Numerical \[Eq. (\[Eq23\])\]  | 0.08690 | 0.02503 | 0.00754 | 0.00230 | 0.00070 | 0.00022 |

: Composed stationary exotic-compact-object-massive-scalar-field configurations. We present the dimensionless discrete radii $x^{\text{Dirichlet}}_{\text{c}}(n)$ of the rapidly-spinning exotic compact objects with reflective Dirichlet boundary conditions as calculated from the analytically derived compact formula (\[Eq37\]). We also present the corresponding dimensionless radii of the horizonless exotic compact objects as obtained from a direct numerical solution of the characteristic resonance condition (\[Eq23\]). The data presented are for massive scalar field configurations with angular harmonic indices $l=m=10$, $M\omega=5$, and $\mu=1.5\omega$ \[these field parameters correspond to $\epsilon=5.59$, $\kappa=-1.12$, and $\delta=2.65$, see Eq. (\[Eq17\])\]. One finds a remarkably good agreement between the approximated discrete radii of the rapidly-spinning exotic compact objects \[as calculated from the analytically derived compact formula (\[Eq37\])\] and the corresponding exact radii of the horizonless exotic compact objects \[as obtained from a direct numerical solution of the Dirichlet resonance equation (\[Eq23\])\]. Note that the dimensionless radii $x^{\text{Dirichlet}}_{\text{c}}(n)$ of the supporting rapidly-spinning exotic compact objects conform to the analytically derived upper bound (\[Eq33\]).[]{data-label="Table1"}

In Table \[Table2\] we display the dimensionless discrete radii $x^{\text{analytical}}_{\text{c}}(n)$ of the rapidly-spinning exotic compact objects with reflecting Neumann boundary conditions as calculated from the analytically derived formula (\[Eq38\]). We also present the corresponding dimensionless radii $x^{\text{numerical}}_{\text{c}}(n)$ of the horizonless exotic compact objects as obtained numerically from the resonance condition (\[Eq24\]). Again, in the physically interesting $x_{\text{c}}\ll1$ regime [@Notephy] \[see Eq. (\[Eq3\])\], one finds a remarkably good agreement between the approximated discrete radii of the rapidly-spinning exotic compact objects \[as calculated from the analytically derived formula (\[Eq38\])\] and the corresponding exact radii of the horizonless exotic compact objects \[as obtained numerically from the characteristic resonance equation (\[Eq24\])\]. It is worth pointing out that the characteristic dimensionless radii $x^{\text{Neumann}}_{\text{c}}(n)$ of the supporting exotic compact objects, as presented in Table \[Table2\], conform to the analytically derived upper bound (\[Eq33\]).

|                               | $x^{\text{Neu}}_{\text{c}}(n=0)$ | $x^{\text{Neu}}_{\text{c}}(n=1)$ | $x^{\text{Neu}}_{\text{c}}(n=2)$ | $x^{\text{Neu}}_{\text{c}}(n=3)$ | $x^{\text{Neu}}_{\text{c}}(n=4)$ | $x^{\text{Neu}}_{\text{c}}(n=5)$ |
|-------------------------------|---------|---------|---------|---------|---------|---------|
| Analytical \[Eq. (\[Eq38\])\] | 0.13476 | 0.04125 | 0.01263 | 0.00387 | 0.00118 | 0.00036 |
| Numerical \[Eq. (\[Eq24\])\]  | 0.16106 | 0.04291 | 0.01277 | 0.00388 | 0.00118 | 0.00036 |

: Composed stationary exotic-compact-object-massive-scalar-field configurations. We present the dimensionless discrete radii $x^{\text{Neumann}}_{\text{c}}(n)$ of the horizonless exotic compact objects with reflective Neumann boundary conditions as calculated from the analytically derived formula (\[Eq38\]). We also present the corresponding dimensionless radii of the rapidly-spinning exotic compact objects as obtained from a direct numerical solution of the characteristic resonance condition (\[Eq24\]). The data presented are for massive scalar field configurations with angular harmonic indices $l=m=10$, $M\omega=5$, and $\mu=1.5\omega$ \[these field parameters correspond to $\epsilon=5.59$, $\kappa=-1.12$, and $\delta=2.65$, see Eq. (\[Eq17\])\]. One finds a remarkably good agreement between the approximated discrete radii of the horizonless exotic compact objects \[as calculated from the analytically derived compact formula (\[Eq38\])\] and the corresponding exact radii of the rapidly-spinning exotic compact objects \[as obtained from a direct numerical solution of the Neumann resonance equation (\[Eq24\])\]. Note that the dimensionless radii $x^{\text{Neumann}}_{\text{c}}(n)$ of the supporting rapidly-spinning exotic compact objects conform to the analytically derived upper bound (\[Eq33\]).[]{data-label="Table2"}
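For readers who wish to reproduce comparisons of the kind presented in Tables \[Table1\] and \[Table2\], the following is a minimal numerical sketch (not part of the original analysis, which used WolframAlpha for the root finding): it evaluates the analytical Dirichlet spectrum (\[Eq37\]) with the Table \[Table1\] parameters $\epsilon=5.59$, $\kappa=-1.12$, $\delta=2.65$, and then refines each analytical estimate into a root of the exact resonance condition $W_{\kappa,i\delta}(2\epsilon x_{\text{c}})=0$ \[Eq. (\[Eq23\])\] using Python's `mpmath` library (assumed to be available). Note that the labelling of the resonances by $n$ is defined only up to the overall factor $e^{-\pi n/\delta}$, so the printed values should be compared with Table \[Table1\] up to this relabelling.

```python
# Illustrative check of Eq. (37) against Eq. (23); assumes the mpmath package.
from mpmath import mp, mpc, mpf, gamma, whitw, findroot, exp, pi, re

mp.dps = 30                               # working precision (decimal digits)
eps, kappa, dlt = mpf('5.59'), mpf('-1.12'), mpf('2.65')   # Table 1 parameters
i2d = mpc(0, 2*dlt)                       # the combination 2*i*delta

def x_dirichlet(n):
    """Analytical radius x_c(n) of Eq. (37); real up to round-off, since the
    bracketed ratio of Gamma functions has unit modulus."""
    ratio = (gamma(1 + i2d)*gamma(mpf(1)/2 - i2d/2 - kappa)) / \
            (gamma(1 - i2d)*gamma(mpf(1)/2 + i2d/2 - kappa))
    return re(exp(-pi*n/dlt)/(2*eps) * ratio**(1/i2d))

def dirichlet_lhs(x):
    """Left-hand side of the Dirichlet resonance condition, Eq. (23)."""
    return re(whitw(kappa, mpc(0, dlt), 2*eps*x))   # W_{kappa,i*delta}(2*eps*x)

for n in range(4):
    x_ana = x_dirichlet(n)                  # small-x analytical approximation
    x_num = findroot(dirichlet_lhs, x_ana)  # exact root seeded by the estimate
    print(n, x_ana, x_num)
```

The Neumann spectrum (\[Eq38\]) can be checked in the same way by replacing $\Gamma(1\pm 2i\delta)$ with $\Gamma(2\pm 2i\delta)$ and by seeking roots of the Neumann condition (\[Eq24\]) instead of (\[Eq23\]).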
Summary and Discussion
======================

Some candidate quantum-gravity models [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12; @Pan] have recently put forward the intriguing idea that quantum effects may prevent the formation of stable black-hole horizons. These models have suggested, in particular, that within the framework of a quantum theory of gravity, horizonless exotic compact objects may serve as alternatives to classical black-hole spacetimes. Motivated by this intriguing prediction, we have raised here the following physically interesting question: Can horizonless exotic compact objects with reflective boundary conditions [@Pan] (see also [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12]) support spatially regular massive scalar field configurations in their exterior regions?

In order to address this intriguing question, in the present paper we have solved [*analytically*]{} the Klein-Gordon wave equation for a stationary linearized scalar field of mass $\mu$, proper frequency $\omega_{\text{c}}$, and spheroidal harmonic indices $(l,m)$ in the background of a rapidly-spinning \[see Eq. (\[Eq14\])\] exotic compact object of mass $M$ and angular momentum $J=Ma$. The main physical results derived in the present paper are as follows:

\(1) It was proved that the compact upper bound \[see Eqs. (\[Eq17\]) and (\[Eq33\])\] $$\label{Eq46} x_{\text{c}}<{{\kappa+\sqrt{\kappa^2+\delta^2}}\over{\epsilon}}\$$ on the dimensionless radius of the spinning exotic compact object provides a necessary condition for the existence of the composed stationary exotic-compact-object-massive-scalar-field configurations.

\(2) We have shown that, for a given set $(\mu,l,m)$ of the physical parameters which characterize the massive scalar field, there exists a [*discrete*]{} spectrum of radii $\{r_{\text{c}}(\mu,l,m,M,a;n)\}$ of the rapidly-spinning exotic compact objects which can support the stationary bound-state massive scalar field configurations. In particular, it was proved analytically that the compact resonance conditions \[see Eqs. (\[Eq23\]) and (\[Eq24\])\] $$\label{Eq47} W_{\kappa,i\delta}(2\epsilon x^{\text{Dirichlet}}_{\text{c}})=0\ \ \ \ ; \ \ \ \ {{d}\over{dx}}[x^{-1}W_{\kappa,i\delta}(2\epsilon x)]_{x=x^{\text{Neumann}}_{\text{c}}}=0\$$ determine the characteristic discrete sets of supporting radii which characterize the rapidly-spinning reflective exotic compact objects.
\(3) It was explicitly shown that the physical properties of the composed stationary exotic-compact-object-massive-scalar-field configurations can be studied [*analytically*]{} in the physically interesting $x_{\text{c}}\ll1$ regime [@Notephy] \[see Eq. (\[Eq3\])\]. In particular, we have used analytical techniques in order to derive the compact formula \[see Eqs. (\[Eq37\]) and (\[Eq38\])\] $$\label{Eq48} x_{\text{c}}(n)={{e^{-\pi n/\delta}}\over{2\epsilon}}\Big[\nabla{{\Gamma(1+2i\delta)\Gamma({1\over 2}-i\delta-\kappa)}\over{\Gamma(1-2i\delta)\Gamma({1\over 2}+i\delta-\kappa)}}\Big]^{1/2i\delta}\ \ \ \ ; \ \ \ \ \nabla=\begin{cases} 1 &\ \ \ \ \text{Dirichlet B. C.}\ \\ {{1+2i\delta}\over{1-2i\delta}} &\ \ \ \ \text{Neumann B. C.}\ \ \end{cases} %\ \ \ ; \ \ \ n\in\mathbb{Z}$$ for the discrete spectra of reflecting radii $\{x_{\text{c}}(\mu,l,m,M,a;n)\}$ which characterize the rapidly-spinning exotic compact objects that can support the stationary bound-state massive scalar field configurations. \(4) We have explicitly shown that, in the physically interesting $x_{\text{c}}\ll1$ regime [@Notephy], the analytically derived formulas (\[Eq37\]) and (\[Eq38\]) for the discrete radii of the horizonless exotic compact objects that can support the stationary bound-state massive scalar field configurations agree with direct numerical computations of the corresponding radii of the rapidly-spinning exotic compact objects. \(5) It is worth emphasizing again that the composed spinning-exotic-compact-object-massive-scalar-field configurations that we have studied in the present paper, like the composed spinning-black-hole-massive-scalar-field configurations studied recently in [@Hodrc; @Herkr], owe their existence to the intriguing physical phenomenon of superradiant scattering in rotating spacetimes [@Frid; @Bri]. In particular, the spatially regular stationary massive scalar field configurations (\[Eq22\]) are characterized by the critical (marginal) frequency $\omega_{\text{c}}$ for the superradiant scattering phenomenon of bosonic (integer-spin) fields in spinning spacetimes \[see Eqs. (\[Eq10\]) and (\[Eq14\])\]. \(6) Finally, we would like to stress the fact that, combining the results of the present paper with the results presented in [@Hodrc; @Herkr], one deduces that both spinning black holes and horizonless spinning exotic compact objects with reflecting surfaces can support spatially regular configurations of stationary massive scalar fields in their exterior regions. It is important to note, however, that for given physical parameters $\{M,a\}$ of the central supporting object, the discrete resonant spectra $\{\mu(M,a)\}$ of the allowed field masses are different. Thus, our analytically derived theoretical results may one day be of practical observational importance since the different resonant spectra of the external stationary massive scalar field configurations may help astronomers to distinguish horizonless spinning exotic compact objects from genuine black holes. In particular, it is worth pointing out that the regime of existence of the composed spinning-black-hole-massive-scalar-field configurations is characterized, in the rapidly-rotating $a/M\to 1$ limit, by the dimensionless relations $m/2<M\mu<m/\sqrt{2}$ [@Hodrc]. On the other hand, the composed spinning-exotic-compact-object-massive-scalar-field configurations exist for all real values of the physical parameters $\delta$ and $\kappa$ \[see Eq. (\[Eq19\])\]. 
Taking cognizance of the large-$m$ relation (\[Eq41\]), one deduces that the horizonless spinning configurations studied in the present paper have the [*larger*]{} regime of existence $m/2<M\mu<\sqrt{3}m/2$. [**ACKNOWLEDGMENTS**]{} This research is supported by the Carmel Science Foundation. I thank Yael Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B. Tea for stimulating discussions. [99]{} P. O. Mazur and E. Mottola, arXiv:gr-qc/0109035. S. D. Mathur, Fortsch. Phys. [**53**]{}, 793 (2005). C. B. M. H. Chirenti and L. Rezzolla, Class. and Quant. Grav. [**24**]{}, 4191 (2007). K. Skenderis and M. Taylor, Phys. Rept. [**467**]{}, 117 (2008). P. Pani, E. Berti, V. Cardoso, Y. Chen, and R. Norte, Phys. Rev. D [**80**]{}, 124047 (2009). A. Almheiri, D. Marolf, J. Polchinski, and J. Sully, JHEP [**02**]{}, 062 (2013). V. Cardoso, L. C. B. Crispino, C. F. B. Macedo, H. Okawa, and P. Pani, Phys. Rev. D, [**90**]{}, 044069 (2014). M. Saravani, N. Afshordi, and R. B. Mann, Int. J. Mod. Phys. D [**23**]{}, 1443007 (2015). C. Chirenti and L. Rezzolla, Phys. Rev. D [**94**]{}, 084016 (2016). J. Abedi, H. Dykaar, and N. Afshordi, arXiv:1612.00266. B. Holdom and J. Ren, arXiv:1612.04889. C. Barcelo, R. Carballo-Rubio, and L. J. Garay, arXiv:1701.09156. E. Maggio, P. Pani, and V. Ferrari, arXiv:1703.03696. S. Hod, Phys. Rev. D [**86**]{}, 104026 (2012) \[arXiv:1211.3202\]; S. Hod, The Euro. Phys. Journal C [**73**]{}, 2378 (2013) \[arXiv:1311.5298\]; S. Hod, Phys. Rev. D [**90**]{}, 024051 (2014) \[arXiv:1406.1179\]; S. Hod, Phys. Lett. B [**739**]{}, 196 (2014) \[arXiv:1411.2609\]; S. Hod, Class. and Quant. Grav. [**32**]{}, 134002 (2015) \[arXiv:1607.00003\]; S. Hod, Phys. Lett. B [**751**]{}, 177 (2015); S. Hod, Class. and Quant. Grav. [**33**]{}, 114001 (2016); S. Hod, Phys. Lett. B [**758**]{}, 181 (2016) \[arXiv:1606.02306\]; S. Hod and O. Hod, Phys. Rev. D [**81**]{}, 061502 Rapid communication (2010) \[arXiv:0910.0734\]; S. Hod, Phys. Lett. B [**708**]{}, 320 (2012) \[arXiv:1205.1872\]; S. Hod, Jour. of High Energy Phys. [**01**]{}, 030 (2017) \[arXiv:1612.00014\]. C. A. R. Herdeiro and E. Radu, Phys. Rev. Lett. [**112**]{}, 221101 (2014); C. L. Benone, L. C. B. Crispino, C. Herdeiro, and E. Radu, Phys. Rev. D [**90**]{}, 104024 (2014); C. A. R. Herdeiro and E. Radu, Phys. Rev. D [**89**]{}, 124018 (2014); C. A. R. Herdeiro and E. Radu, Int. J. Mod. Phys. D [**23**]{}, 1442014 (2014); Y. Brihaye, C. Herdeiro, and E. Radu, Phys. Lett. B [**739**]{}, 1 (2014); J. C. Degollado and C. A. R. Herdeiro, Phys. Rev. D [**90**]{}, 065019 (2014); C. Herdeiro, E. Radu, and H. Rúnarsson, Phys. Lett. B [**739**]{}, 302 (2014); C. Herdeiro and E. Radu, Class. Quantum Grav. [**32**]{} 144001 (2015); C. A. R. Herdeiro and E. Radu, Int. J. Mod. Phys. D [**24**]{}, 1542014 (2015); C. A. R. Herdeiro and E. Radu, Int. J. Mod. Phys. D [**24**]{}, 1544022 (2015); P. V. P. Cunha, C. A. R. Herdeiro, E. Radu, and H. F. Rúnarsson, Phys. Rev. Lett. [**115**]{}, 211102 (2015); B. Kleihaus, J. Kunz, and S. Yazadjiev, Phys. Lett. B [**744**]{}, 406 (2015); C. A. R. Herdeiro, E. Radu, and H. F. Rúnarsson, Phys. Rev. D [**92**]{}, 084059 (2015); C. Herdeiro, J. Kunz, E. Radu, and B. Subagyo, Phys. Lett. B [**748**]{}, 30 (2015); C. A. R. Herdeiro, E. Radu, and H. F. Rúnarsson, Class. Quant. Grav. [**33**]{}, 154001 (2016); C. A. R. Herdeiro, E. Radu, and H. F. Rúnarsson, Int. J. Mod. Phys. D [**25**]{}, 1641014 (2016); Y. Brihaye, C. Herdeiro, and E. Radu, Phys. Lett. B [**760**]{}, 279 (2016); Y. Ni, M. Zhou, A. C. 
Avendano, C. Bambi, C. A. R. Herdeiro, and E. Radu, JCAP [**1607**]{}, 049 (2016); M. Wang, arXiv:1606.00811 . Here the integer $n$ is the resonance parameter of the composed exotic-compact-object-linearized-massive-scalar-field configurations \[see Eqs. (\[Eq37\]) and (\[Eq38\]) below\]. A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper, and J. March-Russell, Phys. Rev.D [**81**]{}, 123530 (2010). A. Arvanitaki and S. Dubovsky, Phys. Rev.D [**83**]{}, 044026 (2011). W. Z. Feng, G. Shiu, P. Soler, and F. Ye, Phys. Rev. Lett. [**113**]{}, 061802 (2014). L. Hui, J. P. Ostriker, S. Tremaine, and E. Witten, Phys. Rev. D [**95**]{}, 043541 (2017). See also https://en.wikipedia.org/wiki/Dark-photon. J. L. Friedman, Commun. in Math. Phys. [**63**]{}, 243 (1978). R. Brito, V. Cardoso, and P. Pani, Lect. Notes Phys. [**906**]{}, 1 (2015). S. Chandrasekhar, [*The Mathematical Theory of Black Holes*]{}, (Oxford University Press, New York, 1983). Following [@Pan], we shall assume that the energy and angular momentum of the reflective quantum membrane are negligible. P. Pani, Phys. Rev. D [**92**]{}, 124030 (2015). N. Uchikata, S. Yoshida, and P. Pani, Phys. Rev. D [**94**]{}, 064015 (2016). K. Yagi and N. Yunes, Phys. Rev. D [**91**]{}, 123008 (2015). S. A. Teukolsky, Phys. Rev. Lett. [**29**]{}, 1114 (1972); S. A. Teukolsky, Astrophys. J. [**185**]{}, 635 (1973). T. Hartman, W. Song, and A. Strominger, JHEP 1003:118 (2010). Note that the proper mass of the scalar field stands for $\mu/\hbar$. This characteristic physical parameter therefore has the dimensions of $($length$)^{-1}$. Here $\{\omega,l,m\}$ are respectively the characteristic proper frequency, the spheroidal harmonic index, and the azimuthal harmonic index (with $l\geq |m|$) of the stationary spatially regular scalar field configuration. A. Ronveaux, [*Heun’s differential equations*]{}. (Oxford University Press, Oxford, UK, 1995); C. Flammer, [*Spheroidal Wave Functions*]{} (Stanford University Press, Stanford, 1957). P. P. Fiziev, e-print arXiv:0902.1277; R. S. Borissov and P. P. Fiziev, e-print arXiv:0903.3617; P. P. Fiziev, Phys. Rev. D [**80**]{}, 124001 (2009); P. P. Fiziev, Class. Quant. Grav. [**27**]{}, 135001 (2010). M. Abramowitz and I. A. Stegun, [*Handbook of Mathematical Functions*]{} (Dover Publications, New York, 1970). S. Hod, Phys. Rev. Lett. [**100**]{}, 121101 (2008) \[arXiv:0805.3873\]. S. Hod, Phys. Lett. B [**717**]{}, 462 (2012) \[arXiv:1304.0529\]; S. Hod, Phys. Rev. D [**87**]{}, 064017 (2013) \[arXiv:1304.4683\]; S. Hod, Phys. Lett. B [**746**]{}, 365 (2015) \[arXiv:1506.04148\]. T. Damour, N. Deruelle and R. Ruffini, Lett. Nuovo Cimento [**15**]{}, 257 (1976); S. Detweiler, Phys. Rev. D [**22**]{}, 2323 (1980); V. Cardoso and J. P. S. Lemos, Phys. Lett. B [**621**]{}, 219 (2005); V. Cardoso and S. Yoshida, JHEP 0507:009 (2005); S. R. Dolan, Phys. Rev. D [**76**]{}, 084001 (2007); S. Hod and O. Hod, e-print arXiv:0912.2761.; H. R. Beyer, J. Math. Phys. [**52**]{}, 102502 (2011); Y. S. Myung, Phys. Rev. D [**84**]{}, 024048 (2011); S. Hod, Phys. Lett. B [**708**]{}, 320 (2012) \[arXiv:1205.1872\]; S. Hod, Phys. Lett. B [**713**]{}, 505 (2012); J. P. Lee, JHEP [**1201**]{}, 091 (2012); J. P. Lee, Mod. Phys. Lett. A [**27**]{}, 1250038 (2012); S. Hod, Phys. Lett. B [**718**]{}, 1489 (2013) \[arXiv:1304.6474\]; S. R. Dolan, Phys. Rev. D [**87**]{}, 124026 (2013); H. Witek, V. Cardoso, A. Ishibashi, and U. Sperhake, Phys. Rev. D [**87**]{}, 043513 (2013); V. Cardoso, Gen. Relativ. and Gravit. 
[**45**]{}, 2079 (2013); J. C. Degollado and C. A. R. Herdeiro, Gen. Rel. Grav. [**45**]{}, 2483 (2013); R. Li, The Euro. Phys. Journal C [**73**]{}, 2274 (2013); S. J. Zhang, B. Wang, E. Abdalla, arXiv:1306.0932; H. Witek, arXiv:1307.1145; Y. S. Myung, Phys. Rev. D [**88**]{}, 104017 (2013); R. Li, Phys. Rev. D [**88**]{}, 127901 (2013); R. Brito, V. Cardoso, and P. Pani, Phys. Rev. D [**88**]{}, 023514 (2013); S. Hod, Phys. Lett. B [**739**]{}, 196 (2014) \[arXiv:1411.2609\]; H. Okawa, H. Witek, and V. Cardoso, Phys. Rev. D [**89**]{}, 104032 (2014); B. Arderucio, arXiv:1404.3421; M. O. P. Sampaio, C. Herdeiro, M. Wang, Phys. Rev. D [**90**]{}, 064004 (2014); S. Hod, Phys. Rev. D [**91**]{}, 044047 (2015) \[arXiv:1504.00009\]; H. M. Siahaan, Int. J. Mod. Phys. D [**24**]{}, 1550102 (2015); R. Brito, V. Cardoso, and P. Pani, Lect. Notes Phys. [**906**]{}, 1 (2015); S. Hod, Phys. Lett. B [**749**]{}, 167 (2015) \[arXiv:1510.05649\]; J. W. Gerow and A. Ritz, Phys. Rev. D [**93**]{}, 044043 (2016); S. Hod, Phys. Lett. B [**758**]{}, 181 (2016) \[arXiv:1606.02306\]; Y. Huang and D. J. Liu, arXiv:1606.08913 . S. Hod, Phys. Rev. D [**94**]{}, 044036 (2016) \[arXiv:1609.07146\]. P. M. Morse and H. Feshbach, [*Methods of Theoretical Physics*]{} (McGraw-Hill, New York, 1953). Here $M_{\kappa,\beta}(z)$ and $W_{\kappa,\beta}(z)$ are the Whittaker function of the first kind and the Whittaker function of the second kind [@Abram], respectively. One may assume, without loss of generality, that $\delta>0$. Here we have used Eqs. 13.1.4, 13.1.8, 13.1.32, and 13.1.33 of [@Abram]. Here we have used the relation $1=e^{-2i\pi n}$, where the integer $n$ is the resonance parameter of the composed exotic-compact-object-massive-scalar-field configurations. It is worth noting that the analytically derived resonance spectra (\[Eq37\]) and (\[Eq38\]), which characterize the composed exotic-compact-object-linearized-massive-scalar-field configurations, can be further simplified in the asymptotic regime $\omega^2/\mu^2\to 1^-$ of marginally-bound massive scalar field configurations \[see Eq. (\[Eq8\])\]. These marginally-bound resonances correspond to $\kappa\to\infty$ \[see Eq. (\[Eq17\])\], in which case one may use the asymptotic approximation $\Gamma(1/2-i\delta-\kappa)/\Gamma(1/2+i\delta-\kappa)=\kappa^{-2i\delta}e^{2i\pi\kappa}[1+O(\kappa^{-1})]$ [@Abram]. Substituting this dimensionless ratio into (\[Eq37\]) and (\[Eq38\]), one finds the simplified resonant spectra $x^{\text{Dirichlet}}_{\text{c}}(n)={{2e^{\pi(\kappa-n)/\delta}}\over{m^2}} \Big[{{\Gamma(1+2i\delta)}\over{\Gamma(1-2i\delta)}}\Big]^{1/2i\delta}$ and $x^{\text{Neumann}}_{\text{c}}(n)={{2e^{\pi(\kappa-n)/\delta}}\over{m^2}} \Big[{{\Gamma(2+2i\delta)}\over{\Gamma(2-2i\delta)}}\Big]^{1/2i\delta}$ \[Here we have used the dimensionless relation $M\omega={1\over 2}m\cdot[1+O(\tau,\tau\varpi)]$ in the regime (\[Eq14\]) of rapidly-spinning exotic compact objects\]. Note also that $K_{mm}=m(m+1)+O[a^2(\mu^2-\omega^2)]$ in the marginally-bound $\omega^2/\mu^2\to 1^-$ regime [@Heun; @Fiz1; @Teuk; @Abram; @Stro; @Hodasy; @Hodpp], which implies $\delta_{mm}=(m^2/2-m-1/4)^{1/2}$ \[see Eq. (\[Eq17\])\]. Note that the large-$m$ regime (\[Eq39\]) corresponds to the large-frequency and large-mass regimes $M\mu>M\omega\gg1$ \[see Eqs. (\[Eq8\]), (\[Eq10\]), and (\[Eq14\])\]. Likewise, the large-$m$ regime (\[Eq39\]) corresponds to $\delta\gg1$ and $\kappa\gg1$ \[see Eq. (\[Eq17\])\]. 
It is worth emphasizing that the asymptotic large-$m$ expression (\[Eq40\]) for the eigenvalues of the characteristic spheroidal angular equation (\[Eq7\]) is valid in the regime $-a^2(\mu^2-\omega^2)<m^2$ [@Hodpp]. Taking cognizance of the inequality $\omega^2<\mu^2$ \[see (\[Eq8\])\], which characterizes the bound-state resonances of the massive scalar fields, one immediately concludes that the requirement $-a^2(\mu^2-\omega^2)<m^2$ is trivially satisfied by the stationary spatially regular scalar configurations. Here we have used Eq. 6.1.15 of [@Abram], which implies $\Gamma(1+2i\delta)/\Gamma(1-2i\delta)=-\Gamma(2i\delta)/\Gamma(-2i\delta)$ and $\Gamma(2+2i\delta)/\Gamma(2-2i\delta)=\Gamma(2i\delta)/\Gamma(-2i\delta)\cdot[1+O(m^{-1})]$ in the asymptotic large-$m$ regime (\[Eq39\]). It is worth emphasizing again that the strong inequality $x_{\text{c}}\ll1$ characterizes exotic compact objects whose quantum reflective surfaces are located in the radial vicinity of the would-be classical black-hole horizons \[see Eq. (\[Eq3\])\]. This small-$x_{\text{c}}$ regime corresponds to the physically interesting model of spinning exotic compact objects recently studied numerically in [@Pan] (see also [@eco1; @eco2; @eco3; @eco4; @eco5; @eco6; @eco7; @eco8; @eco9; @eco10; @eco11; @eco12]).
---
address: 'Department of Mathematics, University of Texas, Austin, TX 78712'
author:
- Žarko Bižaca
title: 'On Ribbon ${\Bbb R}^4$’s'
---

We consider ribbon ${\Bbb R}^4$’s, that is, smooth open 4-manifolds, homeomorphic to ${\Bbb R}^4$ and associated to $h$-cobordisms between closed 4-manifolds. We show that any generalized ribbon ${\Bbb R}^4$ associated to a sequence of $h$-cobordisms between non-diffeomorphic 4-manifolds is exotic. The notion of a positive ribbon ${\Bbb R}^4$ is defined and we show that a ribbon ${\Bbb R}^4$ is positive if and only if it is associated to a sequence of stably non-product $h$-cobordisms. In particular we show that any positive ribbon ${\Bbb R}^4$ is associated to a subsequence of the sequence of non-product $h$-cobordisms from \[BG\].

It is well known that there are examples of pairs of homeomorphic but not diffeomorphic simply connected closed smooth 4-manifolds, see \[K\]. It is also known that any such pair of homeomorphic, smooth, simply connected and closed 4-manifolds is $h$-cobordant, \[W\]. Equivalently, given such a pair of non-diffeomorphic 4-manifolds, each one can be obtained from the other one by a regluing of a certain open smooth 4-manifold, usually called a “ribbon ${\Bbb R}^4$”, \[DF\] or \[K\]. A ribbon ${\Bbb R}^4$ used in such a reimbedding can be obtained from the $h$-cobordism and is homeomorphic but not diffeomorphic to the standard Euclidean four-space, ${\Bbb R}^4$. So, it is an example of what is usually referred to as an [*exotic*]{} ${\Bbb R}^4$. Any $h$-cobordism between a pair of smooth, possibly diffeomorphic, 4-manifolds may be used to construct a ribbon ${\Bbb R}^4$, but it is not known whether each of them is necessarily exotic. This paper provides a partial answer to the question of which ribbon ${\Bbb R}^4$’s are exotic. We are working under the assumption that the given $h$-cobordism can not be turned into a product cobordism by blowing up both of its boundary components finitely many times (see Definition 5).

Let $\ (W^5;M_0^4,M_1^4)\ $ be an $h$-cobordism between two non-diffeomorphic, oriented, smooth, closed, simply connected 4-manifolds. After trading handles if necessary, we may assume that $W$ has a handlebody description with only 2- and 3-handles: $$W\cong (M_0\times I)\cup(\bigcup_{i=1}^k h_i^2)\cup(\bigcup_{j=1}^k h_j^3),$$ where $I$ is the unit interval, $\ I=[0,1]$, and the matrix of incidence numbers between 2- and 3-handles is the identity matrix. These incidence numbers are equal to the intersection numbers in the middle level of the cobordism, $\ M_{1/2}=\bd\big((M_0\times I)\cup(\bigcup_{i=1}^k h_i^2)\big)-M_0$, between the [*belt*]{} (or [*dual*]{}) spheres of the 2-handles and the [*attaching*]{} spheres of 3-handles, see \[RS\]. We denote the attaching spheres by $A_i$ and the belt spheres by $B_i, \ 1\leq i\leq k$. (Often the belt spheres are called “descending spheres”, and attaching spheres are called “ascending spheres”.) Both the attaching spheres and the belt spheres are families of disjointly embedded 2-spheres in $M_{1/2}$, but besides the $k$ intersection points of $\ A_i\cap B_i, \ 1\leq i\leq k$, recorded in the intersection matrix, there may be some additional intersection points between the attaching and the belt spheres. These extra intersection points on any $A_i\cap B_j$ can be grouped into pairs with opposite signs. Note that in the absence of these extra pairs of intersections the 2- and 3-handles in $W$ form complementary pairs of handles that can be removed from the handlebody decomposition. In that case there is a product structure for $W$, that is, a diffeomorphism $W\cong M_0\times I$.
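To make the cancellation criterion concrete, the following is a purely expository bookkeeping sketch (not part of the paper's argument; all names are ours): it records the middle-level intersections of each pair $(A_i,B_j)$ as a list of signs and tests whether the geometric intersection pattern is already the identity matrix, which is exactly the situation in which every 2-/3-handle pair cancels.

```python
# Illustrative bookkeeping only: encode the signed intersections of the
# attaching spheres A_i with the belt spheres B_j and test whether the
# geometric intersection pattern is already the identity matrix, i.e.
# whether the obvious handle cancellations give W = M_0 x I.
from itertools import product

def extra_pairs(signs):
    """Number of excess +/- pairs beyond the algebraic intersection number."""
    return (len(signs) - abs(sum(signs))) // 2

def obviously_product(intersections, k):
    """True iff each A_i meets B_i in a single point and is disjoint from
    every B_j with j != i, so all 2-/3-handle pairs cancel."""
    for i, j in product(range(k), repeat=2):
        signs = intersections.get((i, j), [])
        if sum(signs) != (1 if i == j else 0):
            return False          # incidence matrix is not the identity
        if extra_pairs(signs) > 0:
            return False          # extra opposite-sign pairs are present
    return True

# a toy pattern: one extra +/- pair between A_0 and B_1 (one finger move)
data = {(0, 0): [+1], (1, 1): [+1], (0, 1): [+1, -1]}
print(obviously_product(data, 2))   # False -- extra intersections remain
```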
In our situation the $h$-cobordism has no smooth product structure, so there has to be at least one extra pair of intersections between $A_*$ and $B_*$. We denote by $X$ a regular neighborhood in the middle level of the union of these spheres, $X=\Nbd (A_*\cup B_*)$. Extra pairs of intersections result in $\pi_1(\bd X)$ being nontrivial. We can use Casson’s construction \[C\] to cap the generators of $\pi_1(\bd X)$ by Casson handles inside $M_{1/2}-\text{int}X$. Casson’s construction may produce new pairs of intersections between $A_*$ and $B_*$, but when considered separately, each family of spheres remains disjoint. The boundary components of $W$, $M_0$ and $M_1$ in our notation, can be obtained by surgering the middle level, $M_{1/2}$. These surgeries are performed on $A_*$ spheres to obtain $M_1$ and on $B_*$ spheres to obtain $M_0$. Surgering $X$ produces two compacts in the boundary components, $Y_0$ and $Y_1$. It follows from Freedman’s work \[F\] that $M_0$ and $M_1$ are homeomorphic and that the cobordism $W$ is homeomorphic to the product cobordism, $M_0\times I$. Since we have assumed that $M_0$ and $M_1$ are not diffeomorphic, $W$ can not be diffeomorphic to the product cobordism. Following \[DF\], \[FQ\] or \[K\] we may assume that $W$ is smoothly product over the complement of the compact $Y_0$ in $M_0$. Note that $\ \bd Y_0=\bd Y_1=\bd X\ $ and so $M_1$ can be obtained from $M_0$ by replacing $Y_0$ by $Y_1$, that is $\ M_1\cong (M_0 -\text{int}Y_0)\cup_{\bd Y_0=\bd Y_1}Y_1$. Also, $\ M_{1/2}\cong (M_0 -\text{int}Y_0)\cup_{\bd}X$.

A link calculus description of an example of such $Y_0$, $X$ and $Y_1$ is presented in Figure 1. Dashed circles are generators of the fundamental groups. A compact obtained like $Y_0$ or $Y_1$ in an $h$-cobordism is known to be diffeomorphic to a complement in the 4-ball of an embedded disc that spans a ribbon link in the boundary of the 4-ball. We will refer to such a compact $Y_0$ as a [*ribbon complement associated to the $h$-cobordism $W$*]{} or simply as a [*ribbon complement*]{}, when the cobordism is determined by the context. By inverting the cobordism $W$ we obtain an $h$-cobordism $\ (\overline W;M_1, M_0)$ and $Y_1$ is a ribbon complement associated to $\overline W$. Meridians to the components of the bounding ribbon knot or link generate the first homology group of a ribbon complement. If we use these meridians (with 0-framings) to attach the standard 2-handles to a ribbon complement, the resulting manifold is the standard 4-ball. If the standard 2-handles are replaced with Casson handles and the remaining boundary is removed, then the resulting open 4-manifold is homeomorphic to the Euclidean four-space, ${\Bbb R}^4$, and is called a [*ribbon ${\Bbb R}^4$*]{} \[DF\]. In the case that a ribbon ${\Bbb R}^4$ is not diffeomorphic to the standard ${\Bbb R}^4$, we refer to it as an [*exotic ribbon*]{} ${\Bbb R}^4$. If Casson handles are attached to a ribbon complement associated to $W$ ambiently in $M_0-\text{int}Y_0$, then we say that the resulting ribbon ${\Bbb R}^4$ is [*associated to the cobordism $W$*]{}.

An example of an exotic ribbon ${\Bbb R}^4$ associated to a non-product $h$-cobordism was explicitly described in \[B\]. Although only two Casson handles were involved in its construction, the number of their kinks grows so fast with the level that the description, as I. Steward \[S\] has politely phrased it, “verges on bizarre”. To obtain a simpler exotic ribbon ${\Bbb R}^4$, a sequence of non-product $h$-cobordisms was used in \[BG\] in the following way. Let $R_*=\text{int}Y_*\cup_{i=1}^m CH_i$ be a ribbon ${\Bbb R}^4$ built from a ribbon complement $Y_*$.
For every positive integer $n$, we denote by $U_*^n$ the open manifold built by attaching to $Y$ only the first $n$ levels of each of the Casson handles $CH_i$, $\ U_*^n=\text{int}Y_*\cup_{i=1}^m (CH_i)^n$. Suppose that each $U_*^n$ is associated to an $h$-cobordism $\ (W_n;M_{n,0},M_{n,1})\ $ in the sense that $Y_*$ is associated to $W_n$ and the $n$ level open Casson towers $(CH_i)^n$ are embedded ambiently into $M_{n,0}-\text{int}Y_*$. Then we say that $R_*$ is [*associated to the sequence of cobordism*]{} $\{ W_n\} $. Note that if $R_*$ is a ribbon associated to an $h$-cobordism $W$ then $R_*$ is also associate to the sequence of cobordisms $\{ W_n=W\} $. To continue we introduce a model of a ribbon complement. We start as before by first constructing a compact $X$ that is a regular neighborhood of $A_*$ and $B_*$ spheres in the middle level of an arbitrary $h$-cobordism $W$. It is always possible, although not in a unique way, to group the extra pairs of the intersections so that each can be obtained by a finger move of an $A_*$ through a $B_*$. In other words, we may introduce finger moves on the regular neighborhood $\ \Nbd(\coprod_k(S^2\vee S^2))\subset\sharp_k(S^2\times S^2)\ $ so that the result is diffeomorphic to $X$. This construction produces a distinguished set of generators for consisting of loops embedded into $\bd X$. If we retreat the fingers emanating from $A_*$ so that their tips are only tangent to $B_*$, we call the remaining generators [*accessory loops*]{}. When the fingers are returned to their initial positions one of the two intersection points of each finger is designated for accessory loops to pas through and we adjust accessory loops accordingly. To complete our set of generators for we choose a loop for each finger that consist of an arc on $A_*$ and an arc on $B_*$, both ending on the two intersection points on the finger. We call these loops the [*Whitney loops*]{}. Using isotopies when necessary, we assume that loops generating are disjoint outside the extra intersections between $A_*$ and $B_*$. After projecting $X$ along the cobordism into $M_0$ or $M_1$, only the interior of $X$ has been replaced and the accessory and the Whitney loops are in $M_0$ or $M_1$, respectively. It is easy to describe this projection in the terms of link calculus: to surger $X$ into $Y_0$ 0-framings of the link components representing the family $B_*$ are replaced by dots, namely, 2-handles are replaced by 1-handles or by scooped out 2-handles. Note that is a subgroup of , the latter also includes generators that are meridians to the dotted circles that used to represent $B_*$ in $X$. We will continue to call “accessory” and “ Whitney” loops the generators for induced by the surgery. = 5in Figure 2 is a link calculus picture of a finger move. Note that the components of $B_*$ are already surgered into dotted circles, the picture of $X$ can be obtained by replacing dots on these components by 0-framings. We may build our model of a ribbon complement by adding such finger moves to a collection of Hopf links with a 0-framed and a dotted component in each (link calculus picture of complementary pairs of 1- and 2-handles), but often we will end up with more “accessory” dotted circles than needed. The extra dotted circles may me slid of their parallels and off our picture where they are removed. Alternatively, the “accessory” dotted circle from Figure 2 is added only when the finger closes a loop and introduces a new accessory generator of . 
$R_0$ – an example of a generalized ribbon ${\Bbb R}^4$.

The construction of ribbon complements described above (and in \[K\] and \[DF\]) involves a specific family of ribbon links. The above mentioned exotic ${\Bbb R}^4$ from \[B\] is an example of a ribbon ${\Bbb R}^4$, but the simplest known exotic ${\Bbb R}^4$ (introduced in \[BG\]) is not a “ribbon ${\Bbb R}^4$” although it has a ribbon knot (Figure 3) associated to it: to the meridian of the ribbon complement of the disc bounding the ribbon knot from the left part of Figure 3 a single Casson handle is attached. The attached Casson handle has a single kink at each level and all its kinks are positive. This exotic ${\Bbb R}^4$, which we denote by $R_0$, is built from the same ribbon complement as the example from \[B\], but instead of capping both the accessory and the Whitney loops by Casson handles, only one Casson handle and a standard 2-handle are involved. We will consider a slightly more general situation and so we say that a contractible open smooth 4-manifold built from a ribbon complement by capping the accessory and Whitney loops with any combination of 2- and Casson handles is a [*generalized ribbon ${\Bbb R}^4$*]{}. The notion of such a generalized ribbon ${\Bbb R}^4$ being associated to a sequence of $h$-cobordisms can be defined exactly as before.

Two questions arise naturally. The first one is whether a ribbon ${\Bbb R}^4$ associated to an $h$-cobordism between non-diffeomorphic 4-manifolds (or to a sequence of such cobordisms) is necessarily exotic. Conversely: which combinations of ribbon complements and attached Casson handles produce exotic ribbon ${\Bbb R}^4$’s? The answer to the first question was known to be positive in the case of a ribbon ${\Bbb R}^4$ associated to a single $h$-cobordism between non-diffeomorphic 4-manifolds, \[K, pages 98 – 101\]. This is also true in a slightly more general situation.

A (generalized) ribbon ${\Bbb R}^4$ associated to a sequence of $h$-cobordisms between non-diffeomorphic 4-manifolds is exotic.

Let $R=Y\cup\big (\bd Y\times (0,\infty)\big)\cup_{i=1}^m CH_i\cup_{j=1}^n H^2_j\ $ be a generalized ribbon ${\Bbb R}^4$ associated to a sequence of $h$-cobordisms, $\{ (W_n;M_{n,0},M_{n,1})\}$, where $M_{n,0}$ and $M_{n,1}$ are not diffeomorphic and where $H_j^2$ denotes an open 2-handle, that is $\ (H_j^2,\bd H_j^2)\cong (D^2\times\r,S^1\times\r)$. Assume that $R$ is diffeomorphic to the standard ${\Bbb R}^4$. Then, since the ribbon complement $Y$ is a compact subset of $R$, there is a smooth 4-ball $B_0$ embedded in $R$ that contains $Y$ in its interior. The ball $B_0$, being compact, is contained in $U^k$ for some $k\geq 1$. $W_k$ is a product over $\ M_{k,0}-Y\ $ and we use this product structure to lift $\bd B_0$ into a smooth 3-sphere $S_1$ in $M_{k,1}$. An argument from \[K\] shows that $S_1$ has to bound a standard 4-ball in $M_{k,1}$: briefly, by embedding the $k$ level Casson towers of $U^k$ into the standard 2-handles we construct a smooth embedding of $U^k$ into the standard 4-ball, which we consider to be in the standard 4-sphere, $S^4$. The piece of the cobordism $W_k$ over $U^k$ can be transplanted into the product cobordism $S^4\times I$. Recall that the complement of a smooth 4-ball in the 4-sphere is also a 4-ball. We lift the complement of the 4-ball $B_0$ in $\ S^4\times\{ 0\}\ $ to the top of the cobordism, $\ S^4\times \{ 1\}$. Since this cobordism is a product over the complement of $Y$, the complement of the lifted 4-ball is a standard 4-ball, bounded by $S_1$, and the cobordisms over the complements of int$B_0$ in $S^4$ and $M_{k,0}$ are product. So, the product structure over $\ M_{k,0}-B_0\ $ can be extended over $B_0$, contradicting our assumption that $M_{k,0}$ and $M_{k,1}$ were not diffeomorphic.

One might expect that the second question has an equally simple answer, that all possible generalized ribbon ${\Bbb R}^4$’s are exotic. This is not the case: the simplest ribbon complement is diffeomorphic to $\ S^1\times D^3$ and the ribbon link involved is the unknot. If any Casson handle is attached over the meridian and the boundary is removed, the resulting manifold is the standard ${\Bbb R}^4$ \[F, page 381\]. However, it is easy to describe this manifold as a generalized ribbon ${\Bbb R}^4$: for example, replace one of the two Casson handles of $R_1$ in Figure 5 by a standard open 2-handle.

If $L$ is an accessory loop of a ribbon complement $Y_0$ then the set of Whitney loops that intersect $L$ is called the [*Whitney set of*]{} $L$. If a signed tree associated to a Casson handle has a positive branch, then the Casson handle is [*positive*]{}. If there are more positive than negative edges emanating from every vertex of the associated tree, then the Casson handle is [*strictly positive*]{}. Let $Y$ be a ribbon complement and $R$ a ribbon ${\Bbb R}^4$ obtained from $Y$ by adding Casson handles. Suppose that there is an accessory loop such that every loop of its Whitney set is capped by a positive Casson handle and, in the case that its Whitney set contains only one loop, the accessory loop itself is also capped by a positive Casson handle. In the case that there is more than one loop in the Whitney set we require that this accessory loop coincides with at most one finger emanating from any $A_*$ sphere. Then we say that $R$ is a [*positive ribbon ${\Bbb R}^4$*]{}. It is not known whether ribbon ${\Bbb R}^4$’s that are not positive are exotic or not. However, ribbon ${\Bbb R}^4$’s that are not positive can not be associated to a sequence of non-product $h$-cobordisms that satisfy the following additional assumption.

Let $(W^5;M_0^4,M_1^4)\ $ be an $h$-cobordism between two oriented, smooth, closed, simply connected 4-manifolds. We say that $W$ is [*stably non-product*]{} if $M_0\sharp n(\ncp)$ and $M_1\sharp n(\ncp)$ are not diffeomorphic for any nonnegative integer $n$. It is not known to the author whether there exists a pair of simply connected closed smooth 4-manifolds $M_0$ and $M_1$ that are homeomorphic and non-diffeomorphic and such that $M_0\sharp n(\ncp)$ and $M_1\sharp n(\ncp)$ are diffeomorphic for some $\ n\geq 1$.

A ribbon ${\Bbb R}^4$ associated to a sequence of stably non-product $h$-cobordisms is positive.

We will prove that a ribbon ${\Bbb R}^4$ that is not positive can not be associated to a sequence of stably non-product $h$-cobordisms $W_n$ by showing that at least one $W_n$ can be turned into a product cobordism by blowing up its end sufficiently many times. A process of removal of double points is described in \[Ku\] and a short outline of that method is given next. Suppose that $(\Delta,\bd\Delta)$ is an immersed disc in a 4-manifold $(N,\bd N)$ with a single double point in the interior of $\Delta$. Furthermore, suppose that this double point is negative. After blowing up $N$ we can replace $\Delta$ by an embedded disc in $(N\sharp\ncp,\bd (N\sharp\ncp))$ that spans the same loop in $\bd N\sharp\ncp=\bd N$: Let $E$ and $E'$ represent two “exceptional curves”, i.e., copies of ${\Bbb C}P^1$ in general position and embedded in the added $\ncp$. If $E$ and $E'$ are equipped with opposite orientations then they intersect in a single point that has the positive sign.
Choose a small ball centered at the intersection between $E$ and $E'$, the intersection between the boundary of the small ball and $E$ and $E'$ will form a Hopf link. Similarly choose a small ball in the interior of $N$ that is centered at the double point of $\Delta$. The intersection between $\Delta$ and the boundary of the ball is again a Hopf link. The centers of these two balls can be connected by a path that avoids $\Delta$, $E$ and $E'$. Remove the intersection between the interiors of the balls and $\Delta$, $E$ and $E'$. Now the two Hopf links in the boundaries of the balls are connected by two pipes that follow the chosen path. The resulting disc represents the same second homology class as $\Delta$ and has the same boundary, but it is embedded into $(N\sharp\ncp,\bd (N\sharp\ncp))$. Notice that this procedure can prune all the branches of a tree associated to a Casson handle that have a negative kink. Also, for every non-positive Casson handle there is a natural number $k$ so that every brunch of the tree associated to the Casson handle has a negative kink on the first $k$ levels. (Recall that a tree associated to a Casson handle has finitely many edges coming from every vertex.) Consequently, if we perform sufficiently many blow-ups of an ambient 4-manifold, we may replace any Casson handle that is not positive with an embedded standard 2-handle. Suppose that $R$ is a non-positive ribbon associated to a sequence of $h$-cobordisms $W_n$. Choose $k$ large enough such that every branch of non-positive Casson handles contains a negative kink on the first $k$ levels. We will work in $W_k$ where we have embedded first $k$ levels of the Casson handles from $R$. After blowing up $W_k$ sufficiently many times we may replace all non-positive Casson towers in $\ \tilde W_k :=W_k\cup(\sharp\ncp\text{'s}\times I)\ $ with standard 2-handles so we may assume that all the remaining Casson towers have only positive kinks. We can use the embedded 2-handles to perform Whitney tricks and cancel pairs of 2- and 3-handles from the handlebody decomposition of $\tilde W_k$ whenever possible. If none of the Casson handles of $R$ is positive, this procedure removes all the 2- and 3-handle pairs and $\tilde W_k$ has a product structure. In particular, $W_k$ is not stably non-product. = 4in If there are no accessory loops we can use Norman trick from Figure 4 to remove the extra pairs of intersections. Working in the middle level we start from such an extra pair, say between $A_i$ and $B_j$. Note that $B_j$ and $A_j$ have a single intersection point since otherwise there would be an accessory loop on them. Each of the two intersections is removed by Norman trick (see \[FQ\]) by removing a disc from $A_i$ centered at the intersection point and a disc from a copy and $A_j$ that is centered at an intersection point between $A_j$ and $B_j$. Then the boundaries of these two removed discs are meridians to $B_j$ and are connected by a tube, Figure 4. The resulting new $A_i$ sphere, denoted by $A'_i$, intersect each $B_*$ that $A_j$ did, so there are four intersections between $A'_i$ and $B_k$ in our example from Figure 4. Repeating this process produces a cascade of fingers, each piercing a $B_t$ such that $A_t$ has no fingers. The application of Norman trick will add two copies of $A_t$ to each finger that ends on $B_t$ therefore removing the pair of intersections we have started with. Now all Whitney discs are removed and again, $\tilde W_k$ has a product structure. 
If $Y$ does contain accessory loops, since we have assumed that $R$ is not positive, each accessory loop ventures over more then one finger emanating from a single $A_*$ or the Whitney set for the accessory loop contains a loop capped by a non-positive Casson handles. In the later case, after the blow-ups this Whitney loop is capped by the standard two handle and the finger containing it is removed, breaking the given accessory loop. So now we may assume that the only accessory loops remaining are those that contain more then one finger emanating from a single $A_*$. Starting with two such fingers and pushing them over other spheres from $A_*$ we produce cascades of fingers. The accessory loop is closed when both cascades of fingers intersect the same $B_*$. We consider such a loop, emanating from, say, $A_i$ and ending on $B_j$. Fingers that may start from $A_j$ also can be grouped in pairs, each ending on a same $B_*$. Following these pairs we can not close a loop by having a pair of fingers ending on $B_i$, otherwise we could select a member from each pair and obtain an accessory loop that passes over at most one finger emanating from any $A_*$. In that case the non-positiveness would apply that at least one of the associated Whitney loops can be capped by the standard 2-handle. Consequently we may assume that our pairs of fingers emanating from $A_i$ ends on a $B_j$ such that $A_j$ has no fingers and no extra intersection points. Using the Norman trick as before the pairs of intersections on fingers are removed. So all the extra pairs of intersections can be removed and again, $\tilde W_k$ has a product structure. The converse to Theorem 6 is also true, every positive ribbon  is associated to a sequence of stably non-product $h$-cobordisms. Every positive ribbon  can be associated to a subsequence of the sequence $\ \{W_m \}^{\infty}_{m=2}\ $ of stably non-product $h$-cobordisms constructed in \[BG\]. Each of these $h$-cobordisms from \[BG\] we denote here by $(W_m;M_{m,0},M_{m,1})$, where $M_{m,1}\cong E(m)\sharp k(\ncp)$ and $M_{m,0}$ decomposes as a connected sum of ’s and ’s. For simplicity we have not included “$k$” or $\tilde W_m$ in our notation and $E(m)$ denotes the minimal elliptic surface with no multiple fibers and of the Euler characteristic $12m$. = 4.5in First we will show that for each positive ribbon , $R=\text{int}Y\cup_i CH_i$ and for each natural number $k$ we can embed $\ U^k=\text{int}Y\cup_i (CH_i)^k$ in some $M_{m,1}$, when $m$ is large enough and $M_{m,1}$ contains sufficiently many copies of . Each of this embeddings factors through an embedding into a compact obtained from the closure of $U_0^k$ (that is, the first $k$ levels of $R_0$ from Figure 3) by adding extra 2-handles and parallel copies of the Casson tower, shown in Figure 7. Then we show that the $h$-cobordism obtained by regluing the embedded ribbon complement $Y$ is diffeomorphic to $W_m$. = 5inAccording to Definition 3 each positive ribbon  contains an accessory loop whose Whitney set is capped by positive Casson handles. We start our construction by embedding all the other Casson handles into the standard 2-handle and each positive Casson handle capping a loop from the fixed Whitney set is embedded into the $CH^+$, the positive Casson handle with one kink per level. Now each positive ribbon  is embedded into a ribbon  that has a single accessory loop, similar to one of those in Figure 5. 
In this figure there is a sequence of ribbon ’s, $R_n,\ n\geq 1$, each but the first one is built by attaching $n$ copies of the Casson handle $CH^+$ to a ribbon complement which we denote by $Y_n$. Note that in $R_1$ the accessory loop is also capped by $CH^+$, and for $n\geq 2$, the accessory loop is capped by the standard 2-handle and it’s dotted circle disappears (compare with Figure 2). We denote by $R_n'$ the resulting ribbon  in which we have embedded $R$. The index $n$ in “$R_n'$” or “$R_n$” is equal to the number of pairs of $A$ and $B$ spheres that form the underlying middle level compact, $X_n$. The middle level compact $X_n$, and therefore the ribbon complement $Y_n$, are uniquely defined up to isotopy by listing the geometric and algebraic numbers of intersections between $A_*$ and $B_*$ spheres. Furthermore we can isotope one link calculus picture of $Y_n$ into another by sliding the 2-handles and dotted circles whose meridians are Whitney loops over the dotted circle corresponding to the accessory loop. Equivalently, the possible differences between link calculus pictures may occur as different choices of clasps, positions of dotted circles whose meridians are Whitney loops and twists of parallel strands in 2-handles. In Figure 6 it was shown how to deal separately with each of this differences. Since in any stage we are allowed to blow-up the ambient manifold finitely many times we can always introduce positive twists by attaching a $-1$-framed 2-handle and sliding other 2-handles off it, Figure 6. So we may assure that all clasp between handles corresponding to $A_*$ and $B_*$ spheres and positions of the dotted circles corresponding to Whitney loops in a link calculus picture of $R_n'$ are exactly the same as in the picture of $R_n$ in Figure 5. The only possible difference remaining is in accessory loops. Therefore, our construction produces an embedding of a positive ribbon  into a possibly blown-up ribbon  that we have denoted by $R'_n$ and that is built by attaching copies of the Casson handle $CH^+$ onto Whitney circles of $Y_n$, and by dealing with the accessory loop in the same fashion as in $R_n$. If we fix as generators for the Whitney circles $\ w_1,\dots,w_n\ $ and the accessory loop $a$ from Figure 5, then in general $a'$, the accessory loop of $R'_n$, is a word in these generators involving $w_i$’s. As before $U^k_n$ will denote $Y_n \cup_{w_*} n(CH^+)^k)\cup_a h^2$, where $(CH^+)^k$ is the Casson tower equal to the closure of the first $k$ levels of the Casson handle $CH^+$. Copies of $(CH^+)^k$ are attached over the Whitney loops $w_*$ and the 2-handle $h^2$ is attached over the accessory loop $a$. Similarly, we define $(U')^k_n$ by attaching the 2-handle over $a'$ instead of $a$ . = 3inWe claim that we can extend the embeddings from \[BG\] of $U^k_0$ into $M_{m,1}$, where $m$ is large enough, to an embedding of the handlebody $C_1$ from Figure 7. The embeddings we are extending were described in \[B\], Figures 19 – 60, and in \[BG\], Figures 40 – 81. The modifications we have to add is to have an arbitrary number of Casson $k$ level towers (instead of only one in $U^k_0$) and arbitrary numbers of $-1$-framed 2-handles linked with the two dotted circles in $C_1$, Figure 7. To embed $-1$-framed 2-handles linked with the larger dotted circle we follow Figures 40 – 43 and 49 – 54 in \[BG\]. In Figure 40 from \[BG\] the meridian of the larger dotted circle from our Figure 7 corresponds to the circle denoted by $\beta$. 
The meridian of the smaller dotted circle in Figure 7 we can follow in Figures 39 – 44 in \[B\], but the difference in our case that the actual pictures are the mirror images of those in \[B\] and the largest 0-framed two handle and the dotted circle have to switch their roles. So we read Figure 44 in \[B\] that each meridian of the smaller dotted circle form our Figure 7 is isotoped to a pair of meridians of the larger dotted circle from Figure 7. Each pair has one unlinked 0-framed component and the other one is $-1$-framed and linked with all the second components of the other pairs. After passing these meridians into the other part of the manifold we have to take their mirror images so the framings in Figure 44 from \[B\] are now correct as drown. Figure 60 from \[B\] shows how to deal with (now $+1$-framed) linked second components of the pairs. Note that in figures from \[BG\] these pairs of meridians also are isotopic to $\beta$. In all cases we are left to cap $0$-framed isotopes of the circle $\beta$ which we can do by either Casson towers of arbitrary levels (Figure 59 in \[B\] or Figure 81 in \[BG\]) or we can slide them over the linked $-1$-framed 2-handles and produce embeddings of $-1$-framed 2-handles in our Figure 7. Each of these processes uses $-1$-framed 2-handles isotop to $\beta$ (Figure 81 from \[BG\]) end to procure them in a sufficient quantity we need to choose $m$ large enough. = 5inNext we construct an embedding of $U^k_n$, $n\geq 1$, into the handlebody $C_1$ from Figure 7. An embedding of $U^k_1$ was described in \[BG\], Figure 47, namely a 0-framed 2-handle is added to connect the accessory and Whitney dotted circle, the result is $U^k_0$, but with two parallel Casson $k$ level towers so it embeds in $C_1$. To embed $U^k_n$, $n\geq 2$, into $C_1$ we connect the dotted circles corresponding to $B$ spheres by $n-1$ 2-handles. Figure 8 depicts this process in the case of $U^k_2$ and $U^k_3$. = 5inFigure 9 shows how to embed each $U^k_n,\ n\geq 1$, in $C_1$. We start with $C_1$ and add complementary pairs of 2- and 3-handles (which corresponds in link calculus to adding unlinked unknots with framings 0) and then we slide the 2-handles over the 2-handle of $C_1$. Next we perform isotopies to separate these parallel copies of this 2-handle. Figure 9 shows how to complete an embedding of $U^k_2$ into $C_1$ and all the other $U^k_n$’s are embedded in the same fashion. = 5in Next we show how to modify these embeddings to embed $(U')^k_n$ into $C_1$. Recall that the difference between $(U')^k_n$ and $U^k_n$ is only in the 2-handle capping the accessory loop. In the case of $(U')^k_n$ the attaching circle of this 2-handle can also link other handles, but we can isotop this circle to be a word in our fixed Whitney circles, $w_1, w_2,\dots,w_n$. Figure 10 shows how to unlink the accessory loop from the dotted circles corresponding to Whitney loops. For each piece of the accessory loop that links once a Whitney dotted circle we add a $-1$-framed 2-handle, as shown in Figure 10. Then we can use the added handle to slide the accessory loop off the dotted circle. The result of such a slide increases the framing of the accessory loop by $+1$. After we slid the accessory loop off all dotted circles the resulting new accessory loop is linked with only one dotted circle, as in the case of $U^k_n$, but its framing will be in general positive. 
We can slide in Figure 10 the dotted circle and the $N$-framed accessory circle linked to it such that the dotted circle ends up linked with the visible 0-framed 2-handle. Then we can slid each of the two strands of the 0-framed handle over the $N$-framed handle and finally, we slid this (still 0-framed) 2-handle $2N$ times over dotted circle to unlink it from the $N$-framed handle. The result of this handle slides will render the canceling pair with the dotted circle and the $N$-framed 2-handle unlinked from anything else and therefore it can be removed from the picture. The other induced change is that the two strands of the 0-framed 2-handle have obtained $N$ positive twists. The framing of the accessory loop can always be increased by any positive amount: attach a $-1$-framed 2-handle as in Figure 10 and slide off it the 0-framed handle. This process blows up once the ambient manifold and introduces an extra positive twist. By using such a blow-up if necessary and reversing the handle slide, we assume that $N$, the framing of the accessory circle, is an even positive integer. = 5inOur present link calculus pictures of $(U')^k_n$ and $R'_n$ differ from $U^k_n$ and $R_n$ in that accessory loop is capped by an unknoted 2-handle that has framing $N$ instead of 0, where $N$ is an even positive integer or, equivalently, one of the 0-framed 2-handles has $N$ positive twists, see Figure 10. Figure 11 shows how to embed $(U')^k_n$ into $C_1$ by accommodating each pair of twists. Adding a pair of complementary 1- and 2-handles such that the 2-handle is 0-framed and linked twice with the dotted circle is equivalent of having two positive twists and so we add $N/2$ such pairs. The attaching of $-1$-framed 2-handles as in Figure 11 replaces each of $N/2$ pairs of twists by a $-1$-framed 2-handle that is meridian to the larger dotted circle in the picture of $C_1$ and so we have an embedding of $(U')^k_n$ into $M_{m,1}$. = 5inOur next task is to obtain an $h$-cobordism from each embedding of $(U')^k_n$ into $M_{m,1}$. The other boundary component, $M_{m,0}$, is obtained by a reimbedding of $\ (Y')_n\subset (U')^k_n \subset M_{m,1}\ $ that switches the roles of 0-framed 2-handles and appropriate dotted circles in Figures 8 and 9. In particular, Figure 9 is replaced by Figure 12. The 0-framed Hopf links added in Figure 12 were complementary pairs of 1- and 2-handles in Figure 9. We claim that the obtained $h$-cobordisms are $(W_m;M_{m,0},M_{m,1})$, that is, they are in the same sequence of $h$-cobordisms obtained by the reimbeddings of $\ Y_0 = Y_1\subset U^k_1 \subset M_{m,1}$, \[BG\], but with possibly different $m$ and the number of blow-ups corresponding to a given number of levels, $k$. The boundary components of the cobordisms in \[BG\] were constructed by regluing of the Mazur rational ball that is visible in our figures as a subhandlebody of $C_1$, Figure 7. The two embeddings of the Mazur ball were using its dual handlebody decomposition and we will do the same here with embeddings of $C_1$. We will explicitly show the embedding of $Y_2$ into $M_{m,0}$ and from the construction it will be clear how to obtain the embeddings of $Y_n, n\geq 3$. = 5.2inThe top part of Figure 13 recapitulates the embedding of $Y_2$ into $C_1$. There is a 3-handle attached over unlinked 2-handle, normally not visible in a link calculus picture. Below is a link calculus picture of the dual handlebody decomposition. We will follow a convention from \[BG\] and now we present its shout outline. 
To obtain a link calculus picture of a dual decomposition from a given link calculus picture one may start by drawing the mirror image of the given link calculus picture. Then the dotted circles (that is, 1-handles or, equivalently, scooped-out 2-handles) obtain $(0)$-framings and the signs of all other framings are changed and enclosed by parentheses, see \[BG\]. Next, to each link component that was an attaching circle of a 2-handle in the original link calculus picture one attaches a 0-framed 2-handle over the meridian of the component, see \[K\]. (Recall that in a dual decomposition of a 4-dimensional handlebody 1-handles become 3-handles and vice versa, and the 0-handle becomes a 4-handle.) Such a link calculus picture contains components marked with a dot (1-handles), components with an integer framing (2-handles) and components whose framings are integers enclosed by parentheses. A handlebody described by such a link has two boundary components: the “$\nbd$-component” is obtained by performing (a 3-dimensional) surgery only on the components in parentheses, and the other boundary component, the “$\pbd$-component”, is the result of the surgery on all the components of the link. To facilitate the description we have labeled all components of the dual part of Figure 13 by capital letters, A – I. Now we describe a diffeomorphism between the two pictures of the dual decomposition. The handlebody to the left also contains a 4-handle and three 3-handles, the duals of the 1-handles in the original decomposition; namely, they have to be attached over the components D, E and G. Also there is a 1-handle, not visible in this picture, that has to be added to the $\nbd$-boundary component. First we slide D and E over I and off G. The new D’ and E’ are now linked with A, and A can be slid off I over G. Components I and G are unlinked and can be removed from the picture. Note that now A occupies the place previously occupied by G. A is then slid off D and E over B and C, respectively. The resulting 2-handle, now denoted by A’, is unlinked from the rest of the components, but the 3-handle that used to be attached over G is now attached over A’, and together they form a complementary pair of 2- and 3-handles. Next, we slide D’ over E’ and the result is visible in the lower right corner of Figure 13. Now the 1-handle is visible; it coincides with D’ and together with the 2-handle B it forms a complementary pair. Now we have two complementary pairs of handles that we remove from the picture. By adding Casson towers and $-1$-framed 2-handles where appropriate we obtain $C_1$.

We proceed with a description of an embedding of $Y_2$ into $M_{m,0}$. The top of Figure 14 reproduces from Figure 12 the link calculus picture of $Y_2$ with two 0-framed 2-handles added. Below it is a link calculus picture of its dual decomposition. As before we have labeled all the components of this link. Again we have an invisible 1-handle, a 4-handle and three 3-handles attached over D, E and F. First we slide B over E and then twice over C to unlink it from H. The resulting component, B’, is unlinked from the rest of the components, but it coincides with the 3-handle attached over E. As above, we remove the components G and I; the resulting link calculus picture is visible in Figure 14, and the next picture in that figure is the result of an ambient isotopy of the link.
Then we slide E’ over D’ and the resulting component, E”, is where we add (from “below”, to the $\nbd$-component of the boundary) the missing 1-handle. The 3-handle that was incident only with E is now incident with both E” and B’, and the 3-handle originally attached to D is now incident with D’ and E”.

The $\nbd$-component of the boundary of our handlebody is the same as in the case of the embedding into $M_{m,1}$ and, after adding two $-1$-framed 2-handles, we have a handlebody from Figures 44 and 45 in \[BG\]. The result of this addition is in Figure 15. Furthermore, we decompose this manifold as a union of three pieces stacked over each other and glued over appropriate boundary components. The bottom piece in Figure 15 is the dual picture of adding two $-1$-framed 2-handles. Its $\pbd$ boundary component is obtained by surgeries on D’, F and H. Then we add C, a 0-framed 2-handle. Now the $\pbd$ boundary component of this middle piece is $S^3$, and the difference between the handlebody we are considering and the one used in the embedding of $Y_2$ into $M_{m,1}$ from \[BG\] is in the pair of a 1-handle and a 0-framed 2-handle added in the piece on the top and glued by a diffeomorphism of the standard 3-sphere. Since by changing a gluing diffeomorphism of the standard 3-sphere we cannot change the smooth structure of the resulting 4-dimensional manifold, we only have to consider the diffeomorphism type of the piece on the top. It is easy to see that this piece, together with the 3-handles and the 4-handle, is diffeomorphic to the standard 4-ball. Namely, by sliding D’ over the $(0)$-framed C we can unlink all the $(0)$-framed components, and then E” and A’ form a complementary pair of 1- and 2-handles. Now the 3-handles are attached over the resulting $(0)$-framed components, and on top of them we have to attach the 4-handle. Therefore, the handlebody from Figure 15, together with the invisible 3- and 4-handles, is the same as the manifold from Figures 44 and 45 in \[BG\], and the argument there shows how to decompose the obtained boundary component of the $h$-cobordism, $M_{m,0}$, into a connected sum of ’s and ’s. To generalize to embeddings of $Y_n,\ n\geq 3$, into $M_{m,0}$, note that the difference will be in having extra canceling pairs of 1- and 2-handles that are again separated from most of the other handles of $M_{m,0}$ and, as in the case of $Y_2$, can be removed from the picture. To complete the proof, we have to deal with the changes necessary to embed the more general $(U')^k_n$ into $M_{m,0}$. Again we can use the trick from Figures 10 and 11, by switching the roles of the biggest 0-framed and dotted circles in the lower half of Figure 10 and by replacing the dot by a 0-framing on the two horizontal line segments throughout Figure 11. Since all handle slides were of 2-handles, the only difference is that we are sliding 2-handles over 1- and 2-handles rather than 2-handles over only 1-handles, and so we do not have to use the forbidden link calculus moves involving slides of dotted circles over components without a dot. Note that although we have seen that any positive ribbon  can be associated to a subsequence of the same sequence of stably non-product cobordisms $W_m$ from \[BG\], the particular $m$ needed to embed a given $U^k$ into $M_{m,1}$ and $M_{m,0}$, and the number of their blow-ups, both depend on the given ribbon .

\[B\] Ž. Bižaca, A handle decomposition of an exotic ${\Bbb R}^4$, J. Diff. Geom. 39 (1994), 491–508.

\[BG\] Ž. Bižaca and R. Gompf, Elliptic surfaces and some simple exotic ${\Bbb R}^4$’s, J.
Diff. Geom., to appear.

\[C\] A. Casson, Three lectures on new infinite construction in 4-dimensional manifolds (notes prepared by L. Guillou), in: A la Recherche de la Topologie Perdue (L. Guillou and A. Marin, eds.), Progress in Mathematics 62, Birkhäuser, 1986, 201–244.

\[DF\] S. DeMichelis and M. Freedman, Uncountably many exotic $R^4$’s in standard 4-space, J. Diff. Geom. 35 (1992), 219–254.

\[F\] M. Freedman, The topology of 4-dimensional manifolds, J. Diff. Geom. 17 (1982), 357–453.

\[FQ\] M. Freedman and F. Quinn, The topology of 4-manifolds, Princeton University Press, 1990.

\[K\] R. Kirby, The topology of 4-manifolds, Lecture Notes in Math. 1374, Springer-Verlag, 1989.

\[Ku\] K. Kuga, Representing homology classes of $S^2\times S^2$, Topology 23 (1984), no. 2, 133–137.

\[RS\] C. Rourke and B. Sanderson, Introduction to piecewise-linear topology, Ergebnisse der Math., Springer-Verlag, New York, 1972.

\[S\] I. Stewart, Fun and games in four dimensions, New Scientist, September 3, 1994.

\[W\] C. T. C. Wall, On simply connected 4-manifolds, J. London Math. Soc. 39 (1964), 141–149.
--- abstract: | We present 1420MHz ($\lambda=21$cm) observations of polarized emission from an area of 117 deg$^2$ in the Galactic plane in Cygnus, covering $82^\circ < l < 95^\circ$, $-3^\circ\!\!.5 < b < +5^\circ\!\!.5$, a complex region where the line of sight is directed nearly along the Local spiral arm. The angular resolution is $\sim1'$, and structures as large as $45'$ are fully represented in the images. Polarization features bear little resemblance to features detected in total power: while the polarized signal arises in diffuse Galactic synchrotron emission regions, the appearance of the polarized sky is dominated by Faraday rotation occurring in small-scale structure in the intervening Warm Ionized Medium. There is no concentration of polarization structure towards the Galactic plane, indicating that both the emission and Faraday rotation occur nearby. We develop a conceptual framework for interpretation of the observations. We can detect only that polarized emission which has its origin closer than the [*polarization horizon*]{}, at a distance $d_{ph}$; more distant polarized emission is undetectable because of depth depolarization (differential Faraday rotation) and/or beam depolarization (due to internal and external Faraday dispersion). $d_{ph}$ depends on the instrument used (frequency and beamwidth) as well as the direction being studied. In our data we find that $d_{ph} \approx 2$kpc, consistent with the polarization features originating in the Local arm. The filling factor of the Warm Ionized Medium is constrained by our data to be considerably less than unity: polarized signals that pass through multiple regions of Faraday rotation experience severe depolarization, but polarized fractions up to $\sim$10% are seen, implying that there are lines of sight that intersect only one Faraday rotation region within the polarization horizon. The Rotation Measure (RM) of the extended polarized emission has a distribution which peaks at $-30$radm$^{-2}$ and has a width to half-maximum of $300$radm$^{-2}$. The peak and half-width of the distribution of RMs of extragalactic sources in the region are $-125$radm$^{-2}$ and $600$radm$^{-2}$ respectively. This suggests that RM increases monotonically with length of propagation path through the interstellar medium in this direction. Among localized polarization features that we discuss, G83.2+1.8 and G91.8$-$2.5 stand out for their circular or quasi-circular form and extent of more than $1^\circ$; both are probably related to the impact of stellar activity on the surrounding medium, although the stars responsible cannot be identified. Another polarization feature, G91.5+4.5, extends $2^\circ\!\!.5 \times 1^\circ$, and coincides with a molecular cloud; it plausibly arises in an ionized skin on the outside of the cloud. Polarized emission seen across the face of the large, dense H II region, W80, must be generated in the 500pc between the Sun and W80, since W80 must depolarize all extended non-thermal emission generated behind it. Of the supernova remnants G84.2$-$0.8, G89.0+4.7 (HB21), G93.7$-$0.2 (CTB104A), and G94.0+1.0 (3C434.1), only HB21 and CTB104A show polarized emission; the other two lie beyond the polarization horizon and their emission suffers beam depolarization. Emission from the surrounding medium is depolarized on passage through HB21. author: - 'B. Uyanıker, T.L. Landecker, A.D. Gray, and R.
Kothes' title: Radio Polarization from the Galactic Plane in Cygnus --- Introduction ============ The detection of linear polarization in the radio emission from the Galaxy confirmed the synchrotron origin of the emission [@west62; @wiel62]. While theory suggested that the polarized fraction at the point of emission could reach $\sim$70%, the detected fraction was far lower, and both discovery papers discussed the role of Faraday rotation in the interstellar magneto-ionic medium (MIM) in altering the polarization angle as well as its possible role in reducing the fractional polarization of the received emission. At the point of origin the electric vector is perpendicular to the local magnetic field, but the polarization angle changes as the radiation crosses regions where free electrons and magnetic fields are both present. The extent of this Faraday rotation is $$\Delta\theta = 0.81 \lambda^2 \int n_e B_{\|} dL = \lambda^2 \rm{RM}~\rm{(radians)}$$ where $n_e$ (cm$^{-3}$) is the electron density, $B_{\|}$ ($\mu$G) is the component of the magnetic field parallel to the line of sight, $L$ (pc) is the path length, $\lambda$ (m) is the wavelength, and the quantity RM (radm$^{-2}$) is denoted the Rotation Measure. Faraday rotation can lead to depolarization, a reduction in the apparent fractional polarization arising from vector averaging within the telescope beam of emission from different regions, or from superposition of emission which has suffered different Faraday rotation in separate regions, or a combination of both. There has been a revival of interest in the polarized signals from the Galactic “background”, based on new observations of high angular resolution made with large single antennas [@junk87; @dunc97; @dunc99; @uyan98; @uyan99] and with aperture-synthesis telescopes [@wier93; @gray98; @gray99; @have00; @gaen01; @uyla02b]. A striking property of the polarization images from these observations is the almost complete absence of correlation between regions of polarized emission and features in total intensity. The accepted interpretation of this result states that the appearance of the sky is dominated by Faraday rotation in the intervening medium, outweighing the effects of structure in the synchrotron emission regions. The structure in total intensity (Stokes parameter $I$) is generally smooth, while Faraday rotation in the intervening medium imposes structure with fine scale on the linearly polarized emission (Stokes parameters $Q$ and $U$). Aperture-synthesis telescopes preferentially detect small-scale structure, and are quite insensitive to the smooth $I$ component, leading to apparent fractional polarization in excess of 100%. The structure imposed by Faraday rotation is the product of tangled magnetic fields and irregular electron-density distributions that arise from turbulence in the ISM. Furthermore, most modern radio telescopes are more sensitive to rotation measure than they are to emission measure: an ionized region well below the threshold of detectability in the total-power channel can cause a significant and measurable Faraday rotation. Polarimetry is consequently a useful new tool for probing the magneto-ionic component of the ISM. In this paper we present polarization images at a frequency of 1420MHz of a region of area 117 deg$^2$ in the Galactic plane ($82^\circ < l < 95^\circ$, $-3^\circ\!\!.5 < b < +5^\circ\!\!.5$) made as part of the Canadian Galactic Plane Survey (CGPS). 
Angular resolution is about $1'$, an order of magnitude improvement over any existing polarization data for this region. The Cygnus region covered by our observations is a very complex one, since the line of sight is nearly along the Local arm. In Section 2 we describe the processing of the data and the construction of wide-field images. Section 3 presents the results and describes some general properties of the polarized emission. In Section 4 we develop a general framework for the interpretation of these results. Section 5 discusses our Rotation Measure results, and we proceed in Section 6 to a discussion of individual polarization features. Preparation of Survey Images ============================ The CGPS [@tayl02] is a systematic mapping of the principal constituents of the interstellar medium (ISM) of the Galaxy with an angular resolution close to $1'$, covering a large region in the first and second Galactic quadrants ($75^\circ < l < 145^\circ$, $-3^\circ\!\!.5 < b < +5^\circ\!\!.5$). Among the data products from the DRAO Synthesis Telescope is a set of images in all four Stokes parameters $I$, $Q$, $U$, and $V$ near 1420MHz. The telescope is described by @land00. It receives continuum signals in both hands of circular polarization in four separate bands, each of width 7.5MHz, two on either side of a central band, which is placed on the H I emission, from which only H I images are derived. For the CGPS observations, the telescope is tuned to a velocity of about $v_{\rm LSR}=-50$kms$^{-1}$ (frequency $\sim$1420.64MHz), giving center frequencies for the continuum bands of 1406.89MHz, 1414.39MHz, 1426.89MHz, and 1434.39MHz. To make $I$ images, visibilities from all four bands are separately gridded onto the $u-v$ plane and a combined image is computed, but $Q$ and $U$ images are made separately in each band. The $V$ maps are dominated by flux which depends on errors in the telescope and its calibration, mostly arising from ellipticity of the feeds [@smeg97], and are usually of little value. Since we expect the fraction of circular polarization generated in synchrotron emission regions to be very low, this is of no consequence. Calibration of phase, intensity, and instrumental polarization is based on observations of the sources 3C147 and 3C295, which are assumed to be unpolarized; polarization angle is calculated using assumed values for 3C286 [@land00]. Image processing follows the practice conventional for aperture-synthesis data, to the point of image formation and deconvolution with [clean]{}. Further processing, based on routines developed especially for the DRAO Synthesis Telescope [@will99], is described by @tayl02. The essential processes are a visibility-based removal (using the routine [modcal]{}) of the effects of strong sources, both within and outside the main beam, and self-calibration, followed by a special [clean]{} procedure around extended sources. Complex antenna gains, derived from self-calibration of the $I$ image, are applied to the $Q$ and $U$ data. The effects of strong sources like Cas A and Cyg A are seen in $Q$ and $U$ images even when these sources are well outside the main beam; their effects can be removed using [modcal]{}. In a final step, [modcal]{} is used again to remove residual rings (arising from polarization calibration errors) around polarized point sources within the field. Instrumental polarization varies across the field of view [@cazz00] because of feed cross-polarization and aperture blockage by feed-support struts.
This results in conversion of $I$ into $Q$ and $U$. The effects were measured by observing the unpolarized sources 3C147 and 3C295 at 88 positions in a grid of spacing $15'$ across the field of view out to a radius of $90'$, and the correction derived from these observations is routinely applied to survey images. Corrected sky images are assembled into mosaics with data from individual fields weighted to provide minimum noise level in the final image. For polarization mosaics, data from individual fields beyond a radius of $75'$ in the main beam are not used. Some instrumental polarization remains, but at a level less than 1% of $I$. Full details can be found in @tayl02. The telescope samples baselines from 12.9 to 617m with an increment of 4.286m, and all structure from the resolution limit of $\sim1'$ up to $\sim45'$ is represented in the images. Information from single-antenna data has been incorporated into the $I$ data (as is standard practice for the CGPS), but no attempt has been made to recover information corresponding to larger polarization structure, and single-antenna data have not been added to the $Q$ and $U$ images discussed in this paper. We discuss existing single-antenna polarization data for the region in Section 4. Polarized intensity, [*PI*]{}, is calculated as ${\it PI}=\sqrt{U^2 + Q^2 - (1.2\sigma)^2}$, where $\sigma=0.03$K is the rms noise in the $Q$ and $U$ images. The third term provides a first-order correction for noise bias [@ward74]. Polarization angle in each of the four bands is calculated as $\theta_\lambda=\frac{1}{2}\,\arctan\,\frac{U}{Q}$. When the signal-to-noise ratio is adequate, RM can be calculated from the four values of $\theta_\lambda$ . The RM algorithm is based on a linear fit of polarization angles $\theta_\lambda$ in the four bands against $\lambda^2$, followed by a routine which chooses among alternative values of RM which result from different resolutions of the $\pi$ ambiguity in the values of $\theta_\lambda$. If no fit produces $\chi^2 < 3$ that pixel is blanked in the RM image. The selection is then made on the basis of minimizing $|{\rm RM}|$ in the fit. Values of $|{\rm RM}| > 4000$radm$^{-2}$ would lead to very large bandwidth depolarization (see Section 4) and are rejected. With only four bands, ambiguity resolution is sometimes difficult, leading to some uncertain RM values. An Overview of the Data ======================= We present here a large mosaic, covering $82^\circ < l < 95^\circ, -3^\circ\!\!.5 < b < +5^\circ\!\!.5$, in the Cygnus region. Figures \[fig1\], \[fig2\], and \[fig3\] show $I$, $Q$, and $U$ mosaics, made from more than 40 Synthesis Telescope images. The $Q$ and $U$ mosaics are averages of the images made in the individual bands. Polarized signals are low relative to the noise in the images, and in order to show the large-scale distribution of polarized intensity we have smoothed the $Q$ and $U$ data to a resolution of $5'$ and have computed an image of [*PI*]{}; this is shown in Figure \[fig4\]. Figure \[fig5\] shows the polarization angle ([*PA*]{}) computed from $Q$ and $U$, at full resolution, as a grayscale. Figure \[fig6\] shows an image of RM. Where the level of [*PI*]{} is low, a meaningful RM calculation is not possible, and pixels in the RM image are not shown where ${\it PI} < 0.01$K in the image of Figure \[fig4\]. This level corresponds to $\sim$5 times the rms noise at $5'$ resolution. 
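The $PI$ and RM recipes described above can be illustrated with a short numerical sketch. The code below is not the CGPS pipeline; it simply applies the noise-bias correction ${\it PI}=\sqrt{U^2+Q^2-(1.2\sigma)^2}$ and fits RM as the slope of polarization angle against $\lambda^2$ for the four band frequencies quoted in Section 2. The synthetic $Q$/$U$ values, the depth of the $\pi$-ambiguity search, and the chi-square threshold (here in arbitrary units) are illustrative assumptions.

```python
import numpy as np

SIGMA = 0.03                                    # rms noise in the Q and U images (K)
BANDS_MHZ = np.array([1406.89, 1414.39, 1426.89, 1434.39])
LAM2 = (2.998e8 / (BANDS_MHZ * 1e6)) ** 2       # wavelength squared per band (m^2)

def polarized_intensity(q, u, sigma=SIGMA):
    """Band-averaged PI with the first-order noise-bias correction."""
    pi2 = q ** 2 + u ** 2 - (1.2 * sigma) ** 2
    return np.sqrt(max(pi2, 0.0))

def fit_rm(q_bands, u_bands, n_amb=2, chi2_max=3.0):
    """Slope of polarization angle against lambda^2 over the four bands.
    Each band angle is only defined modulo pi, so +/- n*pi offsets are tried;
    among fits below the (arbitrary-units) chi-square threshold the one that
    minimizes |RM| is kept, loosely following the selection rule in the text.
    Returns None if no acceptable fit exists (pixel blanked)."""
    theta = 0.5 * np.arctan2(u_bands, q_bands)
    best = None
    for shifts in np.ndindex(*(2 * n_amb + 1,) * len(theta)):
        ang = theta + (np.array(shifts) - n_amb) * np.pi
        rm, icpt = np.polyfit(LAM2, ang, 1)
        chi2 = np.sum((ang - (rm * LAM2 + icpt)) ** 2)
        if chi2 < chi2_max and (best is None or abs(rm) < abs(best)):
            best = rm
    return best                                  # rad m^-2

# Synthetic pixel: 0.1 K of polarized signal with RM = -30 rad m^-2.
rm_true, theta0 = -30.0, np.deg2rad(20.0)
ang = theta0 + rm_true * LAM2
q, u = 0.1 * np.cos(2 * ang), 0.1 * np.sin(2 * ang)
print(polarized_intensity(q.mean(), u.mean()))   # ~0.093 K (bias term removed)
print(fit_rm(q, u))                              # ~ -30 rad m^-2
```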
Readily apparent features of the $I$ image of Figure \[fig1\] are many compact sources and a smaller number of bright, extended sources with sizes up to a few degrees. Table \[tbl-1\] lists some of the extended sources (some are marked in Figure \[fig1\]), and gives distances and identifications of objects which are discussed or referred to in the following sections of this paper. There is also a very extended band of diffuse emission from the Galactic plane, more than $4^\circ$ wide, crossing the image centered at $b=+2^\circ$. This is emission from the outer Galaxy and its distribution in latitude reflects the northward warp of the disk. Total intensity in the entire region shown in Figure \[fig1\] never drops below $\sim4.6$K. Of this, 2.7K arises in the cosmic microwave background, and we expect the remaining 1.9K to be almost all non-thermal emission from the Galaxy. Examination of Figures \[fig1\] to \[fig5\] yields the following general conclusions. - Many compact sources show significant polarized emission. A study of the RMs of these sources can be found in @brow01 and @brow02; we do not discuss these sources here, but we use some results from those papers. - For extended sources, the differences between total-intensity structures and polarization structures are more striking than the similarities. - There are large regions in which very little significant polarized emission has been detected (apart from point sources). An example is the triangular region extending from the bottom of the images, $84^\circ\!\!.5 < l < 89^\circ\!\!.5$, $b=-3^\circ$, up to $(l,b)=(87^\circ, +0^\circ\!\!.5)$. There appears to be a diagonal band of very low [*PI*]{} extending from $(l,b)=(87^\circ\!\!.6, +4^\circ\!\!.5)$ to $(l,b)=(94^\circ\!\!.0, -2^\circ\!\!.0)$. At the upper end of the band [*PI*]{} is particularly low, at $(l,b)=(87^\circ\!\!.6, +4^\circ\!\!.5)$, where $Q$ and $U$ are below the noise. Given that there is at least 2K of synchrotron emission from all directions in this region (see above), depolarization must be very thorough. Superimposed on this band is the SNR CTB104A whose emission is strongly polarized. - There are many “depolarization filaments” visible in Figure \[fig4\] as narrow threads of very low [*PI*]{}; they are not detectable at full resolution because of noise. These “filaments” mark regions where polarization angle is changing rapidly. Similar structures are seen in other data [e.g. @uyan98; @uyan99; @have00; @gaen01]. - Most of the polarization structure is apparently random, giving a mottled appearance to the $Q$ and $U$ images. Scale sizes extend from degrees down to the resolution limit of the data. - Examination of the [*PA*]{} image (Figure \[fig5\]) shows that angle changes in a rapid and chaotic manner over much of the field, but there are regions where the angle change is relatively slow over many beamwidths[^1]. Although the polarization structures seen in Figure \[fig5\] are large, in some cases degrees in extent, the full angular resolution of the telescope, $\sim1'$, is needed to detect most of them. Almost none of these features can be seen in a [*PA*]{} map computed from $Q$ and $U$ data smoothed to $5'$ resolution. The notable exception is the smooth feature at $(l,b)=(91^\circ\!\!.8,-2^\circ\!\!.5)$. - There appear to be small patches of polarized emission at the positions of the H II regions S112 and S115.
The $Q$ and $U$ signals at these positions arise from a low level of residual instrumental polarization (conversion of $I$ into $Q$ and $U$ at a level below 1%). - There is a polarized “filament” about $1^\circ$ wide crossing the region W80 from north to south at $(l,b)=(85^\circ, -1^\circ)$. - There is a very complex polarized region centered at $(l,b)=(83^\circ, +2^\circ)$ extending across the western edge of the field. - There is a large polarized patch at $(l,b)=(94^\circ, +2^\circ)$ extending across the eastern edge of the field. - The large SNRs HB21 and CTB104A have identifiable counterparts in $Q$ and $U$; we have detected polarized emission from these SNRs. It is noteworthy that polarized emission is [*not*]{} detected from other SNRs in the region; this fact is discussed below. - At $(l,b)=(91^\circ\!\!.5, +4^\circ\!\!.5)$ we see a strongly polarized region of extent roughly $1^\circ \times 2^\circ$. This polarized region has no $I$ counterpart. It lies quite near the SNR HB21, and in fact extends all the way to the eastern rim of the SNR and is contiguous with polarized emission from the SNR. G91.5+4.5 coincides in position with a molecular cloud with which the SNR is interacting [@tate90]. Polarized intensity here is $\sim$0.3K, suggesting a fractional polarization of about 10%. - At $(l,b)=(91^\circ\!\!.8,-2^\circ\!\!.5)$ we see a smooth elliptical structure about $2^\circ$ across that fades into the surrounding, more chaotic polarization. It stands out from the other polarized emission seen in Figures \[fig2\] and \[fig3\] precisely because of its smooth structure. - Centered at $(l,b)=(83^\circ\!\!.16, +1^\circ\!\!.84)$ is a polarization feature which has a remarkably sharp and almost circular edge about $1^\circ\!\!.4$ in diameter, seen particularly well in the $Q$ image (see also Figure \[fig8\] – this feature is discussed in Section 6). A General Conceptual Framework for Interpretation of the Results ================================================================ Before discussing individual polarized regions or “objects”, we need to consider the general characteristics of the polarized emission in this part of the sky and establish a conceptual framework within which our results can be understood. We know that synchrotron emission is strongly polarized at its point of origin, but only about half of the area portrayed in Figures \[fig1\] to \[fig4\] shows detectable polarization. Any interpretation of the polarization phenomena in this direction must explain both regions of high [*and*]{} low polarized intensity. Interpretation of our data thus requires a full consideration of the mechanisms that create small-scale polarization structure and the mechanisms that remove it (i.e. depolarization). A central issue in interpretation is establishing the distance to the polarization features as this is the key to using the data to examine the physical properties of the MIM. The Origin and Nature of the Polarized Emission ----------------------------------------------- The ultimate source of the polarized emission is Galactic synchrotron radiation. Observations of the synchrotron emission from external galaxies [e.g. @berk97] show a high level of order in magnetic fields in galactic halos and thick disks. 
Based on 408 MHz observations, @beue85 have established that the synchrotron emission in the Milky Way is generated within a thin disk of scale height 180pc, accounting for 10% of the emission, and within a thick disk of scale height 1kpc which provides 90% of the emission (these heights apply near the Solar circle). The thin synchrotron disk more or less coincides with the gas disk. Observations of external galaxies show magnetic structure usually aligned with the spiral arms, and the fractional polarization is sometimes stronger in inter-arm regions than in the arms themselves [@beck96], implying that the irregular component of the field is stronger in the spiral arms, where it is generated by the turbulent motions associated with star formation. We know from our $I$ images that the intensity structure of the synchrotron emission that we are detecting is smooth, but we have no information on the variability of polarization angle at the point of origin because we do not know how smooth the magnetic field is there. However, we can say that the magnetic field cannot be completely irregular, or we would not detect any polarized emission at all. We note that we have detected fractional polarization up to about 10% (calculated as the measured [*PI*]{} as a fraction of the $I$ data with single-antenna data incorporated). In the absence of small-scale structure in $I$, we proceed on the assumption that the intrinsic polarization (i.e. $Q$ and $U$) is smooth or slowly varying. Simple considerations give information on the physical location of the polarized emitting regions relevant to our data. We can easily find the Galactic plane by inspecting the $I$ image, but there is no concentration of polarized structure towards the plane. This suggests that the polarized emission detected in this field must be nearby. (This is related to longitude: in the CGPS data there are regions near $l=130^\circ$ where polarized emission appears strongly concentrated near the Galactic plane, and so must be more distant). The Origin of Structure in the Observed Polarization ---------------------------------------------------- If the polarized emission is intrinsically smooth, the small-scale structure observed in $Q$ and $U$ must then be produced by Faraday rotation along the propagation path (the exception is SNR emission, where there definitely is fine structure in $I$). The Faraday rotation occurs in regions of the Warm Ionized Medium (WIM), loosely referred to as “Faraday screens”. The word “screen” suggests a thin foreground, distinct from the emitting region, but we consider the term to include deep regions. Since synchrotron emission is generated everywhere in the Galaxy, the screen will always be an emission region as well. The WIM is confined to the Galactic thick disk, of height $\pm$1kpc [@reyn89]. The filling factor is unknown, but is significantly lower than unity. The uneven distribution of ionized gas is related to star formation in the thin disk, and the effects of stars also create a high degree of disorder in magnetic field structure. We can anticipate that any line of sight will pass through one or more “cells” of WIM. This is certainly the case for lines-of-sight that traverse the entire Galactic disk—nearly all polarized extragalactic sources show significant RM [@brow01]. 
Faraday rotation alone cannot alter the amplitude of the polarized signal, but a number of physical and instrumental effects can reduce the fractional polarization of the recorded signal [see @burn66; @gard66; @soko98]: - Bandwidth depolarization occurs when the RM is so high that the polarization angle changes significantly across the received bandwidth, and the resulting non-parallel vectors are averaged. - Depth depolarization (also known as differential Faraday rotation) occurs when synchrotron emission and Faraday rotation co-exist within the same volume of space. Emission generated at different depths along the line of sight suffers different rotation, and vector averaging then reduces the observable polarized intensity. - Beam depolarization occurs when many turbulent cells of the ISM (due to internal or external Faraday dispersion) and/or large RM gradients exist within the beam of the telescope, again leading to vector averaging. We need to evaluate each of these mechanisms for the DRAO Synthesis Telescope at 1420MHz, and for the region studied here. Bandwidth depolarization is not a significant factor in the interpretation of the present data. In band-averaged data a rotation measure of $|{\rm RM}| \approx 790$radm$^{-2}$ is required to produce 50% depolarization, with total depolarization occurring when $|{\rm RM}| \approx 1250$radm$^{-2}$. The equivalent values in single-band data are $|{\rm RM}| \approx 4000$radm$^{-2}$ and 6600radm$^{-2}$, respectively. For the general ISM, $|{\rm RM}| \leq 300$radm$^{-2}$ usually holds (see Figure 7, in which 93% of points lie in that range), producing a depolarization of $<8$% in band-averaged data, and $<0.3$% in a single band. An indication of which areas in our data are most affected by bandwidth depolarization can be obtained by comparing the RM image of Figure 6 with the band-averaged images in Figures 2–5. Depth depolarization is without doubt a major factor in these data. In a volume of ISM with uniform synchrotron emissivity, electron density and magnetic field, the integrated emission is totally depolarized for a path length $$L = \frac{\pi}{0.81 \lambda^2 n_e B_{\|}}.$$ In this direction the line of sight is very close to the direction of the uniform component of the field (determined to be $l \approx 83^\circ$—[@heil96]). We therefore take $B_{\|} \approx 2\,\mu$G, the full value of the uniform component [@beck01]. We take the mean value of the electron density to be $n_e=0.02$cm$^{-3}$, the electron density near the Sun from the model of @tayl93. Then total depth depolarization at 1420MHz occurs over a path length $L \approx 2.2$kpc. However, we should be careful not to apply this result too literally, since the uniformity assumed in the calculation is very far from the truth. Significant though it may be, depth depolarization requires integration along a path of at least 100pc to produce a 10% change in polarized intensity, and therefore cannot create arcminute structure on the sky. The presence of widespread small-scale polarization structure requires small-scale structure in Faraday rotation [*transverse*]{} to the line of sight, and under these circumstances beam depolarization becomes significant, and is likely responsible for the smallest-scale structure that we see. The Polarization Horizon ------------------------ The combined effects of depth depolarization and beam depolarization do not allow us to detect polarized emission features that originate beyond a certain distance.
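A back-of-the-envelope check of the depth-depolarization path length quoted above, using the same representative values adopted in the text ($B_{\|} = 2\,\mu$G, $n_e = 0.02$cm$^{-3}$, $\lambda \approx 21$cm); this is only a numerical restatement of the formula, not part of any analysis pipeline.

```python
import numpy as np

lam = 2.998e8 / 1420.0e6     # observing wavelength (m), ~0.21 m
n_e = 0.02                   # mean electron density near the Sun (cm^-3)
B_par = 2.0                  # uniform field component along the line of sight (microgauss)

# Total depth depolarization when the internal Faraday rotation reaches pi:
# L = pi / (0.81 * lambda^2 * n_e * B_par), with L in pc for these units.
L_pc = np.pi / (0.81 * lam ** 2 * n_e * B_par)
print(f"total depth depolarization over ~{L_pc / 1e3:.1f} kpc")   # ~2.2 kpc
```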
This limit applies equally to supernova remnant emission and to general Galactic emission. In particular, beam depolarization will act in the same way on SNR emission and on small-scale features generated in a localized Faraday screen. We capture these ideas in the concept of the [*polarization horizon*]{}, and will attempt to establish its distance, $d_{ph}$. We emphasize that $d_{ph}$ is wavelength dependent, beamwidth dependent, and direction dependent. In particular, a telescope of higher angular resolution will experience less beam depolarization and will “see through” to a larger distance, and, in general, at a fixed angular resolution, an observation at higher frequency will “see” to a larger distance. SNRs provide a means of estimating the distance to the polarization horizon in our data. All SNRs generate polarized emission at source, and are typically bright objects that subtend many beamwidths, but not all SNRs in our data have detectable polarized emission. Those that are not detected must either be intrinsically faint, or lie beyond the polarization horizon. Table \[tbl-2\] lists SNRs in the range $83^\circ < l < 95^\circ$, including some at latitudes beyond the boundaries of the present dataset. Data are taken from the catalog of @gree01 or from the references given in the table. The most straightforward conclusion from the data of Table \[tbl-2\] is that the emission from SNRs more distant than about 2 kpc is not detected. The weakness of this argument is that a very dense H II region, W80, lies in front of three of the SNRs in Table \[tbl-2\] (G84.2$-$0.8, G84.9+0.5, and G85.9$-$0.6), and it is hardly surprising that beam depolarization occurs on passage of the emission through such a region (see more detailed discussion in Section 6). Furthermore, G85.4+0.7 and G85.9$-$0.6 are SNRs of very low surface brightness, among the faintest known, and our observations simply do not have the sensitivity to detect polarization from these SNRs. This leaves only G94.0+1.0 as a reasonably bright SNR that lacks detected polarization and is not seen through an H II region; its distance is 5.6kpc [@fost02], placing it in the Perseus arm. (Some polarized emission is seen coincident with both G84.2$-$0.8 and G94.0+1.0, but is not believed to be associated with the SNRs; see Section 6). A safe conclusion is that beam depolarization will eliminate polarization from SNRs beyond the Local arm in the region we are examining; i.e. the polarization horizon is at a few kpc distance, and is likely within the Local arm. H II regions can also be of use in determining $d_{ph}$. Electron densities in H II regions are much higher than in the surrounding ISM, and motions are turbulent, resulting in very tangled magnetic fields. Since magnetic fields are frozen into the ionized gas, and move with it, quite high field strengths can result from compression. The resulting high Faraday rotation with variations on small spatial scales causes strong beam depolarization, making polarized emission from behind the region undetectable, and creating an area of very low [*PI*]{} whose outline matches that of the region. This signature effect has been used by @gray99, @pera99, and by @gaen01 to establish distances to Faraday screens, and places a lower limit on $d_{ph}$. Examination of the present data shows that, while there are many bright H II regions in the field (see Table \[tbl-1\]), they are [*not*]{} seen as depolarization features.
This implies that the Faraday screens which define the polarization structure in this region are significantly [*closer*]{} to us than the H II regions, which are mostly at distances of 2 to 3 kpc (see Section 6 for a detailed discussion of W80, which shows strong coincident polarization). Thus the H II regions do not allow us to place a strong limit on $d_{ph}$. Weighing the evidence, we conclude that, for lines-of-sight in the Galactic plane in this region, the polarization horizon is probably about 2kpc distant, and all the polarization features seen in Figures \[fig2\] and \[fig3\] are generated within the Local arm. We note that our coverage in Galactic latitude of $-3^\circ\!\!.5 < b < +5^\circ\!\!.5$ implies that our lines-of-sight are within the thin disk out to distances of about 2kpc. We are therefore seeing synchrotron radiation that arises predominantly in the thin disk. Detectability of Polarization Features -------------------------------------- The DRAO Synthesis Telescope at 1420MHz has a much greater sensitivity to RM than to emission measure: it can detect ionized material in its polarization channels which it cannot detect in total power. For example, at 1420MHz an ionized region of extent 10pc with $n_e=0.5$cm$^{-3}$ and $B_{\|}$ = 2$\mu$G will produce a Faraday rotation of $20^\circ$, which is easily detectable. This same region produces an emission measure of 2.5cm$^{-6}$pc, which is only one tenth of the rms thermal noise in the $I$ image. This great sensitivity to Faraday rotation, along with the fact that Faraday screens do not affect total intensity, leads to the effect remarked on earlier, that the polarized sky has a completely different appearance from the total-intensity sky. While this great sensitivity facilitates study of the MIM, it is often impossible to isolate a single region or object causing the Faraday rotation along the line of sight. The following conditions are required to generate a polarization feature that is detectable with the Synthesis Telescope: (a) the presence of a Faraday screen within the polarization horizon, (b) structure within that Faraday screen on size scales between $1'$ and about $45'$, and (c) the absence of confusing Faraday screens along the line of sight. The third point merits some elaboration. Faraday screens are transparent structures, and one Faraday screen can strongly modify the appearance of another on the same line of sight. For example, the smooth appearance of G91.8$-$2.5 must be produced by a Faraday screen whose structure is extremely smooth. That smooth structure could easily be altered, if not completely obscured, by the presence of another screen anywhere along the line of sight within the polarization horizon, either closer or more distant than G91.8$-$2.5, with fine structure. We take the absence of confusing Faraday rotation along the line of sight as evidence that the filling factor of ionized gas is quite low. In the same vein, the short path to W80 favours the detection of polarized emission from the foreground MIM: along a longer path, the superposition of the effects of several Faraday screens could cause partial or total depolarization. This leads us to a conjecture about the regions where we detect only very low levels of polarized emission, the “empty” regions. Faraday rotation is not absent from these regions, as witnessed by the RMs of extragalactic sources seen through them [@brow01].
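The sensitivity comparison made earlier in this subsection (the 10pc ionized region with $n_e=0.5$cm$^{-3}$ and $B_{\|}=2\,\mu$G) can be restated in a few lines of arithmetic; the constants are just the standard RM and emission-measure definitions, and the numbers are those quoted in the text.

```python
import numpy as np

lam2 = (2.998e8 / 1420.0e6) ** 2     # lambda^2 at 1420 MHz (m^2)
n_e, B_par, L = 0.5, 2.0, 10.0       # cm^-3, microgauss, pc (values quoted in the text)

rm = 0.81 * n_e * B_par * L          # rotation measure of the region (rad m^-2)
dtheta = np.degrees(rm * lam2)       # Faraday rotation it imposes at 1420 MHz (deg)
em = n_e ** 2 * L                    # emission measure (cm^-6 pc)

print(f"RM = {rm:.1f} rad m^-2 -> rotation of {dtheta:.0f} deg")   # ~8 rad m^-2, ~21 deg
print(f"EM = {em:.1f} cm^-6 pc")                                   # 2.5 cm^-6 pc
```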
Indeed, small-scale angular structure in Faraday rotation exists in many directions, detected through an analysis of the variation of RMs between the individual components of extragalactic sources [@leah87]. Small-scale variations are also reported by @brow01. Since Faraday rotation is present in the “empty” regions, and probably has small-scale structure, we interpret the absence of detectable polarization as an indication that a number of patches of WIM exist along the line of sight, to the point where the emission is almost totally depolarized. This is still consistent with a low filling-factor for the WIM, as there may in fact be no Faraday screens within the polarization horizon. That is, we are “seeing” the depolarization at the polarization horizon, and any polarized emission arising closer to us is too smooth to be detected by the interferometer. Comparison with Other Data -------------------------- ### High-Resolution Data There have been six discussions of the distance to polarized Galactic emission (including the present work) on the basis of data obtained with antenna beams smaller than $10'$. These results are summarized in Table \[tbl-3\]. Among these results, the present work stands out as showing very nearby emission regions. Can this be understood in terms of the conceptual framework that we have outlined above? The only work that has examined part of the Galaxy beyond the Solar circle is that of @gray99, also based on CGPS data. These authors describe polarized features related to the H II region W4 (near $l = 135^\circ$) which is definitely a Perseus arm object. W4 is seen as a depolarizing feature, so the polarized emission must arise behind it. In this direction the telescope evidently “sees through” the Local-arm Faraday rotation. There are several reasons to expect the polarization horizon to be closer at $l = 83^\circ$ to $95^\circ$ than at $l = 135^\circ$. First, the local magnetic field is directed towards $l \approx 83^\circ$ [@heil96], so that, in the region we are considering, the line of sight is nearly parallel to the uniform component of the field, and Faraday rotation will be higher. Second, in these directions, the line of sight passes for a considerable distance through the local spiral arm (or spur), which is directed towards $l \approx 80^\circ$. On the other hand, along sight lines toward $l \approx 135^\circ$ the path through the Local arm is fairly short. Furthermore, the line of sight is at an angle of about $50^\circ$ to the direction of the uniform component of the magnetic field and we expect that RMs will be lower; this is confirmed by @brow01 on the basis of analysis of RMs of extragalactic sources. Inter-arm fields are probably quite smooth, and little depolarization will occur there. In these directions, the polarization horizon at 1420MHz must lie in the Perseus arm, or even beyond it. The other work at a wavelength very similar to ours is that of @gaen01 who looked towards the inner Galaxy [at $l \approx 330^\circ$, almost directly opposite the region considered by @gray99]. They estimate that most of the polarized emission is in the Crux arm; surprisingly, they “see through” the closer Carina arm, although again the large angle to the field direction helps. @dick97 used H I absorption to establish a minimum distance to the polarized emission in one direction in the same field; absorption by gas in the Carina arm was detected, placing the emission regions in or beyond that arm. The extensive surveys of @dunc97 [@dunc99] used shorter wavelengths.
Because of the dependence of Faraday rotation on ${\lambda}^2$ we expect the polarimeter to “see” more distant emission. Because of the use of single-antenna telescopes, we might expect greater sensitivity to large-scale emission, and @dunc97 detected an apparently isotropic, and so nearby, component. The emission in the first quadrant mapped by @dunc99 is associated with the Sagittarius arm by the correlation of depolarization with emission. This locates it at distances between 2.5 and 8 kpc. Comparing our results with the others listed in Table \[tbl-3\], we reach the conclusion that the Local arm has a substantial magneto-ionic component, and is a strong Faraday rotator. This is contrary to some other ideas of Galactic structure around the Sun: for example, the electron density model of @tayl93 shows no enhancement of electron density corresponding to the Local arm. ### Low-Resolution Data There is some low-resolution polarization data for the region discussed here, and we now consider its relevance to the above discussion. @brou76 present single-antenna polarization data for most of the northern sky at several frequencies from 408 to 1411MHz (hereafter the Dwingeloo data). At the highest frequency the angular resolution is $36'$. Sampling is irregular, with a minimum interval between points of $2^\circ$. In the $13^\circ \times 9^\circ$ region considered here there are only 14 data points; sampling at half-power beamwidth would have given data at about 1300 points. The extreme under-sampling makes a meaningful comparison with our data quite difficult. In a slightly larger area ($80^\circ < l < 96^\circ, -4^\circ < b < +6^\circ$) there are 24 data points at 1411MHz in the Dwingeloo data. In that area the average [*PI*]{} is 0.143K, and the average polarization angle is $-12^\circ$ (angle increases east of Galactic north) with a scatter of $34^\circ$ rms. The average angle weighted by [*PI*]{} is $-9^\circ$. Within the uncertainties, these results are consistent with an electric vector perpendicular to the Galactic plane, that is with a magnetic field aligned parallel to the plane. There is reason to believe that the low-resolution data sample very nearby regions (see Section 5). Rotation Measure ================ The multi-frequency nature of the data allows us to calculate RM in regions with sufficiently high polarized signal. The result is shown in Figure \[fig6\]. Where $PI$ is strongest, RM values tend to be negative; positive values tend to be found in areas of low polarized intensity, and must be regarded with some caution. We use the RM data to reach some general conclusions about the region, but in this paper we will not attempt to interpret the RM data for individual objects. In this region @brow02 have determined the RM of 107 compact sources, using an algorithm that is entirely independent of ours, but of course is basically similar. It is probable that all of these compact sources are extragalactic. RM values range from $-1413$ to +371radm$^{-2}$, and the mean and median values are $-265$ and $-227$radm$^{-2}$ respectively. Figure \[fig7\] compares the distribution of RM of the extended emission (the data of Figure \[fig6\]) with the distribution of RM of the compact sources. The RM of the extended polarized emission has a distribution which peaks at $-30$radm$^{-2}$ and has a width to half-maximum of $300$radm$^{-2}$. In contrast, the peak and half-width of the distribution of RMs of compact sources are $-125$radm$^{-2}$ and $600$radm$^{-2}$ respectively.
The greater spread in compact-source RM is possibly the result of Faraday rotation within the sources themselves, but this should not greatly affect the location of the histogram peak since the source sample is large (107 sources). We interpret the substantial difference in the RM distributions as follows: (i) the extragalactic source emission has suffered greater Faraday rotation as a result of propagation along a longer path, extending from the outer edge of the Galaxy to the Sun, a path of perhaps 10 kpc, while (ii) the extended emission has traversed a shorter path, no more than 2 kpc, the distance to the polarization horizon. This interpretation is consistent with the conclusion that there is no reversal of the uniform component of the Galactic field beyond the Local arm [@brow01]. Under these circumstances the RM will tend to increase monotonically with distance of propagation through the Galactic disk, and RM becomes a (weak) indicator of distance. In the region covered by the CGPS, the second Galactic quadrant, we expect that the MIM in the Local arm and the Perseus arm will dominate polarization effects. The magnetic field falls to low values at large galactocentric radius, and we expect relatively little Faraday rotation in the extreme outer Galaxy. In contrast to our results, RMs determined by @spoe84 from the Dwingeloo data are between $-10$ and +10radm$^{-2}$. This suggests that the Dwingeloo observations detect polarized emission generated very close to the Sun, while the DRAO observations detect polarized emission generated at larger distances. In the language we have developed above, we would say that the polarization horizon for the Dwingeloo telescope is very close; this is understandable when the effects of beam depolarization in a $36'$ beam are considered. From the absence of depolarizing effects of H II regions, @wilk74 deduced that the polarized emission detected by single-antenna radio telescopes (with beams of $0^\circ\!\!.5$ to $2^\circ$ at frequencies of 240 to 1400 MHz) lies closer than 500 pc. These authors derive RM data compatible with that of @spoe84 over a smaller area. These results were obtained for an area near $l = 150^\circ$, but are probably applicable over a considerable part of the Galactic plane. Results for Individual Regions ============================== In this section we discuss individual “objects” or regions within the survey area. Regions are considered in order of their Galactic longitude and are named in the manner conventional for Galactic objects. G83.2+1.8, a Circular Faraday-Rotation Feature ---------------------------------------------- Centered at $(l,b) = (83^\circ\!\!.16, +1^\circ\!\!.84)$ is a polarization feature which has a remarkably sharp and almost circular edge, seen particularly well in the $Q$ image (Figure \[fig2\]). A $Q$ map covering a smaller area is shown in Figure \[fig8\]. The nearly circular boundary is very obvious; the boundary can be fitted very closely with a circle of diameter $1^\circ\!\!.38$ over about 70% of the circumference. Once again, there is no counterpart in $I$. This sharp circular boundary strongly suggests a stellar-wind phenomenon, and there is a B8 star, HD196833, at $(l,b) = (83^\circ\!\!.19, +1^\circ\!\!.87)$, about $2'\!\!.5$ from the center of the circular feature. The distance of this star, from HIPPARCOS measurements, is $\sim330$pc [@perr97]. At this distance the diameter of the structure would be 8pc. The change in polarization angle at the circle is $\sim60^\circ$.
The corresponding change in RM at 1420MHz is 24rad m$^{-2}$. For an assumed constant magnetic field of 2$\mu$G, this implies an enhanced electron density of 1.8cm$^{-3}$. The excitation parameter required to keep this volume ionized is 6cm$^{-2}$pc if the volume is a uniform sphere, or about 4cm$^{-2}$pc if it is a spherical shell. At a distance of 330pc, this demands a star of type approximately B1 [@pana73]. A B8 star produces a flux of ionizing photons inadequate by more than 5 orders of magnitude, so HD196833 cannot have created the spherical feature. In principle G83.2+1.8 could be the result of a [*deficit*]{} of electrons, produced when a stellar wind swept out a spherical region. However, this would require an external electron density of 1.8cm$^{-3}$, an improbably high value. Even though the source of ionization that maintains G83.2+1.8 cannot be identified, it nevertheless seems very likely that the polarization feature has a stellar origin. Polarized Emission Superimposed on W80, G85$-$1 ----------------------------------------------- Before discussing detailed results for the polarization superimposed on W80, we will examine the assertion, made in Section 4, that beam depolarization in W80 eliminates the polarization of any synchrotron emission generated behind it. @wend83 have developed a model for W80, interpreting the region as a giant molecular cloud being disrupted by 8 early-type stars. Using the electron densities and path lengths from their model, and assuming a magnetic field of $B_{\|} = 2\,\mu$G, the Faraday rotation through the center of the nebula is 17radians, corresponding to an RM of 400radm$^{-2}$. The high densities in this model (varying from 6 to 40cm$^{-3}$) mean that the field is likely to be higher than the value we have assumed, and we consider 400radm$^{-2}$ to be a lower bound for the RM of the region. Given the turbulence that is likely to be associated with the disruptive activity of the stars, we can assert with confidence that the ionized gas in W80 will completely “hide” polarized emission arising behind it through beam depolarization. Figure \[fig9\] shows $I$, $Q$, $U$, and [*PA*]{} images of an area including the region W80 and stretching to the western boundary of the field. There is no strong correlation of polarized features with total-intensity objects, except for a handful of compact sources. The most intense polarized feature is a complex “filament” which crosses W80. @roge99 measured the synchrotron emissivity at 22MHz by measuring the brightness temperature towards optically thick regions which absorb all emission arising behind them. One of the regions used was W80, where a value of 21,800K/kpc was obtained. However, this value was based on a distance of 800pc for W80, and adjusting to the value of 500pc which we are using, the emissivity becomes 34,900 K/kpc. Translating this figure to 1420MHz using $\beta=-2.8$ ($T\propto\nu^\beta$) gives an emissivity of 0.30K/kpc. The filament in front of W80 causes an angle change of about 100$^\circ$ along most of its length, increasing to about 200$^\circ$ at its northern end. The [*PA*]{} image leaves the impression that we are looking through a Faraday rotating sheet, with the magnetic field in the sheet possibly aligned with the line of sight. The measured peak [*PI*]{} on the filament is $\sim0.05$K. Assuming a fractional polarization at origin of 70%, the emitted synchrotron radiation must contribute a brightness temperature, $T_b$, of 0.07K. 
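The emissivity rescaling and the implied emission path length can be checked with a few lines of arithmetic. The sketch below only restates the figures quoted above (34,900K/kpc at 22MHz, $\beta=-2.8$, a peak filament [*PI*]{} of 0.05K and an assumed 70% intrinsic polarized fraction); it is not part of the survey processing.

```python
# Rescale the 22 MHz synchrotron emissivity to 1420 MHz and infer how much
# of the 500 pc path to W80 is needed to produce the polarized filament.
beta = -2.8                        # spectral index, T propto nu**beta
eps_22 = 34_900.0                  # emissivity at 22 MHz (K/kpc), rescaled to d = 500 pc
eps_1420 = eps_22 * (1420.0 / 22.0) ** beta
print(f"emissivity at 1420 MHz ~ {eps_1420:.2f} K/kpc")            # ~0.30 K/kpc

pi_peak = 0.05                     # peak PI on the filament (K)
tb_needed = pi_peak / 0.70         # assume ~70% polarization at the point of emission
print(f"emission path ~ {1000 * tb_needed / eps_1420:.0f} pc")     # ~240 pc, about half of 500 pc
```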
Given the value for the synchrotron emissivity, it is immediately clear that the synchrotron emission we are detecting must be generated along about half of the line of sight to W80. We note the presence of an extremely small polarized filament, $\sim25'$ long and unresolved in width, at $(l,b)=(84^\circ\!\!.65, -1^\circ\!\!.35)$, well portrayed in the $Q$ and $U$ images of Figure \[fig9\]. Assuming a distance of 250pc, the length of this filament is $\sim$1.8pc and its width is less than 0.07pc. The SNR G84.2$-$0.8 ------------------- Figure \[fig10\] shows a $PI$ image of the SNR G84.2$-$0.8 with superimposed contours of $I$. It is apparent that a peak of [*PI*]{} lies near the northern shell of the SNR, where its total intensity peaks. This gives the impression that we have detected polarized emission from the SNR, contrary to the assertion made in Section 4. But is this really SNR emission, or is it foreground emission superimposed by chance on the SNR? It is instructive to look at the images of Figure \[fig9\], which cover a wider area. Inspection of the $Q$ and $U$ images shows that, first, while polarized structure roughly coincides with G84.2$-$0.8, the polarized emission around the SNR extends well beyond the perimeter of the SNR. Second, the scale-size of the polarized structures is not much smaller than the SNR. This is in marked contrast to HB21 and CTB104A where polarization structures as small as the telescope beam ($1'$) are clearly seen. Further, the angle image shows that there is a filament of altered angle of extent about $1^\circ\!\!.5$ at almost constant $b$, crossing right through G84.2$-$0.8. We are satisfied that the polarized emission is probably unrelated to the SNR, but there is no way of reaching a conclusive answer to this question. This case illustrates the difficulty of making polarization observations of SNRs at frequencies as low as 1420MHz. Unless there is a very substantial contrast between SNR emission and the surroundings, either in polarized intensity or in polarization structure (as is the case for HB21 and CTB104A) it is almost impossible to decide whether polarized signals arise from the SNR or from the surrounding ISM. Detected polarization features might be in front of or behind the SNR. In any event, it is advisable to image a wide area around any SNR being investigated. The SNR HB21, G89.0+4.7 ----------------------- HB21 is at the northern edge of the region described here. We will present detailed polarization results for HB21 in a forthcoming paper (Uyanıker et al. in preparation), but here we describe some features which are relevant to the more general discussion of the region. Strong polarized emission from HB21 is clearly seen in Figures \[fig2\] and \[fig3\]. Figure \[fig11\] shows $I$ and [*PI*]{} images over a smaller region whose boundaries have been chosen to include HB21 and part of the adjacent molecular cloud. Polarized emission from HB21 is strongest at the bright eastern rim, and polarized intensity drops toward the western edge. Strong polarized emission extends across the eastern edge of the SNR, but the polarized emission on the SNR shows more fine structure than the emission from the adjacent “background”. Surprisingly, the SNR [*depolarizes*]{} emission arising behind it. A sharp drop in polarized intensity can be seen coinciding with the outer edge of the SNR shell (the faint bubble at $(l,b)=(89^\circ\!\!.8, 4^\circ\!\!.4)$ – see Figures \[fig11\] (a) and (b)).
This is probably the result of turbulent magnetic fields of relatively high intensity behind the SNR shock, leading to beam depolarization. At the distance of HB21 [800pc— @tate90] the $1'$ beam corresponds to $\sim$0.2pc. The cell size behind the SNR shock must be considerably smaller than this to produce the observed depolarization, perhaps as small as 0.02pc. G91.5+4.5, a Polarized Region Coincident with a Molecular Cloud --------------------------------------------------------------- Figure \[fig12\] shows $I$, $U$, and [*PI*]{} images around G91.5+4.5, a strongly polarized region of extent $2^\circ\!\!.5 \times 1^\circ$. This polarized region has no $I$ counterpart. It lies quite near the SNR HB21, and in fact extends all the way to the eastern rim of the SNR and is contiguous with polarized emission from the SNR. The two regions offer an interesting contrast: both show equally strong polarization structure, but one has a strong $I$ counterpart and the other has none within the sensitivity limits of the DRAO telescope. G91.5+4.5 coincides in position with part of a molecular cloud, $4^\circ \times 2^\circ\!\!.5$ in size (from $l = 90^\circ$ to $94^\circ$ and $b = 2^\circ\!\!.5$ to $5^\circ$). @tate90 have shown that HB21 is interacting with this molecular cloud, and that both objects lie at a distance of about 0.8kpc. We suggest that the polarization feature is produced by a Faraday screen in some way associated with the molecular cloud. We regard the positional coincidence as reasonably strong evidence of this. The molecular cloud, at a distance of 0.8kpc, is the dominant object along the line of sight that is closer than the polarization horizon (2kpc). Can a molecular cloud be a Faraday screen? For the sake of discussion, we assume the following parameters: gas density 100cm$^{-3}$, fractional ionization $10^{-6}$, and magnetic field 100$\mu$G [@fieb89]. For a path length through this molecular cloud of 100pc (probably an overestimate), the rotation measure is $\sim1$radm$^{-2}$, producing an angle change at 1420MHz of only $2^\circ\!\!.6$, compared to the measured value of $\sim200^\circ$. New molecular observations [presented in part in @uyla02a] show molecular material overlapping the polarized feature G91.5+4.5 and extending to the east. Inspection of the $Q$ image of Figure \[fig12\] shows curved features centered near $(l,b)=(93^\circ\!\!.9, +4^\circ)$, extending at least $1^\circ$ in latitude. These features envelop molecular material, and we interpret the $Q$ features as the product of Faraday rotation in a thin ionized layer on the outside of the molecular cloud. We have been able to find a reasonable physical model to match the observed parameters of this Faraday screen. The observed facts are a Faraday rotation $\Delta\theta \approx 200^\circ$ and an absence of detectable $I$ emission. The upper limit for $I$ emission from the Faraday screen material is 0.1K, the 2-$\sigma$ noise level in the $I$ image. The corresponding maximum emission measure ($n_e^2 L$) is 56cm$^{-6}$pc. The extent of the molecular cloud on the sky is $\sim3^\circ$, giving a physical size of 40pc. Thicknesses of the ionized layer of 1 to 4pc are reasonable, implying maximum electron densities between 8 and 4cm$^{-3}$, and an expected rotation of $100^\circ < \Delta\theta < 200^\circ$ for a magnetic field of 7$\mu$G (field slightly enhanced in the relatively dense region). 
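These estimates follow from the standard relations $\mathrm{RM}=0.81\,n_e B_{\|}L$ (RM in rad m$^{-2}$ for $n_e$ in cm$^{-3}$, $B_{\|}$ in $\mu$G and $L$ in pc) and $\Delta\theta=\mathrm{RM}\,\lambda^2$. Inserting the assumed values quoted above (the nominal cloud parameters, the emission-measure limit of 56cm$^{-6}$pc, layer thicknesses of 1 to 4pc, and a 7$\mu$G field) gives, approximately,
$$\begin{aligned}
{\rm RM}_{\rm cloud}&\simeq 0.81\,(100\times10^{-6})(100)(100)\simeq 0.8\ {\rm rad\,m^{-2}}, & \Delta\theta&\simeq{\rm RM}\,\lambda^2\simeq 2^\circ\!-\!3^\circ,\\
{\rm RM}_{\rm skin}&\simeq 0.81\,\sqrt{56/L}\,(7)\,L\simeq 40\!-\!85\ {\rm rad\,m^{-2}}\ (L=1\!-\!4\ {\rm pc}), & \Delta\theta&\simeq 110^\circ\!-\!220^\circ,
\end{aligned}$$
the former falling short of the measured $\sim200^\circ$ by roughly two orders of magnitude, the latter comparable to it.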
Although we do not have sufficient information to tie down its properties precisely, it seems clear that this Faraday screen is capable of imparting the observed properties to the polarized emission. We discuss this model in more detail in a forthcoming paper (Uyan[i]{}ker et al. in preparation). G91.8$-$2.5, a Highly Ordered Faraday-Rotation Structure -------------------------------------------------------- A strongly polarized elliptical structure at $(l,b) = (91^\circ\!\!.8, -2^\circ\!\!.5)$ stands out from the other polarized emission seen in Figures \[fig2\] and \[fig3\] because of its smooth structure. It has a size of nearly $2^\circ$ but lacks a sharp boundary. This structure is studied in detail by @uyla02b; we present a summary of their discussion. G91.8$-$2.5 has a central peak of polarized intensity, and an outer shell-like structure. Peak polarized intensity is 0.08K, and polarization angle changes across the object by $\sim100^\circ$, corresponding to $\Delta$RM $\approx$ 40radm$^{-2}$. There is no counterpart in total intensity, and the polarized object appears to have no relation to visible objects in the field, including the region LBN416 and the star cluster M39 (NGC7092). G91.8$-$2.5 could be the product of a fluctuation of either electron density or magnetic field, with either parameter increasing or decreasing. The required magnetic field change is 10$\mu$G or greater, and this considerably exceeds the random component of about 4$\mu$G. A region which has a deficit of electrons must be filled with neutral gas, and none is apparent in either atomic or molecular form, so the most likely possibility is an enhancement of electron density. The distance to G91.8$-$2.5 is unknown, but limits can be placed on it. An upper limit is, of course, 2kpc, the distance to the polarization horizon. A path of at least 400pc is needed to generate the synchrotron emission detected in G91.8$-$2.5, moving the maximum distance to 1.6kpc. However, this is a weak constraint: the object must be much closer to show such a smooth structure. A lower limit can also be placed on the distance, with some assumptions. The first assumption is that the depth of the region along the line of sight is equal to its transverse dimension (it is approximately spherical). If it is an enhancement of electron density in a magnetic field of 3$\mu$G, then its distance must be at least 200pc or the bremsstrahlung from the ionized gas would be detectable, either at radio wavelengths or optically. The smooth structure of the magnetic field would be difficult to maintain in a large object, so this fact favours a close distance. A distance of $350\pm50$pc is deduced. The enhancement of electron density is 1.7cm$^{-3}$ and the mass of ionized gas is 23M$_\odot$. G91.8$-$2.5 bears a strong resemblance to the object described by @gray98, which we denote here as G137.6+1.1. The characteristics of the two objects are quite similar, except that G91.8$-$2.5 produces only one third of the Faraday rotation of G137.6+1.1. The discovery of a second object of this type suggests that these smooth, elliptical Faraday-rotation regions may be common. Both objects are probably of similar nature, a smoothly distributed concentration of ionized gas in a very uniform field. The approximately spherical form of these Faraday-rotation regions suggests that they have a stellar origin. The SNR CTB104A, G93.7$-$0.2 ---------------------------- The polarized emission from CTB104A has been described by @uyan02. 
An outline is given here for completeness, and the results reported in that paper are put in the more general context. As is evident from Figures \[fig2\] and \[fig3\], the polarized emission from the SNR shows very strong small-scale structure. The RM is also chaotic on small scales, but shows a remarkably well-ordered gradient across the remnant, changing smoothly from about $-80$radm$^{-2}$ in the south-east to $\sim$+170radm$^{-2}$ in the north-west. The orientation of the RM gradient is aligned with the magnetic axis of the remnant at $30^\circ\pm10^\circ$ north of west, defined by the bright total-intensity shells. This gradient in rotation measure is attributable to changing magnetic field orientation in a compressed shell, presumably the SNR shell. This fits the @vand62 model very well, where a uniform ambient magnetic field is compressed by the expanding blast wave. The SNR 3C434.1, G94.0+1.0 -------------------------- Figure \[fig13\] shows an $I$ image of a region around the SNR 3C434.1 and the region NRAO655 with superimposed polarization vectors. Polarized emission coincides positionally with 3C434.1, but there is stronger polarization immediately to the east of the SNR. Taking these data at face value, one would have to say that the region NRAO655 appears to be as strongly polarized as the SNR. The data clearly demand a more sophisticated interpretation, and it is obvious that here the polarization image is dominated by effects closer than the SNR and the region. Both are at a distance of 5.6kpc [@fost02]. Conclusions =========== We have described the polarization properties at 1420MHz of emission from the Galactic plane in a complex region in Cygnus, a direction where the line of sight is directed along the Local spiral arm. The DRAO Synthesis Telescope is extremely sensitive to Faraday rotation, allowing the detection of ionized regions which cannot be detected by their total-power emission. Consequently, the appearance of the polarized sky is dominated by Faraday rotation in the Warm Ionized Medium, rather than by structure in the synchrotron emitting regions which are the ultimate source of the radiation. By considering the effects of Faraday rotation, we have developed a general framework for understanding our results. A key concept is the [*polarization horizon*]{} – polarization features generated beyond that point cannot be detected. The distance to the polarization horizon depends on the wavelength and the telescope beamwidth, and on direction. We have shown that SNRs and regions whose distances are known can be used to constrain the distance to the polarization horizon. In this direction, this distance is about 2kpc, and the observed polarization features lie within the Local arm. The detectability of polarization features requires the presence of a Faraday rotation “screen” along the line of sight with structure on a scale that the telescope can detect. If many such screens lie along the line of sight, the detected polarization will be very low. The detection of regions of substantial polarized intensity (as high as 10% of the smooth Galactic synchrotron background) then implies that the filling factor of Faraday screens, and so of the Warm Ionized Medium, is low. The distribution of RM of the extended emission peaks at negative values, but the RMs of compact sources in the field tend to be more negative. 
This implies that, in this direction, RM increases as the propagation path through the ISM increases, and is consistent with the absence of a field reversal between the Local and Perseus arms. RMs from single-antenna data are much lower, implying that these observations, with beamwidths of $0^\circ\!\!.5$ to $2^\circ$ have detected emission arising at very near distances. Localized features in this region exhibit a surprising variety of attributes, some not seen before. Polarized emission is associated with a molecular cloud. A feature larger than $1^\circ$ displays a sharp circular transition of polarization angle. A feature whose size is nearly $2^\circ$ has exceptionally smooth polarization structure. We have offered interpretations for some, but not all, of these within the conceptual framework that we have established. While many of the observed phenomena have extents of several degrees, arcminute angular resolution is essential in detecting and understanding them. The two nearby SNRs in this direction whose polarized emission can be detected reveal small-scale structure which differs from the polarized emission from their surroundings. Nevertheless, it can be difficult to separate polarized emission from SNRs from their polarized surroundings, and polarization observations of SNRs at frequencies as low as 1420MHz must employ high angular resolution and cover substantial areas of sky to identify clearly any polarization arising in the SNR. Polarization phenomena which are contained in a more or less circular region on the sky, or have a radial structure, strongly suggest that a star has influenced the surrounding environment. G83.2+1.8 and G91.8$-$2.5 are examples, although in neither case can a central star be found. Many more such objects should be investigated closely, with the aim of elucidating the role that stars play in shaping polarization features. The novelty of the polarized sky still presents challenges, and we regard our interpretations of the observed features as provisional. Much more work must be done, through the observation of a wider area of the Galactic plane, complemented by modelling of the emission and Faraday rotation regions. While this one field cannot provide a sufficient number of examples to lead to representative astrophysical data, we believe that our approach has laid the foundation for future work. The Canadian Galactic Plane Survey is a Canadian project with international partners and is supported by the Natural Sciences and Engineering Research Council of Canada. The Dominion Radio Astrophysical Observatory is operated as a National Facility by the National Research Council of Canada. We have gained insight from discussions with many colleagues, particularly Rainer Beck, David Green, Marijke Haverkorn, and Wolfgang Reich. Beck, R. 2001, Space Sci. Rev., 99, 243 Beck, R., Hoernes, P. 1996, Nature, 379, 47 Berkhuijsen, E.M., Horellou, C., Krause, M., Neininger, N., Poezd, A.D., Shukurov, A., Sokoloff, D.D. 1997, , 318, 700 Beuermann, K., Kanbach, G., and Berkhuijsen, E.M. 1985, , 153, 17 Brand, J., and Blitz, L. 1993, , 275, 67 Brown, J., and Taylor, A.R. 2001, , 563, L31 Brown, J., Taylor, A.R., and Jackel, B.J. 2002 , in press Brouw, W.N., and Spoelstra, T.A.T. 1976 , 26, 129 Burn, B.J. 1966, , 133, 67 Cazzolato, F., Landecker, T.L., Gray, A.D., and Veidt B.G., in [*[Galactic Radio Polarization: A New Window on the Interstellar Medium]{}*]{}, edited by T.L. 
Landecker, Dominion Radio Astrophysical Observatory, Penticton, B.C., Canada, 69, April 2001 Dickey, J.M. 1997, , 488, 258 Duncan, A.R., Haynes, R.F., Jones, K.L., and Stewart, R.T. 1997, , 291, 279 Duncan, A.R., Reich, P., Reich, W., and Fürst, E. 1999, , 350, 447 Feldt, C., and Green, D.A. 1993, , 274, 421 Fiebig, D., and Güsten, R. 1989, , 214, 333 Foster, T., and Routledge, D. 2002, , submitted Gaensler, B.M., Dickey, J.M., McClure-Griffiths, N.M., Green, A.J., Wieringa, M.H., and Haynes, R.F. 2001, , 549, 959 Gardner, F.F., and Whiteoak, J.B. 1966, , 4, 245 Gray, A.D., Landecker, T.L., Dewdney, P.E., and Taylor, A.R. 1998, Nature, 393, 660 Gray, A.D., Landecker, T.L., Dewdney, P.E., Taylor, A.R., Willis, A.G., and Normandeau, M. 1999, , 514, 221 Green, D.A., [*[A Catalogue of Galactic Supernova Remnants (2001 December version)]{}*]{}, Mullard Radio Astronomy Observatory, Cavendish Laboratory, Cambridge, U.K., available at http://www.mrao.cam.uk/surveys/snrs/ Haverkorn, M., Katgert, P., and de Bruyn, A.G. 2000, , 356, L13 Heiles, C. 1996, , 462, 316 Junkes, N., Fürst, E., and Reich, W. 1987, , 69, 451 Kothes, R., Landecker, T.L, Foster, T., and Leahy, D.A. 2001, , 376, 641 Landecker, T.L., Routledge, D., Reynolds, S.P., Smegal, R.J., Borkowski, K.J., and Seward, F.D. 1999, , 527, 866 Landecker, T.L., Dewdney, P.E., Burgess, T.A., Gray, A.D., Higgs, L.A., Hoffmann, A.P., Hovey, G.J., Karpa, D.R., Lacey, J.D., Prowse, N., Purton, C.R., Roger, R.S., Willis, A.G., Wyslouzil, W., Routledge, D., and Vaneldik, J.F. 2000, , 145, 509 Leahy, J.P., 1987, , 226, 433 Panagia, N. 1973, , 78, 929 Peracaula, M., Taylor, A.R., Bellchamber, T.L., Gray, A.D., and Landecker, T.L., 1999, in [*[New Perspectives on the Interstellar Medium]{}*]{}, eds. A.R. Taylor, T.L. Landecker, G. Joncas, ASP Conf. Series, 168, 86 Perryman, M.A.C., Lindergren, L., Kovalevsky. et al. 1997, , 323, L49 Reynolds, R.J. 1989, , 339, L29 Roger, R.S., Costain, C.H., Landecker, T.L., and Swerdlyk, C.M. 1999, , 137, 7 Smegal, R.J., Landecker, T.L., Vaneldik, J.F., Routledge, D., and Dewdney, P.E. 1997, Radio Science, 32, 643 Sokoloff, D.D., Bykov, A.A., Shukurov, A., Berkhuijsen, E.M., Beck, R., and Poezd, A.D. 1998, , 299, 189 Spoelstra, T.A.T. 1984, , 135, 238 Tatematsu, K., Fukui, Y., Landecker, T.L., and Roger, R.S. 1990, , 237, 189 Taylor, A.R., Wallace, B.J., and Goss, W.M. 1992, , 103, 931 Taylor, A.R., Dougherty, S.M., Gibson, S.J., Peracaula, M., Landecker, T.L., Brunt, C.M., Dewdney, P.E., Gray, A.D., Higgs, L.A.,, Kerton, C.R., Knee, L.B.G., Kothes, R., Purton, C.R., Uyan[i]{}ker, B., Wallace, B.J., Willis, A.G., Martin, P.G., and Durand, D. 2002,  submitted Taylor, J.H., and Cordes, J.M. 1993, , 411, 674 Uyan[i]{}ker, B., Fürst, E., Reich, W., Reich, P., and Wielebinski, R. 1998, , 132, 401 Uyan[i]{}ker, B., Fürst, E., Reich, W., Reich, P., and Wielebinski, R. 1999, , 138, 31 Uyan[i]{}ker, B., Fürst, E., Reich, W., Aschenbach, B., and Wielebinski, R. 2001, , 371, 675 Uyan[i]{}ker, B., Kothes, R., Brunt, C.M. 2002, , 565, 1022 Uyan[i]{}ker, B., Landecker, T.L. 2002a, in [*[Astrophysical Polarized Backgrounds]{}*]{}, eds. S. Cecchini, S. Cortiglioni, R. Sault and C. Sbarra, AIP Conference Proceedings, 609, 15 Uyan[i]{}ker, B., Landecker, T.L. 2002b, , 575, 225 van der Laan, H. 1962, , 124, 125 van der Werf, P.P., and Higgs, L.A. 1990, , 235, 407 Wardle, J.F.C., and Kronberg, P.P. 1974, , 194, 249 Wendker, H.J., Benz, D., and Baars, J.W.M. 1983, , 124, 116 Wendker, H.J., Higgs, L.A., and Landecker, T.L. 
1991, , 241, 551 Westerhout, G., Seeger, C.L., Brouw, W.N., and Tinbergen, J., 1962, B.A.N. 16, 187 Wielebinski, R., Shakeshaft, J.R., and Pauliny-Toth, I.I.K. 1962, The Observatory, 82, 158 Wieringa, M.H., de Bruyn, A.G., Jansen, D., Brouw, W.N., and Katgert, P. 1993, , 268, 215 Wilkinson, A, and Smith, F.G. 1974, , 167, 593 Willis, A.G. 1999, , 136, 603 [lccclc]{}   Object & $(l,b)$ & Size & Distance &   Notes & Reference\ & & arcmin &kpc & & for distance\ Cyg-X & 80$^\circ$, 0$^\circ$ & 360 $\times$ 360 & 1 to 4 & /SNR complex & 1\ W63 & 82.2$^\circ$, 5.3$^\circ$ & 95 $\times$ 65 & $\sim$2 & SNR & 2\ S112 & 83.8$^\circ$, 3.3$^\circ$ & 15 & 2.1 & region & 3\ S115 & 84.8$^\circ$, 3.9$^\circ$ & 50 $\times$ 25 & 3.0 & region & 3\ W80 & 85$^\circ$, $-$1$^\circ$ & 140 $\times$ 140 & 0.5 & region & 4\ HB21 & 89.0$^\circ$, 4.7$^\circ$ & 120 $\times$ 90 & 0.8 & SNR & 5\ BG2107+49 & 91$^\circ$, 1.7$^\circ$ & 40 & 10 & stellar-wind bubble & 6\ CTB102 & 92.9$^\circ$, 2.7$^\circ$ & 60 & 5 & region & 7\ NRAO655 & 93.4$^\circ$, 1.8$^\circ$ & 35 $\times$ 25 & 5.3 & region & 8\ CTB104A & 93.7$^\circ$, $-$0.2$^\circ$ & 80 & 1.4 & SNR & 9\ 3C434.1 & 94.0$^\circ$, 1.0$^\circ$ & 30 $\times$ 25 & 5.6 & SNR & 8\ S124 & 94.6$^\circ$, $-$1.5$^\circ$ & 50 $\times$ 40 & 2.6 & region & 3\ [lccclc]{}     SNR & Size & Distance & 1420MHz polarization &   Notes & Reference\ & (arcmin) & (kpc) & detected? & &\ G82.2+5.3 (W63) & 95 $\times$ 65 & $\sim$2 & yes & & 1\ G84.2$-$0.8 & 20 $\times$ 16 & 4 & no & behind W80 & 2\ G84.9+0.5 & 6 & & no & behind W80 & 3\ G85.4+0.7 & 40 & 3.8 & no & very faint & 4\ G85.9$-$0.6 & 25 & $\sim$5 & no & very faint & 4\ G89.0+4.7 (HB21) & 120 $\times$ 90 & 0.8 & yes & &    5, 6\ G93.3+6.9 (DA530) & 27 $\times$ 20 & 3.5 & yes & & 7\ G93.7$-$0.2 (CTB104A) & 80 & 1.4 & yes & & 8\ G94.0+1.0 (3C434.1) & 30 $\times$ 25 & 5.6 & no & & 9\ [ccccllc]{} Frequency & Beam & $l$ & Distance & Spiral & Method & Ref\ (MHz) & (arcmin) & ($^\circ$) & (kpc) &   arm & &\ 1420 & 1 & 135 & 2 & Perseus & regions & 1\ 1420 & 1 & 82 – 95 & up to 2 & Local & SNRs, regions & 2\ 1420 & 6 & 329.5 & $>2$ & Carina & absorption & 3\ 1384 & 1.3 & 325.5 – 332.5 & 3.5 & Crux & pulsar RMs, reg. & 4\ 2400 & 10.4 & 238 – 5 & up to 5 & Carina & regions & 5\ & & & & Crux & &\ 2695 & 4.3 & 5 – 74 & 2.5 to 8 & Sagittarius & Depolarization & 6\ & & & & & assoc. with &\ [^1]: We note that the traditional vector representation of these data would be unintelligible; the grayscale representation more adequately conveys the data, although the abrupt black-to-white transitions where angle changes from $180^{\circ}$ to $0^{\circ}$ can be distracting.
--- abstract: | In this paper we give an ${\ensuremath{\widetilde{O}}}((nm)^{2/3}\log C)$ time algorithm for computing min-cost flow (or min-cost circulation) in unit capacity planar multigraphs where edge costs are integers bounded by $C$. For planar multigraphs, this improves upon the best known algorithms for general graphs: the ${\ensuremath{\widetilde{O}}}(m^{10/7}\log C)$ time algorithm of Cohen et al. \[SODA 2017\], the $O(m^{3/2}\log(nC))$ time algorithm of Gabow and Tarjan \[SIAM J. Comput. 1989\] and the ${\ensuremath{\widetilde{O}}}(\sqrt{n}m \log C)$ time algorithm of Lee and Sidford \[FOCS 2014\]. In particular, our result constitutes the first known fully combinatorial algorithm that breaks the $\Omega(m^{3/2})$ time barrier for min-cost flow problem in planar graphs. To obtain our result we first give a very simple successive shortest paths based scaling algorithm for unit-capacity min-cost flow problem that does not explicitly operate on dual variables. This algorithm also runs in ${\ensuremath{\widetilde{O}}}(m^{3/2}\log{C})$ time for general graphs, and, to the best of our knowledge, it has not been described before. We subsequently show how to implement this algorithm faster on planar graphs using well-established tools: $r$-divisions and efficient algorithms for computing (shortest) paths in so-called dense distance graphs. author: - 'Adam Karczmarz[^1]' - 'Piotr Sankowski[^2]' bibliography: - '../references2.bib' title: 'Min-Cost Flow in Unit-Capacity Planar Graphs' --- Introduction ============ The min-cost flow is the core combinatorial optimization problem that now has been studied for over 60 years, starting with the work of Ford and Fulkerson [@ff58]. Classical combinatorial algorithms for this problem have been developed in the 80s. Goldberg and Tarjan [@GoldbergT90] showed an ${\ensuremath{\widetilde{O}}}(nm\log{C})$ time weakly-polynomial algorithm for the case when edge costs are integral, and where $C$ is the maximum edge cost. Orlin [@Orlin88] showed the best-known strongly polynomial time algorithm running in ${\ensuremath{\widetilde{O}}}(m^2)$ time. Faster weakly-polynomial algorithms have been developed in this century using interior-point methods: Daitch and Spielman [@DaitchS08] gave an ${\ensuremath{\widetilde{O}}}(m^{3/2}\log{(U+C)})$ algorithm, and later Lee and Sidford [@LeeS14] obtained an ${\ensuremath{\widetilde{O}}}(\sqrt{n}m\log{(U+C)})$ algorithm, where $U$ is the maximum (integral) edge capacity. Much attention has been devoted to the unit-capacity case of the min-cost flow problem. Gabow and Tarjan [@GabowT89] gave a $O(m^{3/2}\log{(nC)})$ time algorithm. Lee and Sidford [@LeeS14] matched this bound up to polylogarithmic factors for $m={\ensuremath{\widetilde{O}}}(n)$, and improved upon it for larger densities, even though their algorithm solves the case of arbitrary integral capacities. Gabow and Tarjan’s result remained the best known bound for more than 28 years – the problem witnessed an important progress only very recently. In 2017 an algorithm that breaks the $\Omega(m^{3/2})$ time barrier for min-cost flow problem was given by Cohen et al. [@CohenMSV17]. This algorithm runs in ${\ensuremath{\widetilde{O}}}(m^{10/7}\log C)$ time and is also based on interior-point methods. 
It is worth noting that the algorithms of [@CohenMSV17; @LeeS14] currently constitute the most efficient solutions for the entire range of possible densities (up to polylogarithmic factors) and are also the best-known algorithms for important special cases, e.g., planar graphs or minor-free graphs. Both of these solutions are based on interior-point methods and do not shed light on the combinatorial structure of the problem. In this paper we study unit-capacity min-cost flow in planar multigraphs. We improve upon [@CohenMSV17; @LeeS14] by giving the first known ${\ensuremath{\widetilde{O}}}((mn)^{2/3}\log C)={\ensuremath{\widetilde{O}}}(m^{4/3}\log{C})$ time algorithm for computing min-cost $s,t$-flow and min-cost circulation in planar multigraphs.[^3] Our algorithm is fully combinatorial and uses the scaling approach of Goldberg and Tarjan [@GoldbergT90]. At each scale it implements the classical shortest augmenting path approach, similar to the one known from the well-known Hopcroft-Karp algorithm for maximum bipartite matching [@HopcroftK73]. It repeatedly computes, in a black-box manner, either shortest augmenting paths or blocking flows. In this way we can reuse recent sophisticated algorithms for computing shortest paths in planar graphs. Finally, we note that our approach is considerably simpler than the ones used to obtain the ${\ensuremath{\widetilde{O}}}(n^{4/3}\log{C})$ time [@AsathullaKLR18] and ${\ensuremath{\widetilde{O}}}(n^{6/5}\log{C})$ time [@LahnR19] algorithms for min-cost perfect matchings in bipartite planar graphs. Min-cost perfect matchings can be reduced to min-cost circulations using, e.g., [@miller-naor]. Neither of these approaches uses black-box building blocks or allows working with the primal and the dual problem separately. In comparison to these papers, by avoiding the intermixed use of the primal and the dual, we are able to give a simpler and more universal solution to a more general problem. #### Related work. Due to the immense number of works on flows and min-cost flows we will not review all of them. Instead we concentrate only on the ones that are relevant to the sparse and planar graph case, as that is the regime where our results are of importance. As already noted above, the fastest algorithms for min-cost flows in planar multigraphs are implied by the algorithms for the general case. This, however, is not the case for the maximum flow problem. Here, the fastest algorithms are based on planar graph duality and reduce the problem to shortest path computations. The undirected $s,t$-flow problem can be solved in $O(n \log \log n)$ time [@ItalianoNSW11], whereas the directed $s,t$-flow problem can be solved in $O(n \log n)$ time [@BorradaileK09; @Erickson10]. Even for the case with multiple sources and sinks, a nearly-linear time algorithm is known [@BorradaileKMNW17]. These results naturally raise the open question of whether similar nearly-linear bounds are possible for min-cost flow. Until very recently there had been no progress towards answering this open question. Partial progress was made by devising ${\ensuremath{\widetilde{O}}}(n^{4/3}\log{C})$ time [@AsathullaKLR18] and ${\ensuremath{\widetilde{O}}}(n^{6/5}\log{C})$ time [@LahnR19] algorithms for min-cost perfect matchings in bipartite planar graphs. Lahn and Raghvendra also give an ${\ensuremath{\widetilde{O}}}(n^{7/5}\log{C})$ time minimum cost perfect matching algorithm for minor-free graphs. These algorithms can be seen as specialized versions of the Gabow-Tarjan algorithm for the assignment problem [@GabowT89]. 
Gabow and Tarjan [@GabowT89] reduced the min-cost flow problem to the so-called min-cost perfect degree-constrained subgraph problem on a bipartite multigraph, which they solved by extending their algorithm for minimum cost perfect matching. Hence it seems plausible that the recent algorithm of Lahn and Raghvendra [@LahnR19] can be extended to solve min-cost flow, since their algorithm builds upon the Gabow-Tarjan algorithm. The reduction presented by Gabow and Tarjan *is not* planarity preserving, though. Nevertheless, the min-cost perfect matching problem can be reduced to the min-cost flow problem in an efficient and planarity-preserving way [@MillerN95]. The opposite reduction can also be done in a planarity-preserving way, as recently shown [@Sankowski18]. However, this reduction is not efficient and produces a graph of quadratic size. Hence, we cannot really take advantage of it. #### Overview and comparison to [@AsathullaKLR18; @LahnR19]. We concentrate on the *min-cost circulation* problem, which is basically the min-cost flow problem with all vertex demands equal to $0$. It is well-known [@GoldbergHKT17] that the min-cost $s,t$-flow problem can be solved by first computing some $s,t$-flow $f$ of requested value (e.g., the maximum value), and then finding a min-cost circulation on the residual network $G_f$. This reduction is clearly planarity-preserving. Since an $s,t$-flow of any given value (in particular, the maximum value) can be found in a planar graph in nearly-linear time (see [@Erickson10]), this reduction works in nearly-linear time as well. Our min-cost circulation algorithm resembles the recent works on minimum cost planar perfect matching [@AsathullaKLR18; @LahnR19], in the sense that we simulate some already-good scaling algorithm for general graphs, but implement it more efficiently using known and well-established tools from the area of planar graph algorithms. However, instead of simulating an existing unit-capacity min-cost flow algorithm, e.g., [@GabowT89; @GoldbergHKT17], we use a very simple successive-shortest-paths based algorithm that, to the best of our knowledge, has not been described before. Our algorithm builds upon the cost-scaling framework of Goldberg and Tarjan [@GoldbergT90], similarly to the recent simple unit-capacity min-cost flow algorithms of Goldberg et al. [@GoldbergHKT17]. In this framework, a notion of ${\epsilon}$-optimality of a flow is used. A flow $f$ is ${\epsilon}$-optimal wrt. a price function $p$ if for any edge $uv=e\in E(G_f)$ we have $c(e)-p(u)+p(v)\geq -{\epsilon}$. Roughly speaking, the parameter ${\epsilon}$ measures the quality of a circulation: any circulation is trivially $C$-optimal wrt. the zero price function, whereas any circulation that is $\frac{1}{n+1}$-optimal wrt. some price function is guaranteed to be optimal. The general scheme is to start with a $C$-optimal circulation, run $O(\log(nC))$ *scales* that each improve the quality of the circulation by a factor of $2$, and this way obtain the optimal solution. We show that a single scale can be solved by repeatedly sending flow along a cheapest ${\ensuremath{s}}\to{\ensuremath{t}}$ path in a certain graph $G_f''$ with a single source ${\ensuremath{s}}$ and a single sink ${\ensuremath{t}}$ that approximates the residual graph $G_f$. Moreover, if we send flow simultaneously along a maximal set of cheapest ${\ensuremath{s}}\to{\ensuremath{t}}$ paths at once, like in [@EvenT75; @HopcroftK73], we finish after $O(\sqrt{m})$ augmentations; a compact sketch of one such scale is given below. 
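To give a feel for how little machinery the per-scale procedure needs, the following self-contained Python sketch implements one scale on a generic unit-capacity instance. It is an illustration only, not the implementation analyzed in this paper, and all identifiers are ours. It assumes that the costs have already been reduced by the price function of the previous scale, that the initial saturation of negative-cost residual arcs has been performed (so the zero potential is feasible at the start), and, to keep reduced-cost comparisons exact, that ${\epsilon}$ is even and all costs are integers. The approximate costs $c'$, the super-vertices ${\ensuremath{s}}$ and ${\ensuremath{t}}$, and the large constant $M$ are used as defined in Section \[s:general\], except that every vertex keeps a permanent cost-$M$ arc to ${\ensuremath{t}}$ alongside a zero-cost arc per unit of deficit, which is equivalent for shortest-path purposes. Dijkstra's algorithm run with the previous distances as a price function handles the (possibly negative) rounded costs, and a single DFS sweep extracts a maximal set of edge-disjoint augmenting paths of zero reduced cost.

```python
import heapq

def rounded(c, half_eps):
    # round(y, z): smallest multiple of z strictly greater than y, for y = c + eps/2
    return ((c + half_eps) // half_eps + 1) * half_eps

def build_scale_graph(n, residual_arcs, excess, eps):
    # residual_arcs: arcs of E_{f_1} as (u, v, c) with their original integer
    # costs (nonnegative after the initial saturation); excess[v] = exc_{f_1}(v).
    s, t = n, n + 1
    adj, radj, arcs = [[] for _ in range(n + 2)], [[] for _ in range(n + 2)], []
    def add(u, v, cost, cap, pair):
        adj[u].append(len(arcs)); radj[v].append(len(arcs))
        arcs.append([u, v, cost, cap, pair])
    h = eps // 2
    M = sum(abs(rounded(c, h)) + abs(rounded(-c, h)) for _, _, c in residual_arcs) + eps
    for (u, v, c) in residual_arcs:
        i = len(arcs)
        add(u, v, rounded(c, h), 1, i + 1)      # e   (currently residual)
        add(v, u, rounded(-c, h), 0, i)         # e^R (appears once e is used)
    q = sum(e for e in excess if e > 0)         # total excess of f_1
    for v in range(n):                          # auxiliary arcs; never reversed
        add(v, t, M, max(q, 1), -1)             # keeps t reachable from everywhere
        if excess[v] < 0:
            add(v, t, 0, -excess[v], -1)        # one unit per unit of deficit
        elif excess[v] > 0:
            add(s, v, 0, excess[v], -1)         # one unit per unit of excess
    return adj, radj, arcs, s, t

def distances_to_t(radj, arcs, t, p):
    # Dijkstra towards t on costs reduced by the previous potential p (these are
    # nonnegative while p stays feasible), then translated back; p[t] stays 0.
    d = [float('inf')] * len(radj)
    d[t], pq = 0, [(0, t)]
    while pq:
        dv, v = heapq.heappop(pq)
        if dv > d[v]:
            continue
        for i in radj[v]:
            u, _, cost, cap, _ = arcs[i]
            if cap > 0 and dv + cost - p[u] + p[v] < d[u]:
                d[u] = dv + cost - p[u] + p[v]
                heapq.heappush(pq, (d[u], u))
    return [d[v] + p[v] for v in range(len(radj))]

def augment_maximal(adj, arcs, s, t, p):
    # One phase: push unit flows along a maximal set of edge-disjoint s -> t
    # paths made of zero-reduced-cost arcs, found by a single DFS sweep.
    it, path, v = [0] * len(adj), [], s
    while True:
        if v == t:
            for i in path:                      # send one unit along the path
                arcs[i][3] -= 1
                if arcs[i][4] >= 0:             # a real arc: its reverse appears
                    arcs[arcs[i][4]][3] += 1
            path, v = [], s
            continue
        moved = False
        while it[v] < len(adj[v]):
            i = adj[v][it[v]]
            _, w, cost, cap, _ = arcs[i]
            if cap > 0 and cost - p[v] + p[w] == 0:
                path.append(i); v = w; moved = True
                break
            it[v] += 1
        if not moved:
            if v == s:
                return
            i = path.pop(); v = arcs[i][0]; it[v] += 1   # retreat past a dead end

def refine_scale(n, residual_arcs, excess, eps):
    adj, radj, arcs, s, t = build_scale_graph(n, residual_arcs, excess, eps)
    p = [0] * (n + 2)                           # feasible: all initial costs are >= 0
    while any(arcs[i][3] > 0 for i in adj[s]):  # some excess remains
        p = distances_to_t(radj, arcs, t, p)    # new price function (Dijkstra)
        augment_maximal(adj, arcs, s, t, p)     # Hopcroft-Karp-style phase
    p = distances_to_t(radj, arcs, t, p)        # returned price function
    used = [arcs[2 * j][3] == 0 for j in range(len(residual_arcs))]
    return used, p[:n]
```

In the planar setting of Section \[s:planar\], the two per-phase primitives (the Dijkstra computation and the DFS sweep) are replaced by their dense-distance-graph counterparts, but the overall structure of the loop stays the same.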
However, as opposed to $\cite{EvenT75, HopcroftK73}$, our graph $G_f''$ is weighted and might have negative edges. We overcome this difficulty as in the classical successive shortest path approach for min-cost flow, by using distances from the previous flow augmentation as a feasible price function that can speed-up next shortest path computation. Our algorithm also retains a nice property[^4] of the Even-Tarjan algorithm that the total length (in terms of the number of edges) of all the used augmenting paths is $O(m\log{m})$. The crucial difference between our per-scale procedure and those of [@GabowT89; @GoldbergHKT17] is that we do not “adjust” dual variables $p(v)$ at all while the procedure runs: we only use them to compute $G_f''$, and recompute them from scratch in nearly-linear time when the procedure finishes. In particular, the recent results of [@AsathullaKLR18; @LahnR19] are quite complicated since, in order to simulate the Gabow-Tarjan algorithm [@GabowT89], they impose and maintain additional invariants about the duals. The only bottlenecks of our per-scale procedure are (1) shortest paths computation, (2) picking a maximal set of edge-disjoint $s\to t$ paths in an unweighted graph[^5]. We implement these on a planar network using standard methods. Let $r\in [1,n]$ be some parameter. We construct a *dense distance graph* $H_f''$ (e.g., [@FR06; @GawrychowskiK18]) built upon an $r$-division (e.g., [@KleinMS13]) of $G_f''$. The graph $H_f''$ is a compressed representation of the distances in $G_f''$ with $O(n/\sqrt{r})$ vertices and $O(m)$ edges. Moreover, it can be updated in ${\ensuremath{\widetilde{O}}}(r)$ time per edge used by the flow. Hence, the total time spent on updating $H_f''$ is ${\ensuremath{\widetilde{O}}}(mr)$. As we show, running our per-scale procedure on $H_f''$ is sufficient to simulate it on $G_f''$. Computing distances in a dense distance graph requires ${\ensuremath{\widetilde{O}}}(n/\sqrt{r})$ time [@FR06; @GawrychowskiK18]. To complete the construction, we show how to find a maximal set of edge-disjoint paths in ${\ensuremath{\widetilde{O}}}(n/\sqrt{r})$ amortized time. To this end, we also exploit the properties of reachability in a dense distance graph, used previously in dynamic reachability algorithms for planar digraphs [@ItalianoKLS17; @Karczmarz18]. This way, we obtain ${\ensuremath{\widetilde{O}}}(\sqrt{m}n/\sqrt{r}+mr)$ running time per scale. This is minimized roughly when $r=n^{2/3}/m^{1/3}$. Recall that Lahn and Raghvendra [@LahnR19] obtained a polynomially better (than ours) bound of ${\ensuremath{\widetilde{O}}}(n^{6/5}\log{C})$, but only for planar min-cost perfect matching problem. To achieve that, they use an additional idea due to Asathulla et al. [@AsathullaKLR18]. Namely, they observe that by introducing vertex weights, one can make augmenting paths avoid edges incident to boundary vertices, thus making the total number of pieces “affected” by augmenting paths truly-sublinear in $n$. It is not clear how to apply this idea to the min-cost flow problem without making additional assumptions about the structure of the instance, like bounded-degree (then, there are only $O(n/\sqrt{r})$ edges incident to boundary vertices of an $r$-division), or bounded vertex capacities (so that only $O(1)$ units of flow can go through each vertex; this is satisfied in the perfect matching case). 
This phenomenon seems not very surprising once we recall that such assumptions lead to better bounds even for general graphs: the best known combinatorial algorithms for min-cost perfect matching run in $O(n^{1/2}m\log{(nC)})$ time, whereas for min-cost flow in $O(m^{3/2}\log{(nC)})$ time [@GabowT89; @GoldbergHKT17]. #### Organization of the paper. In Section \[s:prelims\] we introduce the notation and describe the scaling framework of [@GoldbergT90]. Next, in Section \[s:general\], we describe the per-scale procedure of unit-capacity min-cost flow for general graphs. Finally, in Section \[s:planar\] we give our algorithm for planar graphs. Preliminaries {#s:prelims} ============= Let $G_0=(V,E_0)$ be the input directed multigraph. Let $n=|V|$ and $m=|E_0|$. Define $G=(V,E)$ to be a multigraph such that $E=E_0\cup{\ensuremath{E_0}^{\mathrm{R}}}$, $E_0\cap {\ensuremath{E_0}^{\mathrm{R}}}=\emptyset$, where ${\ensuremath{E_0}^{\mathrm{R}}}$ is the set of *reverse edges*. For any $uv=e\in E$, there is an edge ${\ensuremath{e}^{\mathrm{R}}}\in E$ such that ${\ensuremath{e}^{\mathrm{R}}}=vu$ and ${\ensuremath{({\ensuremath{e}^{\mathrm{R}}})}^{\mathrm{R}}}=e$. We have $e\in E_0$ iff ${\ensuremath{e}^{\mathrm{R}}}\in {\ensuremath{E_0}^{\mathrm{R}}}$. Let $u:E_0\to \mathbb{R}_+$ be a *capacity function*. A *flow* is a function $f:E\to \mathbb{R}$ such that for any $e\in E$ $f(e)=-f({\ensuremath{e}^{\mathrm{R}}})$ and for each $e\in E_0$, $0\leq f(e)\leq u(e)$. These conditions imply that for $e\in E_0$, $-u(e)\leq f({\ensuremath{e}^{\mathrm{R}}})\leq 0$. We extend the function $u$ to $E$ by setting $u({\ensuremath{e}^{\mathrm{R}}})=0$ for all $e\in E_0$. Then, for all edges $e\in E$ we have $-u({\ensuremath{e}^{\mathrm{R}}})\leq f(e)\leq u(e)$. The *unit capacity* function satisfies $u(e)=1$ for all $e\in E_0$. The *excess* ${\mathrm{exc}}_f(v)$ of a vertex $v\in V$ is defined as $\sum_{uv=e\in E} f(e)$. Due to anti-symmetry of $f$, ${\mathrm{exc}}_f(v)$ is equal to the amount of flow going into $v$ by the edges of $E_0$ minus the amount of flow going out of $v$ by the edges of $E_0$. The vertex $v\in V$ is called an *excess* vertex if ${\mathrm{exc}}_f(v)>0$ and *deficit* if ${\mathrm{exc}}_f(v)<0$. Let $X$ be the set of excess vertices of $G$ and let $D$ be the set of deficit vertices. Define the *total excess* ${\Psi}_f$ as the sum of excesses of the excess vertices, i.e., ${\Psi}_f=\sum_{v\in X} {\mathrm{exc}}_f(v)=\sum_{v\in D}-{\mathrm{exc}}_f(v).$ A flow $f$ is called a *circulation* if there are no excess vertices, or equivalently, ${\Psi}_f=0$. Let $c:E_0\to \mathbb{Z}$ be the input *cost* function. We extend $c$ to $E$ by setting $c({\ensuremath{e}^{\mathrm{R}}})=-c(e)$ for all $e\in E_0$. The *cost* $c(f)$ of a flow $f$ is defined as $\frac{1}{2}\sum_{e\in E} f(e)c(e)=\sum_{e\in E_0} f(e)c(e)$. *To send a unit of flow* through $e\in E$ means to increase $f(e)$ by $1$ and simultaneously decrease $f({\ensuremath{e}^{\mathrm{R}}})$ by $1$. By sending a unit of flow through $e$ we increase the cost of flow by $c(e)$. *To send a unit of flow through a path $P$* means to send a unit of flow through each edge of $P$. In this case we also say that we *augment flow $f$ along path $P$*. The *residual network* $G_f$ of $f$ is defined as $(V,E_f)$, where $E_f=\{e\in E: f(e)<u(e)\}$. #### Price functions and distances. We call any function $p:V\to \mathbb{R}$ a *price function* on $G$. The *reduced cost* of an edge $uv=e\in E$ wrt. $p$ is defined as $c_p(e):=c(e)-p(u)+p(v)$. 
We call $p$ a *feasible price function* of $G$ if each edge $e\in E$ has nonnegative reduced cost wrt. $p$. It is known that $G$ has no negative-cost cycles (negative cycles, in short) if and only if some feasible price function $p$ for $G$ exists. If $G$ has no negative cycles, distances in $G$ (where we interpret $c$ as a *length* function) are well-defined. For $u,v\in V$, we denote by ${\delta}_G(u,v)$ the distance between $u$ and $v$, or, in other words, the length of a shortest $u\to v$ path in $G$. \[f:distanceto\] Suppose $G$ has no negative cycles. Let $t\in V$ be reachable in $G$ from all vertices $v\in V$. Then the *distance to* function ${\delta}_{G,t}(v):={\delta}_{G}(v,t)$ is a feasible price function of $G$. For $A,B\subseteq V(G)$ we sometimes write ${\delta}_G(A,B)$ to denote $\min\{{\delta}_G(u,v):u\in A, v\in B\}$. #### Planar graph toolbox. An *$r$-division* of a simple undirected plane graph $G$ is a collection of $O(n/r)$ edge-induced subgraphs of $G$, called *pieces*, whose union is $G$ and such that each piece $P$ has $O(r)$ vertices and $O(\sqrt{r})$ *boundary vertices*. The boundary vertices ${\ensuremath{\partial}}{P}$ of a piece $P$ are the vertices of $P$ shared with some other piece. An *$r$-division with few holes* has an additional property that for each piece $P$, (1) $P$ is connected, (2) there exist $O(1)$ faces of $P$ whose union of vertex sets contains ${\ensuremath{\partial}}{P}$. Let $G_1,\ldots,G_\lambda$ be some collection of plane graphs, where each $G_i$ has a distinguished boundary set ${\ensuremath{\partial}}{G_i}$ lying on $O(1)$ faces of $G_i$. A *distance clique* ${\ensuremath{\text{DC}}}(G_i)$ of $G_i$ is defined as a complete digraph on ${\ensuremath{\partial}}{G_i}$ such that the cost of the edge $uv$ in ${\ensuremath{\text{DC}}}(G_i)$ is equal to ${\delta}_{G_i}(u,v)$. \[t:mssp\] Suppose a feasible price function on $G_i$ is given. Then the distance clique ${\ensuremath{\text{DC}}}(G_i)$ can be computed in $O((|V(G_i)|+|E(G_i)|+|{\ensuremath{\partial}}{G_i}|^2)\log{|V(G_i)|}))$ time. The graph ${\ensuremath{\text{DDG}}}={\ensuremath{\text{DC}}}(G_1)\cup \ldots {\ensuremath{\text{DC}}}(G_\lambda)$ is called a *dense distance graph*[^6]. \[t:fr\] Given a feasible price function of $DDG$, single-source shortest paths in ${\ensuremath{\text{DDG}}}$ can be computed in $O{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{i=1}^\lambda |{\ensuremath{\partial}}{G_i}|\frac{\log^2{n}}{\log^2\log{n}}{\aftergroup\egroup\originalright})$ time, where $n=|V(DDG)|$. #### Scaling framework for minimum-cost circulation. The following fact characterizes minimum circulations. \[f:negcycle\] Let $f$ be a circulation. Then $c(f)$ is minimum iff $G_f$ has no negative cycles. It follows that a circulation $f$ is minimum if there exists a feasible price function of $G_f$. A flow $f$ is ${\epsilon}$-optimal wrt. price function $p$ if for any $uv=e\in E_f$, $c(e)-p(u)+p(v)\geq -{\epsilon}.$ The above notion of ${\epsilon}$-optimality allows us, in a sense, to measure the optimality of a circulation: the smaller ${\epsilon}$, the closer to the optimum a circulation $f$ is. Moreover, if we deal with integral costs, $\frac{1}{n+1}$-optimality is equivalent to optimality. [lemma]{}[lscale]{}\[l:scale\] Suppose the cost function has integral values. Let circulation $f$ be $\frac{1}{n+1}$-optimal wrt. some price function $p$. Then $f$ is a minimum cost circulation. Suppose $f$ is not minimum-cost. 
By Fact \[f:negcycle\], $f$ is not minimum-cost iff $G_f$ contains a simple negative cycle $C$. Note that the cost of $C$ is the same wrt. to the cost functions $c$ and $c_p$, as the prices cancel out. Therefore $\sum_{e\in C}c_p(e)\geq -\frac{n}{n+1}>-1$. But the cost of this cycle is integral and hence is at least $0$, a contradiction. Let $C=\max_{e\in E_0}\{|c(e)|\}$. Suppose we have a procedure $\textsc{Refine}(G,f_0,p_0,{\epsilon})$ that, given a circulation $f_0$ in $G$ that is $2{\epsilon}$-optimal wrt. $p_0$, computes a pair $(f',p')$ such that $f'$ is a circulation in $G$, and it is ${\epsilon}$-optimal wrt. $p'$. We use the general *scaling* framework, due to Goldberg and Tarjan [@GoldbergT90], as given in Algorithm \[alg:scaling\]. By Lemma \[l:scale\], it computes a min-cost circulation in $G$ in $O(\log(nC))$ iterations. Therefore, if we implement $\textsc{Refine}$ to run in $T(n,m)$ time, we can compute a minimum cost circulation in $G$ in $O(T(n,m)\log{(nC)})$ time. $f(e):=0$ for all $e\in G$ $p(v):=0$ for all $v\in V$ ${\epsilon}:=C/2$ $(f,p):=\Call{Refine}{G,f,p,{\epsilon}}$ ${\epsilon}:={\epsilon}/2$ $f$ Refinement via Successive Approximate Shortest Paths {#s:general} ==================================================== In this section we introduce our implementation of $\textsc{Refine}(G,f_0,p_0,{\epsilon})$. For simplicity, we start by setting $c(e):=c(e)-p_0(u)+p_0(v)$. After we are done, i.e., we have a circulation $f'$ that is ${\epsilon}$-optimal wrt. $p'$, (assuming costs reduced with $p_0$), we will return $(f',p'+p_0)$ instead. Therefore, we now have $c(e)\geq -2{\epsilon}$ for all $e\in E_{f_0}$. Let $f_1$ be the flow initially obtained from $f_0$ by sending a unit of flow through each edge $e\in E_{f_0}$ such that $c(e)<0$. Note that $f_1$ is ${\epsilon}$-optimal, but it need not be a circulation. We denote by $f$ the *current flow* which we will gradually change into a circulation. Recall that $X$ is the set of excess vertices of $G$ and $D$ is the set of deficit vertices (wrt. to the current flow $f$). Recall a well-known method of finding the min-cost circulation exactly [@succsp1; @succsp2; @succsp3]: repeatedly send flow through shortest $X\to D$ paths in $G_f$. The sets $X$ and $D$ would only shrink in time. However, doing this on $G_f$ exactly would be too costly. Instead, we will gradually convert $f$ into a circulation, by sending flow from vertices of $X$ to vertices of $D$ but only using approximately (in a sense) shortest paths. Let ${\ensuremath{\mathrm{round}}}(y,z)$ denote the smallest integer multiple of $z$ that is greater than $y$. For any $e\in E$, set $c'(e)={\ensuremath{\mathrm{round}}}(c(e)+{\epsilon}/2,{\epsilon}/2).$ We define $G_f'$ to be the “approximate” graph $G_f$ with the costs given by $c'$ instead of $c$. For convenience, let us also define an extended version $G_f''$ of $G_f'$ to be $G_f'$ with two additional vertices ${\ensuremath{s}}$ (a super-excess-vertex) and ${\ensuremath{t}}$ (a super-deficit-vertex) added. Let $M=\sum_{e\in E}|c'(e)|+{\epsilon}$. We also add to $G_f''$ the following auxiliary edges: 1. an edge $v{\ensuremath{t}}$ for all $v\in V$, we set $c'(v{\ensuremath{t}})=0$ if $v\in D$ and $c'(v{\ensuremath{t}})=M$ otherwise, 2. an edge ${\ensuremath{s}}x$ with $c'({\ensuremath{s}}x)=0$ for all $x\in X$. Clearly, ${\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})={\delta}_{G_f'}(X,D)$ and every vertex in $G_f''$ can reach ${\ensuremath{t}}$. Our algorithm can be summarized very briefly, as follows. 
Start with $f=f_1$. While $X\neq \emptyset$, send a unit of flow along any shortest path $P$ from $X$ to $D$ in $G_f'$ (equivalently: from ${\ensuremath{s}}$ to ${\ensuremath{t}}$ in $G_f''$). Once finished, return $f$ and ${\delta}_{G_f'',{\ensuremath{t}}}$ as the price function. The correctness of this approach follows from the following two facts that we discuss later on: 1. $G_f'$ is negative-cycle free at all times, 2. after the algorithm finishes, $f$ is a circulation in $G$ that is ${\epsilon}$-optimal wrt. ${\delta}_{G_f'',{\ensuremath{t}}}$. If implemented naively, the algorithm would need $O(m)$ negative-weight shortest paths computations to finish. If we used Bellman-Ford method for computing shortest paths, the algorithm would run in $O(nm^2)$ time. To speed it up, we apply two optimizations. First, as in the successive shortest paths algorithm for general graphs [@succsp4; @succsp5], we observe that the distances ${\delta}_{G_f'',{\ensuremath{t}}}$ computed before sending flow through a found shortest ${\ensuremath{s}}\to{\ensuremath{t}}$ path constitute a feasible price function of $G_f''$ *after* augmenting the flow. This allows us to replace Bellman-Ford algorithm with Dijkstra’s algorithm and reduce the time to $O(m^2+nm\log{n})$. Next, instead of augmenting the flow along a single shortest $X\to D$ path, we send flow through a maximal set of edge-disjoint shortest $X\to D$ paths, as in Hopcroft-Karp algorithm for maximum bipartite matching [@HopcroftK73]. Such a set can be easily found in $O(m)$ time when the distances to ${\ensuremath{t}}$ in $G_f''$ are known. This way, we finish after only $O(\sqrt{m})$ phases of shortest path computation and flow augmentation. The pseudocode is given in Algorithm \[alg:refine\]. $c(e):=c(e)-p_0(u)+p_0(v)$ for all $e=uv\in E$. $f:=\Call{SendFlow}{f_0,\{e\in E_{f_0}:c(e)<0\}}$ $p(v):=0$ for all $v\in V$ Construct $G_f''$ out of $G_f'$. $p:=\Call{DistancesTo}{G_f'',{\ensuremath{t}},p}$\[l:dijkstra\] $Q_0,\ldots,Q_k:=$ a maximal set of edge-disjoint ${\ensuremath{s}}\to{\ensuremath{t}}$ paths in $G_f''$ consisting solely of edges satisfying $c'_p(e)=0$.\[l:augment\] $f:=\Call{SendFlow}{f,E((Q_0\cup\ldots\cup Q_k)\cap G_f')}$ $(f,\Call{DistancesTo}{G_f'',{\ensuremath{t}},p}+p_0)$ Analysis -------- Below we state some key properties of our refinement method. \[l:peq\] Suppose $G_f''$ has no negative cycles. Then $f$ is ${\epsilon}$-optimal wrt. ${\delta}_{G_f'',{\ensuremath{t}}}$. Recall that $G_f$ and $G_f'$ have the same sets of edges, only different costs. Let $uv=e\in G_f$. Set $p:={\delta}_{G_f'',{\ensuremath{t}}}$. By Fact \[f:distanceto\], $c'(e)-p(u)+p(v)\geq 0$. Note that $c(e)\geq c'(e)-{\epsilon}$. Hence, $c(e)-p(u)+p(v)\geq c'(e)-p(u)+p(v)-{\epsilon}\geq -{\epsilon}.$ [lemma]{}[lpath]{}\[l:path\] If $X\neq \emptyset$, then there exists a path from $X$ to $D$ in $G_f$. In order to prove Lemma \[l:path\], we need the following Lemma of Goldberg et al. [@GoldbergHKT17]. \[l:goldberg\] Define $G^+_f=(V,E^+_f)$, where $E^+_f=\{e\in E: f(e)<f_0(e)\}.$ Then for any $C\subseteq V$, $$\sum_{v\in C} {\mathrm{exc}}_f(v)\leq |\{ab=e\in E^+_f:a\in C, b\notin C\}|.$$ Observe that $G^+_f$ is a subgraph of $G_f$. It is hence enough to prove that there exists a $X\to D$ path in $G^+_f$. Let $Q$ be the set of vertices reachable from any vertex of $X$ in $G^+_f$. If $D\cap Q=\emptyset$, then $\sum_{v\in Q} {\mathrm{exc}}_f(v)=\sum_{v\in X}{\mathrm{exc}}_f(v)={\Psi}_f\geq |X|>0$. 
By Lemma \[l:goldberg\], there exists an edge $e=ab$ in $G^+_f$ such that $a\in Q$ and $b\notin Q$. Hence $b$ is reachable from $X$ and $b\notin Q$, a contradiction. Before we proceed further, we need to introduce more notation. Let $\Delta$ denote the length of the shortest $X\to D$ path in $G_f'$ ($\Delta$ changes in time along with $f$). Let $q={\Psi}_{f_1}$. Clearly, $q\leq m$. For $i=1,\ldots,q$, denote by $f_{i+1}$ the flow (with total excess $q-i$) obtained from $f_i$ by sending a unit of flow through an arbitrarily chosen shortest $X\to D$ path $P_i$ of $G_{f_i}'$. For $i=1,\ldots,q$, let $\Delta_i$ be the value $\Delta$ when $f=f_i$. We set $\Delta_{q+1}=\infty$. [lemma]{}[lcorrect]{}\[l:correct\] Let $p^*_i:V\cup\{{\ensuremath{s}},{\ensuremath{t}}\}\to \{k\cdot {\epsilon}/2:k\in \mathbb{Z}\}$ be defined as $p^*_i={\delta}_{G_{f_i}'',{\ensuremath{t}}}$. Then: 1. $G_{f_{i}}'$ has no cycles of non-positive cost, 2. for any $e\in P_i$, the reduced cost of ${\ensuremath{e}^{\mathrm{R}}}$ wrt. $p^*_i$ is positive, 3. $p_i^*$ is a feasible price function of both $G_{f_i}''$ and $G_{f_{i+1}}''$, 4. $0<\Delta_i\leq \Delta_{i+1}$. We proceed by induction on $i$. We also prove that (5) $G_{f_{i+1}}'$ has no $0$-cost cycles. Consider item (1). If $i=1$, then $G_{f_1}'$ has positive cost edges only so it cannot have non-positive cost cycles. Otherwise, if $i>1$, then by the inductive hypothesis $p^*_{i-1}$ is a feasible price function for $G_{f_i}''$, so $G_{f_i}''$ has no negative cycles. By item (5) for $i-1$, we also have that $G_{f_i}$ has no $0$-cost cycles. By (1) applied for $i$, $G_{f_i}''$ has no negative cycles, and thus distances in $G_{f_i}''$ are well-defined and so is $p^*_i$. Since the edge weights of $G_f''$ are integer multiples of ${\epsilon}/2$, all distances in this graph are like that as well. We conclude that the values of $p^*_i$ are indeed multiples of ${\epsilon}/2$. Note that $M$ is large enough so that a path $v\to {\ensuremath{t}}$ in $G_{f_i}''$ uses an in-edge of ${\ensuremath{t}}$ with weight $M$ if and only if $v$ cannot reach $D$ in $G_{f_i}'$. Hence, if $v$ can reach $D$ in $G_{f_i}'$, we have $p_i^*(v)={\delta}_{G_{f_i}''}(v,{\ensuremath{t}})={\delta}_{G_{f_i}'}(v,D).$ We now prove (2) and (3). By Fact \[f:distanceto\], $p^*_i$ is a feasible price function of $G_{f_i}''$ before sending flow through $P_i$. To prove that $p^*_i$ is a feasible price function of $G_{f_{i+1}}'$, we only need to consider the reduced costs of edges ${\ensuremath{e}^{\mathrm{R}}}$, where $uv=e\in P_i$. Since $e$ is on a shortest path from $X$ to $D$ in $G_{f_i}'$, by (2) and Fact \[f:revbound\] we have $$p_i^*(v)={\delta}_{G_{f_i}'}(v,D_i)={\delta}_{G_{f_i}'}(u,D_i)-c'(e)=p_i^*(u)-c'(e)<p^*_i(u)+c'({\ensuremath{e}^{\mathrm{R}}})-{\epsilon}.$$ Equivalently $c'({\ensuremath{e}^{\mathrm{R}}})-p^*_i(v)+p^*_i(u)>{\epsilon}$ so indeed $p^*_i$ remains feasible for $G_f'$ after sending flow through $P_i$. To see that $p^*_i$ is feasible for $G_{f_{i+1}}''$, note that in comparison to $G_{f_i}''$, $G_{f_{i+1}}''$ has less auxiliary edges $sx$ and for one auxiliary edge $dt$ its cost is increased from $0$ to $M$. We clearly have $\Delta_1>0$ since $G_{f_1}'$ has positive edges only. Note that by Lemma \[l:path\], $\Delta_i$ is finite, whereas $\Delta_{q+1}=\infty$. Hence, $\Delta_i\leq \Delta_{i+1}$ holds for $i=q$. Suppose $i<q$ and $\Delta_i>\Delta_{i+1}$. We have $p_i^*({\ensuremath{s}})=\Delta_i$ and $p_i^*({\ensuremath{t}})=0$. 
By (3), $p_i^*$ is a feasible price function for $G_{f_{i+1}}''$, so $$0\leq {\delta}_{G_{f_{i+1}}''}({\ensuremath{s}},{\ensuremath{t}})-p^*_i({\ensuremath{s}})+p^*_i({\ensuremath{t}})=\Delta_{i+1}-\Delta_i<0$$ a contradiction. We have proved (4). Finally, we prove that (5) $G_{f_{i+1}}'$ has no $0$-cost cycles. Note that the cost of any cycle in $G_{f_i}'$ is preserved if we reduce the edge costs with $p^*_i$. Recall from (3) that the edge costs reduced by $p^*_i$ are all non-negative both in $G_{f_i}'$ and $G_{f_{i+1}}'$. Hence all $0$-length cycles in $G_{f_{i+1}}'$ consist solely of edges of reduced (with $p_i^*$) cost $0$. But $G_{f_{i+1}}'$ is obtained from $G_{f_i}'$ by replacing some edges with reduced cost $0$ with reverse edges with positive reduced cost. No such edge can thus lie on a $0$-cost cycle in $G_{f_{i+1}}'$. A $0$-cost cycle without an edge of $E(G_{f_{i+1}}')\setminus E(G_{f_i}')$ cannot exist, as it would also exist in $G_{f_i}$ which would contradict (1). By Lemmas \[l:path\] and \[l:correct\], our general algorithm computes a circulation $f_{q+1}$ such that $p_q^*$ is a feasible price function of $G_{f_{q+1}}'$. Since $f_{q+1}$ has no negative cycles, by Lemma \[l:peq\], $f_{q+1}$ is ${\epsilon}$-optimal wrt. ${\delta}_{G_f'',{\ensuremath{t}}}$. We conclude that the algorithm is correct. The following lemma is the key to the running time analysis. [lemma]{}[lbound]{}\[l:bound\] If $X\neq\emptyset$ (equivalently, if $\Delta<\infty$), then ${\Psi}_f \cdot \Delta\leq 6{\epsilon}m.$ This proof is inspired by the proof of an analogous fact in [@GoldbergHKT17]. Suppose $f=f_i$ for some $i$, $1\leq i\leq q$ and let $p:=p_i^*$, where $p_i^*$ is as in Lemma \[l:correct\]. Moreover, by Lemma \[l:correct\], $p(x)\geq \Delta$ for all $x\in X$ and $p(d)\leq 0$ for all $d\in D$, where $\Delta>0$. Let $uv=e\in E^+$. Then since $e\in E_f$, and $f$ is ${\epsilon}$-optimal wrt. $p$ (by Lemma \[l:peq\]), $c(e)-p(u)+p(v)\geq -{\epsilon}$. Equivalently, $p(u)-p(v)\leq c(e)+{\epsilon}=-c({\ensuremath{e}^{\mathrm{R}}})+{\epsilon}$. But since ${\ensuremath{e}^{\mathrm{R}}}\in E_{f_0}$, $-c({\ensuremath{e}^{\mathrm{R}}})\leq 2{\epsilon}$ and we obtain $$p(u)-p(v)\leq 3{\epsilon}.$$ For a number $z$, define $L(z)=\{v\in V: p(v)\geq z\}$. We have $X\subseteq L(z)$ for any $z\leq\Delta$ and $D\cap L(z)=\emptyset$ for any $z>0$. Consequently, for any $0< z\leq \Delta$ we have $$\sum_{v\in L(z)} {\mathrm{exc}}_f(v)=\sum_{v\in X} {\mathrm{exc}}_f(v)={\Psi}_f.$$ Let $E^+(z)=\{uv=e\in E^+:p(u)\geq z, p(v)<z\}=\{uv=e\in E^+:u\in L(z), v\notin L(z)\}$. By Lemma \[l:goldberg\], for any $0< z\leq \Delta$, ${\Psi}_f\leq |E^+(z)|.$ For a particular edge $e\in E^+$, the condition $p(u)\geq z, p(v)<z$, by $z>p(v)\geq p(u)-3{\epsilon}$, is equivalent to $z\in (p(u)-3{\epsilon},p(u)]$. Consider the sets $$E^+({\epsilon}/2), E^+({\epsilon}/2), E^+(2\cdot {\epsilon}/2),\ldots, E^+(k\cdot {\epsilon}/2), \ldots, E^+(\Delta).$$ Since each edge $e\in E^+$ belongs to at most $6$ of these sets, the sum of their sizes is at most $12m$. Hence, for some $z^*$, where $0< z^*\leq \Delta$, $|E^+(z^*)|\leq \frac{6m{\epsilon}}{\Delta}$. We conclude ${\Psi}_f \Delta\leq 6m{\epsilon}$. Efficient Implementation ------------------------ As mentioned before, we could use Lemma \[l:correct\] directly: start with flow $f_1$ and $p_0^*\equiv 0$. 
Then, repeatedly compute a shortest $X\to D$ path $P_i$ along with the values $p_i^*$ using Dijkstra’s algorithm on $G_f''$ (with the help of price function $p_{i-1}^*$ to make the edge costs non-negative), and send flow through $P_i$ to obtain $f_{i+1}$. However, we can also proceed as in Hopcroft-Karp algorithm and augment along many shortest $X\to D$ paths of cost $\Delta$ at once. We use the following lemma. [lemma]{}[lincr]{}\[l:incr\] Let $p$ be a feasible price function of $G_f''$. Suppose there is no ${\ensuremath{s}}\to{\ensuremath{t}}$ path in $G_f''$ consisting of edges with reduced (wrt. $p$) cost $0$. Then $\Delta={\delta}_{G_f'}(X,D)>p({\ensuremath{s}})-p({\ensuremath{t}}).$ By Lemma \[l:path\], there exists some $X\to D$ path in $G_{f'}$ and hence also a ${\ensuremath{s}}\to{\ensuremath{t}}$ path $e_1\ldots e_k$ in $G_f''$. Let $e_i=u_iu_{i+1}$, where $u_1={\ensuremath{s}}$ and $u_{k+1}={\ensuremath{t}}$. We have $c'(e_i)-p(u_i)+p(u_{i+1})\geq 0$ for all $i$ but for some $j$ we also have $c'(e_j)-p(u_j)+p(u_{j+1})>0$. So, $$\Delta={\delta}_{G_f'}(X,D)={\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})\geq \sum_{i=1}^k c'(e_i)=p({\ensuremath{s}})-p({\ensuremath{t}})+\sum_{i=1}^k (c'(e_i)-p(u_i)+p(u_{i+1}))>p({\ensuremath{s}})-p({\ensuremath{t}}).\qedhere$$ Suppose we run the simple-minded algorithm. Assume that at some point $f=f_i$, and we have $p_i^*$ computed. Any ${\ensuremath{s}}\to{\ensuremath{t}}$ path in $G_{f_i}''$ with reduced (wrt. $p_i^*$) cost $0$ corresponds to some shortest $X\to D$ path (of length $\Delta_i$) in $G_f'$. Additionally, we have $p_i^*({\ensuremath{s}})=0$ and $p_i^*({\ensuremath{t}})=\Delta_i$. Let $Q_0,\ldots,Q_k$ be some maximal set of edge-disjoint ${\ensuremath{s}}\to{\ensuremath{t}}$ paths in $G_{f_i}''$ with reduced cost $0$. By Lemma \[l:correct\], we could in principle choose $P_i=Q_0, P_{i+1}=Q_1, \ldots, P_{i+k}=Q_k$ and this would not violate the rule that we repeatedly choose shortest $X\to D$ paths. Moreover, $p_i^*$ is a feasible price function of $G_{f_{i+1}}''$ for any choice of $P_i=Q_j$, $j=0,\ldots,k$. Hence, the reduced cost wrt. $p_i^*$ of any ${\ensuremath{e}^{\mathrm{R}}}\in Q_j$, is non-negative. Therefore, in fact $p_i^*$ is a feasible price function of all $G_{f_{i+1}}'',G_{f_{i+2}}'',\ldots,G_{f_{i+k+1}}''$. On the other hand, since for all $e\in P_i\cup\ldots\cup P_{i+k}$, the reduced cost (wrt. $p_i^*$) of ${\ensuremath{e}^{\mathrm{R}}}$ is positive, and the set $Q_0,\ldots,Q_k$ was maximal, we conclude that there is no ${\ensuremath{s}}\to{\ensuremath{t}}$ path in $G_{f_{i+k+1}}''$ consisting only of edges with reduced cost (wrt. $p_i^*$) $0$. But $p_i^*(s)-p^*(t)=\Delta_i$, so by Lemma \[l:incr\] we have $\Delta_{i+k+1}>\Delta_i$. Since we can choose a maximal set $Q_0,\ldots,Q_k$ using a DFS-style procedure in $O(m)$ time (for details, see Section \[s:pathcycle\], where we take a closer look at it to implement it faster in the planar case), we can actually move from $f_i$ to $f_{i+k+1}$ and simultaneously increase $\Delta$ in $O(m)$ time. Since $p_i^*$ is a feasible price function of $G_{f_{i+k+1}}''$, the new price function $p_{i+k+1}^*$ can be computed, again, using Dijkstra’s algorithm. The total running time of this algorithm is $O(m+n\log{n})$ times the number of times $\Delta$ increases. \[l:change\] The value $\Delta$ changes $O(\sqrt{m})$ times. By Lemma \[l:correct\], $\Delta$ can only increase, and if it does, it increases by at least ${\epsilon}/2$. 
After it increases $2\sqrt{m}$ times, $\Delta\geq {\epsilon}\sqrt{m}$. But then, by Lemma \[l:bound\], ${\Psi}_f$ is no more than $6\sqrt{m}$. As each change of $\Delta$ is accompanied with some decrease of ${\Psi}_f$, $\Delta$ can change $O(\sqrt{m})$ times more. $\textsc{Refine}$ as implemented in Algorithm \[alg:refine\] runs in $O((m+n\log{n})\sqrt{m})$. We can in fact improve the running time to $O(m\sqrt{m})$ by taking advantage of so-called Dial’s implementation of Dijkstra’s algorithm [@Dial69]. The details can be found in Appendix \[s:dial\]. Bounding the Total Length of Augmenting Paths. ---------------------------------------------- \[f:revbound\] For every $e\in E$ we have $c'(e)+c'({\ensuremath{e}^{\mathrm{R}}})>{\epsilon}$. We have $c'(e)>c(e)+{\epsilon}/2$. Hence, $c'(e)+c'({\ensuremath{e}^{\mathrm{R}}})>c(e)+{\epsilon}/2+c({\ensuremath{e}^{\mathrm{R}}})+{\epsilon}/2={\epsilon}$. There is a subtle reason why we set $c'(e)$ to be ${\ensuremath{\mathrm{round}}}(c(e)+{\epsilon}/2,{\epsilon}/2)$ instead of ${\ensuremath{\mathrm{round}}}(c(e),{\epsilon})$. Namely, this allows us to obtain the following bound. \[l:cmonoton\] For any $i=1,\ldots,q$ we have $c'(f_{i+1})-c'(f_i)<\Delta_i-|P_i|\cdot {\epsilon}.$ We have $$c'(f_{i+1})-c'(f_i)=\frac{1}{2}\sum_{e\in E}(f_{i+1}(e)-f_i(e))c'(e)=\frac{1}{2}{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{e\in P_i}c'(e)-\sum_{e\in P_i}c'({\ensuremath{e}^{\mathrm{R}}}){\aftergroup\egroup\originalright})=\frac{1}{2}\sum_{e\in P_i}(c'(e)-c'({\ensuremath{e}^{\mathrm{R}}})).$$ By Fact \[f:revbound\], $-c'({\ensuremath{e}^{\mathrm{R}}})<c'(e)-{\epsilon}$ for all $e\in E$. Hence $$c'(f_{i+1})-c'(f_i)< \sum_{e\in P_i}c'(e)-|P_i|\cdot {\epsilon}=\Delta_i-|P_i|\cdot{\epsilon}.\qedhere$$ \[l:diffbound\] Let $f^*$ be any flow. Then $c(f_0)-c(f^*)\leq 2{\epsilon}m.$ We have $$c(f_0)-c(f^*)=\frac{1}{2}\sum_{e\in E} (f_0(e)-f^*(e))c(e).$$ If $f_0(e)>f^*(e)$, then ${\ensuremath{e}^{\mathrm{R}}}\in E_{f_0}$ and hence $c({\ensuremath{e}^{\mathrm{R}}})\geq -2{\epsilon}$, and thus $c(e)\leq 2{\epsilon}$. Otherwise, if $f_0(e)<f^*(e)$ then $e\in E_{f_0}$ and $c(e)\geq -2{\epsilon}$. In both cases $(f_0(e)-f^*(e))c(e)\leq 2{\epsilon}$. Therefore, since $|E|=2m$, $c(f_0)-c(f^*)\leq 2{\epsilon}m$. \[l:apprbound\] Let $f^*$ be any flow. Then $|c'(f^*)-c(f^*)|\leq {\epsilon}m.$ Recall that we had $0< c'(e)-c(e)\leq {\epsilon}$. Hence $|f^*(e)(c'(e)-c(e))|\leq {\epsilon}$ and: $$|c'(f^*)-c(f^*)|=\frac{1}{2}{\mathopen{}\mathclose\bgroup\originalleft}|\sum_{e\in E}f^*(e)(c'(e)-c(e)){\aftergroup\egroup\originalright}|\leq \frac{1}{2}\sum_{e\in E} |f^*(e)(c'(e)-c(e))|\leq \frac{1}{2}\sum_{e\in E}{\epsilon}={\epsilon}m.\qedhere$$ The inequalities from Lemmas \[l:cmonoton\], \[l:diffbound\] and \[l:apprbound\] combined give us the following important property of the set of paths we augment along. \[l:sumpath\] The total number of edges on all the paths we send flow through is $O(m\log{m})$. By Lemma \[l:diffbound\] and the fact that $c(f_1)\leq c(f_0)$, we have: $$c(f_1)-c(f_{q+1})\leq {c(f_0)-c(f_{q+1})}\leq 2{\epsilon}m.$$ On the other hand, by Lemma \[l:apprbound\] and Lemma \[l:cmonoton\], we obtain: $$\begin{aligned} c(f_1)-c(f_{q+1}) & \geq (c'(f_1)-{\epsilon}m)+(-c'(f_{q+1})-{\epsilon}m) =c'(f_1)-c'(f_{q+1})-2{\epsilon}m\\ &=\sum_{i=1}^q (c'(f_i)-c'(f_{i+1}))-2{\epsilon}m \geq\sum_{i=1}^q (|P_i|\cdot{\epsilon}-\Delta_i)-2{\epsilon}m. 
\end{aligned}$$ By combining the two inequalities and applying Lemma \[l:bound\], we get: $$\sum_{i=1}^q |P_i|\leq 4m +\sum_{i=1}^q\frac{\Delta_i}{{\epsilon}}\leq 4m+\sum_{i=1}^q \frac{6m}{{\Psi}_{f_i}}=4m+\sum_{i=1}^q \frac{6m}{q-i+1}=O(m\log{m}).\qedhere$$ Unit-Capacity Min-Cost Circulation in Planar Graphs {#s:planar} =================================================== In this section we show that the refinement algorithm per scale from Section \[s:general\] can be simulated on a planar digraph more efficiently. Specifically, we prove the following theorem. <span style="font-variant:small-caps;">Refine</span> can be implemented on a planar graph in ${\ensuremath{\widetilde{O}}}((nm)^{2/3})$ time. Let $r\in [1,n]$ be a parameter. Suppose we are given an $r$-division with few holes ${\ensuremath{\mathcal{P}}}_1,\ldots,{\ensuremath{\mathcal{P}}}_\lambda$ of $G$ such that for any $i$ we have $\lambda=O(n/r)$, $|V({\ensuremath{\mathcal{P}}}_i)|=O(r)$, $|{\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}|=O(\sqrt{r})$, ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ lies on $O(1)$ faces of ${\ensuremath{\mathcal{P}}}_i$, and the pieces are edge-disjoint. We set ${\ensuremath{\partial}}{G}=\bigcup_{i=1}^\lambda {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$. Clearly, $|{\ensuremath{\partial}}{G}|=O(n/\sqrt{r})$. In Appendix \[s:rdiv\], we show that we can reduce our instance to the case when the above assumptions are satisfied in nearly-linear time. Since $m$ might be $\omega(n)$, we cannot really guarantee that $|E({\ensuremath{\mathcal{P}}}_i)|=O(r)$. This will not be a problem though, since, as we will see, for all the computations involving the edges of ${\ensuremath{\mathcal{P}}}_i$ (e.g., computing shortest paths in ${\ensuremath{\mathcal{P}}}_i$, or sending a unit of flow through a path of ${\ensuremath{\mathcal{P}}}_i$), among all parallel edges $uv=e\in E({\ensuremath{\mathcal{P}}}_i)$ with the same endpoints we will only care about an edge $e\in E({\ensuremath{\mathcal{P}}}_i)\cap G_f$ with minimal cost $c'(e)$. Therefore, since ${\ensuremath{\mathcal{P}}}_i$ is planar, at any time only $O(r)$ edges of ${\ensuremath{\mathcal{P}}}_i$ will be needed. Recall that the per-scale algorithm for general graphs (Algorithm \[alg:refine\]) performed $O(\sqrt{m})$ phases, each consisting of two steps: a shortest path computation (to compute the price function $p^*$ from Lemma \[l:correct\]), followed by the computation of a maximal set of edge-disjoint augmenting paths of reduced (wrt. $p^*$) cost $0$. We will show how to implement both steps in ${\ensuremath{\widetilde{O}}}(n/\sqrt{r})$ amortized time, at the additional total data structure maintenance cost (over all phases) of ${\ensuremath{\widetilde{O}}}(mr)$. Since there are $O(\sqrt{m})$ steps, this will yield ${\ensuremath{\widetilde{O}}}((nm)^{2/3})$ time by appropriately setting $r$. We can maintain the flow $f$ explicitly, since it undergoes only $O(m\log{n})$ edge updates (by Lemma \[l:sumpath\]). However, we will not compute the entire price function $p^*$ at all times explicitly, as this is too costly. Instead, we will only compute $p^*$ limited to the subset ${\ensuremath{\partial}}{G}\cup \{{\ensuremath{s}},{\ensuremath{t}}\}$. For each ${\ensuremath{\mathcal{P}}}_i$, define ${\ensuremath{\mathcal{P}}}_{f,i}'=G_f'\cap {\ensuremath{\mathcal{P}}}_i$.
We also define ${\ensuremath{\mathcal{P}}}_{f,i}''$ to be ${\ensuremath{\mathcal{P}}}_{f,i}'$ with vertices $\{{\ensuremath{s}},{\ensuremath{t}}\}$ added, and those edges ${\ensuremath{s}}v$, $v{\ensuremath{t}}$ of $G_{f}''$ that satisfy $v\in V({\ensuremath{\mathcal{P}}}_i)\setminus {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$. This way, ${\ensuremath{\mathcal{P}}}_{f,i}''\subseteq G_f''$ and $E({\ensuremath{\mathcal{P}}}_{f,i}'')\cap E({\ensuremath{\mathcal{P}}}_{f,j}'')=\emptyset$ for $i\neq j$. The costs of edges $e\in E({\ensuremath{\mathcal{P}}}_{f,i}'')$ are the same as in $G_{f}''$, i.e., $c'(e)$. Besides, for each $i$ we will store a “local” price function $p_i$ that is feasible only for ${\ensuremath{\mathcal{P}}}_{f,i}''$. After the algorithm finishes, we will know precisely what the circulation looks like. However, the general scaling algorithm requires us to also output a price function $p$ such that $f$ is an ${\epsilon}$-optimal circulation wrt. $p$. $f$ is ${\epsilon}$-optimal wrt. $p^*$ in the end, but we will only have it computed for the vertices ${\ensuremath{\partial}}{G}\cup\{{\ensuremath{s}},{\ensuremath{t}}\}$. Therefore, we extend it to all remaining vertices of $G$. [lemma]{}[local]{}\[l:local\] Suppose we are given the values of $p^*$ on ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ and a price function $p_i$ feasible for ${\ensuremath{\mathcal{P}}}_{f,i}''$. Then we can compute the values $p^*(v)$ for all $v\in V({\ensuremath{\mathcal{P}}}_{f,i}'')$ in $O(r\log{r})$ time. Recall that $p^*(v)={\delta}_{G_f''}(v,{\ensuremath{t}})$ for all $v\in V$. Since we already know the values of $p^*$ on ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}\cup \{{\ensuremath{s}},{\ensuremath{t}}\}$, we only need to compute them for the vertices $V_i=V({\ensuremath{\mathcal{P}}}_{f,i}'')\setminus({\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}\cup \{{\ensuremath{s}},{\ensuremath{t}}\})$. Take some shortest $v\to {\ensuremath{t}}$ path ${\ensuremath{\mathcal{P}}}_v$ in $G_f''$, where $v\in V_i$. ${\ensuremath{\mathcal{P}}}_v$ either contains some first vertex $b\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$, or is fully contained in ${\ensuremath{\mathcal{P}}}_{f,i}''$. In the former case we have $p^*(v)={\delta}_{G_f''}(v,{\ensuremath{t}})=p^*(b)+{\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}''}(v,b)$, and in the latter $p^*(v)={\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}''}(v,{\ensuremath{t}})$. Hence, to compute distances from all $v\in V_i$ to ${\ensuremath{t}}$ in $G_f''$, it is sufficient to compute shortest paths on the graph ${\ensuremath{\mathcal{P}}}_{f,i}^*$, defined as ${\ensuremath{\mathcal{P}}}_{f,i}''$ with edges $b\to{\ensuremath{t}}$ of cost $c'(b{\ensuremath{t}})=p^*(b)$, for all $b\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$, added. To do that efficiently, note that $p_i$ is an almost feasible price function of ${\ensuremath{\mathcal{P}}}_{f,i}^*$: the only edges with possibly negative reduced cost wrt. $p_i$ are the incoming edges of ${\ensuremath{t}}$. Therefore, we can still compute the distances to ${\ensuremath{t}}$ in ${\ensuremath{\mathcal{P}}}_{f,i}^*$ in $O(r\log{r})$ time using a variant of Dijkstra’s algorithm [@DinitzI17] that allows $k$ such “negative” vertices and computes single-source shortest paths in $O(km\log{n})$ time (here $k=1$). Hence, in order to extend $p^*$ to all vertices of $G$ once the final circulation is found, we apply Lemma \[l:local\] to all pieces.
This takes $O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{n}{r}\cdot r\log{r}{\aftergroup\egroup\originalright})=O(n\log{n})$ time. Dijkstra Step {#s:planar-dijkstra} ------------- Let us start with an implementation of the Dijkstra step computing the new price function $p^*$. First, for each piece ${\ensuremath{\mathcal{P}}}_i$ we define the compressed version $H_{f,i}''$ of ${\ensuremath{\mathcal{P}}}_{f,i}''$ as follows. Let $V(H_{f,i}'')={\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}\cup \{{\ensuremath{s}},{\ensuremath{t}}\}$. The set of edges of $H_{f,i}''$ is formed by: - a distance clique ${\ensuremath{\text{DC}}}({\ensuremath{\mathcal{P}}}_{f,i}'')$ between vertices ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ in ${\ensuremath{\mathcal{P}}}_{f,i}''$, - for each $v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$, an edge ${\ensuremath{s}}v$ of cost ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}''}({\ensuremath{s}}, v)$ if this distance is finite, - for each $v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$, an edge $v{\ensuremath{t}}$ of cost ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}''}(v, {\ensuremath{t}})$ if this distance is finite, - an edge ${\ensuremath{s}}{\ensuremath{t}}$ of cost ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}''}({\ensuremath{s}}, {\ensuremath{t}})$ if this distance is finite. Recall that we store a price function $p_i$ of ${\ensuremath{\mathcal{P}}}_{f,i}''$. Therefore, by Theorem \[t:mssp\], ${\ensuremath{\text{DC}}}({\ensuremath{\mathcal{P}}}_{f,i}'')$ can be computed in $O(r\log{r})$ time. All needed distances ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}''}({\ensuremath{s}}, v)$ and ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}''}(v, {\ensuremath{t}})$ can be computed in $O(r\log{r})$ time using Dijkstra’s algorithm (again, with the help of price function $p_i$). Now define $H_f''$ to be $\bigcup_{i=1}^\lambda H_{f,i}''$ with edges ${\ensuremath{s}}v$ and $v{\ensuremath{t}}$ of $G_f''$ that satisfy $v\in {\ensuremath{\partial}}{G}$ added. \[f:translate\] For any $u,v\in V(H_f'')$, ${\delta}_{H_f''}(u,v)={\delta}_{G_f''}(u,v)$. Observe that $H_f''$ is a dense distance graph in terms of the definition of Section \[s:prelims\]: it consists of $O(n/r)$ distance cliques ${\ensuremath{\text{DC}}}({\ensuremath{\mathcal{P}}}_{f,i}'')$ with $O(\sqrt{r})$ vertices each, and $O(n/\sqrt{r})$ additional edges which also can be interpreted as $2$-vertex distance cliques. Hence, given a feasible price function on $H_f''$, we can compute distances to ${\ensuremath{t}}$ in $H_f''$ on it using Theorem \[t:fr\] in $O{\mathopen{}\mathclose\bgroup\originalleft}(n/\sqrt{r}\frac{\log^2{n}}{\log^2\log{n}}{\aftergroup\egroup\originalright})$ time. Since $V(H_f'')={\ensuremath{\partial}}{G}\cup \{{\ensuremath{s}},{\ensuremath{t}}\}$, the price function $p^*$ we have is indeed sufficient. The computed distances to ${\ensuremath{t}}$ form the new price function $p^*$ on ${\ensuremath{\partial}}{G}\cup\{{\ensuremath{s}},{\ensuremath{t}}\}$ as in the algorithm for general graphs (see Algorithm \[alg:refine\]). Sending Flow Through a Path {#s:sendflow} --------------------------- In the general case updating the flow after an augmenting path has been found was trivial. However, as we operate on a compressed graph, the update procedure has to be more involved. 
Generally speaking, we will repeatedly find some shortest ${\ensuremath{s}}\to{\ensuremath{t}}$ path $Q=e_1\ldots e_k$ in $H_f''$, translate it to a shortest ${\ensuremath{s}}\to{\ensuremath{t}}$ path ${\ensuremath{\mathcal{P}}}$ in $G_f''$ and send flow through it. It is easy to see by the definition of $H_f''$ that $Q$ can be translated to a shortest ${\ensuremath{s}}\to {\ensuremath{t}}$ path in $G_f''$ and vice versa. Each edge $e_j$ can be translated to either some subpath inside a single graph ${\ensuremath{\mathcal{P}}}_{f,i}''$, or an edge of $G_f''$ of the form ${\ensuremath{s}}v$ or $v{\ensuremath{t}}$, where $v\in {\ensuremath{\partial}}{G}$. This can be done in $O(r\log{n})$ time by running Dijkstra’s algorithm on ${\ensuremath{\mathcal{P}}}_{f,i}''$ with price function $p_i$. We will guarantee that path $P$ obtained by concatenating the translations of individual edges $e_j$ contains no repeated edges of $G_f''$. We now show how to update each $H_{f,i}''$ after sending flow through the found path $P$. Note that we only need to update $H_{f,i}''$ if $E(P)\cap E({\ensuremath{\mathcal{P}}}_{f,i}'')\neq \emptyset$. In such case we call ${\ensuremath{\mathcal{P}}}_i$ an *affected piece*. Observe that some piece can be affected at most $O(m\log{m})$ times since the total number of edges on all shortest augmenting paths $P$ in the entire algorithm, regardless of their choice, is $O(m\log{m})$ (see Lemma \[l:sumpath\]). To rebuild $H_{f,i}''$ to take into account the flow augmentation we will need a feasible price function on ${\ensuremath{\mathcal{P}}}_{f,i}''$ after the augmentation. However, we cannot be sure that what we have, i.e., $p_i$, will remain a good price function of ${\ensuremath{\mathcal{P}}}_{f,i}''$ after the augmentation. By Lemma \[l:correct\], luckily, we know that $p^*$ is a feasible price function after the augmentation for the whole graph $G_{f}''$. In particular, $p^*$ (before the augmentation) limited to $V({\ensuremath{\mathcal{P}}}_{f,i}'')$ is a feasible price function of ${\ensuremath{\mathcal{P}}}_{f,i}''$ after the augmentation. Hence, we can compute new $p_i$ equal to $p^*$ using Lemma \[l:local\] in $O(r\log{r})$ time. Given a feasible price function $p_i$ on ${\ensuremath{\mathcal{P}}}_{f,i}''$ after $f$ is augmented, we can recompute $H_{f,i}''$ in $O(r\log{r})$ time as discussed in Section \[s:planar-dijkstra\]. We conclude that the total time needed to update the graph $H_f''$ subject to flow augmentations is $O(mr\log{r}\log{m})=O(mr\log{n}\log{m})$. A Path Removal Algorithm {#s:pathcycle} ------------------------ In this section we consider an abstract “path removal” problem, that generalizes the problem of finding a maximal set of edge-disjoint ${\ensuremath{s}}\to{\ensuremath{t}}$ paths. We will use it to reduce the problem of finding such a set of paths on a subgraph of $G_f''$ consisting of edges with reduced cost $0$ wrt. $p^*$ to the problem of finding such a set of paths on the zero-reduced cost subgraph of $H_f''$. Suppose we have some directed acyclic graph $H$ with a fixed source ${\ensuremath{s}}$ and sink ${\ensuremath{t}}$, that additionally undergoes some limited adversarial changes. We are asked to efficiently support a number of *rounds*, until ${\ensuremath{t}}$ ceases to be reachable from ${\ensuremath{s}}$. Each round goes as follows. 1. We first find either any ${\ensuremath{s}}\to{\ensuremath{t}}$ path $P$, or detect that no ${\ensuremath{s}}\to{\ensuremath{t}}$ path exists. 2. 
Let $E^+\subseteq V\times V$, and $P\subseteq E^-\subseteq E(H)$ be some adversarial sets of edges. Let $H'=(V,E')$, where $E'=E(H)\setminus E^-\cup E^+$. Assume that for any $v\in V(H)$, if $v$ cannot reach ${\ensuremath{t}}$ in $H$, then $v$ cannot reach ${\ensuremath{t}}$ in $H'$ either. Then the adversarial change is to remove $E^-$ from $E$ and add $E^+$ to $E$, i.e., set $E(H)=E'$. Let $\bar{n}=|V(H)|$ and let $\bar{m}$ be the number of edges ever seen by the algorithm, i.e., the sum of $|E(H)|$ and all $|E^+|$. We will show an algorithm that finds all the paths $P$ in $O(\bar{n}+\bar{m})$ total time. Let us also denote by $\bar{\ell}$ the sum of lengths of all returned paths $P$. Clearly, $\bar{\ell}\leq \bar{m}$. A procedure handling the phase (1) of each round, i.e., finding a ${\ensuremath{s}}\to{\ensuremath{t}}$ path or detecting that there is none, is given in Algorithm \[alg:maxpath\]. The second phase of each round simply modifies the representation of the graph $H$ accordingly. Throughout all rounds, we store a set $W$ of vertices $w$ of $H$ for which we have detected that there is no more $w\to {\ensuremath{t}}$ path in $H$. Initially, $W=\emptyset$. Each edge $e\in E(H)$ can be *scanned* or *unscanned*. Once $e$ is scanned, it remains scanned forever. The adversarial edges $E^+$ that are inserted to $E(H)$ are initially unscanned. $Q:=$ an empty path with a single endpoint ${\ensuremath{s}}$ \[l:search\] mark $e$ scanned $Q:=Qe$\[l:append\] $W:=W\cup \{y\}$\[l:winsert\] remove the last edge of $Q$ unless $Q$ is empty\[l:remove\] report ${\ensuremath{t}}$ not reachable from ${\ensuremath{s}}$ and stop $Q$ and $Q:=0$. The following lemmas establish the correctness and efficiency of the crucial parts of Algorithm \[alg:maxpath\]. [lemma]{}[blockingflow]{} Algorithm \[alg:maxpath\] correctly finds an ${\ensuremath{s}}\to{\ensuremath{t}}$ path in $H$ or detects there is none. First note that no edge is appended to $Q$ twice throughout all rounds: only unscanned edges are ever appended to $Q$ and are marked scanned immediately afterwards. Hence, the algorithm stops. Since $H$ is acyclic, $Q$ remains simple at all times. Moreover, for each scanned edge $uv=e\in E(H)$ we either have $e\in Q$ or $v\in W$. The next observation is that immediately after line \[l:winsert\] is executed, for all edges $yv\in E(H)$ we have $v\in W$. By the previous observation, for all edges $e=yv\in E(H)$, we have either $v\in W$ or $e\in Q$. But after line \[l:winsert\] is executed, $y$ is the other endpoint of $Q$, so if $e\in Q$, then $y$ also appears somewhere earlier in $Q$, i.e., $Q$ is not simple, a contradiction. Next we prove that $W$ contains only vertices $v$ that cannot reach ${\ensuremath{t}}$. Consider the first moment when some vertex $v\in W$ can actually reach ${\ensuremath{t}}$ in $H$. If this is a result of changing the edge set, this means that $v$ cannot reach ${\ensuremath{t}}$ in $H$, but can reach ${\ensuremath{t}}$ in $(V,E')$. This, however, violates our assumption about $(V,E')$. So $v$ is the first vertex that gets inserted to $W$ in line \[l:winsert\], but actually can reach ${\ensuremath{t}}$ in $H$ at this time. In this case, for all edges $vw\in E(H)$, $w\in W$ and $w$ was inserted into $W$ before $v$. Therefore, $v$ has only edges to vertices that cannot reach $t$, and thus it cannot reach ${\ensuremath{t}}$ itself, a contradiction. Let us also note that for each edge $uv\in E(H)$, $u\in W$ implies that $v$ cannot reach $t$. 
Otherwise, $u$ could in fact reach ${\ensuremath{t}}$ (through $v$), which would contradict our previous claim. Next we show that if a run of the procedure does not find a ${\ensuremath{s}}\to{\ensuremath{t}}$ path, it visits only vertices $v$ (i.e., $v\in V(Q)$ at some point of that run) reachable from ${\ensuremath{s}}$, and out of those it visits all that can reach ${\ensuremath{t}}$. Clearly, the procedure does not visit any $v$ not reachable from ${\ensuremath{s}}$, as in that case we would have $v\in V(Q)$ at some point, but $Q$ is always a path starting at ${\ensuremath{s}}$, i.e., all vertices of $Q$ are reachable from ${\ensuremath{s}}$. Now suppose the procedure does not visit some $v$ that is reachable from ${\ensuremath{s}}$ and can reach ${\ensuremath{t}}$, and choose $v$ to be such that ${\delta}_{H}({\ensuremath{s}},v)$ is minimum. Clearly, $v\neq {\ensuremath{s}}$. Let $w$ be a vertex such that ${\delta}_{H}({\ensuremath{s}},v)={\delta}_{H}({\ensuremath{s}},w)+1$ and $e=wv\in E(H)$. Observe that $e$ is unscanned, as otherwise we would either have $e\in Q$ (and thus $v$ would be visited) or $v\in W$ (and thus $v$ would not reach ${\ensuremath{t}}$). Note that $w$ is never inserted into $W$, since that would imply that $v$ cannot reach ${\ensuremath{t}}$. Since $w$ is reachable from ${\ensuremath{s}}$, can reach ${\ensuremath{t}}$ (because it can reach $v$), and ${\delta}_{H}({\ensuremath{s}},w)<{\delta}_{H}({\ensuremath{s}},v)$, $w$ is visited by the procedure. But since the procedure does not terminate prematurely having found a ${\ensuremath{s}}\to{\ensuremath{t}}$ path $P$, the edge $e$, being unscanned, will be appended to $Q$ in step (a) when $Q$ is a ${\ensuremath{s}}\to w$ path. Hence, $v$ will be visited, a contradiction. Finally, the procedure either finds a ${\ensuremath{s}}\to{\ensuremath{t}}$ path, or proves that ${\ensuremath{t}}$ is not reachable from ${\ensuremath{s}}$. [lemma]{}[stepremove]{}\[lem:stepremove\] The total number of times line \[l:remove\] is executed, through all rounds, is $O(\bar{n})$. Each execution of line \[l:remove\] is preceded by an insertion of some vertex into $W$. Each $v\in V(H)$ is inserted into $W$ at most once: only the other endpoint of $Q$ can be inserted into $W$, and no vertex of $W$ is ever appended to $Q$. [lemma]{}[stepappend]{}\[lem:stepappend\] Line \[l:append\] of Algorithm \[alg:maxpath\] is executed $O(\bar{n}+\bar{\ell})$ times through all rounds. By Lemma \[lem:stepremove\], there are $O(\bar{n})$ edges $e$ appended to $Q$ that are later popped in line \[l:remove\]. If the appended edge is never popped in step \[l:remove\], it is a part of a returned path – this happens precisely $\bar{\ell}$ times. [lemma]{}[stepunscanned]{}\[l:stepunscanned\] The total time used by Algorithm \[alg:maxpath\], through all rounds, is $O(\bar{n}+\bar{m})$. We represent $W$ as a bit array of size $\bar{n}$. Then, by Lemmas \[lem:stepremove\] and \[lem:stepappend\], to show that the algorithm runs in $O(\bar{n}+\bar{m})$ time, we only need to implement line \[l:search\] so that all its executions take $O(\bar{m})$ time in total. But this is easy: it is sufficient to store the outgoing edges of each vertex $v$ in a linked list, so that adding/removing edges takes $O(1)$ time and we can move to the next unscanned edge in $O(1)$ time.
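For concreteness, the following is a minimal Python sketch of the path-removal procedure described in this section, run directly on an adjacency-list DAG in the special case $E^-=P$ (the found path) with arbitrary later edge insertions, rather than on the compressed graph used later; the class name, the vertex-based representation of $Q$ and the toy example are our illustrative choices and are not part of Algorithm \[alg:maxpath\] itself. Every edge is scanned (and discarded) at most once and dead vertices are collected in $W$, which is exactly what yields the $O(\bar{n}+\bar{m})$ total bound.

\begin{verbatim}
class PathRemoval:
    def __init__(self, n, s, t):
        self.s, self.t = s, t
        self.adj = [[] for _ in range(n)]  # adj[v]: heads of still-unscanned edges out of v
        self.W = [False] * n               # W[v] = True once v provably cannot reach t

    def add_edge(self, u, v):
        # Edges inserted later (the adversarial set E^+) start out unscanned as well;
        # the caller must respect the assumption that insertions never make t reachable
        # from a vertex that is already in W.
        self.adj[u].append(v)

    def find_path(self):
        # Returns one s -> t path as a vertex list, or None if t is unreachable from s.
        if self.W[self.s]:
            return None
        Q = [self.s]                       # the tentative path, kept as a vertex stack
        while True:
            y = Q[-1]
            if y == self.t:
                return Q                   # the path's edges were consumed while scanning
            advanced = False
            while self.adj[y]:             # (search): take a next unscanned edge y -> v
                v = self.adj[y].pop()      # the edge is marked scanned, i.e. never revisited
                if not self.W[v]:
                    Q.append(v)            # (append)
                    advanced = True
                    break
            if not advanced:
                self.W[y] = True           # (winsert): y cannot reach t any more
                Q.pop()                    # (remove)
                if not Q:
                    return None            # report: t not reachable from s

# Two edge-disjoint s -> t paths in a tiny DAG.
pr = PathRemoval(4, s=0, t=3)
for u, v in [(0, 1), (1, 3), (0, 2), (2, 3)]:
    pr.add_edge(u, v)
paths = []
while (p := pr.find_path()) is not None:
    paths.append(p)
print(paths)   # [[0, 2, 3], [0, 1, 3]]
\end{verbatim}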
Finding a Maximal Set of Shortest Augmenting Paths {#s:planarpaths} -------------------------------------------------- Recall that for a general graph, after computing the price function $p^*$ we found a maximal set of edge-disjoint ${\ensuremath{s}}\to{\ensuremath{t}}$ paths in the graph $Z_f''$, defined as a subgraph of $G_f''$ consisting of edges with reduced cost $0$ (wrt. $p^*$). To accomplish that, we could in fact use the path removal algorithm from Section \[s:pathcycle\] run on $Z_f''$: until there was an ${\ensuremath{s}}\to{\ensuremath{t}}$ path in $Z_f''$, we would find such a path $P$, remove edges of $P$ (i.e., set $E^-=P$ and $E^+=\emptyset$), and repeat. Since in this case we never add edges, the assumption that ${\ensuremath{t}}$ cannot become reachable from any $v$ due to updating $Z_f''$ is met. Let $Y_f''$ be the subgraph of the graph $H_f''$ from Section \[s:planar-dijkstra\] consisting of edges with reduced (wrt. $p^*$) cost $0$. Since all edges of $H_f''$ correspond to shortest paths in $G_f''$, all edges of $Y_f''$ correspond to paths in $G_f''$ with reduced cost $0$. Because $Z_f''$ is acyclic by Lemma \[l:correct\], $Y_f''$ is acyclic as well. Moreover, for any two edges $e_1,e_2\in E(Y_f'')$, if there is a path going through both $e_1$ and $e_2$ in $Y_f''$, then the paths represented by $e_1$ and $e_2$ are edge-disjoint in $Z_f''$ (as otherwise $Z_f''$ would have a cycle). Therefore, any path $Q$ in $Y_f''$ translates to a *simple* path in $Z_f''\subseteq G_f''$. We will now explain why running Algorithm \[alg:maxpath\] on $Y_f''$ can be used to find a maximal set of edge-disjoint ${\ensuremath{s}}\to{\ensuremath{t}}$ paths. Indeed, by Fact \[f:translate\], $Y_f''$ contains an ${\ensuremath{s}}\to{\ensuremath{t}}$ path iff $Z_f''$ does. Since $Y_f''$ is just a compressed version of $Z_f''$, and $Z_f''$ undergoes edge deletions only (since we only remove the found paths), the updates to $Y_f''$ cannot make some ${\ensuremath{t}}$ reachable from some new vertex $v\in V(Y_f'')$. Technically speaking, we should think of $Y_f''$ as undergoing both edge insertions and deletions: whenever some path $Q\subseteq Y_f''$ is found, we include $Q$ in $E^-$ and send the flow through a path corresponding to $Q$ in $G_f''$, as described in Section \[s:sendflow\]. But then for all affected pieces ${\ensuremath{\mathcal{P}}}_i$, $H_{f,i}''$ is recomputed and thus some of the edges of $Q$ might be reinserted to $Y_f''$ again. These edges should be seen as forming the set $E^+$, whereas the old edges of the recomputed graphs $H_{f,i}''$ belong to $E^-$. In terms of the notation of Section \[s:pathcycle\], when running Algorithm \[alg:maxpath\] on $Y_f''$, we have $\bar{n}=O(n/\sqrt{r})$. The sum of values $\bar{\ell}$ from Section \[s:pathcycle\] over all phases of the algorithm is, by Lemma \[l:sumpath\], $O(m\log{m})$. Similarly, again by Lemma \[l:sumpath\], the sum of the values $\bar{m}$ from Section \[s:pathcycle\] over all phases, is $O(m^{3/2}+mr^2\log{m})$ (since each time $E^+$ might be as large as $r^2$ times the number of affected pieces). Recall that there are $O(\sqrt{m})$ phases, and the total time needed to maintain the graph $H_f''$ subject to flow augmentations is $O(mr\log{r}\log{m})$ (see Section \[s:sendflow\]). 
For each phase, running a Dijkstra step to compute $p^*$ using FR-Dijkstra, followed by running Algorithm \[alg:maxpath\] directly until there are no ${\ensuremath{s}}\to{\ensuremath{t}}$ paths in $Y_f''$ would lead to $O{\mathopen{}\mathclose\bgroup\originalleft}(\sqrt{m}{\mathopen{}\mathclose\bgroup\originalleft}(\frac{n}{\sqrt{r}}\frac{\log^2{n}}{\log^2\log{n}}{\aftergroup\egroup\originalright})+m^{3/2}+mr^2\log{m}{\aftergroup\egroup\originalright})$ total time, i.e., would not yield any improvement over the general algorithm. However, we can do better by implementing Algorithm \[alg:maxpath\] on $Y_f''$ more efficiently. The following lemma is essentially proved in [@ItalianoKLS17; @Karczmarz18]. However, as we use different notation, we give a complete proof below. [lemma]{}[lreach]{}\[l:reach\] Let $Z$ be the subgraph of ${\ensuremath{\mathcal{P}}}_{f,i}'$ consisting of edges with reduced cost $0$ wrt. to some feasible price function $p$. There exists $O(\sqrt{r})$ pairs of subsets $(A_{i,1},B_{i,1}),(A_{i,2},B_{i,2}),\ldots$ of ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ such that for each $v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$: - The number of sets $A_{i,j}$ ($B_{i,j}$) such that $v\in A_{i,j}$ ($v\in B_{i,j}$, resp.) is $O(\log{r})$. - Each $B_{i,j}$ is totally ordered according to some order $\prec_{i,j}$. - For any $j$ such that $v\in A_{i,j}$, there exist $l_{i,v,j},r_{i,v,j}\in B_{i,j}$ such that the subset $R_{i,v}$ of ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ reachable from $v$ in $Z$ can be expressed as $\bigcup_{j:v\in A_{i,j}} \{w\in B_{i,j}: l_{i,v,j}\preceq_{i,j} w\preceq_{i,j} r_{i,v,j}\}.$ The sets $A_{i,j}$, $B_{i,j}$ and the vertices $l_{i,v,j},r_{i,v,j}$ for all $v,j$ can be computed in $O(\sqrt{r}\log{r})$ time based on the distance clique between ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ in ${\ensuremath{\mathcal{P}}}_{f,i}'$ and the values of $p^*$ on ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$. The distance clique of ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ in ${\ensuremath{\mathcal{P}}}_{f,i}'$ can be partitioned into $O(\sqrt{r})$ rectangular Monge matrices ${\ensuremath{\mathcal{M}}}_1,\ldots,{\ensuremath{\mathcal{M}}}_q$, where $R_j$ and $C_j$ denote the sets of rows and columns, respectively, of ${\ensuremath{\mathcal{M}}}_i$, such that: - these matrices have $O(\sqrt{r}\log{r})$ rows and columns in total, - each $v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ is a row of $O(\log{r})$ matrices ${\ensuremath{\mathcal{M}}}_j$, - for all elements ${\ensuremath{\mathcal{M}}}_j[u,v]$, where $u\in R_j$, $v\in C_j$ we have ${\ensuremath{\mathcal{M}}}_j[u,v]\geq {\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}'}(u,v)$, - for all $u,v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ there exists such ${\ensuremath{\mathcal{M}}}_j$ that ${\ensuremath{\mathcal{M}}}_j[u,v]={\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}'}(u,v)$. Recall that the Monge property here says that for any two rows $a,b\in R_j$ and any two columns $x,y\in C_j$, such that $a$ is to the left of $b$, and $x$ is above $y$, we have ${\ensuremath{\mathcal{M}}}_j[a,x]+{\ensuremath{\mathcal{M}}}_j[b,y]\geq {\ensuremath{\mathcal{M}}}_j[a,y]+{\ensuremath{\mathcal{M}}}_j[b,x]$. The partition can be computed in $O(r\log{r})$ time when constructing the distance clique. The proof of the above can be found in [@GawrychowskiK18; @MozesW10]. 
Denote by ${\ensuremath{\mathcal{M}}}_{j,p}$ the matrix with entries ${\ensuremath{\mathcal{M}}}_{j,p}[u,v]={\ensuremath{\mathcal{M}}}_j[u,v]-p(u)+p(v)$. ${\ensuremath{\mathcal{M}}}_{j,p}$ is also Monge (see e.g., [@GawrychowskiK18]). Clearly, the non-infinite entries of each matrix ${\ensuremath{\mathcal{M}}}_{j,p}$ are non-negative, since ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}'}(u,v)-p(u)+p(v)\geq 0$ for all $u,v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$. For each ${\ensuremath{\mathcal{M}}}_{j,p}$ we find: - the subset $B_{i,j}\subseteq C_j$ of its columns $b$ such that ${\ensuremath{\mathcal{M}}}_j[a,b]-p(a)+p(b)=0$ for some $a\in R_j$, - the subset $A_{i,j}\subseteq R_j$ of its rows $a$ such that ${\ensuremath{\mathcal{M}}}_j[a,b]-p(a)+p(b)=0$ for some $b\in C_j$. Both $A_{i,j}$ and $B_{i,j}$ can be found by finding row/column minima of ${\ensuremath{\mathcal{M}}}_{j,p}$ using SMAWK algorithm [@AggarwalKMSW87] in $O(|R_j|+|C_j|)$ time. Next, we again use SMAWK algorithm to find for each $a\in A_{i,j}$ the leftmost row minimum ${\ensuremath{\mathcal{M}}}_{j,p}[a,l_{i,a,j}]$ and the rightmost row minimum ${\ensuremath{\mathcal{M}}}_{j,p}[a,r_{i,a,j}]$ of the row $a$ of ${\ensuremath{\mathcal{M}}}_{j,p}$. This takes $O(|A_{i,j}|+|C_j|)$ time as well. Set $\prec_{i,j}$ to be the order of columns in ${\ensuremath{\mathcal{M}}}_j$ restricted to $B_{i,j}$. For brevity, below set $A_j:=A_{i,j}$, $B_j:=B_{i,j}$, $l_{u,j}:=l_{i,u,j}$, $r_{u,j}:=r_{i,u,j}$ and $\prec_{j}:=\prec_{i,j}$. It is sufficient to show that for $u,v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$, a path $u\to v$ exists in $Z$ if and only if for some $j$, $u\in A_j$, $v\in B_j$ and $l_{u,j}\preceq_{j} v\preceq_{j} r_{u,j}$. Let us start with $\implies$ direction. There exists such ${\ensuremath{\mathcal{M}}}_j$ that ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}'}(u,v)={\ensuremath{\mathcal{M}}}_j[u,v]$. Since ${\delta}_{{\ensuremath{\mathcal{P}}}_{f,i}'}(u,v)-p(u)+p(v)=0$, we have ${\ensuremath{\mathcal{M}}}_{j,p}[u,v]=0$. Since ${\ensuremath{\mathcal{M}}}_{j,p}$ has non-negative entries, $u\in A_{j}$ and $v\in B_{j}$. By the definition of $l_{u,j}$ and $r_{u,j}$, $l_{u,j}\preceq_{j} v\preceq_{j} r_{u,j}$. Now suppose $u\in A_{j}$, $v\in B_{j}$ and $l_{u,j}\preceq_j v\preceq_j r_{u,j}$. Clearly ${\ensuremath{\mathcal{M}}}_{j,p}[u,v]\geq 0$. Suppose ${\ensuremath{\mathcal{M}}}_{j,p}[u,v]>0$. Then $l_{u,j}\prec_j v \prec_j r_{u,j}$. Since $v\in B_j$, there exists some row $x\neq u$ of ${\ensuremath{\mathcal{M}}}_{j,p}$ such that ${\ensuremath{\mathcal{M}}}_{j,p}[x,v]=0$. If the row $x$ is above $u$, by Monge property we have $0={\ensuremath{\mathcal{M}}}_{j,p}[x,v]+{\ensuremath{\mathcal{M}}}_{j,p}[u,r_{u,j}]\geq {\ensuremath{\mathcal{M}}}_{j,p}[x,r_{u,j}]+{\ensuremath{\mathcal{M}}}_{j,p}[u,v]>0$, a contradiction. Similarly, if $x$ is below $u$, then $0={\ensuremath{\mathcal{M}}}_{j,p}[u,l_{u,j}]+{\ensuremath{\mathcal{M}}}_{j,p}[x,v]\geq {\ensuremath{\mathcal{M}}}_{j,p}[u,v]+{\ensuremath{\mathcal{M}}}_{j,p}[x,l_{u,j}]>0$, a contradiction. So in fact ${\ensuremath{\mathcal{M}}}_{j,p}[u,v]=0$ and therefore a path $u\to v$ exists in $Z$. Clearly, the total size of sets $A_j,B_j$ is $O(\sqrt{r}\log{r})$ and the total time to find these subsets and all $l_{a,j},r_{a,j}$, is $O(\sqrt{r}\log{r})$, given the preprocessed matrices ${\ensuremath{\mathcal{M}}}_1,\ldots,{\ensuremath{\mathcal{M}}}_q$. 
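To make the interval representation of Lemma \[l:reach\] concrete, here is a small Python sketch, with illustrative names only, that extracts $A_{j}$, $B_{j}$ and the endpoints $l_{u,j}$, $r_{u,j}$ from a single offset matrix ${\ensuremath{\mathcal{M}}}_{j,p}$ by a plain row scan; the paper obtains the same data in $O(|R_j|+|C_j|)$ time per matrix with the SMAWK algorithm, so the quadratic scan below is purely for exposition.

\begin{verbatim}
INF = float("inf")

def zero_intervals(M, rows, cols, p):
    # M[(u, v)]: entry of one Monge matrix (rows `rows`, columns `cols`);
    # p: feasible price function, so M[(u, v)] - p[u] + p[v] >= 0 everywhere.
    A, B, left, right = set(), set(), {}, {}
    for u in rows:
        for v in cols:
            if M[(u, v)] == INF:
                continue
            if M[(u, v)] - p[u] + p[v] == 0:   # a reduced-cost-0 entry
                A.add(u)
                B.add(v)
                if u not in left:
                    left[u] = v                # leftmost zero of row u  (l_{u,j})
                right[u] = v                   # rightmost zero of row u (r_{u,j})
    return A, B, left, right

def represented_reachable(u, v, cols, A, B, left, right):
    # The boundary vertices reachable from u through the reduced-cost-0 paths
    # represented by this matrix form a contiguous column interval of B.
    if u not in A or v not in B:
        return False
    return cols.index(left[u]) <= cols.index(v) <= cols.index(right[u])

# Tiny Monge example: rows a, b; columns x, y, z.
rows, cols = ["a", "b"], ["x", "y", "z"]
M = {("a", "x"): 3, ("a", "y"): 2, ("a", "z"): 2,
     ("b", "x"): 1, ("b", "y"): 1, ("b", "z"): 2}
p = {"a": 2, "b": 1, "x": 0, "y": 0, "z": 0}
A, B, left, right = zero_intervals(M, rows, cols, p)
print(sorted(A), sorted(B))                                       # ['a', 'b'] ['x', 'y', 'z']
print(represented_reachable("a", "z", cols, A, B, left, right))   # True
print(represented_reachable("a", "x", cols, A, B, left, right))   # False
\end{verbatim}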
Recall that in Section \[s:pathcycle\], to bound the total running time, it was enough to bound the total time spent on executing lines \[l:search\], \[l:append\] and \[l:remove\]. We will show that using Lemma \[l:reach\], in terms of the notation from Section \[s:pathcycle\], we can make the total time spent on executing line \[l:search\] only ${\ensuremath{\widetilde{O}}}(\bar{n}+\bar{\ell})$ instead of $O(\bar{m})$, at the cost of increasing the total time of executing line \[l:remove\] to ${\ensuremath{\widetilde{O}}}(\bar{n})$. Specifically, at the beginning of each phase we compute the data from Lemma \[l:reach\] for all pieces ${\ensuremath{\mathcal{P}}}_i$. Since for all $i$ we have the distance cliques ${\ensuremath{\text{DC}}}({\ensuremath{\mathcal{P}}}_{f,i}'')$ computed, this takes $O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{n}{r}\cdot \sqrt{r}\log{r}{\aftergroup\egroup\originalright})=O(n/\sqrt{r}\log{n})$ time. We will also recompute the information of Lemma \[l:reach\] for an affected piece ${\ensuremath{\mathcal{P}}}_i$ after $H_{f,i}''$ is recomputed. As the total number of times some piece is affected is $O(m\log{m})$, this takes $O(m\sqrt{r}\log{r}\log{m})$ time through all phases. Whenever the data of Lemma \[l:reach\] is computed for some piece ${\ensuremath{\mathcal{P}}}_i$, for each pair $(A_{i,j},B_{i,j})$ we store $B_{i,j}\setminus W$ in a dynamic predecessor/successor data structure $D_{i,j}$, sorted by $\prec_{i,j}$. For each $v\in{\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ and $j$ such that $v\in A_{i,j}$ we store a vertex $next_{i,v,j}$ initially equal to $l_{i,v,j}$. It is easy to see that these auxiliary data structures can be constructed in time linear in their size, i.e., $O(\sqrt{r}\log{r})$ time. Hence, the total cost of computing them is $O(\sqrt{m}n/\sqrt{r}\log{n}+m\sqrt{r}\log{r}\log{m})=O{\mathopen{}\mathclose\bgroup\originalleft}(\sqrt{m}{\mathopen{}\mathclose\bgroup\originalleft}(\frac{n}{\sqrt{r}}\frac{\log^2{n}}{\log^2\log{n}}{\aftergroup\egroup\originalright})+mr\log{n}\log{m}{\aftergroup\egroup\originalright})$. Now, to implement line \[l:remove\], when $y$ is inserted into $W$ we go through all pieces ${\ensuremath{\mathcal{P}}}_i$ such that $y\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ and all $B_{i,j}$ such that $y\in B_{i,j}$. For each such $(i,j)$, we remove $y$ from $D_{i,j}$ in $O(\log\log{n})$ time. Recall that the sum of numbers of such pairs $(i,j)$ over all $v\in {\ensuremath{\partial}}{G}$ is $O(\sum_{i=1}^\lambda |{\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}|\log{r})=O(n/\sqrt{r}\log{n})$. Hence, by Lemma \[lem:stepremove\] the total time spent on executing line \[l:remove\] in a single phase is $O(n/\sqrt{r}\log{n}\log\log{n})$. Finally, we implement line \[l:search\] as follows. The unscanned edges of $Y_f''$ that are not between boundary vertices are handled in a simple-minded way as in Lemma \[l:stepunscanned\]. There are only $O(n/\sqrt{r})$ of those, so we can neglect them. In order to be able to efficiently find some unscanned edge $yv$ such that $y,v\in {\ensuremath{\partial}}{G}$ and $v\notin W$, we keep for any $v\in {\ensuremath{\partial}}{G}$ a set $U_v$ of pieces ${\ensuremath{\mathcal{P}}}_i$ such that $v\in{\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ and there may still be some unscanned edges from $v$ to $w\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ in $H_{f,i}''$.
Similarly, for each ${\ensuremath{\mathcal{P}}}_i\in U_v$ we maintain a set $U_{v,i}$ of data structures $D_{i,j}$ such that $next_{i,v,j}\neq{\ensuremath{\mathbf{nil}}}$. Whenever the data of Lemma \[l:reach\] is computed for ${\ensuremath{\mathcal{P}}}_i$, ${\ensuremath{\mathcal{P}}}_i$ is inserted back to $U_v$ for all $v\in {\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$, and the sets $U_{v,i}$ are recomputed with no additional asymptotic overhead. To find an unscanned edge $yv$, for each ${\ensuremath{\mathcal{P}}}_i\in U_y$ we proceed as follows. We attempt to find an unscanned edge $yv$ in ${\ensuremath{\mathcal{P}}}_i$. If we succeed or $U_y$ is empty, we stop. Otherwise we remove ${\ensuremath{\mathcal{P}}}_i$ from $U_y$ and repeat, i.e., try another ${\ensuremath{\mathcal{P}}}_j\in U_y$, unless $U_y$ is empty. To find an unscanned edge $yv$ from a piece ${\ensuremath{\mathcal{P}}}_i$, we similarly try to find an unscanned edge $yv$ in subsequent data structures $D_{i,j}\in U_{v,i}$, and remove the data structures for which we fail from $U_{v,i}$. For a single data structure $D_{i,j}$, we maintain an invariant that an edge $yw$, $w\in D_{i,j}$ has been scanned iff $w\prec_{i,j} next_{i,v,j}$. Hence, to find the next unscanned edge, we first find $x\in D_{i,j}$ such that $next_{i,v,j}\preceq_{i,j} x$ and $x$ is smallest possible. This can be done in $O(\log\log{n})$ time since $D_{i,j}$ is a dynamic successor data structure. If $x$ does not exist or $r_{i,v,j}\prec x$, then, by Lemma \[l:reach\], there are no more unscanned edges $yw$ such that $w\in D_{i,j}$, and thus we remove $D_{i,j}$ from $U_{v,i}$. Otherwise, we return an edge $yx$ and set $next_{i,v,j}$ to be the successor of $x$ in $D_{i,j}$ (or possibly $next_{i,v,j}:={\ensuremath{\mathbf{nil}}}$ if none exists), again in $O(\log\log{n})$ time. Observe that all “failed” attempts to find an edge $yv$, where $v\in {\ensuremath{\partial}}{G}$ can be charged to an insertion of some ${\ensuremath{\mathcal{P}}}_i$ to $U_y$ or to an insertion of some $D_{i,j}$ to $U_{y,i}$. The total number of such insertions is again $O{\mathopen{}\mathclose\bgroup\originalleft}(\sqrt{m}\frac{n}{\sqrt{r}}\log{n}+m\sqrt{r}\log{r}\log{m}{\aftergroup\egroup\originalright})$. A successful attempt, on the other hand, costs $O(\log\log{n})$ worst-case time. Since line \[l:search\] is executed $O(\sqrt{m}n/\sqrt{r}+m\log{n})$ times through all phases, the total time spent on executing line \[l:search\] is again $O{\mathopen{}\mathclose\bgroup\originalleft}(\sqrt{m}{\mathopen{}\mathclose\bgroup\originalleft}(\frac{n}{\sqrt{r}}\frac{\log^2{n}}{\log^2\log{n}}{\aftergroup\egroup\originalright})+mr\log{n}\log{m}{\aftergroup\egroup\originalright})$. By setting $r=\frac{n^{2/3}}{m^{1/3}}\cdot{\mathopen{}\mathclose\bgroup\originalleft}(\frac{\log{n}}{\log{m}\cdot \log^2\log{n}}{\aftergroup\egroup\originalright})^{2/3}$ we obtain the main result of this paper. The min-cost circulation in a planar multigraph can be found in $O{\mathopen{}\mathclose\bgroup\originalleft}((nm)^{2/3}\cdot\frac{\log^{5/3}{n}\log^{1/3}{m}}{\log^{4/3}\log{n}}\cdot \log{(nC)}{\aftergroup\egroup\originalright})$ time. Reducing to the Case with an $r$-division with Few Holes {#s:rdiv} ======================================================== \[t:rdiv\] Let $G$ be a simple triangulated connected plane graph with $n$ vertices. For any $r\in [1,n]$, an $r$-division with few holes of $G$ can be computed in $O(n)$ time. 
Let ${\ensuremath{\overline{G}}}$ be an undirected simple plane graph obtained from $G_0$ by subsequently, (1) ignoring the directions of edges, (2) removing multiple edges (i.e., leaving at most one, arbitrary edge $uv$ for any $\{u,v\}\subseteq V$), and (3) embedding ${\ensuremath{\overline{G}}}$ into plane, (4) triangulating the faces of ${\ensuremath{\overline{G}}}$ using infinite-cost edges. We will never send flow through “dummy” infinite-cost edges; we use them to guarantee some useful topological properties of the pieces. We build an $r$-division ${\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_1,\ldots,{\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_\lambda$ with few holes of ${\ensuremath{\overline{G}}}$ using Theorem \[t:rdiv\]. For each $i$ we have $|V({\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_i)|=O(r)$, $|E({\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_i)|=O(r)$ and $|{\ensuremath{\partial}}{{\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_i}|=O(\sqrt{r})$. At this point each $uv=e\in E({\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}})$ can be contained in many pieces. We choose one piece ${\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_{\{u,v\}}$ containing $e$ and make the cost of $e$ infinite in all the others, effectively turning $e$ into a dummy edge in those pieces. Now we go back to our original graph $G$. We obtain pieces ${\ensuremath{\mathcal{P}}}_1,\ldots,{\ensuremath{\mathcal{P}}}_\lambda\subseteq G$ as follows. For each $uv=e\in E$, we make $e$ the edge of ${\ensuremath{\mathcal{P}}}_i$ such that ${\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_{\{u,v\}}={\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_i$ and make $e$ inherit the embedding of $uv\in E({\ensuremath{\overline{G}}})$. Similarly, for each of the dummy edges $uv=e'\in E({\ensuremath{\overline{G}}})$, we direct it arbitrarily and make it an edge of ${\ensuremath{\mathcal{P}}}_i$ such that $e'\in E({\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_i)$. We set ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}={\ensuremath{\partial}}{{\ensuremath{\overline{{\ensuremath{\mathcal{P}}}}}}_i}$. This way, the properties of an $r$-division with few holes: (1) $|V({\ensuremath{\mathcal{P}}}_i)|=O(r)$, (2) $|{\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}|=O(\sqrt{r})$, and (3) ${\ensuremath{\partial}}{{\ensuremath{\mathcal{P}}}_i}$ lies on $O(1)$ faces of ${\ensuremath{\mathcal{P}}}_i$, are still satisfied. The only difference is that now $|E({\ensuremath{\mathcal{P}}}_i)|$ might be $\omega(r)$. In Section \[s:planar\] we have already justified that this is not a big problem, though. Adding Weights ============== For some graph $H$, let ${\text{indeg}}_H(v)=|\{e\in E(H):e=uv\}|$, ${\text{outdeg}}_H(v)=|\{e\in E(H):e=vu\}|$, and ${\text{mindeg}}_H(v)=\min({\text{indeg}}_H(v),{\text{outdeg}}_H(v))$. When the subscript is omitted, we set ${\text{indeg}}={\text{indeg}}_{G_0}$ and analogously ${\text{outdeg}}={\text{outdeg}}_{G_0}$ and ${\text{mindeg}}={\text{mindeg}}_{G_0}$. It is known that min-cost flow algorithms for unit capacities tend to run faster for so-called *type-2* or *simple* networks, where ${\text{mindeg}}(v)\leq 1$ for all $v$. In this section we show that our algorithm for planar graphs with a small modification due to Asathulla et al. [@AsathullaKLR18] has the same property. To proceed, we generalize a bit the notion of ${\epsilon}$-optimality. Let ${\ensuremath{\kappa}}:V\to \mathbb{Z}_+$ be a vertex weight function. 
Given that, we extend the weight function to edges so that for each $uv=e\in E$ we have ${\ensuremath{\kappa}}(e)={\ensuremath{\kappa}}(u)+{\ensuremath{\kappa}}(v)$. We say that flow $f$ is ${\epsilon}$-optimal wrt. price function $p$ if for each $uv=e\in E_f$, $$c(e)-p(u)+p(v)\geq -{\ensuremath{\kappa}}(e)\cdot {\epsilon}.$$ We again define $G_f'$ to have the same edges as $G_f$, but different cost function $c'$ defined as $$c'(e)={\ensuremath{\mathrm{round}}}(c(e)+{\ensuremath{\kappa}}(e){\epsilon}/2,{\ensuremath{\kappa}}(e){\epsilon}/2).$$ Analogously as before we have $c(e)+{\ensuremath{\kappa}}(e)-{\epsilon}/2 < c'(e)\leq c(e)+{\ensuremath{\kappa}}(e)\cdot{\epsilon}$, ${\ensuremath{\kappa}}(e){\epsilon}<c'(e)+c'({\ensuremath{e}^{\mathrm{R}}})\leq 2{\ensuremath{\kappa}}(e){\epsilon}$. Similarly as in Lemma \[l:peq\], one can prove that $f$ is ${\epsilon}$-optimal wrt. the price function ${\delta}_{G_f'',{\ensuremath{t}}}$. Since the proof of Lemma \[l:correct\] goes through without a change for our generalized notion of ${\epsilon}$-optimality, a similar successive shortest paths algorithm can be applied. To analyze the running time of the algorithm, we need analogues of Lemmas \[l:bound\] and \[l:sumpath\] that take into account the generalized notion of ${\epsilon}$-optimality. Let us set ${\Lambda}=\sum_{v\in V} {\ensuremath{\kappa}}(v)\cdot{\text{mindeg}}(v)$. We also say that ${\ensuremath{\kappa}}$ is *balanced* if for any $x>0$, $\sum_{v:{\ensuremath{\kappa}}(v)>x}{\text{mindeg}}(v)=O({\Lambda}/x)$. Before we proceed we make one simplification. In the following we will additionally assume that $|{\mathrm{exc}}_f(v)|=O({\text{mindeg}}(v))$ for all $v\in V$. To guarantee that, after the weights ${\ensuremath{\kappa}}$ are assigned to the vertices of input graph, we do edge-splitting on the input graph $G$. For each edge $uv=e\in E_0$ of the original graph such that ${\text{mindeg}}(v)={\text{outdeg}}(v)$ or ${\text{mindeg}}(u)={\text{indeg}}(u)$ we introduce a vertex $v_{e}$ “inside” $e$ and set its weight to $1$. Clearly, only $\sum_{v\in V}{\text{mindeg}}(v)$ new vertices and edges are introduced, and each introduced vertex $v_e$ has ${\text{indeg}}(v_e)={\text{outdeg}}(v_e)=1$, so ${\Lambda}=\sum_{v\in V}{\ensuremath{\kappa}}(v){\text{mindeg}}(v)$ grows by a constant factor only. \[l:nonzeroedges\] Let $f^*$ be any flow. For any $v\in V$ ($u\in V$), the number of edges $e=uv$ such that $f^*(e)\neq 0$ is at most $|{\mathrm{exc}}_{f^*}(v)|+2{\text{mindeg}}(v)$ ($|{\mathrm{exc}}_{f^*}(u)|+2{\text{mindeg}}(u)$, resp.). Let $p_v$ be the number of edges $e=uv$ such that $f^*(e)\neq 0$. We have $${\mathrm{exc}}_{f^*}(v)=\sum_{e=uv} f^*(e)=\sum_{\substack{e=uv\\ f^*(e)\neq 0}} f^*(e)=\ \sum_{\substack{e=uv\\ f^*(e)=1}}1-\sum_{\substack{e=uv\\ f^*(e)=-1}}1=p_v-2\cdot \sum_{\substack{e=uv\\ f^*(e)=-1}}1=-p_v+2\cdot \sum_{\substack{e=uv\\ f^*(e)=1}}1,$$ and thus $$p_v={\mathrm{exc}}_{f^*}(v)+2\cdot \sum_{\substack{e=uv\\ f^*(e)=-1}}1\leq {\mathrm{exc}}_{f^*}(v)+2{\text{outdeg}}(v).$$ $$p_v=-{\mathrm{exc}}_{f^*}(v)+2\cdot \sum_{\substack{e=uv\\ f^*(e)=1}}1\leq -{\mathrm{exc}}_{f^*}(v)+2{\text{indeg}}(v).$$ Hence for any $v\in V$ we have: $$\begin{aligned} p_v&\leq\min(-{\mathrm{exc}}_{f^*}(v)+2{\text{indeg}}(v),{\mathrm{exc}}_{f^*}(v)+2{\text{outdeg}}(v))\\ &\leq \min(|{\mathrm{exc}}_{f^*}(v)|+2{\text{indeg}}(v),|{\mathrm{exc}}_{f^*}(v)|+2{\text{outdeg}}(v))\\ &=|{\mathrm{exc}}_{f^*}(v)|+2{\text{mindeg}}(v). 
\end{aligned}$$ The proof that the number of edges $e=uv$ such that $f^*(e)\neq 0$ is at most $|{\mathrm{exc}}_{f^*}(u)|+2{\text{mindeg}}(u)$ is completely analogous. Now, at the beginning of $\textsc{Refine}$, for all $v\in V_0$ such that ${\text{mindeg}}(v)={\text{outdeg}}(v)$ (${\text{mindeg}}(v)={\text{indeg}}(v)$) we add (subtract, resp.) $4{\ensuremath{\kappa}}(v){\epsilon}$ to (from) $p_0(v)$. Recall that after setting $c(e):=c(e)-p_0(u)+p_0(v)$ for all $uv=e\in E$, we have $c(e)\geq -6{\ensuremath{\kappa}}(e){\epsilon}$ for all $e\in E_{f_0}$. We obtain $f_1$ from $f_0$ by sending a unit of flow through each edge $uv=e\in E_{f_0}$ satisfying $c(e)<0$. Fix some $v\in V_0$. Suppose wlog. that ${\text{mindeg}}(v)={\text{outdeg}}(v)$. Consider some edge $uv=e\in E_{f_0}$, where $f_0(e)=0$. Then $e\in E_0$. If $u\in V_0$, then ${\text{mindeg}}(u)={\text{indeg}}(u)$ and hence $c(e)\geq 0$ since it increased by at least $4({\ensuremath{\kappa}}(u)+{\ensuremath{\kappa}}(v)){\epsilon}>2{\ensuremath{\kappa}}(e){\epsilon}$. Otherwise, $c(e)$ increased by at least $4{\ensuremath{\kappa}}(v){\epsilon}\geq 2(1+{\ensuremath{\kappa}}(v)){\epsilon}=2{\ensuremath{\kappa}}(e){\epsilon}$ and again $c(e)\geq 0$. No flow is sent through an incoming edge of $v$ in $G_{f_0}$. Now consider an edge $vu=e\in E_{f_0}$ where $f_0(e)=0$. Since $e\in E_0$, there are ${\text{mindeg}}(v)$ such edges. By Lemma \[l:nonzeroedges\], there are $O(|{\mathrm{exc}}_{f_0}(v)|+{\text{mindeg}}(v))=O({\text{mindeg}}(v))$ edges $e$ incoming to $v$ or outgoing from $v$ such that $f_0(e)\neq 0$. We conclude that there are $O({\text{mindeg}}(v))$ edges $e\in E_{f_0}$ adjacent to $v$ such that $f_1(e)\neq f_0(e)$, so $|{\mathrm{exc}}_{f_1}(v)|=O({\text{mindeg}}(v))$. The proof when ${\text{mindeg}}(v)={\text{indeg}}(v)$ is symmetric. Each vertex $v\in V\setminus V_0$ has ${\text{indeg}}(v)={\text{outdeg}}(v)$, so $|{\mathrm{exc}}_{f_1}(v)|=O({\text{mindeg}}(v))$ is clear in this case. Now we are ready to state the analogues of Lemmas \[l:bound\] and \[l:sumpath\]. [lemma]{}[boundtwo]{}\[l:bound2\] Suppose ${\ensuremath{\kappa}}$ is balanced. Then immediately before each flow augmentation we have: $${\Psi}_f= O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{{\Lambda}{\epsilon}}{\Delta}{\aftergroup\egroup\originalright}).$$ [lemma]{}[sumpathtwo]{}\[l:sumpath2\] Suppose ${\ensuremath{\kappa}}$ is balanced. Then, the sum of values ${\ensuremath{\kappa}}(e)$ over all edges $e$ (counted with multiplicity) on all augmenting paths $P_1,\ldots,P_q$ that we send flow through is $O({\Lambda}\log{m})$. A Faster Algorithm for Planar Networks with Small Min-Degree {#s:planarsimple1} ------------------------------------------------------------ We now show that by appropriately setting vertex weights ${\ensuremath{\kappa}}(v)$ we can obtain a faster min-cost circulation algorithm for certain planar networks. Let $B=\max_{v\in V}{\text{mindeg}}(v)$. Again let us start by computing an $r$-division with few holes. Let us set ${\ensuremath{\kappa}}(v)=\max(1,\sqrt{r}/B)$ if $v\in {\ensuremath{\partial}}{G}$ and ${\ensuremath{\kappa}}(v)=1$ otherwise. We now do edge-splitting as described previously – this can only increase the edge sets of individual pieces by a constant factor.
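The weight assignment and the edge-splitting step above can be summarized by the following short Python sketch; the function name and the plain edge-list representation are illustrative, edge costs are omitted, and the graph is assumed to be given by its directed edge list together with the set of boundary vertices of the $r$-division.

\begin{verbatim}
import math

def assign_weights_and_split(n, edges, boundary, r):
    # edges: directed edges (u, v) of the original graph, boundary: boundary
    # vertices of the r-division, r: the division parameter.
    indeg, outdeg = [0] * n, [0] * n
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
    mindeg = [min(indeg[v], outdeg[v]) for v in range(n)]
    B = max(1, max(mindeg, default=1))      # B = max_v mindeg(v), taken >= 1 here

    # kappa(v) = max(1, sqrt(r)/B) on boundary vertices and 1 elsewhere.
    kappa = [max(1.0, math.sqrt(r) / B) if v in boundary else 1.0 for v in range(n)]

    # Edge splitting: every original edge uv with mindeg(v) = outdeg(v) or
    # mindeg(u) = indeg(u) gets a fresh weight-1 vertex placed "inside" it.
    new_edges, next_id = [], n
    for u, v in edges:
        if mindeg[v] == outdeg[v] or mindeg[u] == indeg[u]:
            kappa.append(1.0)
            new_edges += [(u, next_id), (next_id, v)]
            next_id += 1
        else:
            new_edges.append((u, v))

    # Lambda = sum_v kappa(v) * mindeg(v) over the original vertices; the split
    # vertices have in- and out-degree 1, so they change it by a constant factor only.
    Lambda = sum(kappa[v] * mindeg[v] for v in range(n))
    return kappa, new_edges, Lambda
\end{verbatim}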
We now use the successive shortest path algorithm implemented as in Section \[s:planar\], with the following change: while $\Delta\leq \max(1,\sqrt{r}/B){\epsilon}$, run the implementation for general graphs, and switch to the implementation specialized for planar graphs that operates on a compressed graph only when $\Delta>\max(1,\sqrt{r}/B){\epsilon}$. Clearly, the running time of the algorithm before $\Delta$ becomes more than $\max(1,\sqrt{r}/B){\epsilon}$ is $O(m\max(1,\sqrt{r}/B)\log{n})={\ensuremath{\widetilde{O}}}(n\sqrt{r})$. Observe that since $|{\ensuremath{\partial}}{G}|=O(n/\sqrt{r})$, we have $${\Lambda}=\sum_{v\in {\ensuremath{\partial}}{G}}\max{\mathopen{}\mathclose\bgroup\originalleft}(1,\frac{\sqrt{r}}{B}{\aftergroup\egroup\originalright}){\text{mindeg}}(v)+\sum_{v\in V\setminus {\ensuremath{\partial}}{G}}{\text{mindeg}}(v)=O{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{v\in V}{\text{mindeg}}(v){\aftergroup\egroup\originalright})=O(\min(m,nB)).$$ We now show that ${\ensuremath{\kappa}}$ is balanced. Let $S_x=\sum_{v:{\ensuremath{\kappa}}(v)>x}{\text{mindeg}}(v)$. We need to show that $S_x=O({\Lambda}/x)$ for any $x>0$. If $x\geq\max{\mathopen{}\mathclose\bgroup\originalleft}(1,\sqrt{r}/B{\aftergroup\egroup\originalright})$, then clearly $S_x=0$. If $x\in {\mathopen{}\mathclose\bgroup\originalleft}[1,\max(1,\sqrt{r}/B){\aftergroup\egroup\originalright})$, then $S_x=\frac{1}{\max(1,\sqrt{r}/B)}\sum_{v\in {\ensuremath{\partial}}{G}} \max(1,\sqrt{r}/B)\cdot {\text{mindeg}}(v)\leq {\Lambda}/\max(1,\sqrt{r}/B)=O({\Lambda}/x)$. Finally, if $0< x<1$, then $S_x=\sum_{v\in V}{\text{mindeg}}(v)\leq {\Lambda}\leq O({\Lambda}/x)$. Lemma \[l:bound2\] implies that the algorithm finishes in $O(\sqrt{{\Lambda}})$ phases. Lemma \[l:sumpath2\], in turn, implies that the sum of ${\ensuremath{\kappa}}(e)$ over all edges that we ever send flow through, is $O({\Lambda}\log{n})$. Since for an edge $e=uv$, ${\ensuremath{\kappa}}(e)={\ensuremath{\kappa}}(u)+{\ensuremath{\kappa}}(v)$, we conclude that the vertices of ${\ensuremath{\partial}}{G}$ appear $O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{{\Lambda}\log{n}}{\max(1,\sqrt{r}/B)}{\aftergroup\egroup\originalright})$ times on the augmenting paths. Note also that when $\Delta>\max(1,\sqrt{r}/B){\epsilon}$, by Lemma \[l:bound2\] we have ${\Psi}_f=O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{{\Lambda}\log{n}}{\max(1,\sqrt{r}/B)}{\aftergroup\egroup\originalright})$. Hence, when we switch to operate on a compressed graph, there are $O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{{\Lambda}\log{n}}{\max(1,\sqrt{r}/B)}{\aftergroup\egroup\originalright})$ augmenting paths to be found. As a result, we can observe that the total number of affected pieces is also $O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{{\Lambda}\log{n}}{\max(1,\sqrt{r}/B)}{\aftergroup\egroup\originalright})$: the number of pieces affected as a result of sending flow through a single path $P$ is at most $1+2|V(P)\cap {\ensuremath{\partial}}{G}|$. Recall that the running time of the algorithm of Section \[s:planarpaths\] was ${\ensuremath{\widetilde{O}}}{\mathopen{}\mathclose\bgroup\originalleft}(\sqrt{m}\frac{n}{\sqrt{r}}{\aftergroup\egroup\originalright})$ plus the time needed for sending the flow through all the found paths, which was back then $O(r\log{n})$ times the number of affected pieces, i.e., $O(mr\log^2{n})$.
Now we can benefit from a smaller number of affected pieces and get ${\ensuremath{\widetilde{O}}}{\mathopen{}\mathclose\bgroup\originalleft}(\sqrt{{\Lambda}}\frac{n}{\sqrt{r}}+\frac{{\Lambda}r}{\max(1,\sqrt{r}/B)}+n\sqrt{r}{\aftergroup\egroup\originalright})$ time. In particular, we obtain the following bound for $B$ small enough. Let $B=\max_{v\in V}{\text{mindeg}}(v)=O(n^{1/7})$. Then $\textsc{Refine}$ can be implemented in planar graphs in ${\ensuremath{\widetilde{O}}}((nB)^{5/4})$ time. Observe that the term $O(n\sqrt{r})$ is dominated by $O(\frac{{\Lambda}r}{\sqrt{r}/B})$. Moreover, $O{\mathopen{}\mathclose\bgroup\originalleft}(\sqrt{{\Lambda}}\frac{n}{\sqrt{r}}{\aftergroup\egroup\originalright})=O(\sqrt{nB}\frac{n}{\sqrt{r}})$ and $O(\frac{{\Lambda}r}{\sqrt{r}/B})=O(n\sqrt{r}B^2)$ terms are balanced for $r=n^{1/2}B^{-3/2}$ and from $B=O(n^{1/7})$ it follows that $B=O(\sqrt{r})$. By plugging $r=n^{1/2}B^{-3/2}$ into $O(n\sqrt{r}B^2)$ we obtain $O((nB)^{5/4})$. Proofs of Lemmas \[l:bound2\] and \[l:sumpath2\] ------------------------------------------------ \[l:gplussimple\] For each $v\in V$, ${\text{mindeg}}_{G_f^+}(v)\leq {\text{mindeg}}(v)$. Observe first that $E^+_f$ is a subset of $E^*_f=\{e\in E:f_0(e)=u(e)\}$. Moreover, for $e\in E$, $e\in E^*$ if and only if ${\ensuremath{e}^{\mathrm{R}}}\notin E^*$. Suppose ${\text{outdeg}}(u)\leq{\text{indeg}}(u)$. The other case (i.e., ${\text{outdeg}}(u)>{\text{indeg}}(u)$) is similar. Let $xu=e'\in E^*$. Equivalently ${\ensuremath{e'}^{\mathrm{R}}}=ux$ and $f_0({\ensuremath{e'}^{\mathrm{R}}})<u({\ensuremath{e'}^{\mathrm{R}}})$. Note that $u(e')=0 \land f_0({\ensuremath{e'}^{\mathrm{R}}})<u({\ensuremath{e'}^{\mathrm{R}}})$ is equivalent to $u({\ensuremath{e'}^{\mathrm{R}}})=1\land f_0({\ensuremath{e'}^{\mathrm{R}}})=0$. On the other hand $u(e')=1\land f_0({\ensuremath{e'}^{\mathrm{R}}})<u({\ensuremath{e'}^{\mathrm{R}}})$ is equivalent to $u(e')=1\land f_0(e')=1$. Since $f_0$ is a circulation, there exists some bijection between the edges $e'=xu$ satisfying $f_0(e')=1$ and edges $e''=yu$ satisfying $f_0(e'')=-1$. Therefore, there also exists a bijection between the edges $e'=xu$ satisfying $f_0(e')=1$ and the edges ${\ensuremath{e''}^{\mathrm{R}}}=uy$ satisfying $u({\ensuremath{e''}^{\mathrm{R}}})=1\land f_0({\ensuremath{e''}^{\mathrm{R}}})=1$. By combining the above two cases, we conclude that there exists a bijection between the edges $xu=e'\in E^*$ and edges $e''=yu$ satisfying $u({\ensuremath{e''}^{\mathrm{R}}})=1$. The number of those is clearly equal to the number of edges $uy=e'''\in E$ satisfying $u(e''')=1$, i.e., ${\text{outdeg}}(u)$. Hence we obtain $${\text{mindeg}}_{G_f^+}(u)\leq {\text{indeg}}_{G_f^+}(u)\leq {\text{outdeg}}(u)={\text{mindeg}}(u).$$ Let $V_\Delta\subseteq V$ be the set of vertices $v$ satisfying $21{\ensuremath{\kappa}}(v){\epsilon}>\Delta$. Denote by $S_\Delta$ the sum $\sum_{v\in V_\Delta}{\text{mindeg}}(v)$. Since ${\ensuremath{\kappa}}$ is balanced $S_\Delta=O({\Lambda}{\epsilon}/\Delta)$. Similarly as in the proof of Lemma \[l:bound\], let $f=f_i$ for some $i$, $1\leq i\leq q$ and let $p:=p_i^*$, where $p_i^*$ is as in Lemma \[l:peq\]. Recall that $p(x)\geq \Delta$ for all $x\in X$ and $p(d)\leq 0$ for all $d\in D$. For any $uv=e\in E_f^+$ we have $$p(u)-p(v)\leq c(e)+{\ensuremath{\kappa}}(e)=-c({\ensuremath{e}^{\mathrm{R}}})+{\ensuremath{\kappa}}(e)\leq 7{\ensuremath{\kappa}}(e){\epsilon},$$ since we had $c({\ensuremath{e}^{\mathrm{R}}})\geq -6{\ensuremath{\kappa}}(e){\epsilon}$ after edge splitting. 
Let $S_z=\{v\in V\setminus V_\Delta: p(v)\geq z-7{\ensuremath{\kappa}}(v){\epsilon}\land p(v)<z+7{\ensuremath{\kappa}}(v){\epsilon}\}$. Suppose for simplicity that $\Delta=l\cdot\frac{{\epsilon}}{2}$, where $l$ is a positive integer divisible by $3$. We now prove that there exists $a\in \{k\cdot {\epsilon}/2:k\in \mathbb{Z}\}\cap (\frac{\Delta}{3},\frac{2\Delta}{3}]$ such that $$\sum_{v\in S_a} {\text{mindeg}}(v)=O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{{\Lambda}{\epsilon}}{\Delta}{\aftergroup\egroup\originalright}).$$ To this end we first prove that the sum of sums $(\sum_{v\in S_z} {\text{mindeg}}(v))$ over all $z\in \{k\cdot {\epsilon}/2:k\in \mathbb{Z}\}$ is $O({\Lambda})$. Indeed, each $v\in V$ is included in $O({\ensuremath{\kappa}}(v))$ sets $S_z$ considered and hence $$\sum_z \sum_{v\in S_z} {\text{mindeg}}(v)=\sum_{v\in V\setminus V_\Delta}{\text{mindeg}}(v)\cdot |\{z:v\in S_z\}|\leq \sum_{v\in V}{\text{mindeg}}(v)\cdot O({\ensuremath{\kappa}}(v))=O({\Lambda}).$$ Finally note that there are $\frac{2\Delta}{3{\epsilon}}$ different values $z\in \{k\cdot {\epsilon}/2:k\in \mathbb{Z}\}\cap (\frac{\Delta}{3},\frac{2\Delta}{3}]$, so for some $a$ of this form we indeed have $\sum_{v\in S_a} {\text{mindeg}}(v)=O(\frac{{\Lambda}{\epsilon}}{\Delta})$. Let the sets $L(z)$ be defined as in the proof of Lemma \[l:bound\]. Recall that $X\subseteq L(z)$ for any $z\leq \Delta$ and $D\cap L(z)=\emptyset$ for any $z>0$. Consider the set $$A={\mathopen{}\mathclose\bgroup\originalleft}(L(a)\setminus \{v\in S_a\cup V_\Delta: {\text{outdeg}}_{G_f^+}(v)>{\text{indeg}}_{G_f^+}(v)\}{\aftergroup\egroup\originalright})\cup \{v\in S_a\cup V_\Delta: {\text{outdeg}}_{G_f^+}(v)\leq {\text{indeg}}_{G_f^+}(v)\}.$$ Note that for all $v\in S_a$ we have $$p(v)\geq a-7{\ensuremath{\kappa}}(v){\epsilon}\geq a-\Delta/3> \Delta/3-\Delta/3=0,$$ $$p(v)<a+7{\ensuremath{\kappa}}(v){\epsilon}<a+\Delta/3 \leq2\Delta/3+\Delta/3=\Delta.$$ Therefore $S_a\cap X=\emptyset$ and $S_a\cap D=\emptyset$. Since $X\subseteq L(a)$, we have $X\setminus A\subseteq V_\Delta$. We also have $A\cap D\subseteq V_\Delta$. Thus, $${\Psi}_f=\sum_{v\in A}{\mathrm{exc}}_f(v)-\sum_{v\in A\cap D}{\mathrm{exc}}_f(v)+\sum_{v\in X\setminus A}{\mathrm{exc}}_f(v).$$ However, since $|{\mathrm{exc}}_f(v)|=O({\text{mindeg}}(v))$, the latter two sums are of order $O(S_\Delta)$. Hence, to finish the proof it is sufficient to show that $\sum_{v\in A}{\mathrm{exc}}_f(v)=O{\mathopen{}\mathclose\bgroup\originalleft}({\Lambda}{\epsilon}/\Delta+S_\Delta{\aftergroup\egroup\originalright})$. By Lemma \[l:goldberg\], to bound the sum, we can alternatively count the number of edges $e=uv\in E_f^+$ such that $u\in A$ and $v\notin A$. Suppose $u\in A\cap (S_a\cup V_\Delta)$. Then ${\text{outdeg}}_{G_f^+}(u)={\text{mindeg}}_{G_f^+}(u)$, and thus, by Lemma \[l:gplussimple\] there are no more than $\sum_{v\in S_a\cup V_\Delta}{\text{mindeg}}_{G_f^+}(v)\leq \sum_{v\in S_a\cup V_\Delta}{\text{mindeg}}(v)=O(\frac{{\Lambda}{\epsilon}}{\Delta}+S_\Delta)$ such edges. Similarly, we can prove that there are $O(\frac{{\Lambda}{\epsilon}}{\Delta}+S_\Delta)$ such edges $uv$ with $v\in S_a\cup V_\Delta$. Now suppose $u\in A\setminus (S_a\cup V_\Delta)\subseteq L(a)$ and $v\notin (S_a\cup V_\Delta)$. Since $v\notin A$ and $v\notin (S_a\cup V_\Delta)$, $v\notin L(a)$ also holds. Observe that $p(u)\geq a+7{\ensuremath{\kappa}}(u){\epsilon}$, as otherwise we would have $p(u)\geq a$ and $p(u)<a+7{\ensuremath{\kappa}}(u){\epsilon}$ which would imply $u\in S_a$.
Similarly, $p(v)<a-7{\ensuremath{\kappa}}(v){\epsilon}$, as otherwise we would have $p(v)<a$ and $p(v)\geq a-7{\ensuremath{\kappa}}(v){\epsilon}$ which would imply $v\in S_a$. By combining the two inequalities, we obtain: $$p(u)-p(v)> a+7{\ensuremath{\kappa}}(u){\epsilon}-a+7{\ensuremath{\kappa}}(v){\epsilon}=7({\ensuremath{\kappa}}(u)+{\ensuremath{\kappa}}(v)){\epsilon}=7{\ensuremath{\kappa}}(e){\epsilon},$$ a contradiction. Hence, $u\in A\setminus (S_a\cup V_\Delta)$ and $v\notin (S_a\cup V_\Delta)$ cannot hold at once. \[l:diffbound2\] $c(f_0)-c(f_{q+1})=O({\Lambda}{\epsilon})$. By Lemma \[l:nonzeroedges\], for any circulation and any vertex $v\in V$ ($u\in V$), there can be no more than $2{\text{mindeg}}(v)$ ($2{\text{mindeg}}(u)$, resp.) edges $uv$ with non-zero flow. Since both $f_0$ and $f_{q+1}$ are circulations, for $v\in V$ ($u\in V$) there can be at most $4{\text{mindeg}}(v)$ ($4{\text{mindeg}}(u)$, resp.) edges $e=uv$ such that $(f_0(e)-f_{q+1}(e))$ is non-zero. Recall from the proof of Lemma \[l:diffbound\] that we had $(f_0(e)-f_{q+1}(e))c(e)\leq 2{\epsilon}$. With our more general definition of ${\epsilon}$-optimality, we can analogously prove that after edge splitting we have $(f_0(e)-f_{q+1}(e))c(e)\leq 6{\ensuremath{\kappa}}(e)\cdot{\epsilon}\leq 12\max({\ensuremath{\kappa}}(u),{\ensuremath{\kappa}}(v)){\epsilon}$, i.e., $(f_0(e)-f_{q+1}(e))c(e)=O(\max({\ensuremath{\kappa}}(u),{\ensuremath{\kappa}}(v)){\epsilon})$. We thus have $$\begin{aligned} c(f_0)-c(f_{q+1}) &= \frac{1}{2}\sum_{e\in E}(f_0(e)-f_{q+1}(e))c(e)\\ &=\frac{1}{2}\sum_{u\in V}\sum_{uv=e\in E} (f_0(e)-f_{q+1}(e))c(e)\\ &=O{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{u\in V}\sum_{\substack{uv=e\in E\\ {\ensuremath{\kappa}}(u)\geq{\ensuremath{\kappa}}(v)}} (f_0(e)-f_{q+1}(e))c(e) +\sum_{v\in V}\sum_{\substack{uv=e\in E\\ {\ensuremath{\kappa}}(u)<{\ensuremath{\kappa}}(v)}} (f_0(e)-f_{q+1}(e))c(e){\aftergroup\egroup\originalright})\\ &= O{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{u\in V}\sum_{\substack{uv=e\in E\\ {\ensuremath{\kappa}}(u)\geq{\ensuremath{\kappa}}(v)}} (f_0(e)-f_{q+1}(e))\cdot {\ensuremath{\kappa}}(u)\cdot{\epsilon}+\sum_{v\in V}\sum_{\substack{uv=e\in E\\ {\ensuremath{\kappa}}(u)<{\ensuremath{\kappa}}(v)}} (f_0(e)-f_{q+1}(e))\cdot {\ensuremath{\kappa}}(v)\cdot {\epsilon}{\aftergroup\egroup\originalright})\\ &= O{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{u\in V} {\text{mindeg}}(u) \cdot {\ensuremath{\kappa}}(u)\cdot{\epsilon}+\sum_{v\in V} {\text{mindeg}}(v)\cdot {\ensuremath{\kappa}}(v)\cdot {\epsilon}{\aftergroup\egroup\originalright})\\ &= O{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{v\in V}{\ensuremath{\kappa}}(v) \cdot {\text{mindeg}}(v)\cdot {\epsilon}{\aftergroup\egroup\originalright}) = O({\Lambda}{\epsilon}). \end{aligned}$$ \[l:apprbound2\] For any $i=1,\ldots,q$, $|c'(f_i)-c(f_i)|=O({\Lambda}{\epsilon})$. Using Lemma \[l:nonzeroedges\], we obtain the following chain of inequalities.
$$\begin{aligned} |c'(f_i)-c(f_i)|&=\frac{1}{2}{\mathopen{}\mathclose\bgroup\originalleft}|\sum_{\substack{uv=e\in E\\ f_i(e)\neq 0}} f_i(e)(c'(e)-c(e)){\aftergroup\egroup\originalright}|\\ &\leq \frac{1}{2}\sum_{\substack{uv=e\in E\\ f_i(e)\neq 0}} |f_i(e)(c'(e)-c(e))|\\ &\leq \frac{1}{2}\sum_{\substack{uv=e\in E\\ f_i(e)\neq 0}} (c'(e)-c(e))\\ &= \frac{1}{2}\sum_{\substack{uv=e\in E\\ f_i(e)\neq 0}} ({\ensuremath{\kappa}}(u)+{\ensuremath{\kappa}}(v))\cdot {\epsilon}\\ &\leq {\epsilon}{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{u\in V}\sum_{\substack{uv=e\in E\\ f_i(e)\neq 0\\ {\ensuremath{\kappa}}(u)\geq{\ensuremath{\kappa}}(v)}} {\ensuremath{\kappa}}(u) +\sum_{v\in V}\sum_{\substack{uv=e\in E\\ f_i(e)\neq 0\\{\ensuremath{\kappa}}(u)<{\ensuremath{\kappa}}(v)}} {\ensuremath{\kappa}}(v){\aftergroup\egroup\originalright})\\ &=O{\mathopen{}\mathclose\bgroup\originalleft}({\epsilon}\sum_{v\in V}{\ensuremath{\kappa}}(v)\cdot {\text{mindeg}}(v){\aftergroup\egroup\originalright})\\ &=O({\Lambda}{\epsilon}). \end{aligned}$$ Let $\Delta_i$ and $p^*_i$ be defined as in the proof of Lemma \[l:sumpath\]. As in that proof, we bound the value $c(f_1)-c(f_{q+1})$ from both sides. By Lemma \[l:diffbound2\], we have $c(f_1)-c(f_{q+1})\leq c(f_0)-c(f_{q+1})=O({\Lambda}{\epsilon})$. On the other hand, by Lemma \[l:apprbound2\] we have $c'(f_1)-c'(f_{q+1})=c(f_1)-c(f_{q+1})+O({\Lambda}{\epsilon})$. By an analogue of Lemma \[l:cmonoton\], we have $$c'(f_1)-c'(f_{q+1})=\sum_{i=1}^q (c'(f_i)-c'(f_{i+1}))=\sum_{i=1}^q {\mathopen{}\mathclose\bgroup\originalleft}(\sum_{e\in P_i}{\ensuremath{\kappa}}(e)-\Delta_i{\aftergroup\egroup\originalright}).$$ Hence, by Lemma \[l:bound2\] we obtain $$\sum_{i=1}^q \sum_{e\in P_i}{\ensuremath{\kappa}}(e)=\sum_{i=1}^q\Delta_i +c(f_1)-c(f_{q+1})+O({\Lambda}{\epsilon})=\sum_{i=1}^q\Delta_i+O({\Lambda}{\epsilon})=\sum_{i=1}^q O{\mathopen{}\mathclose\bgroup\originalleft}(\frac{{\Lambda}{\epsilon}}{i}{\aftergroup\egroup\originalright})+O({\Lambda}{\epsilon})=O({\Lambda}{\epsilon}\log{n}).$$ <span style="font-variant:small-caps;">Refine</span> in $O(m^{3/2})$ time {#s:dial} ========================================================================= The so-called Dial's implementation of Dijkstra's algorithm [@Dial69] can compute the distances to a single sink ${\ensuremath{t}}$ from all vertices satisfying ${\delta}_{G}(v,{\ensuremath{t}})\leq K{\epsilon}$ in $O(m+K)$ time, assuming all costs are non-negative integer multiples of ${\epsilon}$. Hence, if we are given a possibly negatively-weighted graph with a price function $p$, in $O(m+K)$ time we can compute the distances to ${\ensuremath{t}}$ from all vertices such that ${\delta}_{G}(v,{\ensuremath{t}})-p(v)+p({\ensuremath{t}})\leq K{\epsilon}$. Unfortunately, we cannot use Dial's algorithm directly, since the (reduced) distances to ${\ensuremath{t}}$ in $G_f''$ can in general be $\omega(m)$. However, one can observe two things. First, by Lemma \[l:bound\], $\Delta\leq 6{\epsilon}m$. Second, in the implementation we do not need to use the price function $p^*$ from Lemma \[l:correct\], as we do in line \[l:augment\]. In fact, by Lemma \[l:incr\], any feasible price function $p$ of $G_f''$ will do, provided that $p(s)-p(t)=\Delta$. We will maintain the invariant that $p({\ensuremath{t}})\leq 0$ and $p({\ensuremath{s}})=0$. Clearly, the invariant is satisfied initially. In line \[l:dijkstra\], we run Dijkstra's algorithm with price function $p$ and stop it when it visits ${\ensuremath{s}}$.
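For intuition, a minimal bucket-queue (Dial-style) sketch of such a run is given below; it is an illustration under assumptions (an adjacency-list input with reduced costs already expressed as non-negative integer multiples of ${\epsilon}$, and a cutoff $K$), not the implementation used in this paper.

```python
from collections import defaultdict

def dial_scan_to_sink(rev_adj, t, s, K):
    """Scan vertices in non-decreasing reduced distance to the sink t and
    stop once s is settled.  Distances are kept in units of eps: rev_adj[v]
    lists pairs (u, c) for edges u -> v whose reduced cost is c*eps, with c
    a non-negative integer; K bounds the distances of interest.  The input
    format is an illustrative assumption, not taken from the paper."""
    dist = {t: 0}
    buckets = defaultdict(list)
    buckets[0].append(t)
    for d in range(K + 1):             # O(m + K) work overall
        for v in buckets[d]:
            if dist[v] != d:           # entry superseded by a smaller tentative distance
                continue
            if v == s:
                return dist            # s settled with distance d*eps
            for u, c in rev_adj.get(v, []):
                nd = d + c
                if nd <= K and (u not in dist or nd < dist[u]):
                    dist[u] = nd
                    buckets[nd].append(u)
    return dist
```

Only the buckets up to the distance at which ${\ensuremath{s}}$ is settled are ever touched, which is what yields the $O(m+K)$ bound, and hence the $O(m+\Delta/{\epsilon})=O(m)$ bound derived next.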
Since $p({\ensuremath{s}})=0$, and Dijkstra’s algorithm visits vertices $v$ in non-decreasing order of values ${\delta}_{G_f''}(v,{\ensuremath{t}})-p(v)+p({\ensuremath{t}})$, for any visited $v$ we have $$\Delta={\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})\geq {\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})+p({\ensuremath{t}})={\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})-p({\ensuremath{s}})+p({\ensuremath{t}})\geq {\delta}_{G_f''}(v,{\ensuremath{t}})-p(v)+p({\ensuremath{t}}).$$ Hence, indeed such a Dijkstra run can be performed in $O(m+\Delta/{\epsilon})=O(m)$ time using Dial’s implementation. Next, we set $p(v):={\delta}_{G_f''}(v,{\ensuremath{t}})-{\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})$ for all visited $v$, whereas for the unvisited vertices $v$ we leave $p(v)$ unchanged. Afterwards, we have $p({\ensuremath{s}})=0$ and $p({\ensuremath{t}})=-\Delta<0$. We need to verify that $p$ remains a feasible price function after it is altered. Let $U$ be the set of visited vertices. First note that before the substitution, for any $u\in U$ we have ${\delta}_{G_f''}(u,{\ensuremath{t}})-{\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})\leq p(u)-p({\ensuremath{s}})=p(u)$. Therefore, for each $u\in U$, its price cannot increase. Now, let $uv=e\in E(G_f'')$. If $\{u,v\}\subseteq U$, then the reduced cost of $e$ is non-negative, since $p$ is a shifted distance-to function ${\delta}_{G_f'',t}$ on these vertices. If $v\notin U$, then $c'_p(e)$ cannot decrease due to substitution, and we had $c'_p(e)\geq 0$ before, so $c'_p(e)\geq 0$ afterwards as well. Finally, suppose $u\notin U$ and $v\in U$. Then, since $u$ was not visited before Dijkstra’s run was terminated, we had $(c'(e)-p(u)+p(v))+{\delta}_{G_f''}(v,{\ensuremath{t}})-p(v)+p({\ensuremath{t}})\geq {\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})-p({\ensuremath{s}})+p({\ensuremath{t}})$, or equivalently $c'(e)-p(u)+({\delta}_{G_f''}(v,{\ensuremath{t}})-{\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}}))\geq -p({\ensuremath{s}})=0$. But $p(u)$ is not changed afterwards, and $p(v):={\delta}_{G_f''}(v,{\ensuremath{t}})-{\delta}_{G_f''}({\ensuremath{s}},{\ensuremath{t}})$, so $c'_p(e)\geq 0$ afterwards as well. So indeed $p$ remains feasible. [^1]: Supported by ERC Consolidator Grant 772346 TUgbOAT and the Polish National Science Centre 2018/29/N/ST6/00757 grant. [^2]: Supported by ERC Consolidator Grant 772346 TUgbOAT. [^3]: It is known that simple planar graphs have $O(n)$ edges. However, multiple parallel edges (with possibly different costs) are useful in the unit-capacity min-cost flow problem, as they allow us to encode larger edge capacities. Therefore, in this paper we work with planar multigraphs. [^4]: Gabow-Tarjan algorithm for min-cost bipartite matching has a similar property, which was instrumental for obtaining the recent results on minimum-cost planar bipartite matching [@AsathullaKLR18; @LahnR19] [^5]: This is sometimes called *the blocking flow* problem and can be solved for unit capacities in linear time. [^6]: Dense distance graphs have been defined differently multiple times in the literature. We use the definition of [@GawrychowskiK18; @nussbaum2014network] that captures all the known use cases (see [@GawrychowskiK18] for discussion).
--- abstract: 'A prediction of the steady-state reconnection electric field in asymmetric reconnection is obtained by maximizing the reconnection rate as a function of the opening angle made by the upstream magnetic field on the weak magnetic field (magnetosheath) side. The prediction is within a factor of two of the widely examined asymmetric reconnection model \[Cassak and Shay, Phys. Plasmas [**14**]{}, 102114, 2007\] in the collisionless limit, and they scale the same over a wide parameter regime. The previous model had the effective aspect ratio of the diffusion region as a free parameter, which simulations and observations suggest is on the order of 0.1, but the present model has no free parameters. In conjunction with the symmetric case \[Liu et al., Phys. Rev. Lett. [**118**]{}, 085101, 2017\], this work further suggests that this nearly universal number $0.1$, essentially the normalized fast reconnection rate, is a geometrical factor arising from maximizing the reconnection rate within magnetohydrodynamic (MHD)-scale constraints.' author: - 'Yi-Hsin Liu' - 'M. Hesse' - 'P. A. Cassak' - 'M. A. Shay' - 'S. Wang' - 'L. -J. Chen' title: On the collisionless asymmetric magnetic reconnection rate --- [*Introduction–*]{} Magnetic reconnection at Earth’s magnetopause not only allows the transport of solar wind plasmas into Earth’s magnetosphere but also enhances the convection of magnetic flux to Earth’s night side [@dungey61a]. The magnetic fields and plasma conditions on the two sides of the magnetopause current sheet are typically different \[e.g., [@phan96a]\]; a feature that also applies to current sheets in planetary [@masters15a; @fuselier14b], solar [@murphy12a], laboratory [@yoo14a], fusion [@mirnov06a] and turbulent [@servidio09a; @zhdankin13a] plasmas. Reconnection with these different upstream conditions is commonly called [*asymmetric*]{}. To model the global circulation of magnetospheric plasmas around Earth and the magnetic energy release therein, it is crucial to understand how fast the magnetic flux is processed by asymmetric reconnection at Earth’s magnetopause \[e.g., [@borovsky08a; @borovsky13a]\]. One measure of the reconnection rate is the strength of the reconnection electric field inside the reconnection diffusion region which, according to Faraday’s law, is proportional to the magnetic flux change rate at the diffusion region. At Earth’s magnetopause, directly measuring the reconnection electric field has been conducted although it remains challenging \[e.g., [@LJChen17a; @mozer07a; @vaivads04a]\]. A good proxy of the reconnection rate is the convective electric field upstream of the diffusion region induced by the inflowing plasma. Such an electric field was inferred from the ion velocity into the ion diffusion region \[e.g., [@SWang15a; @mozer10a; @fuselier05a; @phan01a]\], or from the electron velocity into the electron diffusion region [@LJChen17a]. The reconnection rate can also be estimated from the magnitude of reconnected magnetic fields downstream of the ion diffusion region \[e.g., [@phan01a]\] using Sweet-Parker scaling [@sweet58a; @parker57a], or from the energy conversion rate [@rosenqvist08a]. Observational evidence [@SWang15a; @fuselier16a; @mozer10a] suggests that the strength of the reconnection electric field follows the scaling $$E_{CS}=2\left(\frac{B_1B_2}{B_1+B_2}\right)\left(\frac{V_{out}}{c}\right)\left(\frac{\delta}{L}\right)_{eff}, \label{CS}$$ that is derived using conservation laws [@cassak07b]. 
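For readers who want to evaluate this scaling directly, a minimal numerical helper is sketched below (Gaussian units; the symbols follow the definitions given in the next paragraph, and the function is only an illustration of Eq. (\[CS\]), with the effective aspect ratio supplied as an input rather than predicted):

```python
import math

def cassak_shay_rate(B1, B2, rho1, rho2, delta_over_L=0.1, c=2.998e10):
    """Eq. (CS) in Gaussian units.  B1, B2: reconnecting field components on
    the magnetosheath and magnetosphere sides; rho1, rho2: mass densities;
    delta_over_L: effective aspect ratio of the diffusion region (the ~0.1
    value discussed in the text); c defaults to the speed of light in cm/s."""
    rho_bar = (B1 * rho2 + B2 * rho1) / (B1 + B2)            # hybrid density
    v_out = math.sqrt(B1 * B2 / (4.0 * math.pi * rho_bar))   # hybrid Alfven speed
    return 2.0 * (B1 * B2 / (B1 + B2)) * (v_out / c) * delta_over_L
```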
$B_1$ and $B_2$ are the reconnecting component of magnetic fields at the magnetosheath and magnetosphere sides, respectively. The outflow speed $V_{out}=(B_1B_2/4\pi {\bar \rho})^{1/2}$ is the hybrid Alfvén speed based on a hybrid density ${\bar \rho}=(B_1\rho_2+B_2\rho_1)/(B_1+B_2)$. Here $(\delta/L)_{eff}$ is the [*effective aspect ratio*]{} of the diffusion region, which is a free parameter in this model for collisionless reconnection. Observations suggest that $(\delta/L)_{eff}$ is of order 0.1. Numerical simulations have also confirmed this scaling and demonstrated that $(\delta/L)_{eff}\sim 0.1$; these include local MHD simulations with a localized resistivity [@birn08a], local two-fluid [@cassak08a] and local particle-in-cell (PIC) [@malakit10a; @pritchett08a] simulations, as well as global magnetospheric MHD simulations [@borovsky08a; @borovsky13a; @quellette13a; @BZhang16a; @komar16a] and global Vlasov simulations [@hoilijoki17a]. This scaling along with $(\delta/L)_{eff}\sim 0.1$ was then employed to develop a quantitative model of the coupling between the solar wind and Earth’s magnetosphere [@borovsky08a; @borovsky13a]. Given these successes of Eq. (\[CS\]), it remains not understood why the [*effective aspect ratio*]{} in this model should be of order 0.1. This obviously requires an explanation. In this Letter, we provide a theoretical explanation for the collisionless asymmetric reconnection rate. We generalize the approach discussed in Ref. [@yhliu17a] which was used to model the symmetric reconnection electric field. Through analyzing force-balance at the inflow and outflow regions, we cast the reconnection electric field into the form of a function of the opening angle made by the upstream magnetic field on the weak field side. A prediction is then obtained by maximizing this rate as a function of this opening angle, which we find to agree with $E_{CS}$ within a factor of two, with agreement in the scaling sense over a wide range of upstream plasma parameters. This comparison demonstrates that this nearly universal [*effective aspect ratio*]{} of order 0.1 in the collisionless limit [@cassak08a] is also the result of geometrical constraints on the MHD-scale, independent of the dissipation mechanism.\ [ *Constraint on the reconnecting field–*]{} We consider the geometry and notation illustrated in Fig. \[inflow\]. The asymptotic field $B_{x2}$ on side 2 is larger than the asymptotic value $B_{x1}$ on side 1. Thus, sides 1 and 2 nominally correspond to typical conditions at the magnetosheath and magnetosphere, respectively. Unlike the model in Ref. [@cassak07b], the strength of the reconnecting field immediately upstream of the ion diffusion region can be different from the asymptotic field on each side. We use a subscript “$m$” in $B_{xmi}$ to indicate the [*microscopic*]{} ion-diffusion-region-scale, and $i=1,2$ indicates the two inflow sides. $V_{out,m}$ is the outflow speed immediately downstream of the diffusion region. During the nonlinear stage of reconnection, the angle $\theta_i$ (as sketched for side 1) made by the upstream magnetic field lines opens out on each side. This geometry unavoidably induces a tension force ${\bf B}\cdot\nabla B_z/4\pi$ directed away from the x-line (as sketched for side 1), that is mostly balanced by the magnetic pressure gradient force $-(\nabla B^2/8\pi)_z$ directed toward the x-line (as sketched for side 1). 
Such a finite magnetic pressure gradient requires the reduction of the reconnecting magnetic field immediately upstream of the diffusion region. This effect is modeled in Ref. [@yhliu17a] that results in an expression $$B_{xmi}\simeq B_{xi}\frac{1-S^2_i}{1+S^2_i}. \label{Bxm}$$ Here $S_i=\mbox{tan}|\theta_i|$ is the slope of the upstream magnetic field line on each side, as sketched for side 1 in Fig. \[inflow\]. From Eq. (\[Bxm\]), the reconnecting magnetic field $B_{xmi}$ vanishes as the opening angle approaches $45^\circ$ (i.e., $S_i \rightarrow 1$). In the 2D approximation, we can write ${\bf B}=\nabla \times A_y{\hat{\bf y}} + B_y \hat{\bf y}$. The sample field lines in Fig. \[inflow\] are evenly spaced contours of the flux function $A_y$, hence the “line-density” illustrates the strength of the in-plane magnetic field. The field lines approaching the diffusion region become less dense (i.e., weaker) compared with its asymptotic value on each side, illustrating the reduction of the reconnecting field due to the opening out of the upstream magnetic field lines. The reconnected field immediately downstream of the diffusion region scales as $$B_{zm}\simeq B_{xmi}S_i.$$ This captures the trend that the opening angle made by the upstream magnetic field on side 1 is always larger, as illustrated in Fig. \[inflow\]. This also means that the reduction of reconnecting magnetic field on the weaker field side has a stronger effect in limiting the reconnection rate. Because the field strength of the reconnecting field on side 1 is weaker than that on side 2, all possible solutions of this model must be found in the range $0< S_1 <1$. Therefore, we write $B_{zm}$ as a function of $S_1$: $$B_{zm}(S_1)\simeq \left(\frac{1-S_1^2}{1+S_1^2}\right)S_1 B_{x1}. \label{Bz_S1}$$ ![The geometry of magnetic fields upstream of the diffusion region for asymmetric reconnection. The orange box marks the diffusion region. $S_1=\mbox{tan}|\theta_1|$ marks the slope of the magnetic field line on side 1. The strength of the magnetic field is illustrated by the field line density. []{data-label="inflow"}](inflow){width="7cm"} [ *Constraint on the outflow speed–*]{} To estimate the reconnection electric field, $E_y\simeq B_{zm} V_{out,m}/c$, we need to calculate the outflow speed $V_{out,m}$. We consider the notation and geometry in Fig. \[outflow\]. The dimension of the diffusion region is $2L \times 2\delta$. Lines $a-c$ and $a-d$ represent the separatrices on side 2 and side 1 respectively, and “$a$” marks the x-line. We first derive the outflow density $\bar{\rho}$ as a result of mixing of plasmas from two sides. The integral form of Gauss’ law for a 2D system is $\oint {\bf B}\cdot d{\bf l}=0$ where $d{\bf l}$ is along the perimeter of a closed 2D area. By applying this rule to the triangle area $a-b-c$ in Fig. \[outflow\] and we get $\int_a^b B_z dx+\int_b^c B_x dz+\int_c^a{\bf B}\cdot d{\bf l}=0$. The last integral vanishes identically because the magnetic separatrix passes the upper right corner at point “$c$”. Thus, $(B_{zm}/2) L\simeq (B_{xm2}/2)\delta_2$. A similar exercise reveals $B_{zm} L\simeq B_{xm1}\delta_1$. Combined with the relation $\delta_1+\delta_2=2\delta$, we get $$B_{zm}=2\left(\frac{B_{xm1}B_{xm2}}{B_{xm1}+B_{xm2}}\right)\left(\frac{\delta}{L}\right). \label{Bz}$$ We now estimate the mass density as in Refs. [@cassak07b], taking care to note that the conservation laws are evaluated at the microscopic “$m$” scale. 
Mass conservation gives $2\bar{\rho}V_{out,m}\delta\simeq\rho_1 V_{zm1} L+\rho_2 V_{zm2}L$. In a 2D steady state, the out-of-plane electric field $E_y$ is uniform around the diffusion region and hence $V_{zm1}B_{xm1}=V_{zm2}B_{xm2}=V_{out,m}B_{zm}$. Eliminating the velocities gives the hybrid mass density [@cassak07b], $$%{\bar \rho}\simeq \frac{B_{xm1}\rho_2+B_{xm2}\rho_1}{B_{xm1}+B_{xm2}}\label{rate} {\bar \rho}\simeq \frac{B_{xm1}\rho_2+B_{xm2}\rho_1}{B_{xm1}+B_{xm2}}. \label{rho}$$ ![The geometry and dimension of the diffusion region. The strength of the magnetic field is illustrated by the field line density. Here $\delta_1+\delta_2=2\delta$. The label “a” marks the reconnection x-line. []{data-label="outflow"}](outflow){width="8cm"} Now we have enough information to derive the outflow speed from the momentum equation in the outflow direction $\hat{x}$, which is written as $(\rho/2)\partial_x V_x^2 \simeq B_z\partial_z B_x/4\pi -\partial_x B^2/8\pi$. Note that we have ignored the thermal pressure gradient by the same reasoning discussed in Ref. [@birn10a]. To get an averaged outflow speed, we follow a process similar to Ref. [@swisdak07a]; we apply $\int_0^Ldx\int_{-\delta}^{\delta} dz$ to the momentum equation, assuming $B_z=B_z(x)$, $B_x=B_x(z)$, $V_x=V_x(x)$ and an uniform density $\rho=\bar{\rho}$ inside the diffusion region. These lead to $(\bar{\rho}/2)V_{out}^22\delta \simeq (B_{zm}/2)L(B_{xm2}+B_{xm1})/4\pi-B_{zm}^22\delta/8\pi$. Substituting Eq. (\[Bz\]) for $B_{zm}$, we get $$%V_{out,m}\simeq \sqrt{\frac{B_{xm1}B_{xm2}}{4\pi \bar{\rho}}-\frac{1}{\pi \bar{\rho}}\left(\frac{B_{xm1}B_{xm2}}{B_{xm1}+B_{xm2}}\right)^2\left(\frac{\delta}{L}\right)^2}. V_{out,m}\simeq \sqrt{\frac{B_{xm1}B_{xm2}}{4\pi \bar{\rho}}\left[1-4\frac{B_{xm1}B_{xm2}}{(B_{xm1}+B_{xm2})^2}\left(\frac{\delta}{L}\right)^2\right]}. \label{Vout_dL}$$ The first term inside the square brackets results from the averaged magnetic tension force and is the speed obtained in previous studies [@swisdak07a; @cassak07b]. The reduction of the reconnecting field discussed in the previous section decreases the tension force that drives the outflow away from the diffusion region. The second term proportional to $(\delta/L)^2$ is a new term that arises from the magnetic pressure gradient and it further reduces the outflow speed. However, the pre-factor dependent on $B_{xm1}$ and $B_{xm2}$ is 1 for the symmetric case [@yhliu17a; @cassak17a] and decreases for increasing field asymmetries, so the correction to the outflow speed is weakened even more for asymmetric reconnection than symmetric reconnection [@yhliu17a]. We cast the outflow speed into a function of $S_1$($\simeq \delta_1/L$) instead, $$%{V_{out,CS}}\simeq \sqrt{\frac{B_{xm1}B_{xm2}}{4\pi\bar \rho}} {V_{out,m}}(S_1)\simeq \sqrt{\frac{B_{xm1}B_{xm2}-S_1^2B_{xm1}^2}{4\pi\bar \rho}}. \label{Vout}$$ The associated reconnection electric field is $$E_y(S_1)\simeq B_{zm}(S_1)V_{out,m}(S_1)/c, \label{rate}$$ which is a function of $S_1$ using Eqs. (\[Bxm\]), (\[Bz\_S1\]), (\[rho\]) and (\[Vout\]). We hypothesize that the reconnection rate corresponds to the maximum allowable value. Our prediction of the reconnection electric field is $E_R \equiv \mbox{max}(E_y)$, that can be found in the range $0 \leq S_1 \leq 1$. Note that writing $B_{xm2}$ as an explicit function of $S_1$ can be done, but there is no simple expression for it. We need to use the relation $B_{xm2}S_2=B_{xm1}S_1$ and Eq. 
(\[Bxm\]) to derive $S_2(S_1)$ first, which involves finding the roots of the cubic equation $S^3_2+ [B_{zm}(S_1)/B_{x2}] S^2_2-S_2+B_{zm}(S_1)/B_{x2}=0$. $S_2(S_1)$ is then plugged into Eq. (\[Bxm\]) to get $B_{xm2}(S_1)$. These calculations can be performed numerically in a straightforward fashion. ![(a) The predicted opening angle made by the upstream magnetic field on side 1 is plotted as a function of $B_{x1}/B_{x2}$ and $n_1/n_2$. (b) The predicted opening angle made by the upstream magnetic field on side 2. (c) The contour of the predicted reconnection electric field normalized by the side 1 (magnetosheath) value. (d) The ratio of the predicted reconnection electric field to the prediction in Eq. (\[CS\]) assuming $\delta / L = 0.1$. []{data-label="scaling_2D"}](scaling_2D){width="9cm"} [ *Prediction–*]{} In the following, we find the maximum reconnection electric field $E_R$ from Eq. (\[rate\]) numerically. The result for a wide parameter range of magnetic field ratio $B_{x1}/B_{x2}$ and density ratio $n_1/n_2$ is shown in Fig. \[scaling\_2D\]. The predicted opening angles on the two sides of the current sheet are shown in Fig. \[scaling\_2D\](a) and (b). The opening angle $\theta_1$ of the upstream magnetic field line on side 1 increases mildly from $\simeq 18.2^\circ$ in the symmetric limit to $\simeq 21.5^\circ$ in the strong field asymmetry limit. In the same limit, the field line opening angle $\theta_2$ on side 2 becomes small ($\rightarrow 0^\circ$) because the magnetic field is much stiffer on side 2 compared to that on side 1. This qualitatively agrees with all previous asymmetric reconnection simulations, which show $\theta_1 > \theta_2$. In Fig. \[scaling\_2D\](c), the reconnection electric field $\hat {E}_R\equiv c E_R/V_{Ax1}B_{x1}$ is normalized to the Alfvén speed $V_{Ax1}\equiv B_{x1}/\sqrt{4\pi \rho_1}$ and the field strength $B_{x1}$ at the magnetosheath (side 1). The normalized rate $\hat{E}_R$ is $\simeq 0.2$ in the symmetric limit (i.e., $\mbox{log}(\hat{E}_R)\simeq -0.7$ when $n_1/n_2=1$ and $B_{x1}/B_{x2}=1$), as expected from Ref. [@yhliu17a]. In Fig. \[scaling\_2D\](d), we compare our prediction to $E_{CS}$ with $(\delta/L)_{eff}=0.1$. Importantly, this prediction agrees with $E_{CS}$ within a factor of two, and the two scale together over a wide range of parameter space. In conjunction with the symmetric case discussed in Ref. [@yhliu17a], this consistency in the asymmetric limit suggests that the geometrical factor, $(\delta/L)_{eff}\simeq 0.1$, left unexplained in Eq. (\[CS\]), also arises from the MHD-scale constraints imposed at the inflow and outflow regions. ![The predicted reconnection electric field normalized by the side 1 (magnetosheath) conditions is plotted as a function of $B_{x1}/B_{x2}$ with a symmetric density in (a) and as a function of $n_1/n_2$ with a symmetric reconnecting magnetic field in (b). The prediction is shown in solid black, the prediction of Eq. (\[CS\]) in red, and the prediction without the outflow correction in dashed black. []{data-label="scaling"}](scaling){width="9cm"} To better understand the difference between the models, we plot the predictions as a function of $B_{x1}/B_{x2}$ with a fixed $n_1/n_2=1$ in Fig. \[scaling\](a) and as a function of $n_1/n_2$ with a fixed $B_{x1}/B_{x2}=1$ in Fig. \[scaling\](b). Red curves show the value of $E_{CS}$ normalized to $V_{Ax1}B_{x1}/c$, solid black curves are $\hat{E}_R$ (our prediction), dashed black curves are the maximum of Eq.
(\[rate\]) using ${V_{out,m}}(S_1)\simeq (B_{xm1}B_{xm2}/4\pi\bar \rho)^{1/2}$ instead of Eq. (\[Vout\_dL\]); i.e., the reduction of the outflow speed from the magnetic pressure gradient is not considered. The red and solid black curves exhibit a similar scaling, as suggested in Fig. \[scaling\_2D\](d). The dashed black curve is very close to the solid black curve in each panel, suggesting that the reduction of the reconnecting field, rather than the reduction in $V_{out,m}$ due to the magnetic pressure gradient force, is the dominant mechanism that constrains the rate. [ *Summary and discussion–*]{} In this Letter, we derive the collisionless asymmetric magnetic reconnection rate using a new approach. The prediction is obtained through maximizing a model rate that considers the MHD-scale constraints at both the inflow and outflow regions. The predicted value is found to be within a factor of two of the collisionless asymmetric reconnection rate that was widely examined [@cassak08a; @cassak07b]. This comparison suggests that constraints at the MHD-scale explain the geometrical factor $(\delta/L)_{eff}$ of order 0.1 inferred but not explained in the rate model of Ref. [@cassak08a], putting the scaling in Eq. (\[CS\]) on solid footing. The analysis further shows that the dominant limiting effect that constrains the maximum reconnection rate is the field reduction at the weak field (magnetosheath) side. However, caveats need to be kept in mind when applying this theory. An out-of-plane guide field does not affect the in-plane tension force but can contribute to the magnetic pressure gradient in the force balance. The same prediction applies to a general case with a guide field only if the reconnection process does not significantly alter the guide field strength near the x-line. The normalized rate remains $\sim 0.1$ in the strong guide field limit, at least for symmetric cases [@yhliu14a]. This model does not include the effect of the diamagnetic drift driven by the combination of the pressure gradient across the sheet and a finite guide field. The diamagnetic drift can suppress magnetic reconnection [@yhliu16a; @beidler11a; @swisdak10a; @swisdak03a]. In addition, flow shear commonly present at the flank of the magnetopause can also reduce the reconnection rate [@doss15a; @cassak11a]. Potential 3D and turbulence effects [@ergun16a; @price16a; @le17a; @yhliu15b; @daughton14a] are not included in this 2D analysis. Finally, while this theory works in most models, including PIC, hybrid, two-fluid models and MHD with a localized resistivity, it does not apply to MHD systems with a uniform resistivity; i.e., a uniform resistivity does not seem to support the maximum rate allowed by the constraints imposed at the upstream and downstream regions. Nevertheless, by comparing with the well-established scaling [@cassak08a; @cassak07b] previously found in the asymmetric limit of collisionless plasmas, the consistency demonstrated in this Letter confirms the capability of this new approach [@yhliu17a; @cassak17a] to explain the fast reconnection rate in a more general configuration. This result is timely for the study of collisionless magnetic reconnection in Earth’s magnetosphere. The high cadence electric field measurements on board NASA’s Magnetospheric Multiscale (MMS) spacecraft and their close deployment provide an invaluable opportunity to study the reconnection rate [@LJChen17a] and perhaps the effective aspect ratio of the diffusion region in both the magnetopause and magnetotail.\ Y.-H.
Liu thanks M. Swisdak and J. Dahlin for helpful discussions. Y.-H. L. is supported by NASA grant NNX16AG75G and MMS mission. M. H. is supported by the Research Council of Norway/CoE under contract 223252/F50, and by NASA’s MMS mission. P. A. C. is supported by NASA grant NNX16AF75G and NSF grant AGS1602769. M. S. is supported by NASA grants NNX08A083G-MMS IDS, NNX17AI25G. S. W. and L.-J. C. are supported by DOE grant DESC0016278, NSF grants AGS-1202537, AGS-1543598 and AGS-1552142, and by the NASA’s MMS mission.
--- abstract: 'The common approach to crack dynamics, linear elastic fracture mechanics (LEFM), assumes infinitesimal strains and predicts a $r^{-1/2}$ strain divergence at a crack tip. We extend this framework by deriving a weakly nonlinear fracture mechanics theory incorporating the leading nonlinear elastic corrections that must occur at high strains. This yields strain contributions “more-divergent” than $r^{-1/2}$ at a finite distance from the tip and logarithmic corrections to the parabolic crack tip opening displacement. In addition, a dynamic length-scale, associated with the nonlinear elastic zone, emerges naturally. The theory provides excellent agreement with recent near-tip measurements that can not be described in the LEFM framework.' author: - 'Eran Bouchbinder, Ariel Livne and Jay Fineberg' title: Weakly Nonlinear Theory of Dynamic Fracture --- Understanding the dynamics of rapid cracks is a major challenge in condensed matter physics. For example, high velocity crack tip instabilities [@99MF; @07LBDF] remain poorly understood from a fundamental point of view. Much of our understanding of how materials fail stems from Linear Elastic Fracture Mechanics (LEFM) [@98Fre], which assumes that materials are [*linearly*]{} elastic outside of a small zone where all nonlinear and dissipative processes occur. A central facet of LEFM is that strains diverge as $r^{-1/2}$ at a crack’s tip and that this singularity dominates all other strain contributions in this region. Linear elasticity should be expected to break down before dissipative processes occur. The small size and rapid propagation velocity of the near-tip region of brittle cracks have, however, rendered quantitative measurements of the near-tip fields elusive. In the companion Letter [@companion] such direct near-tip measurements of the displacement field ${{\bm{u}}}({{\bm{r}}})$ were achieved for Mode I cracks propagating at rapid velocities, $v$. Defining $(r,\theta)$ as coordinates moving with the crack tip, the propagation direction, $x$ is defined by $\theta\!\!=\!\!0$ and the loading direction, $y$, by $\theta \!=\!\pi/2$. As predicted by LEFM, these experiments revealed that the crack tip opening profile, $u_y(r,\pm\pi)$, is parabolic beyond a velocity-dependent length-scale $\delta(v)$. However, it was shown that although $u_x(r,\theta\!=\!0)$ in this range also follows the functional form predicted by LEFM, its parameters are [*inconsistent*]{} with those described by $u_y(r,\pm\pi)$! Moreover, the strain component $\varepsilon_{yy}(r,0)={\partial}_yu_y(r,0)$ was [*wholly incompatible*]{} with LEFM, indicating a “more-divergent” behavior than $r^{-1/2}$. These puzzling discrepancies become increasingly severe as $v$ increases. In this Letter, we show that [*all*]{} of these puzzles can be quantitatively resolved by taking into account nonlinear corrections to linear elasticity, which [*must*]{} be relevant near the crack tip. This is achieved by perturbatively expanding the momentum balance equation for an elastic medium up to second order nonlinearities in the displacement gradients. The resulting theory provides a novel picture of the structure of the fields surrounding a crack tip, and may have implications for our understanding of crack dynamics. Nonlinear material response at the large strains near a crack’s tip motivates us to formulate a nonlinear elastic dynamic fracture problem under plane stress conditions. 
Consider the deformation field ${{\bm{\phi}}}$, which is assumed to be a continuous, differentiable and invertible mapping between a reference configuration ${{\bm{x}}}$ and a deformed configuration ${{\bm{x}}}'$ such that ${{\bm{x}}}'\!=\!{{\bm{\phi}}}({{\bm{x}}})={{\bm{x}}}+{{\bm{u}}}({{\bm{x}}})$. The deformation gradient tensor ${{\bm{F}}}$ is defined as $ {{\bm{F}}}\!=\!\nabla {{\bm{\phi}}}$ or explicitly $F_{ij}\!=\!\delta_{ij}+{\partial}_j u_i$. The first Piola-Kirchhoff stress tensor ${{\bm{s}}}$, that is work-conjugate to the deformation gradient ${{\bm{F}}}$, is given as ${{\bm{s}}}\!=\!{\partial}_{{{\bm{F}}}} U({{\bm{F}}})$, where $U({{\bm{F}}})$ is the strain energy in the deformed configuration per unit volume in the reference configuration [@Holzapfel]. The momentum balance equation is $$\label{EOM} \nabla \cdot {{\bm{s}}} = \rho {\partial}_{tt}{{{\bm{\phi}}}} \ ,$$ where $\rho$ is the mass density. Under steady-state propagation conditions we expect all of the fields to depend on $x$ and $t$ through the combination $x\!-\!vt$ and therefore ${\partial}_t\!=\!-v{\partial}_x$. The polar coordinate system that moves with the crack tip is related to the rest frame by $r\!=\!\sqrt{(x-vt)^2+y^2}$ and $\theta\!=\!\tan^{-1}[y/(x-vt)]$. Thus, the traction-free boundary conditions on the crack faces are $$\label{BC} s_{xy}(r,\theta\!=\!\pm\pi)\!=\!s_{yy}(r,\theta\!=\!\pm\pi)=0 \ .$$ To proceed, we note that in the measurement region of [@companion] the maximal strain levels are $0.2\!-\!0.35$ (see below) as the velocity of propagation varied from $0.20c_s$ to $0.78c_s$, where $c_s\!=\!\sqrt{\mu/\rho}$ is the shear wave speed ($\mu$ is the shear modulus). These levels of strain motivate a perturbative approach where quadratic elastic nonlinearities must be taken into account. Higher order nonlinearities are neglected below, though they most probably become relevant as the crack velocity increases. We write the displacement field as $$\label{expansion} {{\bm{u}}}(r,\theta) \simeq \epsilon {{\bm{u}}}^{(1)}(r,\theta)+\epsilon^2 {{\bm{u}}}^{(2)}(r,\theta)+ {{\mathcal{O}}}(\epsilon^3) \ ,$$ where $\epsilon$ quantifies the (dimensionless) magnitude of the strain. For a general $U({{\bm{F}}})$, ${{\bm{s}}}$ and ${{\bm{\phi}}}$ can be expressed in terms of ${{\bm{u}}}$ of Eq. (\[expansion\]). Substituting these in Eqs. (\[EOM\])-(\[BC\]) one can perform a controlled expansion in orders of $\epsilon$. To make the derivation concrete, we need an explicit ${{\bm{U}}}({{\bm{F}}})$ that corresponds to the experiments of [@companion]. The polymer gel used in these experiments is well-described by a plane stress incompressible Neo-Hookean constitutive law [@05LCF], defined by the energy functional [@83KS] $$\label{NH} U({{\bm{F}}})= \frac{\mu}{2}\left[ F_{ij}F_{ij}+\det({{\bm{F}}})^{-2}-3\right] \ .$$ Using this explicit ${{\bm{U}}}({{\bm{F}}})$, we derive the first order problem in $\epsilon$ $$\mu\nabla^2{{{\bm{u}}}^{(1)}}+3\mu\nabla(\nabla\cdot{{{\bm{u}}}^{(1)}})=\rho\ddot{{{\bm{u}}}}^{(1)} \ , \label{Lame}$$ with the boundary conditions at $\theta\!=\!\pm\pi$ $$\label{BC1} r^{-1}{\partial}_\theta u_x^{(1)}+{\partial}_r u_y^{(1)}=0,\quad 4r^{-1}{\partial}_\theta u_y^{(1)}+2{\partial}_r u_x^{(1)}=0 \ .$$ This is a standard LEFM problem [@98Fre]. 
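As an optional sanity check (not part of the original derivation), the order-$\epsilon$ balance in Eq. (\[Lame\]) can be reproduced symbolically from the energy functional of Eq. (\[NH\]); the short sympy sketch below assumes 2D plane-stress fields $u_x(x,y)$, $u_y(x,y)$ and keeps only terms linear in $\epsilon$:

```python
import sympy as sp

x, y, mu, eps = sp.symbols('x y mu epsilon')
ux = sp.Function('u_x')(x, y)
uy = sp.Function('u_y')(x, y)

# Deformation gradient F = I + eps * grad(u)
H = sp.Matrix([[sp.diff(ux, x), sp.diff(ux, y)],
               [sp.diff(uy, x), sp.diff(uy, y)]])
F = sp.eye(2) + eps * H

# First Piola-Kirchhoff stress for Eq. (NH): s = dU/dF = mu*(F - det(F)**(-2) * F^{-T})
s = mu * (F - sp.det(F)**(-2) * F.inv().T)

# Part of s that is linear in eps (the order-eps problem)
s1 = s.applyfunc(lambda e: sp.diff(e, eps).subs(eps, 0))

# Divergence of the linearized stress: (div s)_i = d_j s_{ij}
coords = (x, y)
div_s = sp.Matrix([sum(sp.diff(s1[i, j], coords[j]) for j in range(2))
                   for i in range(2)])

# Left-hand side of Eq. (Lame): mu*laplacian(u) + 3*mu*grad(div u)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
divu = sp.diff(ux, x) + sp.diff(uy, y)
lame = sp.Matrix([mu * lap(ux) + 3 * mu * sp.diff(divu, x),
                  mu * lap(uy) + 3 * mu * sp.diff(divu, y)])

print(sp.simplify(div_s - lame))   # expected output: a zero vector
```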
The near crack-tip (asymptotic) expansion of the steady state solution for Mode I symmetry is [@98Fre] $$\begin{aligned} \epsilon u_x^{(1)}(r, \theta;v)\!&=&\!\frac{K_I \sqrt{r}}{4\mu\sqrt{2\pi}}\Omega_x(\theta;v)\!+\!\frac{Tr\cos\theta}{3\mu}+{{\mathcal{O}}}(r^{3/2}),\nonumber\\ \label{firstO} \epsilon u_y^{(1)}(r,\theta;v)\!&=&\!\frac{K_I\sqrt{r}}{4\mu\sqrt{2\pi}}\Omega_y(\theta;v)\!-\!\frac{Tr\sin\theta}{6\mu}\!+\!{{\mathcal{O}}}(r^{3/2}).\end{aligned}$$ Here $K_I$ is the Mode I “stress intensity factor” and $T$ is a constant known as the “T-stress”. Note that these parameters cannot be determined by the asymptotic analysis as they depend on the [*global*]{} crack problem. ${{\bm{\Omega}}}(\theta;v)$ is a known universal function [@98Fre; @supplementary]. $\epsilon$ in Eq.(\[expansion\]) can be now defined explicitly as $\epsilon\!\equiv\!K_I/[4\mu \sqrt{2\pi \ell(v)}]$, where $\ell(v)$ is a velocity-dependent length-scale. $\ell(v)$ defines the scale where only the order $\epsilon$ and $\epsilon^2$ problems are relevant. It is a [*dynamic*]{} length-scale that marks the onset of deviations from a linear elastic constitutive behavior. The solution of the order $\epsilon$ equation, i.e. Eqs. (\[firstO\]), can be now used to derive the second order problem in $\epsilon$. The form of the second order problem for an incompressible material is $$\mu\nabla^2{{{\bm{u}}}^{(2)}}+3\mu\nabla(\nabla\cdot{{{\bm{u}}}^{(2)}})+ \frac{\mu\ell{{\bm{g}}}(\theta;v)}{r^2}=\rho\ddot{{{\bm{u}}}}^{(2)}\ . \label{secondO}$$ The boundary conditions at $\theta\!=\!\pm\pi$ become $$\label{BC2} r^{-1}{\partial}_\theta u_x^{(2)}+{\partial}_r u_y^{(2)}=4r^{-1}{\partial}_\theta u_y^{(2)}+2{\partial}_r u_x^{(2)}+\frac{\kappa(v)\ell}{r}=0,$$ where contributions proportional to $T$ were neglected. Here ${{\bm{g}}}(\theta;v)$ and $\kappa(v)$ are known functions, see [@supplementary] and below. The problem posed by Eqs. (\[secondO\])-(\[BC2\]) has the structure of an effective LEFM problem with a body force $\propto\!r^{-2}$ and a crack face force $\propto\!r^{-1}$. Note that Eqs. (\[secondO\])-(\[BC2\]) are valid in the range $\sim\!\ell(v)$, where $\epsilon^2$ is non-negligible with respect to $\epsilon$, but higher order contributions are negligible. Since one cannot extrapolate the equations to smaller length-scales, no real divergent behavior in the $r\!\to\!0$ limit is implied. We stress that the structure of this problem is universal. Only ${{\bm{g}}}(\theta;v)$ and $\kappa(v)$ depend on the second order elastic constants resulting from expanding a [*general*]{} $U({{\bm{F}}})$ to second order in $\epsilon$. For example, the $\propto\!r^{-2}$ effective body-force in Eq. (\[secondO\]) results from terms of the form ${\partial}({\partial}u^{(1)} {\partial}u^{(1)})$, which are generic quadratic nonlinearities. We now focus on solving Eq. (\[secondO\]) with the boundary conditions of Eqs. (\[BC2\]) for the explicit ${{\bm{g}}}(\theta;v)$ and $\kappa(v)$ derived from Eq. (\[NH\]) [@supplementary]. Our strategy is to look for a particular solution of the inhomogeneous Eq. (\[secondO\]) [*without*]{} satisfying the boundary conditions of Eqs. (\[BC2\]) and then to add to it a solution of the corresponding homogeneous equation that makes the overall solution consistent with the boundary conditions. We find that the inhomogeneous solution, ${{\bm{\Upsilon}}}(\theta;v)$, is r-independent. The homogeneous solution is obtained using a standard approach [@98Fre] by noting that the second boundary condition of Eqs. 
(\[BC2\]) requires that its first spatial derivative scales as $r^{-1}$. The solution of the second order problem for Mode I symmetry is $$\begin{aligned} \label{solution} \epsilon^2u_x^{(2)}(r,\theta;v)&=&\left(\frac{K_I}{4\mu \sqrt{2\pi}}\right)^2\left[A\log{r}+\frac{A}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_d^2} \right)}+B\alpha_s\log{r}+\frac{B \alpha_s}{2}\log{\left(1-\frac{v^2\sin^2\theta}{c_s^2} \right)}+\Upsilon_x(\theta;v)\right],\nonumber\\ \epsilon^2u_y^{(2)}(r,\theta;v)&=&\left(\frac{K_I}{4\mu \sqrt{2\pi}}\right)^2\Big[-A\alpha_d\theta_d-B\theta_s+\Upsilon_y(\theta;v)\Big], \quad \tan{\theta_{d,s}}=\alpha_{d,s}\tan{\theta},\quad\alpha^2_{d,s}\equiv1-v^2/c_{d,s}^2 \ ,\end{aligned}$$ where $A\!=\![2\alpha_s B-4{\partial}_\theta\Upsilon_y(\pi;v)-\kappa(v)]/(2- 4\alpha_d^2)$ (cf. Eq. (\[BC2\])) and $c_d$ is the dilatational wave speed. $c_d\!=\!2c_s$ in an incompressible material under plane stress conditions. The analytic form of ${{\bm{\Upsilon}}}(\theta;v)$ depends mainly on ${{\bm{g}}}(\theta;v)$. The latter can be represented as $$g_x(\theta;v)\!\simeq\!\sum_{n=1}^{N(v)}\! a_n(v) \cos(n\theta),~~ g_y(\theta;v)\!\simeq\!\sum_{n=1}^{N(v)}\! b_n(v) \sin(n\theta).$$ For $v\!=\!0$ we have $N(0)\!=\!3$ and the representation is [*exact*]{}, while for higher velocities it provides analytic approximations with whatever accuracy needed. For $v\!\simeq\!0.8c_s$ only seven terms provide a representation that can be regarded exact for any practical purpose [@supplementary]. ${{\bm{\Upsilon}}}(\theta;v)$ is then obtained in the form $$\Upsilon_x(\theta;v)\!\simeq\!\sum_{n=1}^{N(v)}\!\! c_n(v) \cos(n\theta),~~ \Upsilon_y(\theta;v)\!\simeq\!\sum_{n=1}^{N(v)}\! \!d_n(v) \sin(n\theta),$$ where the unknown coefficients are determined by solving a linear set of equations [@supplementary]. A striking feature of Eqs. (\[solution\]) is that they lead to strain contributions that vary as $r^{-1}$, which are “more-singular” than the $r^{-1/2}$ strains predicted by LEFM. We now show that the second order solution of Eqs. (\[solution\]) entirely resolves the discrepancies raised by trying to interpret the experimental data of [@companion] in the framework of LEFM. The complete second order asymptotic solution, Eqs. (\[expansion\]), (\[firstO\]) and (\[solution\]), contains three parameters ($K_I$, $T$ and $B$) that cannot be determined from the asymptotic solution and therefore must be extracted from the experimental data. These parameters were chosen such that Eqs. (\[expansion\]), (\[firstO\]) and (\[solution\]) properly describe the measured $u_x(r,0)$. Examples for $v/c_s\!=\!0.20$, $0.53$ and $0.78$ are provided in Fig. \[measuredstrains\] (top). With $K_I$, $T$ and $B$ at hand, we can now test the theory’s predictions for $\varepsilon_{yy}(r,0)$ with [*no adjustable free parameters*]{}. The corresponding results are compared with both the measured data [@companion] and LEFM predictions in Fig. \[measuredstrains\] (bottom). In general, the agreement with the experimental data is excellent. These results demonstrate the importance of the predicted $r^{-1}$ strain terms near the crack tip. $\ell$ is estimated as the scale where the largest strain component reaches values of $0.10\!-\!0.15$. For the data presented in Fig. \[measuredstrains\]a, $\varepsilon_{yy}\!>\!\varepsilon_{xx}$, where $\varepsilon_{xx}\!=\!{\partial}_x u_x$ is obtained by differentiating $u_x$. Thus, $\ell$ can be read off of the bottom panel to be $\sim\!0.5\!-\!1$mm. 
Similar estimates can be obtained for every $v$, though not always does $\varepsilon_{yy}\!>\!\varepsilon_{xx}$, e.g. Fig. \[measuredstrains\]c. For $v\!=\!0.53c_s$ (Fig. \[measuredstrains\]b) the theory still agrees well with the measurements, although some deviations near the tip are observed. These deviations signal that higher order corrections may be needed, though second order nonlinearities still seem to provide the dominant correction to LEFM. For higher velocities, it is not clear, [*a-priori*]{}, that second order nonlinearities are sufficient to describe the data. In fact, the strain component $\varepsilon_{xx}(r,0)$ for $v\!=\!0.78c_s$ reaches a value of $\sim\!0.35$ in Fig. \[measuredstrains\]c, suggesting that higher order nonlinearities may be important. Nevertheless, the second order theory avoids a fundamental failure of LEFM; at high velocities ($v\!>\!0.73c_s$ for an incompressible material) LEFM predicts (dashed line in Fig. \[measuredstrains\]c) that the contribution proportional to $K_I$ in $\varepsilon_{yy}(r,0)$ (derived from Eqs. (\[firstO\])) becomes [*negative*]{}. This implies that $\varepsilon_{yy}(r,0)$ [*decreases*]{} as the crack tip is approached and becomes [*compressive*]{}. This is surprising, as material points straddling $y\!=\!0$ must be separated from one another to precipitate fracture. Thus, the second order nonlinear solution (solid line), though applied beyond its range of validity, already induces a qualitative change in the character of the strain. This is a striking manifestation of the breakdown of LEFM, demonstrating that elastic nonlinearities are generally unavoidable, especially as high crack velocities are reached. The results of Figs. \[measuredstrains\]a-c both provide compelling evidence in favor of the developed theory and highlight inherent limitations of LEFM. We note that $\ell(v)$ increases with increasing $v$, reaching values in the mm-scale at very high $v$. Our results indicate that the widely accepted assumption of “K-dominance" of LEFM, i.e. that there is always a region where the $r^{-1/2}$ strain term dominates all other contributions, is violated here. The results presented in Fig. \[measuredstrains\] explicitly demonstrate that quadratic nonlinearities become important in the same region where a non-negligible $T$-stress exists. As elastic nonlinearities intervene before the $r^{-1/2}$ term dominates the strain fields, the contributions of [*both*]{} of these terms must be taken into account as one approaches the crack tip. Since values of the $T$-stress and of $B$ are system specific, this observation is valid for the specific experimental system under study. They do indicate that the assumption of “K-dominance" is not always valid. An additional puzzle raised in [@companion] was that although the form of both $u_x(r,0)$ and the Crack Tip Opening Displacement (CTOD) agreed with LEFM, the respective derived values of $K_I$ differed by about 20%, cf. Fig. 3a in [@companion]. This puzzle is resolved by the theory as follows. The form of the CTOD is given by $\phi_y(r,\pm\pi)$ as a function of the distance, $\phi_x(r,\pi)$, from the crack tip in the moving (laboratory) frame. Substituting $\theta\!=\!\pi$ into Eqs. (\[expansion\]), (\[firstO\]) and (\[solution\]), the nonlinear theory predicts that the CTOD remains parabolic, where the $log(r)$ term in $\phi_x(r,\pi)$ is negligible compared to $r$. 
This occurs at the [*same*]{} scale $\ell(v)$ at which nonlinear corrections are essential to describe the strain at $\theta\!=\!0$, cf. Fig. 1. Quantitatively, the parabolic CTOD can be described with $K_I$ values that differ from those describing $u_x(r,0)$ by only a few percent with the [*same*]{} values of $T$ and $B$ (cf. Fig. \[measuredstrains\]). This small $K_I$ variation is possibly related to sub-leading nonlinear corrections associated with the $T$-stress and will be addressed elsewhere. Let us now consider the CTOD in the near vicinity of the crack tip, i.e. when $r$ is further reduced. Eqs. (\[solution\]) predict the existence of $\log$-terms in $\phi_x(r,\theta)$. These terms, which are negligible at $\theta\!=\!\pi$ on a scale $\ell(v)$, must become noticeable at smaller scales. Although this region is formally beyond the range of validity of the expansion of Eq. (\[expansion\]), we would still expect the existence of a CTOD contribution proportional to $\log{r}$ to be observable. We test this prediction in Fig. \[parabolas\] by comparing the measured small-scale CTOD to both the parabolic LEFM form and the second order nonlinear solution [*with no adjustable parameters*]{}. We find that these $\log$-terms, whose coefficients were determined at a scale $\ell(v)$, capture the initial deviation from the parabolic CTOD at $\theta\!=\!\pm\pi$ to a surprising degree of accuracy. This result lends further independent support to the validity of Eqs. (\[solution\]). In summary, we have shown that the second order solution presented in Eqs. (\[solution\]) resolves in a self-consistent way all of the puzzles that were highlighted in [@companion]. This solution is universal in the sense that its generic properties are independent of geometry, loading conditions and material parameters. We would entirely expect that [*any*]{} material subjected to the enormous deformations that surround the tip of a crack must experience [*at least*]{} quadratic elastic nonlinearities, prior to the onset of the irreversible deformation that leads to failure. Our results show that these deformations, which are the vehicle for transmitting breaking stresses to crack tips, must be significantly different from the LEFM description, especially at high $v$. One may ask why we should not consider still higher order elastic nonlinearities. We surmise that quadratic elastic nonlinearities may be special, as they mark the emergence of a dynamic length-scale $\ell(v)$ that characterizes a region where material properties - like local wave speeds, local response times and anisotropy - become [*deformation dependent*]{}. This line of thought seems consistent with the observations of Refs. [@Buehler03_06]. As supporting evidence for this view, we note that the geometry-independent wave-length of crack path oscillations discussed in [@07LBDF; @07BP] seems to correlate with the mm-scale $\ell(v)$ at high $v$. Therefore, our results may have implications for understanding crack tip instabilities. [**Acknowledgements**]{} This research was supported by grant 57/07 of the Israel Science Foundation. E. B. acknowledges support from the Horowitz Center for Complexity Science and the Lady Davis Trust. [99]{} J. Fineberg and M. Marder, Phys. Rep. [**313**]{}, 1 (1999). A. Livne, O. Ben-David and J. Fineberg, Phys. Rev. Lett. [**98**]{}, 124301 (2007). L. B. Freund, [*Dynamic Fracture Mechanics*]{}, (Cambridge University Press, Cambridge, 1998). A. Livne, E. Bouchbinder and J. Fineberg, preceding Letter, ?? (2008). 
A general introduction to finite deformation theory, including a detailed discussion on the different stress and strain measures, can be found in G. A. Holzapfel, [*Nonlinear Solid Mechanics*]{}, (Wiley, Chichester, 2000). A. Livne, G. Cohen and J. Fineberg, Phys. Rev. Lett. [**94**]{}, 224301 (2005). J. K. Knowles and E. Sternberg, J. Elasticity [**13**]{}, 257 (1983); G. Ravichandran and W. G. Knauss, Int. J. Frac. [**39**]{}, 235 (1989). See EPAPS Document No. ?? for explicit expressions. For more information on EPAPS, see http://www.aip.org/pubservs/epaps.html. M. J. Buehler, F. F. Abraham and H. Gao, Nature [**426**]{}, 141 (2003); M. J. Buehler and H. Gao, Nature [**439**]{}, 307 (2006). E. Bouchbinder and I. Procaccia, Phys. Rev. Lett. [**98**]{}, 124302 (2007).
--- author: - 'A.G.J. van Leeuwen' - 'B.W. Stappers' - 'R. Ramachandran' - 'J. M. Rankin' date: 'Received / Accepted' nocite: '[@lkr+02]' title: Probing drifting and nulling mechanisms through their interaction in PSR B0809+74 --- Introduction ============ In pulsars, the emission in individual pulses generally consists of one or more peaks (‘subpulses’) that are much narrower than the average profile; the brightness, width, position and number of these subpulses often vary from pulse to pulse. In contrast, the subpulses in PSR B0809+74 have remarkably steady widths and heights and form a regular pattern (see Fig. \[img:data.fit\]a). They appear to drift through the pulse window at a rate of $-0.09 P_2/P_1$, where $P_2$ is the average longitudinal separation of two subpulses within one rotational period $P_1$, which is $1.29$ seconds. Figure \[img:data.fit\]a also shows how the pulsar occasionally stops emitting, during a so-called null. ![Observed and fitted pulse sequences. A window on the pulsar emission is shown for 150 pulses. One pulse period is $360^\circ$. The centre of the Gaussian that fits the pulse profile best is at $0^\circ$. [**a)**]{} The observed pulse sequence, with a null after pulse 30. [**b)**]{} The Gaussian curves that fitted the subpulses best. Nulls are shown in lightest gray, driftbands fitted to the subpulse pattern are medium gray.[]{data-label="img:data.fit"}](H3808F1.eps) In this paper, we will interpret the drifting subpulse phenomenon in the rotating carousel model [@rs75]. In this model, the pulsar emission originates in discrete locations (‘subbeams’) positioned on a circle around the magnetic pole. The circle rotates as a whole, similar to a carousel, and is grazed by our line of sight. In between successive pulses, the carousel rotation moves the subbeams through this sight line, causing the subpulses to drift. Generally, the average profiles of different pulsars evolve with frequency in a similar manner: the profile is narrow at high frequencies and broadens towards lower frequencies, occasionally splitting into a two-peaked profile [@kis+98]. This is usually interpreted in terms of ‘radius to frequency mapping’, where the high frequencies are emitted low in the pulsar magnetosphere. Lower frequencies originate higher, and as the dipolar magnetic field diverges the emission region grows, causing the average profile to widen. The profile evolution seen in PSR B0809+74 is different. The movement of the trailing edge broadens the profile as expected, but the leading edge does the opposite. The profile as a whole decreases in width as we go to lower frequencies until about 400 MHz. Towards even lower frequencies the profile then broadens somewhat [@dls+84; @kis+98]. Our own recent observations of PSR B0809+74, simultaneously at 382, 1380 and 4880 MHz, confirm these results [@rrl+02]. Why the leading part of the expected profile at 400 MHz is absent is not clear. While @bkk+81 suggest cyclotron absorption, @dls+84 conclude that the phenomenon is caused by a non-dipolar field configuration. We will refer to this non-standard profile evolution as ‘absorption’, but none of the arguments we present in this paper depends on the exact mechanism involved. In a recent paper (van Leeuwen et al. 2002, henceforth Paper I) we investigated the behaviour of the subpulse drift in general, with special attention to the effect of nulls.
We found that after nulls the driftrate is lower, the subpulses are wider but more closely spaced, and the average pulse profile moves towards earlier arrival. Occasionally this post-null drift pattern remains stable for more than 150 seconds. For a more complete introduction to previous work on PSR B0809+74, as well as for information on the observational parameters and the reduction methods used, we refer the reader to Paper I. In this paper we will investigate the processes that underlie the post-null pattern changes. We will quantify some of the timescales associated with the rotating carousel model and map the post-null changes in the drift pattern onto the emission region. One of the interesting timescales is the time it takes one subbeam to complete a rotation around the magnetic pole. This carousel-rotation time is predicted to be of the order of several seconds in the Ruderman & Sutherland model. A carousel-rotation time was measured for the first time only recently: @dr99 find a periodicity associated with a 41-second carousel-rotation time for PSR B0943+10. The second goal is to determine the changes in the emission region that underlie the different drift pattern we see after nulls. Mapping this emission region could increase our insight into what physically happens around nulls. Achieving either goal requires solving the so-called aliasing problem: as the subpulses are indistinguishable and as we observe their positions only once every pulse period, we cannot determine their actual speed. Solving the aliasing problem ============================ The main obstacles in the aliasing problem are the undersampling of the subpulse motion and our inability to distinguish between subpulses. The pulsar rotation only permits an observation of the subpulse positions once every pulse period. Following individual subpulses through subsequent pulses might still have allowed a determination of their real speed, but unfortunately the subpulses are so much alike that a specific subpulse in one pulse cannot be identified in the next, making it impossible to learn its real speed. In Fig. \[img:alias\_example\] we show a simulation of subpulse drifting, where we have marked all subpulses formed by a particular subbeam with a darker colour. We use these simulations to discuss how the driftrate, which is the observable motion of the subpulses through subsequent pulses, is related to the subbeam speed, which cannot be determined directly. In Fig. \[img:alias\_example\]a the speed of the subbeams is low ($-0.09 P_2/P_1$) and identical to the driftrate. In Fig. \[img:alias\_example\]b the subbeam speed is higher ($0.91 P_2/P_1$), but the driftrate is identical to the one seen in Fig. \[img:alias\_example\]a. When the differences between subpulses formed by various subbeams are smaller than the fluctuations in subpulses from one single subbeam, these two patterns cannot be distinguished from one another. In that case the subpulses within one driftband, which seem to be formed by one subbeam, can actually be formed by a different subbeam each pulse period (‘aliasing’). ![Different alias orders illustrated. We show two series of stacked simulated drifting subpulses. We have marked the subpulses formed by one particular subbeam with a darker colour. [**a)**]{} At a low subbeam speed, a single subbeam traces an entire driftband by itself. The driftrate is identical to the subbeam speed: alias order $0$. [**b)**]{} At alias order $-1$, the subbeam speed is higher than the driftrate and in the opposite direction.
[]{data-label="img:alias_example"}](H3808F2.eps) To solve the aliasing problem for PSR B0809+74 we follow driftrate changes after nulls to determine the subbeam speed. Nulls last between 1 and 15 pulse periods, and in Paper I we have shown that for each null the positions of the subpulses before and after the null are identical if we correct for the shift of the pulse profile. So, as there is no apparent shift in subpulse position, either the subbeams have not moved at all, or their movement caused the new subpulses to appear exactly at the positions of the old ones. As the lengths of the nulls are drawn from a continuous sample it is highly unlikely that the subbeam displacement is always an exact multiple of the subpulse separation: only a total stop of the subbeam carousel can explain why the subpulse positions are always unchanged over the null. At some point after the null, however, the subbeams have accelerated, and the drift pattern has returned to normal. In Fig. \[img:data.fit\] we see how, after a null, the driftrate increases to its normal value in about 50 pulses. There are two scenarios for this subbeam acceleration. The first we will call gradual speedup. Here the changes in the subbeam speed occur on timescales larger than $P_1$. The second we will call instantaneous, as the entire acceleration happens within $1P_1$, effectively out of sight. In Fig. \[img:sim.speedup\] we show four simulated pulse sequences with different speedup parameters. In all cases, we simulate a drift pattern like that of PSR B0809+74. During a null, from pulses 30 to 45, there is no subbeam displacement. Immediately after the null the subbeams build up speed, and each pulse period we translate the subbeam displacement to a change in subpulse position. Although the final driftrate is the same for all scenarios ($-0.09 P_2/P_1$), the subbeam speeds differ considerably. The bottom four graphs show these speeds for each scenario. In the top four diagrams we have marked the subpulses from one subbeam with a darker colour for clarification. Let us look at the case of gradual speedup to alias order $0$, where the subbeam speed is the same as the driftrate (Fig. \[img:sim.speedup\]a). In this case the driftrate will gradually increase and form a regular driftband pattern, much like the pattern found in the observations. Next, we investigate a subbeam acceleration to a slightly higher speed. At $0.91 P_2/P_1$, Fig. \[img:sim.speedup\]b shows alias mode $-1$, which is the simplest configuration in which the subbeams move opposite to the subpulse drift. Right after the null the drift consequently commences in this opposite direction. When the subbeam speed nears the first aliasing boundary $0.5 P_2/P_1$, the subpulses seem to move erratically through the window. After the subbeam speed passes the first alias boundary, the subpulse drift resumes its normal direction, and as the subbeam speed approaches $0.91 P_2/P_1$, the drift pattern returns to normal. Also in the ‘alias order 1’ scenario (Fig. \[img:sim.speedup\]c), which is the simplest aliased mode in which the subbeams move in the same direction as the subpulses, the subpulses wander when the subbeam speed nears an alias boundary. Such a disturbance of the drift pattern turns out to be present in all simulations of non-zero alias orders. Because the observed pulse sequences always show smooth, non-wandering driftbands (like in Fig. \[img:data.fit\]a), we conclude that the subbeams cannot accelerate gradually to a high speed. 
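The relation between subbeam speed and observed driftrate that these scenarios rely on can be summarised in a few lines of code. The snippet below is a schematic sketch only, not the code used to produce the figures, and the sign convention for the alias order is our reading of the labels used above.

```python
# Schematic sketch of aliasing: because the subpulse pattern is sampled only once
# per pulse period, several subbeam speeds (in units of P2/P1) produce the same
# observed driftrate. The sign convention for the alias order is assumed here.

def apparent_drift(subbeam_speed):
    """Observed driftrate (P2/P1) for a given subbeam speed (P2/P1)."""
    return subbeam_speed - round(subbeam_speed)

def alias_order(subbeam_speed):
    """Alias order under the convention used in the text (assumed)."""
    return -round(subbeam_speed)

for v in (-0.09, 0.91, -1.09, 1.91):
    print(f"subbeam speed {v:+.2f} P2/P1 -> "
          f"observed drift {apparent_drift(v):+.2f} P2/P1, "
          f"alias order {alias_order(v):+d}")
```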
With instantaneous acceleration the subbeam speed switches suddenly. Most likely it will do so when the pulsar beam faces away from us. After we see pulse number $n$, the subbeams will move slowly for a certain time, quickly speed up, and then move fast until we see the subpulses of pulse $n+1$ appear. The actual speedup can occur at any moment between seeing pulses $n$ and $n+1$. This means that the displacement of the subbeams can vary from very little (speedup just before pulse $n+1$) to a lot (speedup right after pulse $n$). The accompanying changes in subpulse position will then be evenly distributed between 0 and $P_2$; the subpulse positions after the speedup will not be related to those before. If the subbeams accelerate instantaneously at the end of a null, before the first pulse can be observed, the subpulse phases are not preserved over the null. As this is opposite to what is observed, this possibility is ruled out. If the subbeam acceleration occurs a few pulses after a null, we expect that the change in driftrate will nearly always be accompanied by a sudden change in the longitude of the driftband, as illustrated in Fig. \[img:sim.speedup\]d for a final subbeam speed of $-1.09 P_2/P_1$. The absence of a significant number of such sudden shifts in the data indicates that the subbeam carousel of PSR B0809+74 does not speed up instantaneously to a high speed. As only the gradual speedup to alias order 0 can explain the observed drift patterns, the drift seen in the subpulses of PSR B0809+74 directly reflects the movement of the subbeams, without any aliasing. ![image](H3808F3.eps) Discussion ========== Alias order of other pulsars ---------------------------- The subbeam speedup in PSR B0809+74 follows the simplest scenario possible: it is gradual, and the subpulse drift is not aliased. In other pulsars this might not be the case, and for those we predict drift-direction reversals or jumps in driftband longitude during the subpulse-drift speedup phase after nulls. Subbeam-carousel rotation time ------------------------------ Having resolved the aliasing of the subpulses’ representation of the subbeams, we know that the subbeams move at $-0.09 P_2/P_1$. Such a low rotation rate means it will take each subbeam $11P_1$ to move to the current position of its neighbour. The following estimate of the number of subbeams then leads directly to the carousel-rotation time. Towards the edges of the profile the sight line and the carousel move away from each other, and a subbeam will cease to be visible when the sight line no longer crosses it. When the subbeams are small, or widely spaced compared to the curvature of the carousel, the sight line will cross only a few. In that case there will not be many subpulses visible in one pulse. If, however, the subbeams are large and closely spaced compared to the carousel curvature, the sight line passes over more subbeams in one traverse, leading to many subpulses per pulse. In most of the pulses of PSR B0809+74 we observe two subpulses; occasionally we discern three. By combining this with the ratio of the subpulse width and separation we find there must be more than 15 subbeams on the carousel. As one subbeam reaches the position of its neighbour in $11P_1$, a 15-subbeam carousel rotates in over 200 seconds (see the arithmetic sketch below). Persistent differences in the properties of individual subbeams should introduce a long-term periodicity in the pulse sequences, at the carousel-rotation frequency.
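The lower limit on the rotation time quoted above follows directly from the numbers given in the text; a minimal sketch of the arithmetic (with the subbeam count taken as the lower limit derived above):

```python
# Back-of-the-envelope carousel-rotation time, using only the numbers in the text.
P1 = 1.29          # pulse period of PSR B0809+74 in seconds
speed = 0.09       # subbeam speed magnitude in units of P2 per P1
n_subbeams = 15    # lower limit on the number of subbeams on the carousel

time_per_spacing = P1 / speed                 # ~11 P1, i.e. ~14 s per subbeam spacing
rotation_time = n_subbeams * time_per_spacing
print(f"per spacing: {time_per_spacing:.0f} s; full rotation: > {rotation_time:.0f} s")
```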
Thus far, no such periodicity has been found, which is not surprising as the periodicity can only be measured if the lifetime of the subbeam characteristics is longer than the carousel-rotation time. In PSR B0943+10, the only pulsar for which we know how the subbeams vary [@dr99; @dr01], the associated timescales are on the order of 100 seconds. If we assume their lifetimes are comparably long in PSR B0809+74, the subbeams will have lost most of their recognisable traits once they return into view after one carousel rotation. Regular pulse sequences containing several tens of rotation times might still show some periodicity at the carousel-rotation frequency. Unfortunately nulls have a destructive influence on the driftband pattern and possibly on the subpulse characteristics. This has thus far made it impossible to observe bright sequences longer than 3 times our lower limit on the rotation time of 200 seconds. After the prediction that the rotation time in most pulsars would be on the order of several seconds [@rs75], the observation that it is 41 seconds in PSR B0943+10 was somewhat surprising. The suggested dependence of the carousel-rotation time on the magnetic field strength and the pulse period predicts that the rotation time in PSR B0809+74 should be roughly 4 times smaller. The circulation time we observe, several hundreds of seconds, shows that the theoretical predictions are incorrect in absolute numbers as well as in the scaling relations they propose. ![Pulse profiles for the normal mode and the two longest slow drifting sequences after nulls described in Paper I. The broadening of the subpulses causes the slow-drifting profile to outshine the normal one. Here all three profiles have been scaled to the same height to show the change in the pulse location and shape.[]{data-label="img:prof"}](H3808F4.eps) Emission region geometry ------------------------ In Paper I we found many interesting changes in the driftband pattern after nulls. The subpulses reappear $5-10\%$ broader and $15\%$ closer together, which results in a brighter average profile. The individual subpulses move $1.5^{\circ}$ towards earlier arrival, as does their envelope, the average profile, by $1.1^{\circ}$ (Fig. \[img:prof\]). The subpulses drift $50\%$ more slowly than usual. Occasionally, this slow drifting pattern remained stable for over 100 pulses. Knowing, as we do now, that there is no aliasing involved, we can identify single subpulses with individual subbeams. This implies that we can interpret the drift pattern changes in terms of the subbeam-carousel geometry (Fig. \[img:carousel\]). We will do so for each of the altered characteristics. The first thing we note is that, relative to the shift of the average profile, the positions of the subpulses are unchanged over nulls. This means that the subbeam carousel has not rotated during the null. After nulls, the subpulses are positioned about 15% closer together. In principle, this could result from a change in subbeam speed. At a certain moment, a particular subbeam will be pointing towards the observer. It takes some time before the pulsar has rotated the next subbeam into the observer’s view, and during this time the subbeams themselves have moved, too. This motion translates directly to a change in the longitudinal subpulse separation. Yet, as we have shown that the subbeam speed is low, this effect is negligible.
![image](H3808F5.eps) A more significant change in $P_2$ could be caused by an increase in the number of subbeams on a carousel of unchanged size. The other option is that a decrease in the carousel radius moves the subbeams closer together. In the first scenario the number of subbeams would have to change during the null, causing the subbeam placement to change considerably. This would lead to subpulse-position jumps over the null. In that case, we would not expect the phases of the subpulses to be as unchanged over the nulls as is observed. Secondly, some time after the null the new configuration would have to return to normal. This means subbeams would have to appear or disappear. We see no evidence of this in any of the 200 nulls we observed. The change in the subbeam separation $P_2$ can therefore not be due to a changed number of subbeams. In the second scenario the carousel radius decreases by 15%, but the number of subbeams remains the same. As the subbeams now share a reduced circumference, their separation also decreases. The contraction of a carousel causes all subpulses to move towards the longitude of the magnetic axis. If we combine this contraction with the aforementioned absorption, we can immediately explain the observed shift to earlier arrival of the subpulses and their envelope, the average profile: with only the trailing part unabsorbed, all visible subpulses move towards earlier arrival (Figs. \[img:prof\] and \[img:carousel\]). In general, the subpulses farthest from the longitude of the magnetic axis will move most, while those at it, if visible, will remain at their original positions. Enlargement of our sample of subpulse positions can therefore indicate where the magnetic axis of PSR B0809+74 is located. This would immediately indicate how much of the pulse profile is ‘absorbed’ and illuminate the thus far much-debated alignment of the pulse profiles at different frequencies. Normally, the edges of a pulse profile indicate where the overlap of the sight line and the subbeam carousel begins and ends. The contraction of a carousel will then move both edges inward. Yet when the first part of the profile is absorbed, the leading edge reflects the end of the absorption. If this edge is near the middle of the sight-line traverse over the carousel, it will not be affected by a reduction of the carousel size. In that case we will see a change in the position of the trailing edge but not in the leading one, which is exactly what we observe in PSR B0809+74 (Fig. \[img:prof\]). The wider subpulses we see after a null translate directly to wider subbeams. We do note that a change in carousel radius leads to a new sight-line path, which may have an impact as well. The significant chance that a null starts or ends within the pulse window of the pulses that surround it would on average make these pulses dimmer than normal ones [@la83]. Quite unexpectedly, however, the pulses after nulls were found to be brighter than average. This is easily explained in our model: the change in the carousel geometry causes the subpulses to broaden and become more closely spaced, while their peak brightness remains the same. This leads to the post-null pulse-intensity increase that is observed. Post-null polar gap height -------------------------- If we follow the radius-to-frequency-mapping argument discussed in the introduction, then the reduction of the carousel size implies that the emission after a null comes from lower in the pulsar magnetosphere than it does normally.
The reduction of the subbeam separation supports this idea, but the broadening of the subpulses seems inconsistent. At this stage a comparison with the pulse profiles and drift patterns observed at higher frequencies seems promising. Emission at higher frequencies is also supposed to originate lower in the pulsar magnetosphere, so similar effects may point to one cause: a decrease in emission height. Much like the post-null profile, the average profile at higher frequencies moves towards earlier arrival [@rrl+02]. Comparing drift patterns, we see that at higher frequencies the driftrate decreases and that the subpulses broaden and move closer to one another [@dls+84], strikingly similar to the behaviour we find after nulls. Only one anomaly remains: when comparing drift patterns at different frequencies, the changes in driftrate and subpulse separation $P_2$ always counterbalance, leaving the carousel rotation time $\hat{P}_3$ unchanged. This means that the subbeams we observe at a certain frequency are cuts through rods of emission, intersections at the emission height associated with that frequency. The rods rotate rigidly but diverge with increasing height. At each height $\hat{P}_3$ is the same, while the speed and separation of the subbeams do change. In contrast to this normal invariance of $\hat{P}_3$, we see it change considerably after nulls, an increase that cannot be explained by a change in viewing depth. We therefore expect that the disturbance that caused the change in this depth can also account for the 50% increase in $\hat{P}_3$. Usually, both the subbeam separation $P_2$ and the emission height are assumed to scale with the polar gap height $h$ [@rs75; @mgp00]. The 15% decrease in $P_2$ would thus be due to an identical fractional decrease in the gap height, which should also cause an equal decrease in emission height. If this change in gap height could also account for the observed increase of the carousel rotation time $\hat{P}_3$, all post-null drift-pattern changes can be attributed to one single cause. $\hat{P}_3$ is thought to scale as $h^{-2}$. For the inferred 15% gap height decrease this predicts a 40% increase in $\hat{P}_3$ (since $0.85^{-2} \approx 1.4$), in good agreement with the 50% we find. With one single cause we can therefore explain both the puzzling well-known phenomena (the driftrate decrease after nulls, the bright first active pulse) and the newly discovered subtle ones (the change in the position of the average profile, the decrease of the subpulse separation and the subpulse-width increase). This post-null decrease in gap height offers a glimpse of the circumstances needed to make the pulsar turn off so dramatically. Conclusions =========== We have shown that the drift of the subpulses directly reflects the actual motion of the subbeams, without any aliasing. In other pulsars with drifting subpulses this may be different: for those we predict drift-direction reversals or longitude jumps in the post-null drift pattern. We find that the carousel-rotation time for PSR B0809+74 must be long, probably over 200 seconds. The expected lifetime of the subbeam characteristics is shorter, which explains why thus far no periodicity from the carousel rotation could be found in the pulse sequence. The rotation time we find is larger than theoretically predicted, not only in absolute numbers but also after extrapolating the rotation time found in PSR B0943+10.
Both the magnitude and the scaling relations that link the carousel-rotation time to the magnetic field and period of the pulsar are therefore incorrect. When the emission restarts after a null the drift pattern is different, and having determined the alias mode, we identify the underlying changes in the geometry of the subbeam carousel. A combination of a decrease in carousel size and ‘absorption’ already explains many of the changes seen in the post-null drift pattern. The resemblance between the drift pattern after nulls and that seen at higher frequencies, thought to originate at a lower height, is striking. Assuming that similar effects have identical causes leads us to conclude that after nulls we look deeper in the pulsar magnetosphere, too. Both this decrease in viewing depth and the striking increase in the carousel rotation time can be quantitatively explained by a post-null decrease in gap height. We thank Marco Kouwenhoven for his help with the observations and Frank Verbunt, Russell Edwards and Patrick Weltevrede for constructive discussions. [11]{} natexlab\#1[\#1]{} Bartel, N., Kardashev, N. S., Kuzmin, A. D., [et al.]{} 1981, A&A, 93, 85 Davies, J. G., Lyne, A. G., Smith, F. G., [et al.]{} 1984, MNRAS, 211, 57 Deshpande, A. A. & Rankin, J. M. 1999, ApJ, 524, 1008 —. 2001, MNRAS, 322, 438 Kuzmin, A. D., Izvekova, V. A., Shitov, Y. P., [et al.]{} 1998, A&AS, 127, 255 van Leeuwen, A. G. J., Kouwenhoven, M. L. A., Ramachandran, R., Rankin, J. M., & Stappers, B. W. 2002, A&A, 387, 169 Lyne, A. G. & Ashworth, M. 1983, MNRAS, 204, 519 Melikidze, G. I., Gil, J. A., & Pataraya, A. D. 2000, AJ, 544, 1081 Ramachandran, R., Rankin, J. M., Stappers, B. W., Kouwenhoven, M. L. A., & van Leeuwen, A. G. J. 2002, A&A, 381, 993 Rankin, J. M., Ramachandran, R., van Leeuwen, A. G. J., Suleymanova, S. A., & Deshpande, A. A. 2002, A&A, in preparation Ruderman, M. A. & Sutherland, P. G. 1975, ApJ, 196, 51
--- abstract: 'Large deviations principles for a family of scalar $1+1$ dimensional conservative stochastic PDEs (viscous conservation laws) are investigated, in the limit of jointly vanishing noise and viscosity. A first large deviations principle is obtained in a space of Young measures. The associated rate functional vanishes on a wide set, the so-called set of measure-valued solutions to the limiting conservation law. A second order large deviations principle is therefore investigated; this, however, can only be partially proved. The second order rate functional provides a generalization for non-convex fluxes of the functional introduced by Jensen [@J] and Varadhan [@V] in a stochastic particle system setting.' address: | M. Mariani\ CEREMADE, UMR-CNRS 7534, Université de Paris-Dauphine, Place du Marechal de Lattre de Tassigny, F-75775 Paris Cedex 16. author: - Mauro Mariani title: Large Deviations Principles for Stochastic Scalar Conservation Laws --- Introduction ============ A macroscopic description of physical systems with a large number of degrees of freedom can often be provided by means of partial differential equations. Rigorous microscopic derivations of such PDEs have been obtained in different settings, and we will refer in particular to stochastic interacting particle systems [@KL; @S], where stochastic microscopic dynamics of particles are considered. One is usually interested in the asymptotic properties of the empirical measures associated with some relevant physical quantities of the system, such as the particle density. Provided that time and space variables are suitably rescaled, it has been proved for several models that, as the number of particles diverges to infinity, the empirical measure associated with the particle density converges to a “macroscopic density” $u \equiv u(t,x)$. Moreover such a density $u$ solves a limiting “hydrodynamical equation”, which in the conservative case usually has the following structure $$\begin{aligned} \label{e:1.1} \partial_t u +\nabla \cdot \big(f(u)-D(u) \nabla u \big)=0\end{aligned}$$ Here $\nabla$ and $\nabla \cdot$ stand for the space gradient and divergence operators, $D \ge 0$ is a diffusion coefficient, while the flux $f$ takes into account the transport phenomena that may occur in the system. Roughly speaking, $D$ is strictly positive for symmetric (or zero mean) and weakly asymmetric systems, in which case it is usually obtained in the so-called *diffusive scaling* of the time and space variables. The case $D \equiv 0$ is instead associated with asymmetric systems, and is usually obtained in the so-called *Euler scaling*. Once the hydrodynamics of the density is understood, a deeper insight into the system behavior is provided by the investigation of large deviations for the probability law of the empirical measure associated with the density. Establishing large deviations for these models can in fact provide a better understanding of the concepts of entropy and fluctuations in the context of non-equilibrium statistical mechanics. However, while several large deviations results have been obtained for symmetric (or weakly asymmetric) systems under diffusive scaling [@KL], very little is known for asymmetric systems, with the remarkable exception of the seminal works [@L; @J; @V]. According to [@KL Chap. 8], large deviations for asymmetric processes are “one of the main open questions in the theory of hydrodynamical limits”.
Stochastic conservation laws ---------------------------- In this paper we will focus on a slightly different approach. We consider a continuous “mesoscopic density” $u^{\varepsilon}\equiv u^{\varepsilon}(t,x) \in {{\mathbb R}}$ depending on a small parameter ${\varepsilon}$ (which should be regarded as the inverse of the number of particles). We assume that $u^{\varepsilon}$ satisfies a continuity equation, with a stochastic current taking into account the transport, diffusion and fluctuation phenomena that may occur in the system. More precisely, for ${\varepsilon},\gamma>0$ we consider the stochastic PDE in the unknown $u$ $$\begin{aligned} \label{e:1.2} \partial_t u+ \nabla \cdot \big(f(u)-\frac{{\varepsilon}}2 D(u) \nabla u -{\varepsilon}^\gamma\,\sqrt{a^2(u)}\,\alpha^{\varepsilon}\big)=0\end{aligned}$$ where $a^2$ is a fluctuation coefficient, and $\alpha^{\varepsilon}$ is a stochastic noise, white in time and with a correlation in space regulated by a convolution kernel $\jmath^{\varepsilon}$. We assume that $\jmath^{\varepsilon}$ converges to the identity as ${\varepsilon}\to 0$, namely that the range of spatial correlations vanishes at the macroscopic scale. We are then interested in the asymptotic properties (convergence and large deviations) of the solution $u^{\varepsilon}$ to , as ${\varepsilon}\to 0$, namely as diffusion and noise vanish *simultaneously*. We remark that, while equations of the form may describe quite general physical systems, the limit ${\varepsilon}\to 0$ is indeed motivated by the heuristic behavior of the density of asymmetric particle systems under Euler scaling. In fact, while one expects the stochastic noise and its spatial correlation to vanish at a macroscopic scale for quite general systems, the limit of jointly vanishing viscosity and noise is somewhat specific to the Euler scaling. This specific feature may be one of the (several) reasons making the large deviations of asymmetric systems more challenging. From the point of view of stochastic PDEs, the limit ${\varepsilon}\to 0$ also introduces new difficulties. In fact, large deviations for diffusion processes have been widely investigated [@FW; @DZ] in the vanishing noise case, and general methods are available to identify the rate functionals associated with large deviations. On the other hand, to our knowledge no results are available, even for finite-dimensional diffusions, if vanishing noise and deterministic drift with nontrivial limiting behavior are considered (here the deterministic drift has a so-called singular limit, see ). As shown below, in this more general case one needs to investigate a (deterministic) variational problem associated with the stochastic equation. The variational problem associated to has been addressed in [@BBMN] in a slightly different setting, and we will use most of the results obtained therein. With respect to the models usually considered in particle systems, allows us to get rid of several technicalities related to the discrete nature of particles; we may thus provide a unified treatment of several models (that is, $f$, $D$ and $a$ are arbitrary). However, as discussed below, the results obtained (namely the speed and rates of large deviations) are in substantial agreement with [@J; @V] if the case $f(u)=a^2(u)=u(1-u)$ and $D(u)=1 $ is considered.
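As an illustration only, the following snippet sketches one possible explicit finite-difference discretisation of the stochastic PDE above with the coefficients $f(u)=u(1-u)$, $D(u)=1$, $a^2(u)=u(1-u)$ just mentioned. The scheme, the mollifier and all parameter values are our own ad hoc choices, not taken from the paper, and the sketch is not claimed to reproduce the scaling regime studied here.

```python
import numpy as np

# Illustrative explicit Euler-Maruyama discretisation (not from the paper) of the
# stochastic viscous conservation law above, with f(u) = u(1-u), D(u) = 1 and
# a^2(u) = u(1-u), on the one-dimensional torus. All numerical choices are ad hoc.

rng = np.random.default_rng(0)
N = 256                                 # grid points on the torus
dx = 1.0 / N
eps, gamma = 0.02, 1.5                  # viscosity parameter and noise exponent (gamma > 1/2)
dt = 0.2 * dx**2 / eps                  # heuristic stability restriction for the explicit scheme
T_final = 0.1
x = np.arange(N) * dx
u = 0.5 + 0.3 * np.sin(2 * np.pi * x)   # smooth initial datum with values in (0, 1)

f = lambda v: v * (1.0 - v)
a = lambda v: np.sqrt(np.clip(v * (1.0 - v), 0.0, None))

# Mollifier j_eps: periodic Gaussian bump a few grid cells wide, normalised to unit mass.
width = 4 * dx
dist = np.minimum(x, 1.0 - x)
kernel = np.exp(-0.5 * (dist / width) ** 2)
kernel /= kernel.sum() * dx
kernel_hat = np.fft.rfft(kernel)

def ddx(v):   # centred periodic first derivative
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)

def lap(v):   # periodic discrete Laplacian
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

for _ in range(int(T_final / dt)):
    dW = np.sqrt(dt / dx) * rng.standard_normal(N)              # discretised space-time white noise
    xi = np.fft.irfft(kernel_hat * np.fft.rfft(dW), n=N) * dx   # circular convolution j_eps * dW
    u = (u
         - dt * ddx(f(u))                   # transport
         + 0.5 * eps * dt * lap(u)          # vanishing viscosity
         + eps**gamma * ddx(a(u) * xi))     # conservative multiplicative noise
    u = np.clip(u, 0.0, 1.0)                # crude safeguard against discretisation overshoot

print("mean (conserved up to clipping):", u.mean(), "range:", u.min(), u.max())
```

Since every term is written in divergence form, the spatial mean of the discrete solution is conserved up to the effect of the final clipping, mirroring the mass conservation noted for the continuous equation below.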
Outline of the results ---------------------- Informally setting ${\varepsilon}=0$ in , we obtain the deterministic PDE $$\begin{aligned} \label{e:1.3} \partial_t u+ \nabla \cdot f(u) =0\end{aligned}$$ usually referred to as a *conservation law*. As is well known [@Daf Chap. 4], if $f$ is nonlinear, the Cauchy problem associated to does not admit global smooth solutions, even if the initial datum is smooth. In general there exist infinitely many weak solutions to , and an additional *entropic condition* is needed to recover uniqueness and to identify the relevant physical weak solution to . While is invariant under the transformation $(t,x) \mapsto (-t,-x)$, the entropic condition selects a direction of time, by requiring that entropy is dissipated. A classical result in PDE theory states that the solution to $$\begin{aligned} \label{e:1.4} \partial_t u+ \nabla \cdot \big(f(u)-\frac{{\varepsilon}}2 D(u)\nabla u \big)=0\end{aligned}$$ converges to the entropic solution to as ${\varepsilon}\to 0$, provided the initial data also converge. At the heuristic level, the entropic condition keeps memory of the diffusive term in which indeed breaks the symmetry $(t,x) \mapsto (-t,-x)$. We will briefly recall the definition of entropic *Kruzkov* solutions to in Section \[s:2\], and refer to [@Daf] for an introduction to conservation laws. There is little literature on existence and uniqueness of solutions to fully nonlinear stochastic parabolic equations; see e.g. [@LS1] and [@LS2], which deal with finite-dimensional noise. Under general hypotheses, in the appendix we provide existence and uniqueness (for ${\varepsilon}$ small enough and $\gamma>1/2$) for the Cauchy problem associated to , by means of a piecewise semilinear approximation of this equation. In Section \[ss:3.1\] we gather some a priori bounds for the solution $u^{\varepsilon}$ to , and show that, as ${\varepsilon}\to 0$, $u^{\varepsilon}$ converges in probability to the entropic Kruzkov solution to in a strong topology. We next analyze large deviations principles for the law of $u^{\varepsilon}$ as ${\varepsilon}\to 0$. In order to avoid technical difficulties associated with the unboundedness of $u^{\varepsilon}$, and in order to keep our setting as close as possible to the one considered in [@J; @V], we assume that the fluctuation coefficient $a^2(u)$ vanishes for $u \not \in (0,1)$. As we will also assume the initial datum to take values in $[0,1]$, this condition guarantees that $u^{\varepsilon}$ takes values in $[0,1]$, see Theorem \[t:exun\]. We only consider the $(1+1)$ dimensional case, with the $(t,x)$ variables running in $[0,T]\times {{\mathbb T}}$, where $T>0$ and ${{\mathbb T}}$ is the one dimensional torus. While these restrictions are merely technical, we remark that only the case of scalar $u$ is considered, as the vectorial case (systems of conservation laws) is certainly far more difficult. In Section \[ss:3.2\] we establish a large deviations principle with speed ${\varepsilon}^{-2\gamma}$, roughly equivalent to the classical Freidlin-Wentzell speed for finite dimensional diffusions [@FW]. The bottom line is that, when events with probability of order $e^{-{\varepsilon}^{-2\gamma}}$ are considered, the noise term in can deviate strongly from its “typical behavior”, thus completely overcoming the regularizing effect of the vanishing parabolic term. Any entropy-dissipation phenomenon is lost at this speed, and the noise may drive severe oscillations of the density $u^{\varepsilon}$ as ${\varepsilon}\to 0$.
The large deviations are then naturally investigated in a Young measures setting. We prove that on a Young measure $\mu\equiv \mu_{t,x}(d\lambda)$ (satisfying a suitable initial condition) the large deviations rate functional is given by (see Section \[ss:2.4\] for a more precise definition of ${{\mathcal I}}$) $$\begin{aligned} {{\mathcal I}} (\mu) := \frac 12 \int_0^T\!dt \, \Big\| \partial_t \mu(\imath) +\nabla \cdot \mu( f) \Big\|_{H^{-1}(\mu (a^2), dx)}^2\end{aligned}$$ Here $\imath:{{\mathbb R}} \to {{\mathbb R}}$ is the identity map; for $F$ a continuous function, $\mu(F)(t,x)$ stands for $\int \mu_{t,x}(d\lambda) F(\lambda)$; and with a little abuse of notation, we denote by $\| \varphi \|_{H^{-1}( \mu_{t,\cdot} (a^2), dx)}$ the dual norm to $\big[\int\!dx\, \mu_{t,x}(a^2) \, \varphi_x^2\big]^{1/2}$. Note that ${{\mathcal I}}(\mu)=0$ iff $\mu$ is a measure-valued solution to (see Section \[ss:2.4\]). The Cauchy problem admits in general infinitely many measure-valued solutions, but we stated above that $u^{\varepsilon}$ converges in probability to the (unique) entropic solution to . One thus expects that nontrivial large deviations principles may hold with a speed slower than ${\varepsilon}^{-2\gamma}$. In Section \[ss:3.3\], we investigate a large deviations principle with speed ${\varepsilon}^{-2\gamma+1}$. At this scale, deviations of the noise term in are of the same order as the parabolic term. The law of $u^{\varepsilon}$ is then exponentially tight (with speed ${\varepsilon}^{-2\gamma+1}$) in a suitable space of functions. To informally define the candidate rate functional for the large deviations with this speed, we briefly introduce some preliminary notions, which will be precisely explained in Section \[ss:2.5\]. We say that a weak solution $u$ to is an *entropy-measure* solution iff there exists a measurable map $\varrho_u$ from $[0,1]$ to the set of Radon measures on $(0,T)\times {{\mathbb T}}$, such that for each $\eta \in C^2([0,1])$ and $\varphi \in C^\infty_{\mathrm{c}}\big((0,T)\times {{\mathbb T}} \big)$ $$\begin{aligned} -\int\!dt\,dx\,\big[\eta(u) \varphi_t+ q(u) \nabla \varphi \big] = \int\!dv\,\varrho_u(v;dt,dx) \eta''(v) \varphi(t,x)\end{aligned}$$ where $q(v):=\int^v\!dw\,\eta'(w)f'(w)$, see Proposition \[p:kin\] for a characterization of entropy-measure solutions to . The candidate rate functional for the second order large deviations is the functional $H$ defined as follows. If $u$ is not an entropy-measure solution to then $H(u)=+\infty$. Otherwise $H(u)=\int\,dv\,\varrho_u^+(v;dt,dx) D(v)\,a^{-2}(v)$, where $\varrho_u^+$ denotes the positive part of $\varrho_u$. Note that $H$ depends on the diffusion coefficient $D$ and the fluctuation coefficient $a^2$ only through their ratio, thus fitting into the *Einstein paradigm* for macroscopic diffusive systems. We also remark that, while the functional ${{\mathcal I}}$ is convex, $H$ is not (for instance, convex combinations of entropy-measure solutions to , in general, are not weak solutions). While we prove a large deviations upper bound with speed ${\varepsilon}^{-2\gamma+1}$ and rate $H$, we obtain the lower bound only on a suitable set ${{\mathcal S}}$ of weak solutions to , see Definition \[d:splittable\]. To complete the proof of this *second order* large deviations principle, an additional density argument is needed.
This seems to be a challenging problem, and as noted by Varadhan in [@V] “…one does not see at the moment how to produce a ‘general’ non-entropic solution, partly because one does not know what it is.” It is easy to see that, on the set of weak solutions to with bounded variations and on the set ${{\mathcal S}}$, the rate functional $H^{JV}$ introduced in [@J; @V] coincides with the rate functional $H$ evaluated for $f(v)=v(1-v)$, $D \equiv 1$ and $a^2(v)=v(1-v)$, which are the expected transport, diffusion and fluctuation coefficients for the totally asymmetric simple exclusion process there investigated. In particular, $H$ comes as a natural generalization of the functional introduced in [@J; @V], whenever the flux $f$ is neither convex nor concave. Unfortunately, since chain rule formulas are not available out of the BV setting, one cannot check that $H=H^{JV}$ on the whole set of entropy-measure solutions to . Note however that the inequality $H\ge H^{JV}$ holds. Furthermore, under smoothness and genuine nonlinearity assumption on $f$, $H(u)=0$ iff $u$ is the unique entropic solution to , so that higher order large deviations principles are trivial. Outline of the proof -------------------- The convergence in probability of $u^{\varepsilon}$ to the entropic solution of is obtained by a sharp stability analysis of the stochastic perturbation of . The large deviations upper bound with speed ${\varepsilon}^{-2\gamma}$ is provided by lifting the standard Varadhan’s minimax method to the Young measures setting, while exponential tightness in this space is easily proved. The corresponding lower bound is first proved for Young measures that are Dirac masses at almost every point $(t,x) \in [0,T]\times {{\mathbb T}}$, and then extended to the whole set of Young measures by adapting the relaxation argument in [@BBMN]. The large deviations with speed ${\varepsilon}^{-2\gamma+1}$ are much different than the usual small noise asymptotic limit for Itô processes. Note indeed that, as ${\varepsilon}\to 0$, the parabolic term in has a nontrivial behavior. In such a case there is no general method to study large deviations, even in a finite dimensional setting. We provide a link of the large deviations problem with a $\Gamma$-convergence result obtained in [@BBMN]. Indeed we use the equicoercivity of a suitable family of functionals to show exponential tightness, and we use the so-called $\Gamma$-limsup result to build up the optimal exponential martingales for the lower bound. In particular, since the $\Gamma$-limsup inequality in [@BBMN] is not fully established, we only have partial results for the lower bound. The upper bound is established by a nonlinear version of the Varadhan’s minimax method. Main results {#s:2} ============ Notation {#ss:2.1} -------- In this paper, $T>0$ is a positive real number and we let $\big(\Omega, {{\mathfrak F}}, \{{{\mathfrak F}}_t\}_{0\le t \le T}, P\big)$ be a *standard* filtered probability space. For $B$ a real Banach space and $M:[0,T]\times \Omega \to B$ a given adapted process, we write equivalently $M(t)\equiv M(t,\omega)$. For each $\phi \in B^\ast$ we denote by $\langle M,\phi \rangle \equiv \langle M,\phi \rangle (t,\omega)$ the real–valued process obtained by the dual action of $M$ on $B$. Given two real–valued $P$-square integrable martingales $M,\,N$, we denote by $\big[M,N\big]\equiv \big[M,N\big](t,\omega)$ the cross quadratic variation process of $M$ and $N$. In the following *martingale* will always stand for *continuous martingale*. 
For a Polish space $X$, we also let ${{\mathcal P}}(X)$ denote the set of Borel probability measures on $X$. For $\nu$ a measure on some measurable space and $F \in L_1(d\nu)$, we denote by $\nu(F)$ the integral of $F$ with respect to $\nu$. However, for a probability $P$ we used the notation ${{\mathbb E}}^P$ to denote the expected value. We denote by ${{\mathbb T}}$ the one-dimensional torus, by $\langle \cdot,\cdot\rangle$ the inner product in $L_2({{\mathbb T}})$, and by $\langle\langle \cdot,\cdot\rangle\rangle$ the inner product in $L_2([0,T]\times{{\mathbb T}})$. For $E$ a closed set in $[0,T]\times {{\mathbb T}}$, $C^k(E)$ denotes the collection of $k$-times differentiable functions on $E$, with continuous derivatives up to the boundary. We also let $H^1({{\mathbb T}})$ be the Hilbert space of square integrable functions on ${{\mathbb T}}$ with square integrable derivative, and let $H_{-1}({{\mathbb T}})$ be its dual space. Throughout this paper $\partial_t$ denotes derivative with respect to the time variable $t$, $\nabla$ and $\nabla \cdot$ derivatives with respect to the space variable $x$ (while we consider a one dimensional space setting, we consider gradient and divergence as distinct operators). For a function $\vartheta$ explicitly depending on the $x$ variable, $\partial_x$ denotes the partial derivative with respect to $x$. Namely, given a function $u: {{\mathbb T}} \to [0,1]$ and $\vartheta:[0,1]\times {{\mathbb T}} \to {{\mathbb R}}$, we understand $\nabla [\vartheta(u(x),x)]= (\partial_u \vartheta)(u(x),x) \nabla u(x) + (\partial_x \vartheta) (u(x),x)$. In the following we will usually omit the dependence on the $\omega$ variable, as well as on the $t$ and/or $x$ variables when no misunderstanding is possible. Stochastic conservation laws {#ss:2.2} ---------------------------- We refer to [@DZ] for a general theory of stochastic differential equations in infinite dimensions. Let $W$ be an $L_2({{\mathbb T}})$–valued cylindrical Brownian motion on $\big(\Omega, {{\mathfrak F}}, \{{{\mathfrak F}}_t\}_{0\le t \le T}, P\big)$. Namely, $W$ is a Gaussian, $L_2({{\mathbb T}})$–valued $P$-martingale with quadratic variation: $$\label{e:2.1} \big[\langle W, \phi \rangle, \langle W , \psi \rangle \big](t,\omega) = \langle \phi, \psi \rangle\, t$$ for each $\phi, \psi \in L_2({{\mathbb T}})$. For ${\varepsilon}>0$, we consider the following stochastic Cauchy problem in the unknown $u$: $$\begin{aligned} \label{e:2.2} \nonumber & & d u = \big[- \nabla \cdot f(u) +\frac{{\varepsilon}}{2} \nabla \cdot \big(D(u) \nabla u \big)\big]\,dt +{\varepsilon}^{\gamma}\, \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big] \\ & & u(0,x) = u^{\varepsilon}_0(x)\end{aligned}$$ Here $\gamma>0$ is a real parameter, and $\nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big]$ stands for the martingale differential acting on $\psi \in H^1({{\mathbb T}})$ as $$\begin{aligned} \big \langle \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big], \psi \rangle = - \langle d W, \jmath^{\varepsilon}\ast [ a(u) \nabla \psi] \rangle\end{aligned}$$ The following hypotheses will be always assumed below, but in the appendix. 
- [$f:[0,1]\to {{\mathbb R}}$ is a Lipschitz function.]{} - [$D:[0,1]\to {{\mathbb R}}$ is a uniformly positive Lipschitz function.]{} - [$a \in C^2([0,1])$ is such that $a(0)=a(1)=0$, and $a(v) \neq 0$ for $v \in (0,1)$.]{} - [$\{\jmath^{\varepsilon}\}_{{\varepsilon}>0} \subset H^1({{\mathbb T}})$ is a sequence of positive mollifiers with $\int\!dx\, \jmath^{\varepsilon}(x)=1$, weakly converging to the Dirac mass centered at $0$.]{} - [For ${\varepsilon}>0$, $u^{\varepsilon}_0:\Omega \times {{\mathbb T}} \to [0,1]$ is a measurable map with respect to the product ${{\mathfrak F}}_0$ $\times$ Borel $\sigma$-algebra. Moreover there exists a Borel measurable function $u_0:{{\mathbb T}} \to [0,1]$ such that, for each $\delta>0$ $$\begin{aligned} \lim_{\varepsilon}P\big(\|u^{\varepsilon}_0 -u_0\|_{L_1({{\mathbb T}})}>\delta \big)=0\end{aligned}$$ ]{} The next proposition is an immediate consequence of Proposition \[p:uunique\] in the appendix, where we also recall the precise definitions of strong and martingale solutions to and we briefly discuss why the condition on $\gamma$ and $\jmath^{\varepsilon}$ (see Proposition \[t:uunique\] below) are needed. \[t:uunique\] Assume $\lim_{\varepsilon}{\varepsilon}^{2\gamma-1} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 =0$. Then there is an ${\varepsilon}_0>0$ depending only on $D$ and $a$, such that, for each ${\varepsilon}<{\varepsilon}_0$, there exists a unique adapted process $u^{\varepsilon}: \Omega \to C\big([0,T];H^{-1}({{\mathbb T}})\big) \cap L_2\big([0,T];H^1({{\mathbb T}})\big)$ solving in the strong stochastic sense. Moreover $u^{\varepsilon}$ admits a version in $C\big([0,T];L_1({{\mathbb T}})\big)$, and for every $t\in [0,T]$ $u^{\varepsilon}(\omega;t,x) \in [0,1]$ for $d P\,dx$ a.e. $(\omega,\,x)$. Note that the total mass of $u^{\varepsilon}$ is conserved a.s. by the stochastic flow , namely for each $t \in [0,T]$ we have $\int \!dx\, u^{\varepsilon}(t,x)=\int \!dx\, u_0^{{\varepsilon}}(x)$ $P$ a.s.. We are interested in the asymptotic limit of the probability law of the solution $u^{\varepsilon}$ to as ${\varepsilon}\to 0$. Deterministic conservation laws {#ss:2.3} ------------------------------- Let $U$ denote the compact Polish space of measurable functions $u:{{\mathbb T}} \to [0,1]$, equipped with the metric it inherits as a (closed) subset of $H^{-1}({{\mathbb T}})$, namely $$\begin{aligned} d_U (u,v) := \sup \Big\{ \langle u-v,\varphi \rangle,\, \varphi \in H^1({{\mathbb T}})\,:\: \|\varphi\|_{L_2({{\mathbb T}})}^2+ \|\nabla \varphi\|_{L_2({{\mathbb T}})}^2\le 1 \Big\}\end{aligned}$$ Fix $T>0$ and consider the formal limiting equation for $$\begin{aligned} \label{e:2.4} \nonumber & & \partial_t u + \nabla \cdot f (u) =0 \\ & & u(0,x) = u_0(x)\end{aligned}$$ In general there exist no smooth solutions to . A function $u\in C\big([0,T]; U\big)$ is a *weak solution* to iff for each $\varphi\in C^\infty([0,T]\times {{\mathbb T}})$ it satisfies $$\begin{aligned} \langle u(T),\varphi(T)\rangle -\langle u_0,\varphi(0)\rangle -\langle\langle u, \partial_t\varphi\rangle\rangle -\langle\langle f(u),\nabla \varphi\rangle\rangle = 0\end{aligned}$$ As well known [@Daf Chap. 6], existence and uniqueness of a weak *Kruzkov* solution to is guaranteed under an additional entropic condition, which is recalled in Section \[ss:2.5\] below. Then $u^{\varepsilon}$ converges in probability to such a solution both in the strong $L_p([0,T]\times {{\mathbb T}})$ and $C\big([0,T]; U\big)$ topologies. 
\[t:uconv\] Assume $\lim_{\varepsilon}{\varepsilon}^{2(\gamma-1)} \big[ \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 + {\varepsilon}\|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \big]=0$. Let $\bar{u}$ be the unique Kruzkov solution to . Then for each $p<+\infty$ and $\delta>0$ $$\begin{aligned} \lim_{\varepsilon}P \big( \|u^{\varepsilon}-\bar{u}\|_{L_p([0,T]\times {{\mathbb T}})}^p + \sup_{t\in [0,T]}d_{U}(u^{\varepsilon}(t),\bar{u}(t)) > \delta \big)=0\end{aligned}$$ Proposition \[t:uconv\] establishes a convergence result for the probability law of the process $u^{\varepsilon}$ solution to , as ${\varepsilon}\to 0$. We are then interested in large deviations principles for this probability law. We recall the definition of the large deviations bounds [@DZ2]. Let ${{\mathcal X}}$ be a Polish space and $\{{{\mathbb P}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal X}})$ a family of Borel probability measures on ${{\mathcal X}}$. For $\{\alpha_{\varepsilon}\}$ a sequence of positive reals such that $\lim_{\varepsilon}\alpha_{\varepsilon}=0$ and $I:{{\mathcal X}} \to [0,+\infty]$ a lower semicontinuous functional, we say that $\{{{\mathbb P}}^{\varepsilon}\}$ satisfies - [A *large deviations upper bound* with speed $\alpha_{\varepsilon}^{-1}$ and rate $I$, iff for each closed set ${{\mathcal C}} \subset {{\mathcal X}}$ $$\begin{aligned} \varlimsup_{\varepsilon}\alpha_{\varepsilon}\log {{\mathbb P}}^{\varepsilon}({{\mathcal C}}) \le - \inf_{u \in {{\mathcal C}}} I(u)\end{aligned}$$ ]{} - [A *large deviations lower bound* with speed $\alpha_{\varepsilon}^{-1}$ and rate $I$, iff for each open set ${{\mathcal O}} \subset {{\mathcal X}}$ $$\begin{aligned} \varliminf_{\varepsilon}\alpha_{\varepsilon}\log {{\mathbb P}}^{\varepsilon}({{\mathcal O}}) \ge - \inf_{u \in {{\mathcal O}}} I(u)\end{aligned}$$ ]{} $\{{{\mathbb P}}^{\varepsilon}\}$ is said to satisfy a *large deviations principle* if both the upper and lower bounds hold with the same rate and speed. In the next sections, we introduce some preliminary notions and state a first large deviations principle with speed ${\varepsilon}^{-2\gamma}$. We next introduce some additional preliminaries and state a second large deviations partial result, associated with the speed ${\varepsilon}^{-2\gamma+1}$. First order large deviations {#ss:2.4} ---------------------------- We first introduce a suitable space ${{\mathcal M}}$ of Young measures and recall the notion of measure-valued solution to . Consider the set ${{\mathcal N}}$ of measurable maps $\mu$ from $[0,T]\times {{\mathbb T}}$ to the set ${{\mathcal P}}([0,1])$ of Borel probability measures on $[0,1]$. The set ${{\mathcal N}}$ can be identified with the set of positive finite Borel measures $\mu$ on $[0,T]\times {{\mathbb T}} \times [0,1]$ such that $\mu(dt,\, dx,\,[0,1])=dt\,dx$, by the bijection $\mu(dt,\,dx,\,d\lambda)=dt\,dx\,\mu_{t,x}(d\lambda)$. For $\imath : [0,1]\to [0,1]$ the identity map, we set $$\begin{aligned} {{\mathcal M}} := \big\{\mu \in {{\mathcal N}}\,:\:\text{the map $[0,T]\ni t \mapsto \mu_{t,\cdot}(\imath)$ is in $C\big([0,T]; U\big)$} \big\}\end{aligned}$$ in which, for a bounded measurable function $F:[0,1]\to {{\mathbb R}}$, the notation $\mu_{t,x}(F)$ stands for $\int_{[0,1]} \! \mu_{t,x}(d\lambda) F(\lambda)$.
We endow ${{\mathcal M}}$ with the metric $$\begin{aligned} d_{{{\mathcal M}}} (\mu,\nu) := d_{\mathrm{*w}}(\mu,\nu) +\sup_{t\in [0,T]} d_{U}\big(\mu_{t,\cdot}(\imath),\nu_{t,\cdot}(\imath)\big)\end{aligned}$$ where $d_{\mathrm{*w}}$ is a distance generating the relative topology on ${{\mathcal N}}$ regarded as a subset of the finite Borel measures on $[0,T]\times {{\mathbb T}} \times [0,1]$ equipped with the $*$-weak topology. $({{\mathcal M}}, d_{{{\mathcal M}}})$ is a Polish space. An element $\mu\in {{\mathcal M}}$ is a *measure-valued solution* to iff for each $\varphi\in C^\infty \big([0,T]\times {{\mathbb T}})$ it satisfies $$\begin{aligned} \langle \mu_{T,\cdot}(\imath),\varphi(T)\rangle -\langle u_0,\varphi(0)\rangle - \langle\langle \mu(\imath), \partial_t \varphi \rangle\rangle - \langle\langle \mu(f), \nabla\varphi \rangle\rangle =0\end{aligned}$$ If $u\in C\big([0,T]; U\big)$ is a weak solution to , then the map $(t,x) \mapsto \delta_{u(t,x)}(d\lambda)\in {{\mathcal P}}([0,1])$ is a measure-valued solution. However, in general there exist measure-valued solutions which do not have this form, namely they are not a Dirac mass at a.e. $(t,x)$ (e.g. finite convex combinations of Dirac masses centered on weak solutions are measure-valued solutions). Consider the process $\mu^{\varepsilon}:\Omega \to {{\mathcal M}}$ defined by $\mu^{\varepsilon}_{t,x}:=\delta_{u^{\varepsilon}(t,x)}$. We let ${{\mathbf P}}^{\varepsilon}:=P \circ (\mu^{\varepsilon})^{-1} \in {{\mathcal P}}({{\mathcal M}})$ be the law of $\mu^{\varepsilon}$ on ${{\mathcal M}}$. In Section \[ss:3.2\] we prove \[t:ld1\] Assume $\lim_{\varepsilon}{\varepsilon}^{2(\gamma-1)}\big[\|\jmath^{\varepsilon}\|_{L_2}^2 + {\varepsilon}\|\nabla\jmath^{\varepsilon}\|_{L_2}^2\big]=0$. - [Then the sequence $\{{{\mathbf P}}^{\varepsilon}\}\subset {{\mathcal P}}({{\mathcal M}})$ satisfies a large deviations upper bound on ${{\mathcal M}}$ with speed ${\varepsilon}^{-2\gamma}$ and rate functional ${{\mathcal I}}:{{\mathcal M}} \to [0,+\infty]$ defined as $$\begin{aligned} \label{e:2.5} \nonumber & & {{\mathcal I}} (\mu) :=\sup_{\varphi \in C^\infty([0,T] \times {{\mathbb T}}) } \Big\{ \langle \mu_{T,\cdot}(\imath),\varphi(T) \rangle -\langle u_0,\varphi(0)\rangle - \langle\langle \mu(\imath), \partial_t \varphi \rangle\rangle \\ & & \phantom{ {{\mathcal I}} (\mu) :=\sup_{\varphi \in C^\infty([0,T] \times {{\mathbb T}}) } \Big\{ } - \langle\langle \mu (f), \nabla \varphi \rangle\rangle - \frac 12\, \langle\langle \mu (a^2) \nabla \varphi, \nabla \varphi \rangle\rangle \Big\}\end{aligned}$$ ]{} - [Assume furthermore that $\zeta \le u_0\le 1-\zeta$ for some $\zeta>0$. Then $\{{{\mathbf P}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal M}})$ satisfies a large deviations lower bound on ${{\mathcal M}}$ with speed ${\varepsilon}^{-2\gamma}$ and rate functional ${{\mathcal I}}$. ]{} We denote by ${{\mathbb P}}^{\varepsilon}:=P \circ (u^{\varepsilon})^{-1} \in {{\mathcal P}}\big(C\big([0,T]; U\big)\big)$ the law of $u^{\varepsilon}$ on the Polish space $(C\big([0,T]; U\big)$. 
By contraction principle [@DZ2 Theorem 4.2.1] we get \[c:ld1\] Under the same hypotheses of Theorem \[t:ld1\], the sequence $\{{{\mathbb P}}^{\varepsilon}\} \subset {{\mathcal P}}\big(C\big([0,T]; U\big)\big)$ satisfies a large deviations principle on $C\big([0,T]; U\big)$ with speed ${\varepsilon}^{-2\gamma}$ and rate functional $I:C\big([0,T]; U\big) \to [0,+\infty]$ defined as $$\begin{aligned} & & I(u) := \inf \Big\{ \int \!dt \, dx\, R_{f,a^2}\big(u(t,x),\Phi(t,x)\big), \\ & & \phantom{I(u) := \inf \Big\{ } \Phi\in L_2([0,T]\times {{\mathbb T}}) \,:\: \nabla \Phi=- \partial_t u \text{ weakly} \Big\}\end{aligned}$$ where $R_{f,a^2}:[0,1]\times {{\mathbb R}} \to [0,+\infty]$ is defined by $$\begin{aligned} R_{f,a^2}(w,c):=\inf \{\big(\nu(f)-c\big)^2 / \nu(a^2), \,\nu \in {{\mathcal P}}([0,1])\,:\: \nu(\imath)=w \}\end{aligned}$$ in which we understand $(c-c)^2/0=0$. Note that, if ${{\mathcal I}}(\mu)<+\infty$, then $\mu_{0,x}(\imath)=u_0(x)$ and analogously $I(u)<+\infty$ implies $u(0,x)=u_0(x)$. On the other hand, ${{\mathcal I}}(\mu)=0$ iff $\mu$ is a measure-valued solution to . ${{\mathcal I}}(\mu)$ quantifies indeed how $\mu$ deviates from being a measure-valued solution to in a suitable Hilbert norm, see the proof of Theorem \[t:ld1\] item (i) in Section \[ss:3.2\]. On the other hand, if $f$ is nonlinear, in general we have $I(u)<{{\mathcal I}}(\delta_u)$, so that $I$ vanishes on a set wider than the set of weak solutions to . In general there exist infinitely many measure-valued solutions to , but Proposition \[t:uconv\] implies that $\{{{\mathbf P}}^{\varepsilon}\}$ converges in probability in ${{\mathcal M}}$ to the unique Kruzkov solution $\bar{u}$ to (more precisely, to the Young measure $\bar{\mu}$ defined by $\bar{\mu}_{t,x}=\delta_{\bar{u}(t,x)}$). We thus expect that additional nontrivial large deviations principles may hold with a speed slower than ${\varepsilon}^{-2\gamma}$. Entropy-measure solutions to conservation laws {#ss:2.5} ---------------------------------------------- Let ${{\mathcal X}}$ be the same set $C([0,T];U)$ endowed with the metric $$\begin{aligned} d_{{{\mathcal X}}}(u,v):= \|u-v\|_{L_1([0,T]\times {{\mathbb T}})} + \sup_{t\in [0,T]} d_{U}(u(t), v(t))\end{aligned}$$ Convergence in ${{\mathcal X}}$ is of course strictly stronger than convergence in $C\big([0,T]; U \big)$, since convergence in $L_p([0,T]\times {{\mathbb T}})$ for $p\in [1,+\infty)$ is also required. Note that ${{\mathcal X}}$ can be identified with the subset of ${{\mathcal M}}$ $$\big\{ \mu \in {{\mathcal M}} \,:\:\mu=\delta_u,\,\text{for some $u \in C\big([0,T]; U\big)$} \big\}$$ and $d_{{{\mathcal X}}}$ is indeed a distance generating the relative topology induced by $d_{{{\mathcal M}}}$ on ${{\mathcal X}}$. In particular, once exponential tightness is established on ${{\mathcal X}}$, it is immediate to *lift* large deviations principles for the law of $u^{\varepsilon}$ on ${{\mathcal X}}$, to the corresponding law of $\delta_{u^{\varepsilon}}$ on ${{\mathcal M}}$. A function $\eta \in C^2([0,1])$ is called an *entropy* and its *conjugated entropy flux* $q\in C([0,1])$ is defined up to a constant by $q(u):=\int^u\!dv\,\eta'(v)f'(v)$. 
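For instance, taking the flux $f(u)=u(1-u)$ of the example discussed in the Introduction together with a quadratic entropy, a direct computation (given here purely as an illustration; it is not used in the sequel) yields $$\begin{aligned} \eta(u)=\frac{u^2}{2}, \qquad q(u)=\int_0^u\!dv\,v\,f'(v)=\int_0^u\!dv\,v\,(1-2v)=\frac{u^2}{2}-\frac{2\,u^3}{3}\end{aligned}$$ For smooth solutions of the conservation law the chain rule gives $\partial_t \eta(u)+\nabla \cdot q(u)=0$; the $\eta$-entropy production introduced next measures the distributional failure of this identity for weak solutions.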
For $u$ a weak solution to , for $(\eta,q)$ an entropy–entropy flux pair, the *$\eta$-entropy production* is the distribution $\wp_{\eta,u}$ acting on $C^\infty_{\mathrm{c}}\big([0,T)\times {{\mathbb T}}\big)$ as $$\begin{aligned} \label{e:2.6} \wp_{\eta,u}(\varphi):= -\langle \eta(u_0),\varphi(0)\rangle - \langle\langle \eta(u) ,\partial_t \varphi \rangle\rangle - \langle\langle q(u) , \nabla \varphi \rangle\rangle\end{aligned}$$ Let $C^{2,\infty}_{\mathrm{c}}\big([0,1]\times [0,T)\times {{\mathbb T}} \big)$ be the set of compactly supported maps $\vartheta:[0,1]\times [0,T)\times {{\mathbb T}} \ni (v,t,x) \to \vartheta(v,t,x) \in {{\mathbb R}}$ that are twice differentiable in the $v$ variable, with derivatives continuous up to the boundary of $[0,1]\times [0,T)\times {{\mathbb T}}$, and that are infinitely differentiable in the $(t,x)$ variables. For $\vartheta \in C^{2,\infty}_{\mathrm{c}}\big([0,1]\times [0,T)\times {{\mathbb T}} \big)$ we denote by $\vartheta'$ and $\vartheta''$ its partial derivatives with respect to the $v$ variable. A function $\vartheta \in C^{2,\infty}_{\mathrm{c}}\big([0,1]\times [0,T)\times {{\mathbb T}} \big)$ is called an *entropy sampler*, and its *conjugated entropy flux sampler* $Q:[0,1]\times [0,T)\times {{\mathbb T}} \to {{\mathbb R}}$ is defined up to an additive function of $(t,x)$ by $Q(u,t,x):=\int^u\!dv\,\vartheta'(v,t,x) f'(v)$. Finally, given a weak solution $u$ to , the *$\vartheta$-sampled entropy production* $P_{\vartheta,u}$ is the real number $$\begin{aligned} \label{e:2.6b} \nonumber P_{\vartheta,u} &:=& -\int\!dx\,\vartheta(u_0(x),0,x) \\ & & -\int \! dt\,dx\, \Big[\big(\partial_t \vartheta\big)\big(u(t,x),t,x \big) + \big(\partial_x Q\big)\big(u(t,x),t,x \big)\Big]\end{aligned}$$ If $\vartheta(v,t,x)=\eta(v) \varphi(t,x)$ for some entropy $\eta$ and some $\varphi \in C^{\infty}_{\mathrm{c}}\big([0,T)\times {{\mathbb T}}\big)$, then $P_{\vartheta,u}=\wp_{\eta,u}(\varphi)$. We next introduce a suitable class of solutions to for later use. We denote by $M\big([0,T)\times {{\mathbb T}}\big)$ the set of Radon measures on $[0,T) \times {{\mathbb T}}$ that we consider equipped with the vague topology. In the following, for $\wp \in M\big([0,T)\times {{\mathbb T}}\big)$ we denote by $\wp^\pm$ the positive and negative parts of $\wp$. For $u$ a weak solution to and $\eta$ an entropy, recalling we set $$\begin{aligned} \|\wp_{\eta,u}\|_{\mathrm{TV}}:= \sup \big\{\wp_{\eta,u}(\varphi),\,\varphi \in C^\infty_{\mathrm{c}}\big([0,T)\times {{\mathbb T}} \big),\, |\varphi|\le 1 \big\} \end{aligned}$$ $$\begin{aligned} \|\wp_{\eta,u}^+\|_{\mathrm{TV}}:= \sup \big\{\wp_{\eta,u}(\varphi),\,\varphi \in C^\infty_{\mathrm{c}}\big([0,T)\times {{\mathbb T}} \big),\, 0 \le \varphi \le 1 \big\}\end{aligned}$$ The following result follows by adapting [@BBMN Prop. 2.3] and [@DOW Prop. 3.1] to the setting of this paper. \[p:kin\] Let $u \in {{\mathcal X}}$ be a weak solution to . The following statements are equivalent: - [ For each entropy $\eta$, the $\eta$-entropy production $\wp_{\eta,u}$ can be extended to a Radon measure on $[0,T)\times {{\mathbb T}}$, namely $\|\wp_{\eta,u}\|_{\mathrm{TV}}<+\infty$ for each entropy $\eta$.]{} - [There exists a bounded measurable map $\varrho_u:[0,1] \ni v \to \varrho_u(v;dt,dx) \in M\big([0,T)\times {{\mathbb T}}\big)$ such that for any entropy sampler $\vartheta$ $$\begin{aligned} P_{\vartheta,u} = \int \!
dv\,\varrho_u(v;dt,dx)\,\vartheta''(v,t,x)\end{aligned}$$ ]{} A weak solution $u \in {{\mathcal X}}$ that satisfies the equivalent conditions in Proposition \[p:kin\] is called an *entropy-measure solution* to . We denote by ${{\mathcal E}} \subset {{\mathcal X}}$ the set of entropy-measure solutions to . A weak solution $u\in {{\mathcal X}}$ to is called an *entropic solution* iff for each convex entropy $\eta$ the inequality $\wp_{\eta,u} \le 0$ holds in the sense of distributions, namely $\|\wp_{\eta,u}^+\|_{\mathrm{TV}}=0$. Entropic solutions are entropy-measure solutions such that $\varrho_u(v;dt,dx)$ is a negative Radon measure for each $v \in [0,1]$. It is well known, see e.g. [@Daf Theorem 6.2.1], that for each $u_0 \in U$ there exists a unique entropic weak solution $\bar{u} \in {{\mathcal X}} \cap C\big([0,T];L_1({{\mathbb T}})\big)$ to . Such a solution is called the *Kruzkov solution* with initial datum $u_0$. Up to minor adaptations, the following class of solutions has also been introduced in [@BBMN], where some examples of such solutions are also given. \[d:splittable\] An entropy-measure solution $u \in {{\mathcal E}}$ is *entropy-splittable* iff there exist two closed sets $E^+,E^- \subset [0,T]\times {{\mathbb T}}$ such that - [For a.e. $v \in [0,1]$, the support of $\varrho_u^+(v;dt,dx)$ is contained in $E^+$, and the support of $\varrho_u^-(v;dt,dx)$ is contained in $E^-$.]{} - [The set $\big\{ t \in [0,T]\,:\: \big(\{t\} \times {{\mathbb T}} \big) \cap E^+ \cap E^- \neq \emptyset \big\}$ is nowhere dense in $[0,T]$.]{} - [There exists $\delta>0$ such that $\delta\le u\le 1-\delta$. ]{} The set of entropy-splittable solutions to is denoted by ${{\mathcal S}}$. Note that ${{\mathcal S}} \subset {{\mathcal E}} \subset {{\mathcal X}}$, and if $u_0$ is bounded away from $0,\,1$, then ${{\mathcal S}}$ is nonempty (for instance the Kruzkov solution to is in ${{\mathcal S}}$). Note also that in general ${{\mathcal S}} \not\subset BV\big([0,T]\times {{\mathbb T}} \big)$. Second order large deviations {#ss:2.6} ----------------------------- With a slight abuse of notation, we still denote by ${{\mathbb P}}^{\varepsilon}:=P \circ (u^{\varepsilon})^{-1} \in {{\mathcal P}}({{\mathcal X}})$ the law of $u^{\varepsilon}$ on the Polish space $({{\mathcal X}},d_{{{\mathcal X}}})$. Since $\int \! dx\, \jmath^{\varepsilon}(x)=1$ (see hypothesis **H4)**), we have that $\jmath^{\varepsilon}-1$ is the derivative of some smooth function $J$ on ${{\mathbb T}}$, defined up to an additive constant. We define $\|\jmath^{\varepsilon}-{{1 \mskip -5mu {\rm I}}}\|_{W^{-1,1}({{\mathbb T}})}$ as the infimum of $\|J\|_{L_1({{\mathbb T}})}$ as $J$ runs over the set of functions such that $\nabla \cdot J=\jmath^{\varepsilon}-1$. We have the following \[t:ld2\] Assume that there is no interval in $[0,1]$ where $f$ is affine, and that $\lim_{\varepsilon}{\varepsilon}^{2(\gamma-1)} \big[ \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 + {\varepsilon}\|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \big]=0$. - [Then the sequence $\{{{\mathbb P}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal X}})$ satisfies a large deviations upper bound on $({{\mathcal X}},d_{{{\mathcal X}}})$ with speed ${\varepsilon}^{-2\gamma+1}$ and rate functional $H:{{\mathcal X}} \to [0,+\infty]$ defined as $$\begin{aligned} H(u):= \begin{cases} \displaystyle{ \int \!
dv\,\varrho_u^+(v;dt,dx)\, \frac{D(v)}{a^2(v)} } & \text{if $u\in {{\mathcal E}}$ } \\ + \infty & \text{ otherwise } \end{cases}\end{aligned}$$ ]{} - [Assume furthermore $\lim_{\varepsilon}{\varepsilon}^{-3/2} \|\jmath^{\varepsilon}-{{1 \mskip -5mu {\rm I}}}\|_{W^{-1,1}({{\mathbb T}})}=0$ and $f \in C^2([0,1])$. Then the sequence $\{{{\mathbb P}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal X}})$ satisfies a large deviations lower bound on $({{\mathcal X}},d_{{{\mathcal X}}})$ with speed ${\varepsilon}^{-2\gamma+1}$ and rate functional ${\,\overline{\! H}}:{{\mathcal X}} \to [0,+\infty]$ defined as $$\begin{aligned} {\,\overline{\! H}}(u) := \sup_{\substack{{{\mathcal O}} \ni u \\ {{\mathcal O}}\,\mathrm{open} }} \:\inf_{v\in {{\mathcal O}} \cap {{\mathcal S}}} \, H(v)\end{aligned}$$ ]{} Since $H$ is lower semicontinuous on ${{\mathcal X}}$, we have ${\,\overline{\! H}} \ge H$ on ${{\mathcal X}}$ and ${\,\overline{\! H}}= H$ on ${{\mathcal S}}$, namely a large deviations principle holds on ${{\mathcal S}}$. In order to obtain a full large deviations principle, one needs to show $H(u)\ge {\,\overline{\! H}}(u)$ for $u \not\in {{\mathcal S}}$. This amounts to showing that ${{\mathcal S}}$ is $H$-dense in ${{\mathcal X}}$, namely that for $u \in {{\mathcal X}}$ such that $H(u)<+\infty$ there exists a sequence $\{u^n\} \subset {{\mathcal S}}$ converging to $u$ in ${{\mathcal X}}$ such that $H(u^n)\to H(u)$. In particular it can be shown that ${\,\overline{\! H}}(u)=H(u)$ for $u$ piecewise smooth. The main difficulties here arise from the lack of a chain rule formula connecting the measures $\varrho_u$ to the structure of $u$ itself. If $u$ has bounded variation, the Vol’pert chain rule [@AFP] provides an explicit representation of $\varrho_u$, and thus of $H(u)$; see Remark 2.7 in [@BBMN]. On the other hand, there exists $u \in {{\mathcal X}}$ with infinite variation such that $H(u)<+\infty$, see Example 2.8 in [@BBMN]. While chain rule formulas outside the BV setting are the subject of ongoing research, see e.g. [@DOW; @ADM], only partial results are available. Under the same hypotheses as in Theorem \[t:ld2\], one can show that entropy-measure solutions to are in $C([0,T];L_1({{\mathbb T}}))$, see Lemma 5.1 in [@BBMN]. By the Kruzkov uniqueness theorem [@Daf Theorem 6.2.1], we gather that $H(u)=0$ iff $u$ is the Kruzkov solution to with initial datum $u_0$. In particular, by item (i) in Theorem \[t:ld2\], large deviations principles with speeds slower than ${\varepsilon}^{-2\gamma+1}$ are trivial. Note that in Proposition \[t:uunique\], Proposition \[t:uconv\], Theorem \[t:ld1\] and Theorem \[t:ld2\] various hypotheses on $\jmath^{\varepsilon}$ are required, the most restrictive ones being those of Theorem \[t:ld2\]. It is easy to see that, if $\gamma >1$, there exist convolution kernels $\jmath^{\varepsilon}$ satisfying them all. Proofs {#s:3} ====== Convergence and bounds {#ss:3.1} ---------------------- In the following we will need to consider several different perturbations of . In the next lemma we write down explicitly an Itô formula for . The corresponding Itô formula for the perturbed equations can be obtained analogously, as the martingale term in these equations is always the same. \[l:ito\] Let $(\vartheta; Q)$ be an entropy sampler–entropy flux sampler pair for the equation (recall in particular $\vartheta(u,T,x)=0$). Then $$\begin{aligned} \label{e:ito1} \nonumber & & -\int\!dx\, \vartheta(u_0(x),0,x) -\int \!
dt\,dx\, \big[\big(\partial_t \vartheta)\big(u^{\varepsilon}(t,x),t,x \big) + \big(\partial_x Q\big)\big(u^{\varepsilon}(t,x),t,x \big)\big] \\ \nonumber & & \qquad \qquad = -\frac{{\varepsilon}}2 \langle \langle \vartheta''(u^{\varepsilon}) \nabla u^{\varepsilon}, D(u^{\varepsilon}) \nabla u^{\varepsilon}\rangle \rangle -\frac{{\varepsilon}}2 \langle \langle \partial_x \vartheta'(u^{\varepsilon}), D(u^{\varepsilon}) \nabla u^{\varepsilon}\rangle \rangle \\ \nonumber & & \qquad \qquad \phantom{=} + \frac{{\varepsilon}^{2\gamma}}{2} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle \langle \vartheta''(u^{\varepsilon}) a(u^{\varepsilon}), a(u^{\varepsilon})\rangle \rangle \\ & & \qquad \qquad \phantom{=} + \frac{{\varepsilon}^{2\gamma}}{2} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle \langle \vartheta''(u^{\varepsilon}) \nabla u^{\varepsilon}, [a'(u^{\varepsilon})]^2 \nabla u^{\varepsilon}\rangle \rangle +N^{{\varepsilon};\vartheta}(T)\end{aligned}$$ where $N^{{\varepsilon};\vartheta}$ is the martingale $$\begin{aligned} \label{e:Nvartheta} N^{{\varepsilon};\vartheta}(t) := -{\varepsilon}^\gamma \int_0^t \big\langle \jmath^{\varepsilon}\ast \big[a(u^{\varepsilon}) \vartheta''(u^{\varepsilon})\nabla u^{\varepsilon}+ a(u^{\varepsilon}) \partial_x \vartheta'(u^{\varepsilon})\big], dW \big\rangle\end{aligned}$$ Moreover the quadratic variation of $N^{{\varepsilon},\vartheta}$ enjoys the bound $$\label{e:youngquad} \big[ N^{{\varepsilon};\vartheta}, N^{{\varepsilon};\vartheta} \big](t)\le {\varepsilon}^{2\gamma} \big\| a(u^{\varepsilon}) \big[\vartheta''(u^{\varepsilon}) \nabla u^{\varepsilon}+\partial_x \vartheta'(u^{\varepsilon}) \big] \big\|_{L_2([0,t]\times {{\mathbb T}})}^2$$ Equation follows, up to minor manipulations, from Itô formula [@DZ Theorem 4.17] for the map $$[0,T]\times U \ni (t,u) \mapsto \int\!dx\,\vartheta(u(x),t,x) \in {{\mathbb R}}$$ By and , the quadratic variation of $N^{{\varepsilon};\vartheta}$ is given by $$\big[ N^{{\varepsilon};\vartheta}, N^{{\varepsilon};\vartheta} \big](t) = {\varepsilon}^{2\gamma} \big\|\jmath^{\varepsilon}\ast \big\{ a(u^{\varepsilon}) \big[\vartheta''(u^{\varepsilon}) \nabla u^{\varepsilon}+\partial_x \vartheta'(u^{\varepsilon}) \big] \big\}\big\|_{L_2([0,t]\times {{\mathbb T}})}^2$$ so that the inequality stated in the lemma follows by Young inequality for convolutions and hypothesis **H4)**. \[l:dismart\] Let $\zeta,\,T>0$, let $X$ be a real, continuous, local, square integrable supermartingale starting from $0$, and let $\tau \le T$ be a stopping time. Let $F: {{\mathbb R}} \to {{\mathbb R}}^+$ be such that: $$\begin{aligned} \label{e:Fmart} \frac{F(x)}{F(\zeta)} \le 2\frac{x}{\zeta}-1, \quad \text{for all $x >\zeta$}\end{aligned}$$ Then: $$\begin{aligned} \label{e:bernstein.a} {{\mathbb P}} \Big(\sup_{0\le t \le \tau} X(t) \ge \zeta, \, \big[X, X \big](\tau) \le F(\sup_{t\le \tau} X(t)) \Big) \le \exp\Big[-\frac{\zeta^2}{2 F(\zeta)}\Big]\end{aligned}$$ Note that the hypotheses on $F$ are satisfied by any nonincreasing function, and by functions with affine or subaffine behavior. Lemma \[l:dismart\] provides an elementary generalization of the well known Bernstein inequality [@RY page 153], which deals with the case of constant $F$. Hypotheses on $F$ imply that the map $G_\zeta: x \to \frac{\zeta}{F(\zeta)}x -\frac{1}{2}\frac{\zeta^2}{F(\zeta)^2}F(x)$ satisfies $G_\zeta(x)\ge G_\zeta(\zeta)=\frac{\zeta^2}{2 F(\zeta)}$ for all $x \ge \zeta$. 
Therefore: $$\begin{aligned} & & {{\mathbb P}} \Big(\sup_{t\le \tau} X(t) \ge \zeta, \, \big[X, X \big](\tau) \le F(\sup_{t\le \tau} X(t))\Big) \\ & & \le {{\mathbb P}} \Big(e^{\frac{\zeta}{F(\zeta)} \sup_{t\le \tau} X(t) -\frac{1}{2}\frac{\zeta^2}{F(\zeta)^2} F(\sup_{t\le \tau} X(t))} \ge e^{\frac{1}{2}\frac{\zeta^2}{F(\zeta)}}, \\ & & \phantom{ \le {{\mathbb P}} \Big( } \big[X, X \big](\tau) \le F(\sup_{t\le \tau} X(t)) \Big) \\ & & \le {{\mathbb P}} \Big(\sup_{t\le T} e^{\frac{\zeta}{F(\zeta)}X(t) -\frac{1}{2} \frac{\zeta^2}{F(\zeta)^2}[X,X](t)} \ge e^{\frac{1}{2}\frac{\zeta^2}{F(\zeta)}}\Big) \le e^{-\frac{\zeta^2}{2 F(\zeta)}}.\end{aligned}$$ where in the last line we applied the maximal inequality for positive supermartingales [@RY page 58], to the supermartingale $e^{\frac{\zeta}{F(\zeta)}X(t) -\frac{1}{2} \frac{\zeta^2}{F(\zeta)^2}[X,X](t)}$. The next lemma provides a key a priori bound. \[l:supexpnabla\] For ${\varepsilon}>0$, let $E^{\varepsilon}\in L_2\big([0,T];H^1({{\mathbb T}})\big)$ and let ${{\mathbb Q}}^{\varepsilon}\in {{\mathcal P}}\big(C\big([0,T]; U\big)\big)$ be any martingale solution to the Cauchy problem $$\begin{aligned} \nonumber & & d u = \big[- \nabla \cdot f(u) +\frac{{\varepsilon}}{2} \nabla \cdot \big(D(u) \nabla u \big) - \nabla \cdot( a(u) E^{\varepsilon})\big]\,dt \\ \nonumber & & \phantom{ d u =} +{\varepsilon}^{\gamma}\, \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big] \\ & & u(0,x) = u_0^{\varepsilon}(x) \label{e:stoc2b}\end{aligned}$$ Assume $\|\nabla E^{\varepsilon}\|_{L_2([0,T]\times {{\mathbb T}})} \le C_0$ for some constant $C_0$ independent of ${\varepsilon}$, and $\lim_{\varepsilon}{\varepsilon}^{2\gamma-1}(\|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2+ {\varepsilon}\| \nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2)=0$. 
Then there exist $C,\, {\varepsilon}_0>0$ such that for any ${\varepsilon}<{\varepsilon}_0$: $$\label{e:superexp.a} {\varepsilon}\| \nabla u\|_{L_2([0,T] \times {{\mathbb T}})}^2 \le C+N^{\varepsilon}(T,u) \qquad \text{for ${{\mathbb Q}}^{\varepsilon}$ a.e.\ $u$ }$$ where $N^{\varepsilon}$ is a ${{\mathbb Q}}^{\varepsilon}$-martingale starting from $0$ and satisfying $$\label{e:superexp.d} {{\mathbb Q}}^{\varepsilon}\big(\sup_{t \le T} N^{\varepsilon}(t) >\zeta \big) \le \exp \Big\{- \frac{\zeta^2}{{\varepsilon}^{2\gamma-1}C (1+\zeta)}\Big\}$$ Itô formula for the map $U \ni u \mapsto \int\!dx\, u^2(x) \in {{\mathbb R}}$ can be obtained as in Lemma \[l:ito\], so that $$\begin{aligned} \nonumber & & \|u(T)\|_{L_2({{\mathbb T}})} - \|u_0\|_{L_2({{\mathbb T}})} +{\varepsilon}\langle \langle \nabla u, D(u) \nabla u \rangle \rangle \\ & & \quad = - \langle \langle A(u),\nabla E^{\varepsilon}\rangle \rangle + {\varepsilon}^{2\gamma} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2\, \| a(u) \|_{L_2([0,T] \times {{\mathbb T}})}^2 \\ & & \phantom{\quad =} + {\varepsilon}^{2\gamma} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2\, \| a'(u) \nabla u\|_{L_2([0,T] \times {{\mathbb T}})}^2 +N^{{\varepsilon}}(T,u)\end{aligned}$$ where $A \in C^1([0,1])$ is any antiderivative of $a(\cdot)$ and $N^{\varepsilon}$ is a ${{\mathbb Q}}^{\varepsilon}$-martingale, which -reasoning as in the proof of - satisfies $$\begin{aligned} \label{e:quadvar3} \big[N^{{\varepsilon}}, N^{{\varepsilon}}\big](T,u) \le 4\,{\varepsilon}^{2\gamma} \| a(u) \nabla u \|_{L_2([0,T] \times {{\mathbb T}})}^2\end{aligned}$$ By **H2)**, **H3)** and the hypotheses of this lemma, there exist $C_1,\,{\varepsilon}_0>0$ such that, for each ${\varepsilon}\le {\varepsilon}_0$ and $v \in [0,1]$ $${\varepsilon}^{2\gamma} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 [a'(v)]^2 \le \frac{{\varepsilon}}2 D(v)$$ $$|\langle \langle A(u),\nabla E^{\varepsilon}\rangle \rangle|+{\varepsilon}^{2\gamma} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2\, \|a(u)\|_{L_2([0,T] \times {{\mathbb T}})}^2 \le C_1$$ Therefore, since $|u_0|\le 1$ $$\begin{aligned} \label{e:parest} \frac{{\varepsilon}}2 \langle \langle \nabla u, D(u) \nabla u \rangle \rangle \le 1+C_1 + N^{{\varepsilon}}(T,u)\end{aligned}$$ and thus since $D$ is uniformly positive. By and , there exists a constant $C_2>0$ such that $$\big[N^{{\varepsilon}}, N^{{\varepsilon}}\big](T,u) \le {\varepsilon}^{2\gamma}C_2 \langle \langle \nabla u, D(u) \nabla u \rangle \rangle \le 2\,C_2 \,{\varepsilon}^{2\gamma-1} \big[1+C_1+ N^{{\varepsilon}}(T) \big]$$ This inequality allows the application of Lemma \[l:dismart\] for the martingale $N^{{\varepsilon}}$ with $$F(\zeta)=2\,C_2 \,{\varepsilon}^{2\gamma-1} (1+C_1+ \zeta)$$ which clearly satisfies the condition . The bound then follows straightforwardly. The following lemma provides a stability result for . It will be repeatedly used to evaluate the effects of the Girsanov terms appearing in when absolutely continuous perturbations of ${{\mathbb P}}^{\varepsilon}$ are considered. \[l:stab\] For each ${\varepsilon}>0$, let $v^{\varepsilon}:{{\mathcal X}} \to {{\mathcal X}} \cap L_2([0,T];H^1({{\mathbb T}}))$ and $G^{\varepsilon}:{{\mathcal X}} \times {{\mathcal X}} \to L_2([0,T]\times {{\mathbb T}})$ be adapted maps (with respect to the standard filtrations of ${{\mathcal X}}$ and ${{\mathcal X}} \times {{\mathcal X}}$ respectively). 
Let ${{\mathbb Q}}^{\varepsilon}\in {{\mathcal P}}({{\mathcal X}})$ be any martingale solution to the stochastic Cauchy problem in the unknown $u$ $$\begin{aligned} \label{e:stocgir} \nonumber & & d u = \big[- \nabla \cdot f(u) +\frac{{\varepsilon}}{2} \nabla \cdot \big(D(u) \nabla u \big)+ \partial_t v^{\varepsilon}(u)+\nabla \cdot f(v^{\varepsilon}(u)) \\ \nonumber & & \phantom{d u = \big[} -\frac{{\varepsilon}}{2} \nabla \cdot \big(D(v^{\varepsilon}(u)) \nabla v^{\varepsilon}(u) \big)+G^{\varepsilon}(u,v^{\varepsilon}(u)) \big]\,dt \\ \nonumber & & \phantom{d u =} +{\varepsilon}^{\gamma}\, \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big] \\ & & u(0,x) = u^{\varepsilon}_0(x)\end{aligned}$$ Suppose - [$\lim_{\varepsilon}{\varepsilon}^{2(\gamma-1)}\big[\|\jmath^{\varepsilon}\|_{L_2}^2 + {\varepsilon}\|\nabla\jmath^{\varepsilon}\|_{L_2}^2\big]=0$.]{} - [There exist adapted processes $G_1^{\varepsilon},\,G_2^{\varepsilon},\,G_3^{\varepsilon}:{{\mathcal \chi}} \times {{\mathcal \chi}} \to L_2([0,T]\times {{\mathbb T}})$ such that $G^{\varepsilon}(u,v)(t,x) =G_1^{\varepsilon}(u,v)(t,x) + \nabla \cdot G_2^{\varepsilon}(u,v)(t,x) + \nabla \cdot G_3^{\varepsilon}(u,v)(t,x)$, and $$|G_3^{\varepsilon}(u,v^{\varepsilon}(u))(t,x)| \le G_4^{\varepsilon}(u)(t,x) |u-v^{\varepsilon}(u)| \qquad \text{for ${{\mathbb Q}}^{\varepsilon}$ a.e.\ $u$}$$ for some adapted process $G_4^{\varepsilon}: {{\mathcal X}} \to L_2([0,T]\times {{\mathbb T}})$.]{} - [Let $G_1,\,G_2$ be as in (ii). Then for each $\delta>0$ $$\begin{aligned} \nonumber & & \lim_{\varepsilon}{{\mathbb Q}}^{\varepsilon}\big(\|v^{\varepsilon}(u)(0)-u^{\varepsilon}_0\|_{L_1({{\mathbb T}})} + \|G_1^{\varepsilon}(u,v^{\varepsilon}(u)) \|_{L_1([0,T]\times {{\mathbb T}})} \\ & & \phantom{\lim_{\varepsilon}Q^{\varepsilon}\big(} + {\varepsilon}^{-1}\|G_2^{\varepsilon}(u,v^{\varepsilon}(u))\|_{L_2([0,T]\times {{\mathbb T}})}> \delta\big)=0\end{aligned}$$ ]{} - [Let $G_4$ be as in (ii). Then $$\begin{aligned} \lim_{\ell \to +\infty} \varlimsup_{\varepsilon}{{\mathbb Q}}^{\varepsilon}\big(\|G_4^{\varepsilon}(u,v^{\varepsilon}(u)) \|_{L_2([0,T]\times {{\mathbb T}})} +{\varepsilon}\|\nabla u \|_{L_2([0,T]\times {{\mathbb T}})} >\ell \big)=0 \end{aligned}$$ ]{} Then for each $\delta>0$ $$\begin{aligned} \label{e:l1stab} \lim_{\varepsilon}{{\mathbb Q}}^{\varepsilon}\big( \|u-v^{\varepsilon}(u)\|_{L_\infty([0,T];L_1({{\mathbb T}}))}>\delta\big)=0\end{aligned}$$ We denote by $z^{\varepsilon}(t,x)\equiv z^{\varepsilon}(u)(t,x) := u(t,x)-v^{\varepsilon}(u)(t,x) \in [-1,1]$. Let $l \in C^2([-1,1])$. 
For each ${\varepsilon},\,t>0$ let us define (in the following we omit the dependence of $v^{\varepsilon}$ and $z^{\varepsilon}$ on the $u$ variable) $$\begin{aligned} \label{e:extl1} \nonumber N^{{\varepsilon};l}(t,u) & := & \int\!dx \, [l(z^{\varepsilon}(t))-l(z^{\varepsilon}(0))]- \int_0^t\!ds \Big[ \langle l''(z^{\varepsilon}) \nabla z^{\varepsilon}, f(u)-f(v^{\varepsilon}) \rangle \\ \nonumber & & - \frac{{\varepsilon}}2 \langle l''(z^{\varepsilon})\nabla z^{\varepsilon}, D(v^{\varepsilon})\nabla z^{\varepsilon}\rangle -\frac{{\varepsilon}}2 \langle l''(z^{\varepsilon})\nabla z^{\varepsilon}, [D(u)-D(v^{\varepsilon})]\nabla u\rangle \\ \nonumber & & +\langle l'(z^{\varepsilon}), G_1^{\varepsilon}(u,v^{\varepsilon})\rangle -\langle l''(z^{\varepsilon})\nabla z^{\varepsilon}, G_2^{\varepsilon}(u,v^{\varepsilon})\rangle \\ \nonumber & & -\langle l''(z^{\varepsilon})\nabla z^{\varepsilon}, G_3^{\varepsilon}(u,v^{\varepsilon})\rangle + \frac{{\varepsilon}^{2\gamma}}{2} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle l''(z^{\varepsilon}) a(u), a(u)\rangle \\ & & + \frac{{\varepsilon}^{2\gamma}}{2} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle l''(z^{\varepsilon}) \nabla u, [a'(u)]^2 \nabla u\rangle \Big]\end{aligned}$$ By the Itô formula, $N^{{\varepsilon};l}$ is a ${{\mathbb Q}}^{\varepsilon}$-martingale starting at $0$, and applying Young inequality for convolutions (analogously to ) $$\begin{aligned} \label{e:Nlquad} \big[N^{{\varepsilon};l},N^{{\varepsilon};l}\big](t,u) \le {\varepsilon}^{2\gamma} \| a(u) l''(z^{\varepsilon}) \nabla z^{\varepsilon}\|_{L_2([0,t] \times {{\mathbb T}})}^2\end{aligned}$$ We now choose $l$ convex and define $$\begin{aligned} R^{{\varepsilon};l}(t)\equiv R^{{\varepsilon};l}(u)(t):= \big[\int\!dx\,\langle l''(z^{\varepsilon}(t))\nabla z^{\varepsilon}(t),\nabla z^{\varepsilon}(t)\rangle \big]^{1/2}\end{aligned}$$ Since $D$ and $f$ are Lipschitz, and $D$ is uniformly positive, by and the Cauchy-Schwarz inequality we gather $$\begin{aligned} \label{e:extim1} \nonumber & & \int\!dx\,l(z^{\varepsilon}(t)) -l(z^{\varepsilon}(0))\le -c\, {\varepsilon}\, [R^{{\varepsilon};l}(t)]^2 + C_1\, \|\sqrt{l''(z^{\varepsilon})}z^{\varepsilon}\|_{L_\infty([0,T]\times {{\mathbb T}})} R^{{\varepsilon};l}(t) \\ \nonumber & & \qquad \quad + C_1{\varepsilon}\| \nabla u\|_{L_2([0,T]\times {{\mathbb T}})}\, \|\sqrt{l''(z^{\varepsilon})}z^{\varepsilon}\|_{L_\infty([0,T]\times {{\mathbb T}})} R^{{\varepsilon};l}(t) \\ \nonumber & & \qquad \quad + \|l'(z^{\varepsilon})\|_{L_\infty([0,T]\times {{\mathbb T}})} \, \|G_1^{\varepsilon}(u,v^{\varepsilon})\|_{L_1([0,T]\times {{\mathbb T}})} \\ \nonumber & & \qquad \quad +\|G_2^{\varepsilon}(u,v^{\varepsilon})\|_{L_2([0,T]\times {{\mathbb T}})} \|\sqrt{l''(z^{\varepsilon})}\|_{L_\infty([0,T]\times {{\mathbb T}})} R^{{\varepsilon};l}(t) \\ \nonumber & & \qquad \quad + \|G_4^{\varepsilon}(u)\|_{L_2([0,T]\times {{\mathbb T}})}\, \|\sqrt{l''(z^{\varepsilon})}z^{\varepsilon}\|_{L_\infty([0,T]\times {{\mathbb T}})} R^{{\varepsilon};l}(t) \\ \nonumber & & \qquad \quad + C_1 {\varepsilon}^{2\gamma} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \|l''(z^{\varepsilon})\|_{L_\infty([0,T]\times {{\mathbb T}})} \\ & & \qquad \quad + C_1 {\varepsilon}^{2\gamma} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \| l''(z^{\varepsilon})\|_{L_\infty([0,T]\times {{\mathbb T}})}\| \nabla u\|_{L_2([0,T]\times {{\mathbb T}})}^2 +N^{{\varepsilon};l}(t)\end{aligned}$$ for some constants $c,\,C_1>0$ independent of ${\varepsilon}$ and $l$.
For arbitrary $\zeta>0$ to be chosen below, we now consider $l(Z)=\sqrt{Z^2+{\varepsilon}^2\zeta^2}$ so that $$\begin{aligned} |Z|\le l(Z) \le |Z|+{\varepsilon}\zeta & & \qquad \max_{Z \in [-1,1]} |l'(Z)| \le 1 \\ \max_{Z \in [-1,1]} |l''(Z)| \le {\varepsilon}^{-1} \zeta^{-1} & & \qquad \max_{Z \in [-1,1]} |l''(Z)\,Z^2|\le \sqrt{2} {\varepsilon}\zeta\end{aligned}$$ Using these bounds in the right hand side of , we get for some $C_2>0$ $$\begin{aligned} \label{e:extim2} \nonumber & & \int\!dx\,|z^{\varepsilon}(t)| \le \int\!dx\,|z^{\varepsilon}(0)| + C_2 \|G_1^{\varepsilon}\|_{L_1([0,T]\times {{\mathbb T}})} \\ \nonumber & & \qquad +C_2 \big[1 + {\varepsilon}^2 \|\nabla u\|_{L_2([0,T]\times {{\mathbb T}})}^2 +\|G_4^{\varepsilon}(u)\|_{L_2([0,T]\times {{\mathbb T}})}^2\big] \zeta \\ \nonumber & & \qquad +C_2\zeta^{-1} \big[ {\varepsilon}^{-2} \|G_2^{\varepsilon}\|_{L_2([0,T]\times {{\mathbb T}})}^2 +{\varepsilon}^{2\gamma-1} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \\ \nonumber & & \qquad \phantom{+C_2\zeta^{-1} \big[ } +{\varepsilon}^{2(\gamma-1)} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \|\nabla u \|_{L_2([0,T]\times {{\mathbb T}})}^2\big] \\ & & \qquad - \frac{c\,{\varepsilon}}2 [R^{{\varepsilon};l}(t)]^2 + N^{{\varepsilon};l}(t)\end{aligned}$$ where we have also used the straightforward inequality $\alpha R - \frac{c{\varepsilon}}2 R^2 \le \frac{\alpha}{2c{\varepsilon}}$ for a suitable $\alpha \in {{\mathbb R}}$. Recalling , for some $C_3>0$ independent of ${\varepsilon},\,\zeta$ $$\begin{aligned} \big[N^{{\varepsilon};l},N^{{\varepsilon};l}\big](t,u) \le C_3\, {\varepsilon}^{2\gamma} \zeta^{-1} [R^{{\varepsilon};l}(t)]^2\end{aligned}$$ so that, by maximal inequality for positive supermartingales [@RY page 58], for each $\delta>0$ the term in the last line of satisfies $$\begin{aligned} \label{e:martexp2} \nonumber & & {{\mathbb Q}}^{\varepsilon}\big (\sup_{s \le t} N^{{\varepsilon};l}(s) - \frac{c\,{\varepsilon}}2 [R^{{\varepsilon};l}(s)]^2 >\delta \big) \le \\ \nonumber & & {{\mathbb Q}}^{\varepsilon}\Big(\sup_{s \le t} \exp\big(\frac{2\,c}{C_3} {\varepsilon}^{1-2\gamma} \zeta\, N(s) -\frac{2\,c^2}{C_3^2} {\varepsilon}^{2(1-2\gamma)} \zeta^2\, [N,N](s)\big) > \\ & & \phantom{\phantom{\le} {{\mathbb Q}}^{\varepsilon}\big(} \exp(\frac{2\,c}{C_3} {\varepsilon}^{1-2\gamma} \zeta\, \delta) \Big) \le \exp(-\frac{2\,c}{C_3} {\varepsilon}^{-2\gamma+1} \zeta\, \delta)\end{aligned}$$ Furthermore for $\ell>0$ $$\begin{aligned} \nonumber & & {{\mathbb Q}}^{\varepsilon}\big( \sup_t \int\!dx\,|z^{\varepsilon}(t)| >\delta \big) \le {{\mathbb Q}}^{\varepsilon}\Big( \sup_t \int\!dx\,|z^{\varepsilon}(t)| >\delta, \\ & & \qquad \qquad \|G_4^{\varepsilon}(u,v^{\varepsilon}(u)) \|_{L_2([0,T]\times {{\mathbb T}})}+{\varepsilon}\|\nabla u \|_{L_2([0,T]\times {{\mathbb T}})} \le \ell \Big) + o_{\ell,{\varepsilon}}\end{aligned}$$ where $\lim_\ell \varlimsup_{\varepsilon}o_{\ell,{\varepsilon}}=0$ by hypotheses (iv). Therefore, using hypotheses (i) and (iii) and the estimate in , the result easily follows as we let ${\varepsilon}\to 0$, then $\zeta \to 0$ and finally $\ell \to +\infty$. The following result will be used to provide exponential tightness in stronger topologies in the next sections. 
\[l:exptight1\] There exists a sequence $\{K_\ell\}$ of compact subsets of $C\big([0,T]; U\big)$ such that $$\begin{aligned} \lim_\ell \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma} \log {{\mathbb P}}^{\varepsilon}(K_\ell^c)=-\infty\end{aligned}$$ We refer to the criterion in [@FK Corollary 4.17] to establish the exponential tightness of $\{{{\mathbb P}}^{\varepsilon}\}$. Let $d \in C^1([0,1])$ be any antiderivative of $D$. Integrating twice by parts the diffusive term in the weak formulation of (see and ), for each $\varphi \in C^\infty({{\mathbb T}})$ the map $E^{{\varepsilon};\varphi}:[0,T]\times C\big([0,T]; U\big) \to {{\mathbb R}}$ defined by $$\begin{aligned} & & E^{{\varepsilon};\varphi}(t;u):=\exp\Big[ {\varepsilon}^{-2\gamma} \langle u(t),\varphi \rangle - {\varepsilon}^{-2\gamma} \langle u(0),\varphi \rangle \\ & & \qquad \qquad - {\varepsilon}^{-2\gamma} \int_0^t \! ds\, \langle f(u) +\frac{{\varepsilon}}2 d(u),\Delta \varphi \rangle -\frac 12 \langle \jmath \ast (a(u) \nabla \varphi), \jmath \ast (a(u) \nabla \varphi) \rangle \Big]\end{aligned}$$ is a martingale. For a fixed $\varphi \in C^\infty({{\mathbb T}})$, the following bound on the integral term in the definition of $E^{{\varepsilon};\varphi}$ is easily established $$\sup_{v \in U} \,\Big| \langle f(v) +\frac{{\varepsilon}}2 d(v),\Delta \varphi \rangle -\frac 12 \langle \jmath \ast (a(v) \nabla \varphi), \jmath \ast (a(v) \nabla \varphi) \rangle \Big| <+\infty$$ Furthermore the family of maps $l^\varphi : U \ni v \to \langle v,\varphi \rangle \in {{\mathbb R}}$ is closed under addition, separates points in $U$ and satisfies $c\, l^{\varphi}= l^{c\varphi}$ for $c \in {{\mathbb R}}$. All the hypotheses of the criterion in [@FK Corollary 4.17] are therefore satisfied. with ${{\mathbb Q}}^{\varepsilon}\equiv {{\mathbb P}}^{\varepsilon}:=P\circ (u^{\varepsilon})^{-1}$, and $v^{\varepsilon}$ as the solution to the (deterministic) Cauchy problem $$\begin{aligned} & & \partial_t v = - \nabla \cdot f(v)+\frac{{\varepsilon}}{2} \nabla \cdot \big( D(v) \nabla v \big) \\ & & v(0,x) = u_0(x)\end{aligned}$$ ${{\mathbb P}}^{\varepsilon}$ and $v^{\varepsilon}$ fulfill the hypotheses Lemma \[l:stab\], since $G^{\varepsilon}\equiv 0$ and Lemma \[l:supexpnabla\] holds (with $E^{\varepsilon}\equiv 0$). As well known [@Daf Chap. 6.3], $v^{\varepsilon}\to \bar{u}$ in $L_p([0,T]\times {{\mathbb T}})$. Therefore the statement of the proposition follows by the same Lemma \[l:stab\] and the fact that ${{\mathbb P}}^{\varepsilon}$ is (exponentially) tight in $C\big([0,T]; U\big)$, as proved in Lemma \[l:exptight1\]. Large deviations with speed ${\varepsilon}^{-2\gamma}$ {#ss:3.2} ------------------------------------------------------ In this section we prove Theorem \[t:ld1\]. \[l:exptight2\] There exists a sequence $\{{{\mathcal K}}_\ell\}$ of compact subsets of ${{\mathcal M}}$ such that $$\label{e:exptight2} \lim_\ell \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma} \log {{\mathbf P}}^{\varepsilon}({{\mathcal K}}_\ell^c)=-\infty$$ Let the sequence $\{K_\ell\}$ of compact subsets of $C\big([0,T]; U\big)$ be as in Lemma \[l:exptight1\]. 
For $\ell>0$ consider the set $$\begin{aligned} \tilde{{{\mathcal K}}}_\ell:= \{\mu \in {{\mathcal M}}\,:\:\mu_{t,x}=\delta_{u(t,x)}\, \text{for some $u \in K_\ell$} \}\end{aligned}$$ Then ${{\mathbf P}}^{\varepsilon}(\tilde{{{\mathcal K}}}_\ell)={{\mathbb P}}^{\varepsilon}(K_\ell)$ and by Lemma \[l:exptight1\] $$\lim_\ell \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma} \log {{\mathbf P}}^{\varepsilon}(\tilde{{{\mathcal K}}}_\ell^c)=-\infty$$ On the other hand $\tilde{{{\mathcal K}}}_\ell$ is precompact in $({{\mathcal M}},d_{{{\mathcal M}}})$ for any $\ell$, and thus the Lemma is proved by taking ${{\mathcal K}}_\ell$ to be the closure of $\tilde{{{\mathcal K}}}_\ell$. Let $d \in C^2([0,1])$ be any antiderivative of $D$. For ${\varepsilon}>0$ and $\varphi \in C^\infty([0,T]\times {{\mathbb T}})$, define the map $ {{\mathcal N}}^{{\varepsilon};\varphi} : [0,T]\times {{\mathcal M}} \to {{\mathbb R}}$ by $$\begin{aligned} \nonumber {{\mathcal N}}^{{\varepsilon};\varphi}(t,\mu) & := & \langle \mu_{T,\cdot}(\imath),\varphi(T)\rangle -\langle u_0,\varphi(0) \rangle \\ \phantom{ {{\mathcal N}}^\varphi }& \phantom{:} & - \int_0^t\!ds\,\big[ \langle \mu(\imath), \partial_t \varphi \rangle - \langle \mu(f), \nabla \varphi \rangle +\frac{{\varepsilon}}2 \langle \mu(d),\Delta \varphi \rangle \big]\end{aligned}$$ ${{\mathbf P}}^{\varepsilon}$ is concentrated on the set $$\{\mu \in {{\mathcal M}}\,:\: \mu=\delta_u\,\text{for some $u \in C\big([0,T]; U\big)$}\}$$ so that ${{\mathcal N}}^{{\varepsilon};\varphi}$ is a ${{\mathbf P}}^{\varepsilon}$-martingale. Indeed an integration by parts shows that ${{\mathcal N}}^{{\varepsilon};\varphi}(t,\delta_u)$ is the martingale term appearing in the very definition of *martingale solution to* , see the appendix. Reasoning as in , we have $$\big[{{\mathcal N}}^{{\varepsilon};\varphi},{{\mathcal N}}^{{\varepsilon};\varphi}\big](t,\mu) \le {\varepsilon}^{2\gamma} \int_0^t\!ds\, \langle \mu(a^2)\nabla \varphi, \nabla \varphi\rangle$$ Therefore, the map ${{\mathcal Q}}^{{\varepsilon};\varphi} : [0,T]\times {{\mathcal M}} \to {{\mathbb R}}$ defined by $$\begin{aligned} {{\mathcal Q}}^{{\varepsilon};\varphi}(t,\mu) := \exp \Big\{{{\mathcal N}}^{{\varepsilon};\varphi}(t,\mu) - \frac{{\varepsilon}^{2\gamma}}{2} \int_0^t\!ds\, \langle \mu(a^2)\nabla \varphi, \nabla \varphi\rangle \Big\}\end{aligned}$$ is a continuous ${{\mathbf P}}^{\varepsilon}$-supermartingale, with ${{\mathcal Q}}^{{\varepsilon};\varphi}(0,\mu)=1$ and ${{\mathcal Q}}^{{\varepsilon};\varphi}(T,\mu)>0$, ${{\mathbf P}}^{\varepsilon}$ a.s.. 
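Note that, since ${{\mathcal Q}}^{{\varepsilon};\varphi}$ is a positive supermartingale with ${{\mathcal Q}}^{{\varepsilon};\varphi}(0,\cdot)=1$, we have $$\begin{aligned} {{\mathbb E}}^{{{\mathbf P}}^{\varepsilon}}\big({{\mathcal Q}}^{{\varepsilon};\varphi}(T,\cdot)\big) \le {{\mathbb E}}^{{{\mathbf P}}^{\varepsilon}}\big({{\mathcal Q}}^{{\varepsilon};\varphi}(0,\cdot)\big)=1\end{aligned}$$ This elementary bound is the one used in the last inequality of the next display.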
For an arbitrary Borel set ${{\mathcal A}} \subset {{\mathcal M}}$ we then have $$\begin{aligned} \nonumber {{\mathbf P}}^{\varepsilon}({{\mathcal A}}) & = & {{\mathbb E}}^{{{\mathbf P}}^{\varepsilon}}\big({{1 \mskip -5mu {\rm I}}}_{{{\mathcal A}}}(\cdot) {{\mathcal Q}}^{{\varepsilon};\varphi}(T,\cdot) [{{\mathcal Q}}^{{\varepsilon};\varphi}(T,\cdot)]^{-1} \big) \\ &\le & \sup_{\mu \in {{\mathcal A}}} [{{\mathcal Q}}^{{\varepsilon};\varphi}(T,\mu)]^{-1} {{\mathbb E}}^{{{\mathbf P}}^{\varepsilon}}\big( {{1 \mskip -5mu {\rm I}}}_{{{\mathcal A}}}(\cdot) {{\mathcal Q}}^{{\varepsilon};\varphi}(T,\cdot) \big) \le \sup_{\mu \in {{\mathcal A}}} [{{\mathcal Q}}^{{\varepsilon};\varphi}(T,\mu)]^{-1}\end{aligned}$$ Since this inequality holds for each $\varphi$, we can evaluate it replacing $\varphi$ with ${\varepsilon}^{-2\gamma}\varphi$, thus obtaining $$\begin{aligned} \nonumber {\varepsilon}^{2\gamma} \log {{\mathbf P}}^{\varepsilon}({{\mathcal A}}) & \le & - \inf_{\mu \in {{\mathcal A}}} \Big\{ \langle \mu_{T,\cdot}(\imath),\varphi(T)\rangle -\langle u_0,\varphi(0) \rangle - \langle \langle \mu(\imath), \partial_t \varphi \rangle \rangle \\ \nonumber & & \quad - \langle \langle \mu(f), \nabla \varphi \rangle \rangle -\frac{{\varepsilon}}2 \langle \langle \mu(d), \Delta \varphi \rangle \rangle -\frac 12 \langle \langle \mu(a^2)\nabla \varphi, \nabla \varphi\rangle\rangle \Big\} \\ \nonumber &\le & - \inf_{\mu \in {{\mathcal A}}} \Big\{ \langle \mu_{T,\cdot}(\imath),\varphi(T)\rangle -\langle u_0,\varphi(0) \rangle - \langle \langle \mu(\imath), \partial_t \varphi \rangle \rangle \\ & & \quad - \langle \langle \mu(f), \nabla \varphi \rangle \rangle -\frac 12 \langle \langle \mu(a^2)\nabla \varphi, \nabla \varphi\rangle\rangle \Big\} + {\varepsilon}\, C_{d,\varphi}\end{aligned}$$ for some constant $C_{d,\varphi}$ depending only on $d$ and $\varphi$. Taking the limsup for ${\varepsilon}\to 0$, the last term vanishes. Optimizing on $\varphi$: $$\begin{aligned} \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma} \log {{\mathbf P}}^{\varepsilon}({{\mathcal A}}) & \le & -\sup_{\varphi \in C^\infty([0,T]\times {{\mathbb T}})} \inf_{\mu \in {{\mathcal A}}} \Big\{ \langle \mu_{T,\cdot}(\imath),\varphi(T)\rangle -\langle u_0,\varphi(0) \rangle \\ & & \quad - \langle \langle \mu(\imath), \partial_t \varphi \rangle \rangle - \langle \langle \mu(f), \nabla \varphi \rangle \rangle -\frac 12 \langle \langle \mu(a^2)\nabla \varphi, \nabla \varphi\rangle\rangle \Big\}\end{aligned}$$ By a standard application [@KL Appendix 2, Lemma 3.2] of the minimax lemma, we gather that upper bound with rate ${{\mathcal I}}$, see , holds on each compact subset ${{\mathcal K}} \subset {{\mathcal M}}$. By Lemma \[l:exptight2\], it holds on each closed subset of ${{\mathcal M}}$. We recall a well known method to prove large deviations lower bounds, see e.g. [@J Chap. 4]. For ${{\mathbb P}}$, ${{\mathbb Q}}$ two Borel probability measures on a Polish space, we denote by $\mathrm{Ent}({{\mathbb Q}}|{{\mathbb P}})$ the relative entropy of ${{\mathbb Q}}$ with respect to ${{\mathbb P}}$. \[l:ldglim\] Let ${{\mathcal X}}$ be a Polish space, $I:{{\mathcal X}}\to [0,+\infty]$ a positive functional, $\{\alpha_{\varepsilon}\}$ a sequence of positive reals such that $\lim_{\varepsilon}\alpha_{\varepsilon}=0$, and let $\{{{\mathbb P}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal X}})$. 
Suppose that for each $x \in {{\mathcal X}}$ there is a sequence $\{{{\mathbb Q}}^{{\varepsilon},x}\} \subset {{\mathcal P}}({{\mathcal X}})$ such that ${{\mathbb Q}}^{{\varepsilon},x} \to \delta_x$ weakly in ${{\mathcal P}}({{\mathcal X}})$, and $\varlimsup_{\varepsilon}\alpha_{\varepsilon}\mathrm{Ent}_{\varepsilon}({{\mathbb Q}}^{{\varepsilon},x}|{{\mathbb P}}^{\varepsilon}) \le I(x)$. Then $\{{{\mathbb P}}^{\varepsilon}\}$ satisfies a large deviations lower bound with speed $\alpha_{\varepsilon}^{-1}$ and rate $I$. We will prove the lower bound following the strategy suggested by Lemma \[l:ldglim\]. More precisely, consider the set $${{\mathcal M}}_0:=\Big\{\mu \in {{\mathcal M}}\,:\: \exists \zeta>0\,:\:\mu =\delta_u \text{ for some $u \in C^2\big([0,T]\times {{\mathbb T}};[\zeta,1-\zeta]\big)$} \Big\}$$ Here we prove that for each $\mu \in {{\mathcal M}}_0$ there exists a sequence of probability measures $\{{{\mathbf Q}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal M}})$ such that ${{\mathbf Q}}^{\varepsilon}\to \delta_\mu$ and $\varlimsup {\varepsilon}^{2\gamma}\mathrm{Ent}({{\mathbf Q}}^{\varepsilon}|{{\mathbf P}}^{\varepsilon}) \le {{\mathcal I}}(\mu)$. By Lemma \[l:ldglim\] this will yield a large deviations lower bound with rate $\tilde{{{\mathcal I}}}:{{\mathcal M}} \to [0,+\infty]$ defined as $$\begin{aligned} \tilde{{{\mathcal I}}}(\mu):= \begin{cases} \displaystyle{ {{\mathcal I}}(\mu) } & \text{if $\mu \in {{\mathcal M}}_0$} \\ +\infty & \text{otherwise} \end{cases}\end{aligned}$$ By a standard diagonal argument, the lower bound then also holds with the lower semicontinuous envelope of $\tilde{{{\mathcal I}}}$ as rate functional. In [@BBMN Theorem 4.1] it is shown, in a slightly different setting, that the lower semicontinuous envelope of $\tilde{{{\mathcal I}}}$ is indeed ${{\mathcal I}}$. By the assumption $\zeta \le u_0 \le 1-\zeta$ (which is equivalent to the requirement that $a^2(u_0)$ is uniformly positive), it is not difficult to adapt the arguments in the proof of Theorem 4.1 in [@BBMN Theorem 4.1], to obtain the analogous result in this case. We are thus left with the proof of the lower bound on ${{\mathcal M}}_0$. Let $\mu \in {{\mathcal M}}_0$ be such that ${{\mathcal I}}(\mu)<\infty$. Then $\mu=\delta_v$ for some smooth $v \in C\big([0,T]; U\big)$ with $v(0,x)=u_0(x)$ and $a(v)^2 \ge r$ for some $r>0$. 
By the definition of ${{\mathcal I}}$ and the smoothness of $v$ $$\begin{aligned} {{\mathcal I}}(\mu) & = & \sup_{\varphi \in C^\infty\big([0,T] \times {{\mathbb T}}\big) } \big\{ - \langle\langle \partial_t v +\nabla \cdot f(v), \varphi \rangle\rangle - \frac 12\, \langle\langle a(v)^2 \nabla \varphi, \nabla \varphi \rangle\rangle \big\} \\ & \ge & \sup_{\varphi \in C^\infty\big([0,T] \times {{\mathbb T}}\big) } \big\{ - \langle\langle \partial_t v +\nabla \cdot f(v), \varphi \rangle\rangle - \frac r2 \langle\langle \nabla \varphi, \nabla \varphi \rangle\rangle \big\}\end{aligned}$$ Since the suprema in this formula are finite, the Riesz representation lemma implies the existence of a $\Psi^v \in L_2\big([0,T]; H^1({{\mathbb T}})\big)$ such that $$\begin{aligned} \label{e:equ} \partial_t v + \nabla \cdot f(v)= -\nabla \cdot [a(v)^2 \nabla \Psi^v]\end{aligned}$$ holds weakly and $$\begin{aligned} \label{e:Imu} {{\mathcal I}}(\mu)=\frac 12 \langle \langle a(v)^2 \nabla \Psi^v, \nabla \Psi^v\rangle\rangle\end{aligned}$$ We next define the $P$-martingale $M^{{\varepsilon};v}$ on $\Omega$ as $$\begin{aligned} M^{{\varepsilon};v}(t):=-{\varepsilon}^{-\gamma}\int_0^t \big\langle \jmath^{\varepsilon}\ast [a(v) \nabla \Psi^v], dW \big\rangle\end{aligned}$$ so that, by Young inequality for convolutions and , we have $P$ a.s. $$\label{e:Mvarineq} \big[M^{{\varepsilon};v},M^{{\varepsilon};v}\big](T) \le {\varepsilon}^{-2 \gamma} \| a(v) \nabla \Psi^v \|_{L_2([0,T] \times {{\mathbb T}})}^2 = 2{\varepsilon}^{-2\gamma} {{\mathcal I}}(\mu)$$ Since the quadratic variation of $M^{{\varepsilon};v}$ is bounded, its stochastic exponential $$E^{{\varepsilon};v}(t,\omega):=\exp\big(M^{{\varepsilon};v}(t,\omega)- \frac 12 [M^{{\varepsilon};v},M^{{\varepsilon};v}](t,\omega) \big)$$ is a uniformly integrable $P$-martingale. For ${\varepsilon}>0$ we define the probability measure $Q^{{\varepsilon};v}$ on $\Omega$ by $$Q^{{\varepsilon};v}(d \omega) := E^{{\varepsilon};v}(T,\omega) P(d \omega)$$ Recalling that $u^{\varepsilon}$ was the process solving , we next define ${{\mathbf Q}}^{{\varepsilon};v} :=Q^{{\varepsilon};v} \circ (\delta_{u^{\varepsilon}})^{-1} \in {{\mathcal P}}({{\mathcal M}})$. Then $$\begin{aligned} \label{e:entropy1} \nonumber {\varepsilon}^{2\gamma}\mathrm{Ent}({{\mathbf Q}}^{{\varepsilon};v}|{{\mathbf P}}^{{\varepsilon}}) & \le & {\varepsilon}^{2\gamma}\mathrm{Ent}(Q^{{\varepsilon};v}| P) = {\varepsilon}^{2\gamma} \int Q^{{\varepsilon};v}(d\omega) \log E^{{\varepsilon};v}(T,\omega) \\ \nonumber & = & {\varepsilon}^{2\gamma} \int Q^{{\varepsilon};v}(d\omega) \big(M^{{\varepsilon};v}(T,\omega) - [M^{{\varepsilon};v},M^{{\varepsilon};v}](T,\omega) \big) \\ & & + \frac{{\varepsilon}^{2 \gamma}}2 \int Q^{{\varepsilon};v}(d\omega) [M^{{\varepsilon};v},M^{{\varepsilon};v}](T,\omega) \le {{\mathcal I}}(\mu)\end{aligned}$$ where in the last line we used the Girsanov theorem, which states that $M^{{\varepsilon};v}-[M^{{\varepsilon};v},M^{{\varepsilon};v}]$ is a $Q^{{\varepsilon};v}$-martingale and therefore has vanishing expectation, and . By , Lemma \[l:exptight2\] and entropy inequality, the sequence $\{{{\mathbf Q}}^{{\varepsilon};v}\}$ is tight in ${{\mathcal P}}({{\mathcal M}})$, and in view of it remains to show that any limit point of $\{{{\mathbf Q}}^{{\varepsilon};v}\}$ is concentrated on $\{\delta_v\}$.
Let ${{\mathbb Q}}^{{\varepsilon};v}:= Q^{{\varepsilon};v} \circ (u^{\varepsilon})^{-1} \in {{\mathcal P}}\big(C\big([0,T]; U\big)\big)$; we will show $$\begin{aligned} \label{e:ld1lbconv} \lim_{\varepsilon}{{\mathbb E}}^{{{\mathbb Q}}^{\varepsilon}} \big(\sup_t \|u(t)-v(t)\|_{L_1({{\mathbb T}})}\big)=0\end{aligned}$$ which is easily seen to imply the required convergence of $\{{{\mathbf Q}}^{\varepsilon}\}$. Since ${{\mathbb Q}}^{{\varepsilon};v}$ is absolutely continuous with respect to ${{\mathbb P}}^{\varepsilon}$, it is concentrated on $C\big([0,T]; U\big) \cap L_2\big([0,T];H^1({{\mathbb T}})\big)$ and by Girsanov theorem it is a solution to the martingale problem associated with the stochastic partial differential equation in the unknown $u$ $$\begin{aligned} \label{e:stoc2} \nonumber & & d u = \Big[- \nabla \cdot f(u) +\frac{{\varepsilon}}{2} \nabla \cdot \big [D(u) \nabla u - a(u) \big((\jmath^{\varepsilon}\ast \jmath^{\varepsilon}) \ast (a(v) \nabla \Psi^v)\big)\big] \Big]\,dt \\ \nonumber & & \phantom{d u =} +{\varepsilon}^{\gamma}\, \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big] \\ & & u(0,x) = u_0^{\varepsilon}(x)\end{aligned}$$ where we used the same notation of . Note that $\Psi^v$ is twice continuously differentiable, since $a(v)^2$ is strictly positive and can be regarded as an elliptical equation for $\Psi^v$ with smooth data. Therefore by Lemma \[l:supexpnabla\] applied with $E^{\varepsilon}= \jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(v) \nabla \Psi^v]$ we have that $ {{\mathbb E}}^{{{\mathbb Q}}^{{\varepsilon};v}} \big( {\varepsilon}\| \nabla u\|_{L_2([0,T]\times {{\mathbb T}})}^2 \big)$ is bounded uniformly in ${\varepsilon}$. By and , we can then apply Lemma \[l:stab\] with: $v^{\varepsilon}(u)(t,x)=v(t,x)$, $G_1^{\varepsilon}(u,v)(t,x)=\frac{{\varepsilon}}2 \nabla \cdot \big[D(v) \nabla v \big]$, $G_2^{\varepsilon}(u,v)=0$ and $G_3^{\varepsilon}(u,v)(t,x)=[a(v)-a(u)] \big[a(v) \Psi^v - \jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(v)\Psi^v] \big]$. Since $v$ and $\Psi^v$ are smooth, the hypotheses of Lemma \[l:stab\] hold and we thus obtain . The corollary is an immediate consequence of the contraction principle [@DZ2 Theorem 4.4.1] applied to the continuous map ${{\mathcal M}} \ni \mu \mapsto \mu(\imath) \in C\big([0,T]; U\big)$. 
If $\mu \in {{\mathcal M}}$ is such that ${{\mathcal I}}(\mu)<\infty$, then there exists $\Phi \in L_2\big([0,T]\times {{\mathbb T}}\big)$ such that $\partial_t \mu(\imath)= -\nabla \Phi$ holds weakly, and thus we have for $u \in C\big([0,T]; U\big)$ and any $\Phi$ in the above class $$\begin{aligned} & & \inf_{\mu \in {{\mathcal M}},\,\mu(\imath)=u} I(\mu) = \inf_{\mu \in {{\mathcal M}},\,\mu(\imath)=u} \\ & & \qquad \qquad \qquad\qquad \sup_{\varphi \in C^\infty([0,T] \times {{\mathbb T}}) } \Big\{ \langle\langle \Phi-\mu (f), \nabla \varphi \rangle\rangle - \frac 12\, \langle\langle \mu (a^2) \nabla \varphi, \nabla \varphi \rangle\rangle \Big\} \\ & & \qquad = \int \!dt\, \inf_{c\in {{\mathbb R}}} \int \!dx\, \inf_{\mu \in {{\mathcal M}},\,\mu(\imath)=u} \frac{\big[\mu_{t,x}(f)-\Phi(t,x)-c \big]^2}{\mu_{t,x}(a^2)} \end{aligned}$$ Since the function $\Phi$ satisfying $\partial_t \mu(\imath)= -\nabla \Phi$ are defined up to a measurable additive function of $t$, the optimization over $c$ can be replaced by an optimization over $\Phi$, namely $$\begin{aligned} & & \inf_{\mu \in {{\mathcal M}},\,\mu(\imath)=u} I(\mu) = \inf_{\Phi \in L_2([0,T]\times {{\mathbb T}}), \nabla \Phi=-\partial_t u} \\ & & \qquad \qquad \qquad \qquad \quad \inf_{\mu \in {{\mathcal M}},\,\mu(\imath)=u} \int \!dt\, dx\, \frac{\big[\mu_{t,x}(f)-\Phi(t,x)\big]^2}{\mu_{t,x}(a^2)} \end{aligned}$$ which coincides with $I(u)$. Large deviations with speed ${\varepsilon}^{-2\gamma+1}$ {#ss:3.3} -------------------------------------------------------- The next statement follows easily from entropy inequality (see also the introduction of [@M] for further details). \[p:ldglim1\] Let ${{\mathcal X}}$ be a Polish space and $\{{{\mathbb P}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal X}})$. The following are equivalent: - [ $\{{{\mathbb P}}^{\varepsilon}\}$ is exponentially tight with speed ${\varepsilon}^{-2\gamma+1}$.]{} - [If a sequence $\{{{\mathbb Q}}^{\varepsilon}\} \subset {{\mathcal P}}({{\mathcal X}})$ is such that $\varlimsup_{\varepsilon}{\varepsilon}^{2\gamma-1} \mathrm{Ent}({{\mathbb Q}}^{\varepsilon}|{{\mathbb P}}^{\varepsilon}) <+\infty $, then $\{{{\mathbb Q}}^{\varepsilon}\}$ is tight.]{} Let ${{\mathbb Q}} \in {{\mathcal P}}({{\mathcal X}})$. For $\Psi:[0,T]\times {{\mathcal X}} \to C^\infty([0,T]\times {{\mathbb T}})$ a predictable process, let $$\|\Psi\|_{{{\mathcal D}}^{\varepsilon}({{\mathbb Q}})}^2:=\int \! {{\mathbb Q}}(du) \big\| \jmath^{\varepsilon}\ast [a(u) \nabla \Psi(u)]\big\|_{L_2([0,T] \times {{\mathbb T}})}^2 \in [0,+\infty]$$ We let ${{\mathcal D}}^{\varepsilon}({{\mathbb Q}})$ be the Hilbert space obtained by identifying and completing the set of predictable processes $\Psi:[0,T]\times {{\mathcal X}} \to C^\infty([0,T]\times {{\mathbb T}})$ such that $\|\cdot\|_{{{\mathcal D}}^{\varepsilon}({{\mathbb Q}})}<+\infty$ with respect to this seminorm. \[l:comp1\] Let ${\varepsilon}>0$ and ${{\mathbb Q}} \in {{\mathcal P}}({{\mathcal X}})$ be such that $\mathrm{Ent}({{\mathbb Q}}|{{\mathbb P}}^{\varepsilon})<+\infty$. 
Then there exists $\Psi \in {{\mathcal D}}^{\varepsilon}({{\mathbb Q}})$ such that ${{\mathbb Q}}$ is a martingale solution to the Cauchy problem in the unknown $u$ $$\begin{aligned} \label{e:stocpert} \nonumber & & d u = \big(- \nabla \cdot f(u) +\frac{{\varepsilon}}{2} \nabla \cdot \big(D(u) \nabla u \big) -{\varepsilon}^\gamma\nabla \cdot \big[a(u) \jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(u) \nabla \Psi(u)]\big] \big)dt \\ \nonumber & & \phantom{ d u =} +{\varepsilon}^{\gamma}\, \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big] \\ & & u(0,x) = u_0^{\varepsilon}(x)\end{aligned}$$ and $\mathrm{Ent}({{\mathbb Q}}|{{\mathbb P}}^{\varepsilon}) \ge \frac 12 \|\Psi\|_{{{\mathcal D}}^{\varepsilon}({{\mathbb Q}})}^2$. Since ${{\mathbb Q}}$ is absolutely continuous with respect to ${{\mathbb P}}^{\varepsilon}$, there exists a continuous local ${{\mathbb P}}^{\varepsilon}$-martingale $N$ on ${{\mathcal X}}$ such that $${{\mathbb Q}}(du)=\exp \big(N(T,u)-\frac 12 \big[N,N \big](T,u) \big) {{\mathbb P}}^{\varepsilon}(du)$$ and $\mathrm{Ent}\big({{\mathbb Q}}|{{\mathbb P}}^{\varepsilon}\big)= {{\mathbb E}}^{{{\mathbb Q}}} \big(N(T)-\frac 12 \big[N,N\big](T)\big) =\frac 12 {{\mathbb E}}^{{{\mathbb Q}}} \big( \big[N,N\big](T)\big)$ by Girsanov theorem. It is easy to see that, as $\varphi$ runs in $C^\infty([0,T]\times {{\mathbb T}})$, the family of maps (defined ${{\mathbb P}}$ a.s.) $$\begin{aligned} & & {{\mathcal [}}0,T]\times \chi \ni (t,u) \mapsto \langle M(t,u), \varphi \rangle := \langle u(t),\varphi(t)\rangle - \langle u(0), \varphi(0) \rangle \\ & & \qquad \qquad - \int_0^t\!ds\, \big\langle u,\partial_t \varphi \rangle - \langle f(u) -\frac 12 D(u) \nabla u,\nabla \varphi \big\rangle \in {{\mathbb R}}\end{aligned}$$ generates the standard filtration of ${{\mathcal X}}$. Therefore the martingale $N$ is adapted to $\{\langle M,\varphi\rangle\}$, and reasoning as in [@RY Lemma 4.2], there exists a predictable process $\Psi$ on ${{\mathcal X}}$ and a martingale $\tilde{N}$ such that $$\begin{aligned} N(t)= \int_0^t \langle \Psi, dM\rangle + \tilde{N}(t)\end{aligned}$$ and $$\begin{aligned} \label{e:quad0} \big[\tilde{N},\langle M, \varphi \rangle \big](T,u)=0 \qquad \text{for all $\varphi \in C^\infty({{\mathbb T}})$, for ${{\mathbb P}}$ a.e. $u$.}\end{aligned}$$ In particular $$\begin{aligned} {{\mathbb E}}^{{{\mathbb Q}}} \big( \big[N,N](T)\big) & = & {{\mathbb E}}^{{{\mathbb Q}}} \big(\big[\int_0^\cdot \langle \Psi, dM\rangle ,\int_0^\cdot \langle \Psi, dM\rangle\big](T) \big) + {{\mathbb E}}^{{{\mathbb Q}}} \big( \big[\tilde{N},\tilde{N}](T)\big) \\ & \ge & {{\mathbb E}}^{{{\mathbb Q}}} \big(\big\| \jmath^{\varepsilon}\ast [a(u) \nabla \Psi(u)] \big\|_{L_2([0,T] \times {{\mathbb T}})}^2 \big)\end{aligned}$$ Therefore $\mathrm{Ent}({{\mathbb Q}}|{{\mathbb P}}^{\varepsilon}) \ge \frac 12 \|\Psi\|_{{{\mathcal D}}^{\varepsilon}({{\mathbb Q}})}$ and follows by Girsanov theorem and . It is immediate to see that both the bound on the relative entropy $\mathrm{Ent}({{\mathbb Q}}|{{\mathbb P}}^{\varepsilon})$ and the Girsanov term in are compatible with the identification induced by the seminorm $\|\cdot\|_{{{\mathcal D}}^{\varepsilon}({{\mathbb Q}})}$, and thus one can identify $\Psi$ with an element in ${{\mathcal D}}^{\varepsilon}({{\mathbb Q}})$. 
\[l:exptight3\] Under the same hypotheses of Theorem \[t:ld2\] item (i), there exists a sequence $\{K_\ell \}$ of compact subsets of ${{\mathcal X}}$ such that $$\lim_\ell \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma-1} \log {{\mathbb P}}^{\varepsilon}(K_\ell)=-\infty$$ In view of Lemma \[p:ldglim1\], we will prove that if ${{{\mathbb Q}}^{\varepsilon}} \subset {{\mathcal P}}({{\mathcal X}})$ is a sequence with ${\varepsilon}^{2\gamma -1} \mathrm{Ent}({{\mathbb Q}}^{\varepsilon}|{{\mathbb P}}^{\varepsilon}) \le C$ for some $C \ge 0$ independent of ${\varepsilon}$, then ${{\mathbb Q}}^{\varepsilon}$ is tight. By Lemma \[l:comp1\], there exists a sequence $\Psi^{\varepsilon}\in {{\mathcal D}}^{\varepsilon}({{\mathbb Q}}^{\varepsilon})$ such that $$\begin{aligned} \label{e:PsiD} \frac{{\varepsilon}^{-1}}2 \|\Psi^{\varepsilon}\|_{{{\mathcal D}}^{\varepsilon}({{\mathbb Q}}^{\varepsilon})}^2 \le {\varepsilon}^{2\gamma-1} \mathrm{Ent}({{\mathbb Q}}^{\varepsilon}|{{\mathbb P}}^{\varepsilon}) \le C\end{aligned}$$ and ${{\mathbb Q}}^{\varepsilon}$ is a martingale solution to the Cauchy problem in the unknown $u$ $$\begin{aligned} \label{e:stocpert2} \nonumber & & d u = \Big(- \nabla \cdot f(u) +\frac{{\varepsilon}}{2} \nabla \cdot \big(D(u) \nabla u \big) \\ \nonumber & & \phantom{ d u = \Big(} - \nabla \cdot \big[a(u) \jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(u) \nabla \Psi^{\varepsilon}(u)]\big] \Big)\,dt +{\varepsilon}^{\gamma}\, \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big] \\ & & u(0,x) = u_0^{\varepsilon}(x)\end{aligned}$$ For ${\varepsilon}>0$, we next define (${{\mathbb P}}^{\varepsilon}$ a.s.) the predictable map $v^{\varepsilon}:{{\mathcal X}} \to {{\mathcal X}}$ as the solution to the parabolic Cauchy problem $$\begin{aligned} \label{e:stocpert3} \nonumber & & \partial_t v = - \nabla \cdot f(v) +\frac{{\varepsilon}}{2} \nabla \cdot \big(D(v) \nabla v \big) - \nabla \cdot \big[a(v) \jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(u) \nabla \Psi^{\varepsilon}(u)]\big] \\ & & v(0,x) = u_0(x)\end{aligned}$$ It is easily seen that, for ${{\mathbb P}}^{\varepsilon}$ a.e. $u$, admits a unique solution $v^{\varepsilon}(u) \in {{\mathcal X}} \cap L_2\big([0,T];H^1({{\mathbb T}})\big)$, and that the definition of $v^{\varepsilon}$ is compatible with the equivalence relation for $\Psi^{\varepsilon}$ in the definition of ${{\mathcal D}}^{\varepsilon}({{\mathbb Q}}^{\varepsilon})$. 
By and Young inequality for convolutions we also have $$\begin{aligned} \label{e:boundiv5} \nonumber I_{\varepsilon}(v^{\varepsilon}(u)) & = & \frac 12 \big\|\jmath^{\varepsilon}\ast\jmath^{\varepsilon}\ast [a(u) \nabla \Psi^{\varepsilon}(u)] \big\|_{L_2([0,T] \times {{\mathbb T}})}^2 \\ & \le & \frac 12 \big\|\jmath^{\varepsilon}\ast [a(u) \nabla \Psi^{\varepsilon}(u)] \big\|_{L_2([0,T] \times {{\mathbb T}})}^2\end{aligned}$$ where $I_{\varepsilon}: {{\mathcal X}} \cap L_2\big([0,T];H^1({{\mathbb T}})\big) \to [0,+\infty]$ is defined as $$\begin{aligned} I_{\varepsilon}(v) & := & \sup_{\varphi\in C^\infty([0,T]\times {{\mathbb T}})} \Big[ \langle v(T),\varphi(T)\rangle - \langle u_0, \varphi(0) \rangle \\ & & - \langle \langle v,\partial_t \varphi \rangle \rangle + \langle \langle f(v) -\frac 12 D(v) \nabla v,\nabla \varphi \rangle \rangle - \frac 12 \langle\langle a(v)^2 \nabla\varphi, \nabla \varphi \rangle\rangle \Big]\end{aligned}$$ Therefore, taking the ${{\mathbb E}}^{{{\mathbb Q}}^{\varepsilon}}$ expectation in , multiplying by ${\varepsilon}^{-1}$ and using , we obtain $$\begin{aligned} \label{e:entbound1} {{\mathbb E}}^{{{\mathbb Q}}^{\varepsilon}} \big( {\varepsilon}^{-1} I_{\varepsilon}(v^{\varepsilon}(u))\big) \le \frac{{\varepsilon}^{-1}}2 \|\Psi^{\varepsilon}\|_{{{\mathcal D}}^{\varepsilon}({{\mathbb Q}}^{\varepsilon})}^2 \le C\end{aligned}$$ Minor adaptations of the proof of [@BBMN Theorem 2.5] imply that for each $\ell>0$ there exist ${\varepsilon}_0(\ell)>0$ and a compact $K_\ell \subset {{\mathcal X}}$ such that $$\begin{aligned} \label{e:Hequic} \cup_{{\varepsilon}\le {\varepsilon}_0(\ell)} \big\{v \in {{\mathcal X}} \cap L_2\big([0,T];H^1({{\mathbb T}})\big)\,:\: {\varepsilon}^{-1} I_{\varepsilon}(v) \le \ell \big\} \subset K_\ell\end{aligned}$$ and that the sequence $\{{{\mathbb Q}}^{\varepsilon}\circ (v^{\varepsilon})^{-1} \} \subset {{\mathcal P}}({{\mathcal X}})$ is tight in ${{\mathcal X}}$, since by Chebyshev inequality $$\big( {{\mathbb Q}}^{\varepsilon}\circ (v^{\varepsilon})^{-1}\big)(K_\ell^c ) \le C \ell ^{-1}$$ By Lemma \[l:supexpnabla\] (applied to ${{\mathbb P}}^{\varepsilon}$ with $E^{\varepsilon}\equiv 0$) and entropy inequality, we have $$\begin{aligned} \lim_{\ell \to +\infty} \varlimsup_{\varepsilon}{{\mathbb Q}}^{\varepsilon}\big({\varepsilon}\| \nabla u \| _{L_2([0,T] \times {{\mathbb T}})}^2 \ge \ell \big) = 0\end{aligned}$$ Therefore, in view of and we can apply Lemma \[l:stab\] to ${{\mathbb Q}}^{\varepsilon}$ with $G_1(u,v)=0$, $G_2(u,v)=0$, $G_3(u,v)=[a(v)-a(u)]\big[\jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(u) \nabla \Psi^{\varepsilon}(u)]\big]$. Indeed, since holds, the hypotheses of Lemma \[l:stab\] are easily satisfied. We then gather that for each $\delta>0$ $$\begin{aligned} \lim_{\varepsilon}{{\mathbb Q}}^{\varepsilon}\big(\sup_t \|u-v^{\varepsilon}(u)\|_{L_1({{\mathbb T}})} \ge \delta \big) =0\end{aligned}$$ which implies, together with the tightness of $\{{{\mathbb Q}}^{{\varepsilon}} \circ (v^{\varepsilon})^{-1}\}$ proved above, the tightness of $\{{{\mathbb Q}}^{\varepsilon}\}$. Let ${{\mathcal W}} \subset {{\mathcal X}}$ be the set of weak solutions to . Let $K \subset {{\mathcal X}}$ be compact, and set ${{\mathcal K}}:= \{ \mu \in {{\mathcal M}}\,:\:\mu=\delta_u,\,\text{for some $u \in K$}\}$. ${{\mathcal K}}$ is compact in ${{\mathcal M}}$, since ${{\mathcal X}}$ is equipped with the topology induced by the map ${{\mathcal X}} \ni u \mapsto \delta_u \in {{\mathcal M}}$.
If $K \cap {{\mathcal W}} = \emptyset$, then $\inf_{\mu \in {{\mathcal K}}} {{\mathcal I}}(\mu) >0$ as ${{\mathcal I}}$ vanishes only on measure-valued solutions to . In particular by Theorem \[t:ld1\] item (i) $$\begin{aligned} \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma-1} \log {{\mathbb P}}^{\varepsilon}(K)=\varlimsup_{\varepsilon}{\varepsilon}^{2\gamma-1} \log {{\mathbf P}}^{\varepsilon}({{\mathcal K}})=-\infty\end{aligned}$$ Then, since ${{\mathcal W}}$ is closed in ${{\mathcal X}}$ and Lemma \[l:exptight3\] holds, we need to prove the large deviations upper bound for $\{{{\mathbb P}}^{\varepsilon}\}$ only for compact sets $K \subset {{\mathcal W}} \subset {{\mathcal X}}$. Let $(\vartheta,Q)$ be an entropy sampler–entropy sampler flux pair. Recall the definition of the martingale $N^{{\varepsilon};\vartheta}$ in Lemma \[l:ito\], and consider its stochastic exponential $$\begin{aligned} & & E^{{\varepsilon};\vartheta}(t,u) := \exp \big(N^{{\varepsilon},\vartheta}(t,u) - \frac 12 \big[N^{{\varepsilon},\vartheta},N^{{\varepsilon},\vartheta} \big](t,u)\big) \\ & & \quad = \exp \Big\{ \int\!dx\, \vartheta(u (t),t,x) -\int\!dx\, \vartheta(u_0,0,x) \\ & & \qquad -\int_0^t\!ds \int \! dx\, \big[\big(\partial_s \vartheta)\big(u (s,x),s,x \big) + \big(\partial_x Q\big)\big(u (s,x),s,x \big)\big] \\ & &\qquad + \int_0^t\!ds\,\Big[ \frac{{\varepsilon}}2 \langle \vartheta''(u ) \nabla u , D(u ) \nabla u \rangle +\frac{{\varepsilon}}2 \langle \partial_x \vartheta'(u ), D(u ) \nabla u \rangle \\ & & \qquad \phantom{ + \int_0^t\!ds\,\Big[ } - \frac{{\varepsilon}^{2\gamma}}{2} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle \vartheta''(u ) a(u ), a(u )\rangle \\ & & \qquad \phantom{ + \int_0^t\!ds\,\Big[ } - \frac{{\varepsilon}^{2\gamma}}{2} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle \vartheta''(u ) \nabla u , [a'(u )]^2 \nabla u \rangle\Big] \\ & & \qquad - \frac{ {\varepsilon}^{2\gamma}}{2} \int_0^t\!ds\, \big\langle a(u )^2 \big[\vartheta''(u ) \nabla u +\partial_x \vartheta'(u ) \big], \vartheta''(u ) \nabla u +\partial_x \vartheta'(u ) \big\rangle \Big\}\end{aligned}$$ $E^{{\varepsilon};\vartheta}$ is a continuous strictly positive ${{\mathbb P}}^{\varepsilon}$-supermartingale starting at $1$. For $\ell>0$ let $$B^{\ell}:=\{u \in {{\mathcal X}} \cap L_2\big([0,T];H^1({{\mathbb T}})\big)\,:\: \| \nabla u \|_{L_2([0,T] \times {{\mathbb T}})}^2 \le \ell \}$$ Recall that ${{\mathcal W}}$ is the set of weak solutions to . Given a Borel subset $A \subset {{\mathcal W}}$ we have, for $C$, ${\varepsilon}_0$ as in Lemma \[l:supexpnabla\] (applied with $E^{\varepsilon}\equiv 0$) and $\ell>C$, ${\varepsilon}\le {\varepsilon}_0$ $$\begin{aligned} \label{e:varanonlin} \nonumber {{\mathbb P}}^{\varepsilon}(A)& \le & {{\mathbb E}}^{P^{\varepsilon}}\big(E^{{\varepsilon};\frac{\vartheta}{{\varepsilon}^{2\gamma-1}}}(T,u) [E^{{\varepsilon};\frac{\vartheta}{{\varepsilon}^{2\gamma-1}}}(T,u)]^{-1} {{1 \mskip -5mu {\rm I}}}_{A \cap B^{\ell/{\varepsilon}}}(u)\big) + {{\mathbb P}}^{\varepsilon}(B^{\ell/{\varepsilon}}) \\ &\le & \sup_{u \in A \cap B^{\ell/{\varepsilon}}} [E^{{\varepsilon};\frac{\vartheta}{{\varepsilon}^{2\gamma-1}}}(T,v)]^{-1} + \exp\big(-\frac{(\ell-C)^2}{C {\varepsilon}^{2\gamma-1}(\ell+1)} \big)\end{aligned}$$ where in the last line we used the supermartingale property of $E^{{\varepsilon};\vartheta}$ and Lemma \[l:supexpnabla\]. 
Since $$\begin{aligned} & & {\varepsilon}^{2\gamma-1}\log E^{{\varepsilon};\frac{\vartheta}{{\varepsilon}^{2\gamma-1}}}(T,u) = -\int\!dx\, \vartheta(u_0(x),0,x) \\ & &\qquad \qquad -\int \! ds\,dx\, \big[\big(\partial_s \vartheta)\big(u (s,x),s,x \big) + \big(\partial_x Q\big)\big(u (s,x),s,x \big)\big] \\ & &\qquad \qquad + \frac{{\varepsilon}}2 \langle \langle \vartheta''(u ) \nabla u , D(u ) \nabla u \rangle \rangle +\frac{{\varepsilon}}2 \langle \langle\partial_x \vartheta'(u ), D(u ) \nabla u \rangle \rangle \\ & &\qquad \qquad - \frac{{\varepsilon}^{2\gamma}}{2} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle \langle \vartheta''(u ) a(u ), a(u )\rangle \rangle \\ & &\qquad \qquad - \frac{{\varepsilon}^{2\gamma}}{2} \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 \langle \langle \vartheta''(u ) \nabla u , [a'(u )]^2 \nabla u \rangle \rangle \\ & &\qquad \qquad - \frac{ {\varepsilon}}{2} \langle \langle a(u)^2 \vartheta''(u ) \nabla u, \vartheta''(u ) \nabla u \rangle \rangle - \frac{ {\varepsilon}}{2} \langle \langle a(u)^2 \partial_x \vartheta'(u ), \partial_x \vartheta'(u) \rangle \rangle \\ & &\qquad \qquad -{\varepsilon}\langle \langle a(u)^2\vartheta''(u ) \nabla u, \partial_x \vartheta'(u ) \rangle \rangle\end{aligned}$$ by Cauchy-Schwartz inequality, for each $u \in B^{\ell/{\varepsilon}}$ $$\begin{aligned} \label{e:uppboundineq1} \nonumber & & {\varepsilon}^{2\gamma-1}\log E^{{\varepsilon};\frac{\vartheta}{{\varepsilon}^{2\gamma-1}}}(T,u) \ge -\int\!dx\, \vartheta(u_0(x),0,x) \\ \nonumber & & \qquad -\int \! ds\,dx\, \big[\big(\partial_s \vartheta)\big(u (s,x),s,x \big) + \big(\partial_x Q\big)\big(u (s,x),s,x \big)\big] \\ \nonumber & & \qquad + \frac{{\varepsilon}}2 \langle \langle \vartheta''(u ) \nabla u , \big(D(u ) -a(u)^2\vartheta''(u)\big)\nabla u \rangle \rangle -C_\vartheta \sqrt{{\varepsilon}\ell} \\ & & \qquad - C_{\vartheta} {\varepsilon}^{2\gamma} \|\nabla \jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 - C_\vartheta {\varepsilon}^{2\gamma-1}\ell \|\jmath^{\varepsilon}\|_{L_2({{\mathbb T}})}^2 - C_\vartheta {\varepsilon}-\sqrt{{\varepsilon}\ell} C_\vartheta\end{aligned}$$ for a suitable constant $C_\vartheta >0 $ depending only on $\vartheta$, $D$ and $a$. The key point now is that, if the entropy sampler $\vartheta$ satisfies $$\begin{aligned} \label{e:thetaineq} a(u)^2 \vartheta''(u,t,x) \le D(u) \qquad \forall\,u\in [0,1],\, t \in [0,T],\,x \in {{\mathbb T}}\end{aligned}$$ then the term $\langle \langle \vartheta''(u ) \nabla u , \big(D(u ) -a(u)^2\vartheta''(u)\big)\nabla u \rangle \rangle$ in is positive. Namely, the largest term in the quadratic variation of $N^{{\varepsilon};\vartheta}$ is controlled by the positive parabolic term associated with the deterministic diffusion. Therefore taking the limit ${\varepsilon}\to 0$ in , by the hypotheses assumed on $\jmath^{\varepsilon}$, for each entropy sampler $\vartheta$ satisfying and each $u \in B^{\ell/{\varepsilon}}$ $$\begin{aligned} \label{e:uppboundineq2} \nonumber & & \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma-1}\log E^{{\varepsilon};\frac{\vartheta}{{\varepsilon}^{2\gamma-1}}}(T,u) \ge -\int\!dx\, \vartheta(u_0(x),0,x) \\ & & \qquad \qquad -\int \! ds\,dx\, \big[\big(\partial_s \vartheta)\big(u (s,x),s,x \big) + \big(\partial_x Q\big)\big(u (s,x),s,x \big)\big]\end{aligned}$$ We now take the logarithm of and multiply it by ${\varepsilon}^{2\gamma-1}$. 
Taking the limits ${\varepsilon}\to 0$, then $\ell \to +\infty$, and using , we have for each $\vartheta$ satisfying $$\begin{aligned} & & \varlimsup_{{\varepsilon}} {\varepsilon}^{2\gamma-1} \log {{\mathbb P}}^{\varepsilon}(A) \le - \inf_{u \in A} \big\{ -\int\!dx\, \vartheta(u_0(x),0,x) \\ & & \quad -\int \! ds\,dx\, \big[\big(\partial_s \vartheta)\big(u (s,x),s,x \big) + \big(\partial_x Q\big)\big(u (s,x),s,x \big)\big] \big\} \le - \inf_{u \in A} \sup_{\vartheta} P_{\vartheta,u}\end{aligned}$$ where we have applied the definition of $P_{\vartheta,u}$. Note that the map ${{\mathcal X}} \ni u \mapsto P_{\vartheta,u} \in {{\mathbb R}}$ is lower semicontinuous. Applying the minimax lemma, we gather for a compact set $K \subset {{\mathcal W}}$ $$\begin{aligned} \varlimsup_{{\varepsilon}} {\varepsilon}^{2\gamma-1} {{\mathbb P}}^{\varepsilon}(K) \le - \inf_{u \in K} \sup_{\vartheta} P_{\vartheta,u}\end{aligned}$$ where the supremum is taken over the entropy samplers $\vartheta$ satisfying . It is easy to see that a weak solution $u$ to such that $\sup_{\vartheta} P_{\vartheta,u}<+\infty$ is indeed an entropy-measure solution $u \in {{\mathcal E}}$, and $\sup_{\vartheta} P_{\vartheta,u}=H(u)$. We will use the entropy method suggested by Lemma \[l:ldglim\], as we did in the proof of Theorem \[t:ld1\] item (ii). Recall the Definition \[d:splittable\] of ${{\mathcal S}}$. Given $v \in {{\mathcal S}}$, we need to show that there exists a sequence $\{{{\mathbb Q}}^{{\varepsilon};v}\} \subset {{\mathcal P}}({{\mathcal X}})$ such that $\varlimsup {\varepsilon}^{2\gamma-1}\mathrm{Ent}({{\mathbb Q}}^{{\varepsilon};v}|{{\mathbb P}}^{\varepsilon}) \le H(u)$ and ${{\mathbb Q}}^{\varepsilon}\to \delta_v$ in ${{\mathcal P}}({{\mathcal X}})$. The lower bound with rate ${\,\overline{\! H}}$ then follows by a standard diagonal argument. With minor adaptations from Theorem 2.5 in [@BBMN], we have that the following statement holds. \[l:bbmn\] For each sequence $\beta_{\varepsilon}\to 0$ and each $v \in {{\mathcal S}}$, there exist a sequence $\{w^{\varepsilon}\} \subset {{\mathcal X}} \cap L_2\big([0,T];H^1({{\mathbb T}})\big)$ and a sequence $\{\Psi^{\varepsilon}\} \subset L_2\big([0,T];H^2({{\mathbb T}})\big)$ such that: - [ $w^{\varepsilon}\to v$ in ${{\mathcal X}}$, and $w^{\varepsilon}(0,x)=u_0(x)$.]{} - [ ${\varepsilon}\|\nabla w^{\varepsilon}\|_{L_2([0,T] \times {{\mathbb T}})}^2 \le C$ for some $C>0$ independent of ${\varepsilon}$.]{} - [ $\varlimsup_{\varepsilon}\frac{{\varepsilon}^{-1}}{2} \langle \langle a(w^{\varepsilon})^2 \nabla \Psi^{\varepsilon}, \nabla \Psi^{\varepsilon}\rangle \rangle = H(v)$.]{} - [$\beta_{\varepsilon}\big\| \nabla [a(w^{\varepsilon})\nabla \Psi^{\varepsilon}]\big\|_{L_2([0,T] \times {{\mathbb T}})}^2 \le C\,{\varepsilon}^{-1}$, for some $C>0$ independent of ${\varepsilon}$.]{} - [The equation $$\begin{aligned} \partial_t w^{\varepsilon}+\nabla \cdot f(w^{\varepsilon}) -\frac{{\varepsilon}}2\nabla \cdot\big(D(w^{\varepsilon})\nabla w^{\varepsilon}\big) =- \nabla \cdot \big(a(w^{\varepsilon})^2\,\nabla \Psi^{\varepsilon}\big)\end{aligned}$$ holds weakly.]{} We let $\beta_{\varepsilon}:= {\varepsilon}^{-3/2} \|\jmath^{\varepsilon}-{{1 \mskip -5mu {\rm I}}}\|_{W^{-1,1}({{\mathbb T}})}$, and let $\{w^{\varepsilon}\}$, $\{\Psi^{{\varepsilon}}\}$ be chosen correspondingly. 
Note that with this choice of $\beta_{\varepsilon}$ and by the assumption on $\|\jmath^{\varepsilon}-{{1 \mskip -5mu {\rm I}}}\|_{W^{-1,1}({{\mathbb T}})}$ $$\begin{aligned} \label{e:beta} \lim_{\varepsilon}{\varepsilon}^{-2}\int_0^t\!ds\,\|\jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(w^{\varepsilon})\nabla \Psi^{{\varepsilon}}] - a(w^{\varepsilon})\nabla \Psi^{\varepsilon}\|_{L_2({{\mathbb T}})}^2=0\end{aligned}$$ We define the martingale $M^{{\varepsilon};v}$ on $\Omega$ as $$\begin{aligned} M^{{\varepsilon};v}(t):={\varepsilon}^{-\gamma}\int_0^t \langle \jmath^{\varepsilon}\ast [ a(w^{\varepsilon}) \nabla \Psi^{{\varepsilon}}] ,dW \rangle\end{aligned}$$ Then by Young inequality for convolutions: $$\begin{aligned} \label{e:quadminH} \frac 12 \big[M^{{\varepsilon};v},M^{{\varepsilon};v} \big](T) \le \frac{{\varepsilon}^{-2\gamma}}2 \langle \langle a(w^{\varepsilon})^2 \nabla \Psi^{\varepsilon}, \nabla \Psi^{\varepsilon}\rangle \rangle\end{aligned}$$ In particular the stochastic exponential of $N^{{\varepsilon};v}$ is a martingale on $\Omega$, and we can define the probability measure $Q^{{\varepsilon};v} \in {{\mathcal P}}(\Omega)$ as $$\begin{aligned} Q^{{\varepsilon};v}(d\omega):= \exp \big(N^{{\varepsilon};v}(T,\omega)-\frac 12 \big[N^{{\varepsilon};v},N^{{\varepsilon};v} \big](T,\omega) \big) P(d\omega)\end{aligned}$$ and ${{\mathbb Q}}^{{\varepsilon};v}:=Q^{{\varepsilon};v} \circ (u^{\varepsilon})^{-1} \in {{\mathcal P}}({{\mathcal X}})$, where $u^{\varepsilon}:\Omega \to {{\mathcal X}}$ is the solution to . Reasoning as in , and using and property (c) in Lemma \[l:bbmn\] $$\begin{aligned} \label{e:entropy2} \nonumber \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma-1}\mathrm{Ent}({{\mathbb Q}}^{{\varepsilon};v}|{{\mathbb P}}^{{\varepsilon};v}) & \le & \varlimsup_{\varepsilon}{\varepsilon}^{2\gamma-1}\mathrm{Ent}(Q^{{\varepsilon};v}| P) \\ \nonumber & = & \varlimsup_{\varepsilon}\frac{{\varepsilon}^{2 \gamma-1}}2 \int Q^{{\varepsilon};v}(d\omega) [M^{{\varepsilon};v},M^{{\varepsilon};v}](T,\omega) \\ & \le & \varlimsup_{\varepsilon}\frac{{\varepsilon}^{-1}}{2} \langle \langle a(w^{\varepsilon})^2 \nabla \Psi^{\varepsilon}, \nabla \Psi^{\varepsilon}\rangle \rangle = H(v)\end{aligned}$$ We next need to prove that ${{\mathbb Q}}^{{\varepsilon};v}$ converges to $\delta_v$ in ${{\mathcal P}}( {{\mathcal X}})$ as ${\varepsilon}\to 0$. By Girsanov theorem ${{\mathbb Q}}^{{\varepsilon};v}$ is a martingale solution to the stochastic Cauchy problem in the unknown $u$ $$\begin{aligned} \label{e:stoc4} \nonumber & & d u = \big[- \nabla \cdot f(u) +\frac{{\varepsilon}}{2} \nabla \cdot \big(D(u) \nabla u\big) -\nabla \cdot a(u) (\jmath \ast \jmath \ast(a(w^{\varepsilon}) \nabla \Psi^{\varepsilon}) \big]\,dt \\ \nonumber & & \phantom{d u = } +{\varepsilon}^{\gamma}\, \nabla \cdot \big[a(u) (\jmath^{\varepsilon}\ast d W)\big] \\ & & u(0,x) = u_0^{\varepsilon}(x)\end{aligned}$$ In view of property (a) in Lemma \[l:bbmn\], it is enough to check that Lemma \[l:stab\] holds with $v^{\varepsilon}(u)(t,x)=w^{\varepsilon}(t,x)$. Indeed, still by property (a) in Lemma \[l:bbmn\] and the assumptions of this theorem, conditions (i) and (ii) in Lemma \[l:stab\] are immediate. 
By property (e) in Lemma \[l:bbmn\] and , ${{\mathbb Q}}^{{\varepsilon};v}$ is a martingale solution to with $G_1^{\varepsilon}\equiv 0$, $$G_2^{\varepsilon}(u,w)=a(w) \big[\jmath^{\varepsilon}\ast \jmath^{\varepsilon}\ast [a(w^{\varepsilon})\nabla \Psi^{{\varepsilon}}] - a(w^{\varepsilon})\nabla \Psi^{\varepsilon}\big]$$ $$G_3^{\varepsilon}(u,w)=[a(w)-a(u)] [\jmath \ast \jmath \ast(a(w^{\varepsilon}) \nabla \Psi^{\varepsilon}]$$ Therefore, in view of , condition (iii) in Lemma \[l:stab\] is easily seen to hold. Condition (iv) is also immediate from the definition of $G_3$ and the bound on ${{\mathbb Q}}^{{\varepsilon};v}\big({\varepsilon}\|\nabla u\|_{L_2([0,T]\times {{\mathbb T}})} >\ell \big)$ provided by the application of Lemma \[l:supexpnabla\] for ${{\mathbb P}}^{\varepsilon}$ (thus with $E^{\varepsilon}\equiv 0$), the entropy bound , and the usual entropy inequality. Existence and uniqueness results for fully nonlinear parabolic SPDEs with conservative noise {#s:A} ============================================================================================ In this appendix, we are concerned with existence and uniqueness results for the Cauchy problem in the unknown $u\equiv u(t,x)$, $t \in [0,T]$, $x \in {{\mathbb T}}$ $$\begin{aligned} \label{e:A1} \nonumber & & du = \big[- \nabla \cdot f(u) +\frac 12 \nabla \cdot \big( D(u) \nabla u \big) \big]\, dt + \nabla \cdot \big[a(u) (\jmath \ast dW) \big] \\ & & u(0,x) = u_0(x)\end{aligned}$$ Although we assume the space-variable $x$ to run on a one-dimensional torus ${{\mathbb T}}$, it is not difficult to extend the results given below to the case $x \in {{\mathbb T}}^d$ or $x \in {{\mathbb R}}^d$ for $d \ge 1$. Let $W$ be an $L_2({{\mathbb T}})$–valued cylindrical Brownian motion on a given standard filtered probability space $\big(\Omega, {{\mathfrak F}}, \{{{\mathfrak F}}_t\}_{0\le t \le T}, P\big)$. Hereafter we set $$\begin{aligned} Q(v):= a'(v)^2 \| \jmath\|_{L_2({{\mathbb T}})}^2\end{aligned}$$ We will assume the following hypotheses: - [$f$ and $D$ are uniformly Lipschitz on ${{\mathbb R}}$.]{} - [$a \in C^2({{\mathbb R}})$ is uniformly bounded.]{} - [$\jmath \in H^1({{\mathbb T}})$ and, with no loss of generality, $\int\!dx\,|\jmath(x)|=1 $. ]{} - [There exists $c>0$ such that $D \ge Q+c$.]{} - [$u_0:\Omega \to L_2({{\mathbb T}})$ is ${{\mathfrak F}}_0$-Borel measurable and satisfies ${{\mathbb E}}^{{{\mathbb P}}} \big( \|u_0\|_{L_2({{\mathbb T}})}^2 \big)<+\infty$.]{} We introduce the Polish space $Y:= C\big([0,T];H^{-1}({{\mathbb T}})\big) \cap L_2\big([0,T];H^1({{\mathbb T}}) \big) \cap L_{\infty}\big([0,T]; L_2({{\mathbb T}})\big)$. A probability measure ${\,\overline{\! {{\mathbb P}}}}$ on $Y$ is a *martingale solution* to iff the law of $u(0)$ under ${\,\overline{\! {{\mathbb P}}}}$ is the same of the law of $u_0$, and for each $\varphi \in C^\infty\big([0,T] \times {{\mathbb T}} \big)$ $$\begin{aligned} \label{e:A2} \nonumber & & \langle M(t,u), \varphi \rangle := \langle u(t),\varphi(t)\rangle - \langle u(0), \varphi(0) \rangle \\ & & \phantom{\langle M(t,u), \varphi \rangle :=} - \int_0^t\!ds\, \langle u,\partial_s \varphi \rangle + \langle f(u) -\frac 12 D(u) \nabla u,\nabla \varphi \rangle\end{aligned}$$ is a continuous square-integrable martingale with respect to ${\,\overline{\! 
{{\mathbb P}}}}(du)$ with quadratic variation $$\begin{aligned} \label{e:A3} \big[\langle M, \varphi \rangle ,\langle M, \psi \rangle\big](t,u) = \int_0^t\!ds\, \langle \jmath \ast (a(u) \nabla \varphi), \jmath \ast (a(u) \nabla \psi) \rangle\end{aligned}$$ We say that a progressively measurable process $u:\Omega \to Y$ is a *strong solution* to iff $u(0)=u_0$ $P$-a.s. and for each $\varphi \in C^\infty \big([0,T]\times {{\mathbb T}}\big)$ $$\begin{aligned} \langle M, \varphi \rangle = -\int_0^t \langle \jmath \ast \big(a(u) \nabla \varphi\big) , dW \rangle\end{aligned}$$ In this appendix we prove \[t:exun\] Assume **A1)**–**A5)**. Then there exists a unique strong solution $u$ to in $Y$. Such a solution $u$ admits a version in $C\big([0,T];L_2({{\mathbb T}})\big)$. Furthermore, if $u_0$ takes values in $[0,1]$ and $a$ is supported by $[0,1]$, then $u$ takes values in $[0,1]$ a.s.. By compactness estimates we will prove that there exists a solution to the martingale problem related to . Then we will provide pointwise uniqueness for using a stability result similar to the one used in the proof of Lemma \[l:stab\]. By Yamada-Watanabe theorem we get the existence and uniqueness stated in Theorem \[t:exun\]. We remark that assumption **A4)** is a key hypotheses in the proof of Theorem \[t:exun\], as it implies that the noise term is smaller than the second order parabolic term, thus allowing some a priori bounds. In general, one may expect nonexistence of the solution to if such a condition fails, see [@DZ Example 7.21]. \[l:piecestoc\] Let $0 \le t^\prime <t^{\prime \prime} \le T$, let $u^\prime, v:\Omega \to L_2({{\mathbb T}})$ be ${{\mathfrak F}}_{t^\prime}$-measurable maps such that ${{\mathbb E}}^{P}\big( \| |u^\prime|+ |v|+|\nabla v| \|_{L_2({{\mathbb T}})}^2\big)<+\infty $. Then the stochastic Cauchy problem in the unknown $w$ $$\begin{aligned} \label{e:piecestoc} \nonumber & & dw = \big[- \nabla \cdot f (w) +\frac{1}{2} \nabla \cdot \big(D(v) \nabla w\big)\big]\,dt +\nabla \cdot \big[a(v) (\jmath \ast d W)\big] \\ & & w(t^\prime,x) = u^\prime(x)\end{aligned}$$ admits a unique strong solution $u$ in $L_2\big([t^\prime,t^{\prime \prime}];H^1({{\mathbb T}}) \big) \cap C\big([t^\prime,t^{\prime \prime}],H^{-1}({{\mathbb T}})\big)$ with probability $1$. For each $t\in [t^\prime,t^{\prime \prime}]$, such a solution $u$ satisfies $$\begin{aligned} \label{e:pieceito} \nonumber & & \langle u(t),u(t)\rangle +\int_{t^\prime}^t \!ds \langle D(v) \nabla u, \nabla u \rangle = N(t,t^\prime) +\langle u^\prime,u^\prime \rangle \\ & & \qquad \qquad \qquad \quad +\int_{t^\prime}^t \!ds\, \big[ \langle Q(v) \nabla v, \nabla v \rangle+\|\nabla \jmath\|_{L_2({{\mathbb T}})}^2 \int \!dx\,a(v)^2 \big]\end{aligned}$$ where $N(t,t^\prime):=-2 \int_{t^\prime}^t \langle \jmath \ast \big( a(v) \nabla u \big), dW\rangle$. Furthermore $${{\mathbb E}}^{P} \big( \sup_{t \in [t^\prime,t^{\prime \prime}]} \|u(t)\|_{L_2({{\mathbb T}})}^2 \big) <+\infty$$ Existence and uniqueness of the semilinear equation are standard, see e.g. [@DZ Chap. 7.7.3]. Applying Itô formula to the function $L_2({{\mathbb T}}) \ni w \mapsto \langle w, w \rangle \in {{\mathbb R}}$ we get . 
Note that by Burkholder-Davis-Gundy inequality [@RY Theorem 4.4.1], Young and Cauchy-Schwarz inequalities, for suitable constants $C,C^{\prime}>0$ $$\begin{aligned} {{\mathbb E}}^{P} \big(\sup_{t\in [t^\prime,t^{\prime \prime}]} |N(t,t^\prime)|\big) & \le & C\, {{\mathbb E}}^{P} \big( \big[N(\cdot,t^\prime), N(\cdot,t^\prime)\big](t^{\prime \prime})^{1/2} \big) \\ & = & 2\,C\, {{\mathbb E}}^{P}\Big( \| \jmath \ast \big(a(v) \nabla u \big) \|_{L_2([t^{\prime},t^{\prime \prime}]\times {{\mathbb T}})}\Big) \\ & \le & 2\,C\, {{\mathbb E}}^{P}\Big( \| \big(a(v) \nabla u \big) \|_{L_2([t^{\prime},t^{\prime \prime}]\times {{\mathbb T}})}\Big) \\ &\le & C^{\prime}\, \Big[ {{\mathbb E}}^{P} \Big(\int_{t^\prime}^{t^{\prime \prime}}\!ds\, \langle D(v) \nabla u, \nabla u \rangle \Big) \Big]^{1/2} \end{aligned}$$ so that the bound on ${{\mathbb E}}^{P} \big(\sup_{t \in [t^\prime,t^{\prime \prime}]} \|u(t)\|_{L_2({{\mathbb T}})}^2\big)$ is easily obtained by taking the supremum over $t$ and the ${{\mathbb E}}^{P}$ expected values in . We next introduce a sequence $\{u^n\}$ of adapted processes in $Y$. We will gather existence of a weak solution to by tightness of the laws $\{{{\mathbb P}}^n\}$ of such a sequence. For $n \in {{\mathbb N}}$ and $i=0,\ldots,\,2^n$ let $t_i^n:=i2^{-n} T$, and let $\{\imath^n\}$ be a sequence of smooth mollifiers on ${{\mathbb T}}$ such that $\lim_{n} 2^{-n} \|\imath^n\|_{L_1({{\mathbb T}})}^2 =0$. We define a process $u^n$ on $Y$ and the auxiliary random functions $\{v_i^n\}_{i=0}^{2^n}$ on ${{\mathbb T}}$ as follows. For $i=0$ we set $$\begin{aligned} & & u^n(0) := u_0 \\ & & v_0^n := \imath^n \ast u_0\end{aligned}$$ and for $i=1,\ldots,2^n-1$ and $t \in [t_i^n,t_{i+1}^n]$, we let $u^n(t)$ be the solution to the problem with $u^\prime=u(t_i^n)$ and $v=v_i^n$, where for $i\ge 1$ we set $$\label{e:vin} v_i^n:=\frac{2^n}{T} \int_{t_{i-1}^n}^{t_i^n}\!ds\, u^n(s)$$ By Lemma \[l:piecestoc\], these definitions are well-posed, and $u^n$ is in $Y$ with probability $1$. We also define a sequence $\{v^n\}$ of cadlag processes in the Skorohod space $D\big([0,T);L_2({{\mathbb T}})\big)$, by requiring $$\label{e:vvern} v^n(t)=v_i^n \text{for $t \in [t_i^n,t_{i+1}^n)$}$$ \[l:tight\] There exists a constant $C>0$ independent of $n$ such that $$\begin{aligned} \label{e:tight1} {{\mathbb E}}^{P} \Big( \sup_{t\in[0,T]} \| u^n(t)\|_{L_2({{\mathbb T}})}^2 +\| \nabla u^n\|_{L_2([0,T] \times {{\mathbb T}})}^2 \Big) \le C\end{aligned}$$ and for each $\varphi \in H^1({{\mathbb T}})$ such that $\| \nabla \varphi \|_{L_2({{\mathbb T}})}^2 \le 1$, for each $\delta>0$ and $r\in(0,1)$ $$\begin{aligned} \label{e:tight2} P \big( \sup_{s,t \in [0,T]\, : |s-t|\le \delta} \big|\langle u^n(t)-u^n(s),\varphi\rangle \big|>r \big) \le C\,\delta\,r^{-2}\end{aligned}$$ Furthermore for each $r>0$ $$\begin{aligned} \label{e:uclosev} \lim_{n \to \infty} P\big( \| u^n-v^n \|_{L_2([0,T] \times {{\mathbb T}})} > r \big) =0\end{aligned}$$ Writing Itô formula for $u^n$ in the intervals $[t_i^n,t_{i+1}^n]$ and summing over $i$, we get for each $t \in [0,T]$ $$\begin{aligned} & & \langle u^n(t),u^n(t)\rangle +\int_0^t\!ds\, \langle D(v^n) \nabla u^n, \nabla u^n \rangle = \langle u_0,u_0 \rangle \\ & & \qquad \qquad \qquad +\int_0^t\! 
ds\,\big[ \langle Q(v^n) \nabla v^n, \nabla v^n \rangle+\|\nabla \jmath\|_{L_2({{\mathbb T}})}^2 \int \!dx\, a(v^n)^2 \big]+ N^n(t)\end{aligned}$$ where, by the same means of Lemma \[l:piecestoc\] and Doob’s inequality, the martingale $$\begin{aligned} N^n(t):=2 \int_0^t \langle \jmath \ast \big( a(v^n) \nabla u^n \big), dW\rangle\end{aligned}$$ enjoys the bound $$\begin{aligned} {{\mathbb E}}^{P} \big( \sup_{s \in [0,T]} |N^n(t)|^2 \big) \le C_1 \, {{\mathbb E}}^{P}\big( \| \nabla u^n\|_{L_2([0,T] \times {{\mathbb T}})}^2 \big)\end{aligned}$$ for some $C_1>0$ depending only on $D$ and $a$. Note that, by the definition of $v_i^n$ , hypotheses **A4)**-**A5)** and Young inequality for convolutions $$\begin{aligned} & & \int_0^t \!ds\,\langle Q(v^n) \nabla v^n, \nabla v^n\rangle \\ & & \qquad \qquad \le C_2 \int_0^{t_1^n}\!ds\, \| \imath^n \ast u_0\|_{L_2({{\mathbb T}})}^2 + \int_0^t \!ds\,\langle Q(v^n) \nabla u^n,\nabla u^n \rangle \\ & & \qquad \qquad \le 2^{-n}T\,C_2\,\|\imath^n\|_{L_1({{\mathbb T}})}^2 \| u_0\|_{L_2({{\mathbb T}})}^2 + \int_0^t \!ds\,\langle (D(v^n)-c) \nabla u^n,\nabla u^n \rangle\end{aligned}$$ for some constant $C_2$ depending only on $a$. Patching all together $$\begin{aligned} & & {{\mathbb E}}^{P} \big( \sup_{t \in [0,T]} \| u^n(t)\|_{L_2([0,T] \times {{\mathbb T}})}^2 +c \langle \langle D(v^n) \nabla u^n, \nabla u^n \rangle\rangle \big) \\ & & \qquad \le \big(1+2^{-n}T\,C_2 \|\imath^n\|_{L_1({{\mathbb T}})}^2 \big) \, {{\mathbb E}}^{P}\big( \| u_0\|_{L_2({{\mathbb T}})}^2 \big) \\ & & \qquad \phantom{\le} + C_1\, {{\mathbb E}}^{P}\big( \langle \langle D(v^n) \nabla u^n, \nabla u^n \rangle\rangle^{1/2} \big) +\|\nabla \jmath\|_{L_2({{\mathbb T}})}^2\, {{\mathbb E}}^{P} \big(\| a(v^n)\|_{L_2([0,t] \times {{\mathbb T}})}^2 \big) \end{aligned}$$ Since $2^{-n}\|\imath^n\|_{L_1({{\mathbb T}})}$ was assumed bounded, and since the last term in the right hand side is bounded uniformly in $n$, it is not difficult to gather . Since $u$ satisfies in each interval $[t_i^n,t_{i+1}^n]$ $$\begin{aligned} \big| \langle u^n(t)-u^n(s),\varphi \rangle \big| & \le & C_3 \big( 1+ \|\nabla u^n\|_{L_2([0,T] \times {{\mathbb T}})} \big) \| \nabla \varphi \|_{L_2({{\mathbb T}})}|t-s|^{1/2} \\ & & + \big|\int_s^t \langle \jmath \ast (a(v) \nabla \varphi), dW \rangle \big|\end{aligned}$$ for a suitable constant $C_3$ depending only on $f$ and $D$. then follows from the first part of the lemma. Since $v^n(t)= \imath^n \ast u_0$ for $t\in [0,t_1^n)$, the bound implies $$\lim_{n\to \infty } P\big(\|u^n-v^n\|_{L_2([0,t_1^n]\times {{\mathbb T}})}> r \big)=0$$ for each $r>0$. Therefore, still by , in order to prove , it is enough to show that for each $r,\,\ell>0$ $$\begin{aligned} \lim_{n \to \infty} P\big( \| u^n-v^n\|_{L_2([t_1^n,T] \times {{\mathbb T}})} > r, \|\nabla u^n\|_{L_2([0,T] \times {{\mathbb T}})}^2 \le \ell \big) =0\end{aligned}$$ Let $\kappa \in C^\infty({{\mathbb T}})$ be such that $\int\,dx\, \kappa(x)=1$, and that $$\begin{aligned} \label{e:knorm} \nonumber & & \|\kappa-\mathrm{id}\|_{-1,1}:= \sup \big\{ \int\!dx\, \big|\int\!dy\, \kappa(x-y)\varphi(y)-\varphi(x) \big|, \\ & & \phantom{\|\kappa-\mathrm{id}\|_{-1,1}:= \sup \Big\{ } \varphi \in C^\infty({{\mathbb T}})\,:\:\sup_x |\nabla \varphi(x)|\le 1 \Big\} \le \frac{r}{2\ell}\end{aligned}$$ It is immediate to see that such a $\kappa$ exists. 
Then $$\begin{aligned} & & \|u^n-v^n\|_{L_2([t_1^n,T] \times {{\mathbb T}})} \le \|u^n-\kappa \ast u^n\|_{L_2([t_1^n,T] \times {{\mathbb T}})} \\ & & \qquad \phantom{\le} + \|v^n-\kappa \ast v^n\|_{L_2([t_1^n,T] \times{{\mathbb T}})} + \|\kappa \ast u^n-\kappa \ast v^n\|_{L_2([t_1^n,T] \times{{\mathbb T}})} \\ & & \qquad \le \|\kappa-\mathrm{id}\|_{-1,1} \big[ \| \nabla u^n \|_{L_2([t_1^n,T] \times{{\mathbb T}})} + \| \nabla v^n \|_{L_2([t_1^n,T] \times{{\mathbb T}})} \big] \\ & & \qquad \phantom{\le} + \| \kappa \ast (u^n- v^n)\|_{L_2([t_1^n,T] \times {{\mathbb T}})}\end{aligned}$$ where in the last inequality we used the Young inequality. By the definition - of $v^n$, $\| \nabla v^n \|_{L_2([t_1^n,T] \times {{\mathbb T}})}^2 \le \| \nabla u^n \|_{L_2([0,T] \times{{\mathbb T}})}^2$. Moreover $$\begin{aligned} & & \int_{t_1^n}^T\!dt\, \| \kappa \ast (u^n -v^n)\|_{L_2({{\mathbb T}})}^2 \\ & & \quad = \sum_{i=1}^{2^n-1} \int_{t_i^n}^{t_{i+1}^n} \!dt\, \Big\| \kappa \ast u^n(t) - \frac{2^{n}}{T} \int_{t_{i-1}^n}^{t_i^n}\!ds\, \kappa \ast u^n(s) \Big\|_{L_2({{\mathbb T}})}^2 \\ & & \quad \le T\,\sup_{|t-s|\le 2^{-n+1}T} \| \kappa \ast (u^n(t) -u^n(s))\|_{L_2({{\mathbb T}})}^2\end{aligned}$$ Therefore by $$\begin{aligned} \label{e:filext} \nonumber & & \|u^n-v^n\|_{L_2([t_1^n,T] \times {{\mathbb T}})}^2 \le \frac{r}{2\ell} \| \nabla u^n \|_{L_2([t_1^n,T] \times {{\mathbb T}})}^2 \\ & & \qquad \qquad \qquad +T\,\sup_{|t-s|\le 2^{-n+1}T} \| \kappa \ast (u^n(t) -u^n(s))\|_{L_2({{\mathbb T}})}^2\end{aligned}$$ so that $$\begin{aligned} & & \lim_{n \to \infty} P\big( \| u^n-v^n\|_{L_2([t_1^n,T] \times {{\mathbb T}})} > r, \|\nabla u^n\|_{L_2([0,T] \times {{\mathbb T}})}^2 \le \ell \big) \\ & & \qquad \le \varlimsup_{n \to \infty} P \big( \sqrt{T}\,\sup_{|t-s|\le 2^{-n+1}T} \| \kappa \ast (u^n(t) -u^n(s))\|_{L_2({{\mathbb T}})} \ge r/2 \big)\end{aligned}$$ which vanishes in view of . We define ${{\mathbb P}}^n$ to be the law of $u^n$, namely ${{\mathbb P}}^n = P \circ (u^n)^{-1}$. In order to establish tightness of the sequence $\{{{\mathbb P}}^n\}$, the ${{\mathbb P}}^n$ will be regarded as probability measures on $C\big([0,T],H^{-1}({{\mathbb T}})\big) \supset Y$, although they are concentrated on $Y$. \[c:tight\] $\{{{\mathbb P}}^n\}$ is tight, and thus compact, on $C\big([0,T],H^{-1}({{\mathbb T}})\big)$ equipped with the uniform topology. Furthermore each limit point ${\,\overline{\! {{\mathbb P}}}}$ of $\{{{\mathbb P}}^n\}$ is concentrated on $Y$ and satisfies $$\begin{aligned} \label{e:limbound} {{\mathbb E}}^{{\,\overline{\! {{\mathbb P}}}}} \big(\sup_t \| u(t)\|_{L_2({{\mathbb T}})}^2 + \| \nabla u\|_{L_2([0,T] \times {{\mathbb T}})}^2 \big)<+\infty\end{aligned}$$ By the compact Sobolev embedding of $L_2({{\mathbb T}})$ in $H^{-1}({{\mathbb T}})$, the estimate implies that *compact containment condition* is satisfied, namely there exists a sequence $\{K_\ell\}$ of compact subsets of $H^{-1}({{\mathbb T}})$ such that $$\begin{aligned} \lim_\ell \varlimsup_n {{\mathbb P}}\big(\exists t \in [0,T]\,:\:u^n(t) \not\in K_\ell \big)=0\end{aligned}$$ Moreover the estimate implies that for each $\varphi \in H^1({{\mathbb T}})$ the laws of the processes $t \mapsto \langle u^n(t),\varphi \rangle$ are tight in $C\big([0,T];{{\mathbb R}}\big)$ as $n$ runs on ${{\mathbb N}}$, see [@Bi page 83]. By [@Ja Theorem 3.1], we get tightness of $\{{{\mathbb P}}^n\}$ on $C\big([0,T],H^{-1}({{\mathbb T}})\big)$. follows immediately by . 
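To make the construction of the approximating sequence $\{u^n\}$ used above more tangible, the following is a crude numerical sketch of the same freezing idea on a periodic grid: on each dyadic time block the coefficients $D$ and $a$ are evaluated at the time average of the numerical solution over the previous block, while the mollified noise enters in conservative form. The flux, diffusion, noise amplitude, mollifier and all grid parameters are illustrative assumptions, not taken from the text, and the explicit Euler-Maruyama stepping is only a caricature of the semigroup solution used in the proofs.

```python
import numpy as np

# Illustrative choices only: Burgers flux, constant diffusion, bounded noise coefficient.
J, T, n = 64, 1.0, 4                    # grid points, final time, dyadic level (2^n blocks)
dx = 1.0 / J
x = np.arange(J) * dx
f = lambda u: 0.5 * u**2                # flux (assumption, not from the text)
D = lambda u: 1.0 + 0.0 * u             # diffusion (assumption)
a = lambda u: 0.1 * np.exp(-u**2)       # noise amplitude (assumption)
jker = np.exp(-(np.minimum(x, 1.0 - x) / 0.05) ** 2)
jker /= jker.sum() * dx                 # mollifier with unit integral, as in A3)

def dxc(w):
    """Centered periodic difference approximating the spatial derivative on the torus."""
    return (np.roll(w, -1) - np.roll(w, 1)) / (2.0 * dx)

def conv(w):
    """Periodic convolution with the mollifier jker."""
    return np.real(np.fft.ifft(np.fft.fft(jker) * np.fft.fft(w))) * dx

rng = np.random.default_rng(0)
dt = 1.0e-4                             # explicit step, small with respect to dx^2 / max D
steps_per_block = round(T / 2**n / dt)

u = np.sin(2.0 * np.pi * x) ** 2        # initial datum u_0 with values in [0, 1]
v = conv(u)                             # v_0^n: mollified initial datum
for i in range(2**n):                   # dyadic intervals [t_i^n, t_{i+1}^n]
    acc = np.zeros_like(u)
    for _ in range(steps_per_block):
        dW = rng.normal(0.0, np.sqrt(dt / dx), J)       # discretized space-time white noise
        drift = -dxc(f(u)) + 0.5 * dxc(D(v) * dxc(u))   # D and a frozen at v on this block
        u = u + dt * drift + dxc(a(v) * conv(dW))
        acc += u * dt
    v = acc / (steps_per_block * dt)    # v_{i+1}^n: time average over the block just computed
```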
The following statement is derived following closely the proof of Proposition 3.5 in [@BBMN]. \[p:compactlow\] Let $K \subset C\big([0,T]; U\big)$. Suppose that each $u \in K$ has a Schwartz distributional derivative in the $x$-variable $\nabla u \in L_2([0,T]\times {{\mathbb T}})$, and suppose that there exists $\zeta>0$ such that $\| \nabla u\|_{L_2([0,T]\times {{\mathbb T}})} \le \zeta$. Then $K$ is strongly compact in ${{\mathcal X}}$. \[p:uexists\] Each limit point ${\,\overline{\! {{\mathbb P}}}}$ of $\{{{\mathbb P}}^n\}$ is a weak solution to . Let ${\,\overline{\! {{\mathbb P}}}}$ be a limit point of $\{{{\mathbb P}}^n\}$ along a subsequence $n_k$. The law of $u(0)$ under ${\,\overline{\! {{\mathbb P}}}}$ coincides with the law of $u_0$. For $u \in Y$, $v \in D\big([0,T);L_2({{\mathbb T}}) \big)$ and $\varphi \in C^\infty\big([0,T] \times {{\mathbb T}}\big)$ let $$\begin{aligned} \langle M(t;u,v), \varphi \rangle & := & \langle u(t),\varphi(t)\rangle - \langle u(0), \varphi(0) \rangle \\ & & - \int_0^t \!ds\, \big\langle u,\partial_t \varphi \rangle - \langle f(v) -\frac 12 D(v) \nabla u,\nabla \varphi \big\rangle\end{aligned}$$ By , , and Lemma \[p:compactlow\], the law of $\langle M(\cdot;u^n,v^n), \varphi \rangle$ converges, along the subsequence $n_k$, to the law of $\langle M(\cdot;u,u), \varphi \rangle=\langle M(\cdot,u), \varphi \rangle$ under ${\,\overline{\! {{\mathbb P}}}}$. For each $n$ and $\varphi$, $\langle M(\cdot;u^n,u^n), \varphi \rangle$ is a martingale with respect to ${{\mathbb P}}^n$, with quadratic variation $$\big[\langle M(\cdot;u^n,u^n), \varphi \rangle,\langle M(\cdot;u^n,u^n), \varphi \rangle\big](t) =\big\| \jmath \ast (a(v^n) \nabla \varphi)\big\|_{L_2([0,t] \times {{\mathbb T}})}^2$$ Still by , , and Lemma \[p:compactlow\], we have that $\langle M(\cdot,u), \varphi \rangle$ is a martingale under ${\,\overline{\! {{\mathbb P}}}}$, with quadratic variation given by . \[p:uunique\] There exists at most one strong solution to in $Y$. Each strong solution to admits a version in $C\big([0,T];L_2({{\mathbb T}})\big)$. Let $u$, $v$ be two strong solutions to equation .
By Itô formula, for $l \in C^2({{\mathbb R}})$ with bounded derivatives $$\begin{aligned} \label{e:Astima} \nonumber & & \int \!dx\, l(u-v)(t) - l(0)+ \frac{1}{2} \int_0^t \!ds\, \langle D(u) l^{\prime\prime}(u-v) \nabla (u-v), \nabla (u-v)\rangle \\ & & \nonumber \qquad = X(t) + \int_0^t \!ds \langle l^{\prime \prime}(u-v) \nabla (u-v), f(u) -f(v)\rangle \\ & &\nonumber \qquad \phantom{=} -\frac{1}{2} \int_0^t \!ds\, \langle l^{\prime \prime}(u-v) \nabla (u-v), [D(u)-D(v)] \nabla v \rangle \\ & & \nonumber \qquad \phantom{=} +\frac{1}{2} \int_0^t \!ds\, \langle l^{\prime \prime}(u-v), \|\nabla \jmath\|_{L^2({{\mathbb T}})}^2 \big(a(u)-a(v) \big)^2 \\ & & \qquad \phantom{=+\frac{1}{2} \int_0^t \!ds\, \langle l^{\prime \prime}(u-v), } +\|\jmath \|_{L^2({{\mathbb T}})}^2 \big(a'(u) \nabla u-a'(v) \nabla v \big)^2 \rangle\end{aligned}$$ and the quadratic variation of the martingale $X(t)$ enjoys the bound $$\big[X,X \big](t) \le \int_0^t \!ds\,\|l^{\prime \prime}(u-v) \nabla(u-v)\big(a(u)-a(v)\big)\|_{L^2({{\mathbb T}})}^2$$ We next introduce the real number $$\begin{aligned} R:=\Big[ {{\mathbb E}}^{P} \Big(\int_0^t \!ds\, \langle l^{\prime \prime}(u-v) \nabla (u-v),\nabla(u-v) \rangle \Big) \Big]^{1/2}\end{aligned}$$ Taking the supremum over $t$ and the ${{\mathbb E}}^{P}$ expected value in , repeatedly using the Hölder inequality and the Burkholder-Davis-Gundy inequality [@RY Theorem 4.4.1], assumptions **A2)** and **A5)** and the bound , we get for a suitable constant $C>0$ $$\begin{aligned} & & {{\mathbb E}}^{P} \Big( \sup_{t\le T} \int\!dx\,l(u-v)(t) \Big)+ c R^2 \\ & & \qquad \qquad \le 2\,l(0)+ C \big[ {{\mathbb E}}^{P} \big( \|l^{\prime \prime}(u-v) |u-v|^2 \|_{L^\infty([0,T]\times {{\mathbb T}})}\big) \big]^{1/2} R \\ & & \qquad \qquad \phantom{\le} +C {{\mathbb E}}^{P} \Big( \int_0^t \!ds\, \langle l^{\prime \prime}(u-v)|u-v|,|u-v|\rangle \Big)\end{aligned}$$ For any $\delta>0$, we can choose $l$ so that $|z|\le l(z) \le |z|+\delta$, $l(z)=|z|$ for $|z|\ge \delta$, and $|l^{\prime \prime}(z)| \le 3 \delta^{-1}$. Therefore $$\begin{aligned} {{\mathbb E}}^{P} \big( \sup_t \|u-v\|_{L^1({{\mathbb T}})} \big) & \le & {{\mathbb E}}^{P} \Big( \sup_{t} \int\!dx\, l(u-v)(t) \Big) \\ & \le& 2 \delta-c R^2+ C \sqrt{\delta} R + C \delta \le \Big(\frac{C^2}{4 c} + C+2\Big) \delta\end{aligned}$$ Since the last inequality holds for any $\delta>0$, we have $u=v$. The $C\big([0,T];L_2({{\mathbb T}}) \big)$ regularity for a version $u$ can be easily derived from Itô formula for the map $(t,u) \mapsto \int\!dx\,u(t,x)^2$. Existence and uniqueness of a strong solution to is a consequence of Proposition \[p:uexists\], Proposition \[p:uunique\] and Yamada-Watanabe theorem [@KS Chap. 5, Corollary 3.23]. The fact that $u$ takes values in $[0,1]$ is proved in the same fashion as Lemma \[l:supexpnabla\]. Let $\{l_n\}$ be a sequence of infinitely differentiable convex functions on ${{\mathbb R}}$ with bounded derivatives. We can choose $\{l_n\}$ such that for $v \in [0,1]$ $l_n^{\prime \prime}(v) \le D(v)\,a^{-2}(v)$ and $l_n(v) \le C_n (1+v^2)$ (for some $C_n>0$), while $l_n(v) \uparrow +\infty$ for $n\to +\infty$ pointwise for $v \not \in [0,1]$.
By Itô formula $$\begin{aligned} & &\int\!dx\, \big[l_n(u(t))- l_n(u_0)\big] + \frac{1}{2} \int_0^t \!ds\, \big\langle l_n^{\prime \prime}(u) D(u)\, \nabla u,l_n^{\prime \prime}(u)\, \nabla u \big\rangle \\ & & =\frac 12 \int_0^t \!ds\,\big\langle l_n^{\prime \prime}(u) \nabla u, Q(u)\,\nabla u \big \rangle +\|\nabla \jmath\|_{L_2({{\mathbb T}})}^2 \int_0^t \!ds\int\!dx \, l_n^{\prime \prime}(u)\, a(u)^2+N_n(t)\end{aligned}$$ where $N_n(t)$ is a martingale, and by Young inequality for convolutions its quadratic variation is bounded by $\big[N_n , N_n \big](t) \le \|a(u) l_n^{\prime \prime}(u) \, \nabla u\|_{L_2([0,T]\times {{\mathbb T}})}^2$. Following closely the proof of Lemma \[l:supexpnabla\], we gather for some constant $C$ independent of $n$ $$\begin{aligned} {{\mathbb E}}^{P} \big( \sup_{t \le T} \int \!dx\, l_n(u(t)) \big) \le {{\mathbb E}}^{P} \big( \int \!dx\, l_n(u_0) \big)+ C\end{aligned}$$ As we let $n \to \infty$, the left hand side stays bounded, and since $l_n \to +\infty$ pointwise off $[0,1]$, we have $dx\,d P$-a.s. that $u(t,x) \in [0,1]$, for each $t \in [0,T]$. *Acknowledgements* I am grateful to Lorenzo Bertini for introducing me to the problem and providing invaluable help. I also thank S.R.S. Varadhan for enlightening discussions both on technical and general aspects of this work. I acknowledge the hospitality and the support of Istituto Guido Castelnuovo (Sapienza Università di Roma), and Courant Institute of Mathematical Sciences (New York University). This work was partially supported by ANR LHMSHE. Ambrosio L., De Lellis C., Maly J., *On the chain rule for the divergence of BV like vector fields: applications, partial results, open problems*. AMS series in contemporary mathematics “Perspectives in Nonlinear Partial Differential Equations: in honor of Haim Brezis” (2005). Ambrosio L., Fusco N., Pallara D., Functions of bounded variation and free discontinuity problems. Oxford University Press, New York (2000). Bellettini G., Bertini L., Mariani M., Novaga N., *$\Gamma$-entropy cost functional for scalar conservation laws* (to appear in Arch.Rat.Mech.Anal.). Billingsley P., Convergence of Probability measures, 2nd Edition. John Wiley and Sons, New York (1999). Dafermos C.M., Hyperbolic conservation laws in continuum physics, Second edition. Springer-Verlag, Berlin (2005). De Lellis C., Otto F., Westdickenberg M, *Structure of entropy solutions for multi-dimensional scalar conservation laws*, Arch. Ration. Mech. Anal. **170** no. 2, 137–184 (2003). Da Prato G., Zabczyk J., Stochastic equations in infinite dimensions. Cambridge University Press, Cambridge (1992). Dembo A., Zeitouni O., Large Deviations Techniques and Application. Springer Verlag, New York etc. (1993). Feng J., Kurtz T.G., Large deviations for stochastic processes, Mathematical Surveys and Monographs, 131. American Mathematical Society (2006). Freidlin M.I., Wentzell A.D., Random perturbations of dynamical systems, Second Edition. Springer-Verlag, New York (1998). Jakubowski A., *On the Skorokhod topology*, Ann. Inst. H. Poincaré Probab. Statist. **22** no. 3, 263–285 (1986). Jensen L.H., *Large deviations of the asymmetric simple exclusion process in one dimension.* Ph.D. Thesis, Courant Institute NYU (2000). Kipnis C., Landim C., Scaling limits of interacting particle systems. Springer-Verlag, Berlin (1999). Karatzas I. Shreve S.E., Brownian Motion and Stochastic Calculus. Springer-Verlag, New York Berlin Heidelberg (1988). 
Landim C., *Hydrodynamical limit for space inhomogeneous one dimensional totally asymmetric zero range process*. Ann.Probab. **24** no. 2, 599-638 (1996). Lions P.-L., Souganidis P., *Fully nonlinear stochastic partial differential equations.*, C. R. Acad. Sci. Paris Sér. I Math. **326** no. 9, 1085–1092 (1998). Lions P.-L., Souganidis P., *Uniqueness of weak solutions of fully nonlinear stochastic partial differential equations.* C. R. Acad. Sci. Paris Sér. I Math. **331** no. 10, 783–790 (2000). Mariani M., *Large Deviations for stochastic conservation laws and their variational counterparts*, Ph.D. Thesis, Sapienza Università di Roma 2007. Revuz D., Yor M., Continuous Martingales and Brownian Motion. Springer, Berlin etc. (1999). Spohn H., Large scale dynamics of interacting particles. Springer-Verlag, Berlin (1991). Varadhan S.R.S., *Large Deviations for the Simple Asymmetric Exclusion Process*, Stochastic analysis on large scale interacting systems, Adv. Stud. Pure Math., **39**, 1–27 (2004).
--- abstract: 'We construct a family of two new optimized explicit Runge-Kutta methods with zero phase-lag and derivatives for the numerical solution of the time-independent radial Schrödinger equation and related ordinary differential equations with oscillating solutions. The numerical results show the superiority of the new technique of nullifying both the phase-lag and its derivatives.' author: - 'Z.A. [^1]' - 'D.S. [^2]' - 'T.E. [^3]' title: 'A Family of Runge-Kutta Methods with Zero Phase-Lag and Derivatives for the Numerical Solution of the Schrödinger Equation and Related Problems' --- Introduction {#Intro} ============ Much research has been done on the numerical integration of the radial Schrödinger equation: $$\label{Schrodinger} y''(x) = \left( \frac{l(l+1)}{x^{2}}+V(x)-E \right) y(x)$$ where $\frac{l(l+1)}{x^{2}}$ is the *centrifugal potential*, $V(x)$ is the *potential*, $E$ is the *energy* and $W(x) = \frac{l(l+1)}{x^{2}} + V(x)$ is the *effective potential*. It is valid that ${\mathop {\lim} \limits_{x \to \infty}} V(x) = 0$ and therefore ${\mathop {\lim} \limits_{x \to \infty}} W(x) = 0$. Many problems in chemistry, physics, physical chemistry, chemical physics, electronics etc., are expressed by equation (\[Schrodinger\]). In this paper we will study the case of $E>0$. We divide $[0,\infty]$ into subintervals $[a_{i},b_{i}]$ so that $W(x)$ is a constant with value ${\mathop{W_{i}}\limits^{\_}}$. After this the problem (\[Schrodinger\]) can be expressed by the approximation $$\begin{array}{l} \label{Schrodinger_simplified} y''_{i} = ({\mathop{W}\limits^{\_}} - E)\,y_{i}, \quad\quad \mbox{whose solution is}\\ y_{i}(x) = A_{i}\,\exp{\left(\sqrt{{\mathop{W}\limits^{\_}}-E}\,x\right)} + B_{i}\,\exp{\left(-\sqrt{{\mathop{W}\limits^{\_}}-E}\,x\right)}, \\ A_{i},\,B_{i}\,\in {\mathbb R}. \end{array}$$ There has been an extended bibliography on the development and analysis of numerical methods for the efficient solution of the Schrödinger equation: see for example [@royal]-[@jnaiam3_11]. Basic theory {#theory} ============ Explicit Runge-Kutta methods {#theory_rk} ---------------------------- An $s$-stage explicit Runge-Kutta method used for the computation of the approximation of $y_{n+1}(x)$, when $y_{n}(x)$ is known, can be expressed by the following relations: $$\begin{aligned} \label{RK_gen} \nonumber y_{n + 1} = y_{n}+{\sum\limits_{i = 1}^{s} {b_{i}}}\,k_{i}\\ k_{i} = h\, f\left(x_{n} + c_{i} h,\,y_{n} + h\,{\sum\limits_{j = 1}^{i - 1} {a_{ij}\, k_{j}} } \right),\; i = 1,\ldots,s\end{aligned}$$ where in this case $f\left(x,y(x)\right) = \left( W(x) - E \right) \, y(x)$. Actually to solve the second order ODE (\[Schrodinger\]) using first order numerical method (\[RK\_gen\]), (\[Schrodinger\]) becomes: $$\begin{array}{l} \label{Schrodinger2} z'(x) = \left( W(x) - E \right) \, y(x)\\ y'(x) = z(x) \end{array}$$ while we use two sets of equations (\[RK\_gen\]): one for $y_{n + 1}$ and one for $z_{n + 1}$. 
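To fix ideas, here is a minimal sketch (not code from the authors) that assembles the first-order system (\[Schrodinger2\]) and performs one step of a generic $s$-stage explicit Runge-Kutta method, written in the usual form $k_i = f(x_n + c_i h,\, y_n + h\sum_{j<i} a_{ij} k_j)$, $y_{n+1} = y_n + h\sum_i b_i k_i$. The tableau entries are those of the classical three-stage, third-order reference method introduced in the next section; the constant effective potential, the energy and the step size are placeholders chosen only for illustration.

```python
import numpy as np

def rhs(x, state, W, E):
    """First-order system equivalent to the radial equation: y' = z, z' = (W(x) - E) y."""
    y, z = state
    return np.array([z, (W(x) - E) * y])

def explicit_rk_step(f, x_n, y_n, h, A, b, c):
    """One step of an s-stage explicit Runge-Kutta method (A strictly lower triangular)."""
    s = len(b)
    k = []
    for i in range(s):
        y_stage = y_n + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(x_n + c[i] * h, y_stage))
    return y_n + h * sum(b[i] * k[i] for i in range(s))

# Tableau of the classical 3-stage, third-order reference method of the next section.
A = [[0.0, 0.0, 0.0],
     [0.5, 0.0, 0.0],
     [-1.0, 2.0, 0.0]]
b = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
c = [0.0, 0.5, 1.0]

# Placeholders: constant effective potential, energy and step size chosen only for illustration.
W, E, h = (lambda x: 0.0), 1.0, 0.1
state = np.array([0.0, 1.0])            # y(0) = 0, y'(0) = 1
state = explicit_rk_step(lambda xx, s_: rhs(xx, s_, W, E), 0.0, state, h, A, b, c)
```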
The method shown above can also be presented using the Butcher table below: $$\begin{array}{c|ccccc} \label{table_RK} 0\\ c_{2} & a_{21}\\ c_{3} & a_{31} & a_{32}\\ \vdots& \vdots& \vdots\\ c_{s} & a_{s1}& a_{s2}& \ldots & a_{s,s-1}& \\ \hline & b_{1} & b_{2} & \ldots & b_{s-1} & b_{s}\\ \end{array}$$ Coefficients $c_{2}$, …, $c_{s}$ must satisfy the equations: $$\label{eq_tree2} c_{i} = {\sum\limits_{j = 1}^{i-1} {a_{ij},\;i = 2,\ldots,s}}$$ \[defn\_tree5\] *[@butcher] A Runge-Kutta method has algebraic order $p$ when the method’s series expansion agrees with the Taylor series expansion in the $p$ first terms:* $y^{( {n} )}( {x} ) = y_{app.}^{( {n} )} ( {x} )$, $\;\; n=1,2,\ldots,p.$ A convenient way to obtain a certain algebraic order is to satisfy a number of equations derived from Tree Theory. These equations will be shown during the construction of the new methods. Phase-Lag Analysis of Runge-Kutta Methods ----------------------------------------- The phase-lag analysis of Runge-Kutta methods is based on the test equation $$\label{eq_phase_test1} y' = I \omega y, \quad \omega \in R$$ Application of the Runge-Kutta method described in (\[RK\_gen\]) to the scalar test equation (\[eq\_phase\_test1\]) produces the numerical solution: $$\label{eq_phase_test2} {y_{n+1}=a^{n}_{*}}y_{n},\;\; a_{*}=A_{s}(v^{2})+ivB_{s}(v^{2}),$$ where $v=\omega h$ and $A_{s}, B_{s}$ are polynomials in $v^{2}$ completely defined by Runge-Kutta parameters $a_{i,j}$, $b_{i}$ and $c_{i}$, as shown in (\[table\_RK\]). \[defn\_dissip\]*[@royal]* *In the explicit *s*-stage Runge-Kutta method, presented in (\[table\_RK\]), the quantities* $t(v)=v-\arg[a_{*}(v)]$, $a(v)=1-|a_{*}(v)|$\ *are respectively called the *phase-lag* or *dispersion error* and the *dissipative error*. If $t(v)=O(v^{q+1})$ and $a(v)=O(v^{r+1})$ then the method is said to be of dispersive order *q* and dissipative order *r**. Construction of the new trigonometrically fitted Runge-Kutta methods {#Construction} ==================================================================== We consider the explicit Runge-Kutta method with 3 stages and 3rd algebraic order given in table (\[table\_classical\]). $$\begin{array}{c|cccccccc} \label{table_classical} \frac{1}{2} & \frac{1}{2}\\ 1 & - 1 & 2\\ \hline & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} \end{array}$$ We will construct two new optimized methods. First optimized method with zero phase-lag {#Constr1} ------------------------------------------ In order to develop the new optimized method, we set free $b_3$, while all other coefficients are borrowed from the classical method. We want the phase-lag of the method to be null, so we satisfy the equation $PL=0$, while solving for $b_3$, where $$PL = 1/6\, \left( 6+ \left( -2-6\,b_{{3}} \right) {v}^{2} \right) \tan \left( v \right) +{v}^{3}b_{{3}}+1/6\, \left( -5-6\,b_{{3}} \right) v$$ So $b_3$ becomes $${\displaystyle}b_3 = -{\frac {-6\,\tan \left( v \right) +2\,\tan \left( v \right) {v}^{2}+5\,v}{6 v \left( v\tan \left( v \right) -{v}^{2}+1 \right) }}$$ and its Taylor series expansion is $${\displaystyle}b_3 = \frac{1}{6}-\frac{1}{30}\,{v}^{4}-{\frac {4}{315}}\,{v}^{6}+{\frac {17}{2835}}\,{v}^{8}+{\frac {206}{31185}}\,{v}^{10}+{\frac {7951}{12162150}}\,{v}^{12}-\ldots$$ where $v=\omega h$, $\omega$ is a real number and indicates the dominant frequency of the problem and $h$ is the step-length of integration. 
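Numerically, the closed form for $b_3$ is a $0/0$ expression at $v=0$ and loses accuracy to the cancellation between $-6\tan(v)$ and $5v$ for small $v$, which is one practical use of the Taylor expansion above. The following is a minimal sketch of how the coefficient might be evaluated; the switching threshold is an illustrative choice, not prescribed by the text.

```python
import math

def b3_zero_phase_lag(v, switch=1e-2):
    """Frequency-dependent coefficient b_3 of the first optimized method, v = omega * h.

    Below |v| = switch the Taylor expansion quoted above is used; the threshold
    itself is an illustrative choice.
    """
    if abs(v) < switch:
        return (1.0 / 6.0 - v**4 / 30.0 - 4.0 * v**6 / 315.0
                + 17.0 * v**8 / 2835.0 + 206.0 * v**10 / 31185.0)
    t = math.tan(v)
    return -(-6.0 * t + 2.0 * t * v**2 + 5.0 * v) / (6.0 * v * (v * t - v**2 + 1.0))

# All other coefficients are those of the classical method; only b_3 varies with v.
# As v -> 0 the value tends to the classical 1/6.
print(b3_zero_phase_lag(0.1))
```

The same closed-form-plus-series recipe applies to the pair $b_2$, $b_3$ of the second optimized method constructed below.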
Second optimized method with zero phase-lag and derivative {#Constr1} ---------------------------------------------------------- As for the development of the second optimized method, we set free $b_2$ and $b_3$, while all other coefficients are borrowed from the classical method. We want the phase-lag and its first derivative of the method to be null, so we satisfy the equations $\{PL=0, PL'=0\}$, while solving for $b_2$ and $b_3$, where $$\begin{array}{l} PL = 1/6\, \left( 6+ \left( -3\,b_{{2}}-6\,b_{{3}} \right) {v}^{2} \right) \tan \left( v \right) +v \left( -1/6-b_{{2}}-b_{{3}}+b_{{3}}{v}^{2} \right)\\ PL' = -v\tan \left( v \right) b_{{2}}-2\,v\tan \left( v \right) b_{{3}}+5/6+\left( \tan \left( v \right) \right) ^{2}-1/2\,{v}^{2}b_{{2}}\\ -1/2\,{v}^{2}b_{{2}} \left( \tan \left( v \right) \right) ^{2}+2\,b_{{3}}{v}^{2}-b_{{3}}{v}^{2} \left( \tan \left( v \right) \right) ^{2}-b_{{2}} -b_{{3}} \end{array}$$ Then we have $$\begin{array}{l} b_2=\frac{1}{6}\,{\frac {12 \,v+{v}^{3}+\tan \left( v \right) {v}^{2}-12\,\tan \left( v \right) +{ v}^{3} \left( \tan \left( v \right) \right) ^{2}}{{v}^{2} \left( -3\, v+\tan \left( v \right) +v \left( \tan \left( v \right) \right) ^{2}- \tan \left( v \right) {v}^{2}+{v}^{3}+{v}^{3} \left( \tan \left( v \right) \right) ^{2} \right) }}\\ b_3=\frac{1}{3}\,{\frac {5\,{v}^{3} \left( \tan \left( v \right) \right) ^{2}+7\,{v}^{3}-19\,\tan \left( v \right) {v}^{2}+6\,v \left( \tan \left( v \right) \right) ^{2}-6\,v+6\,\tan \left( v \right) }{{v}^{2} \left( -3\,v+\tan \left( v \right) +v \left( \tan \left( v \right) \right) ^{2}-\tan \left( v \right) {v}^{2}+{v}^{3}+ {v}^{3} \left( \tan \left( v \right) \right) ^{2} \right) }} \end{array}$$ The Taylor series expansion of the coefficients are given below: $$\begin{array}{l} b_2 = \frac{2}{3}-\frac{2}{15}\,{v}^{2}-{\frac {52}{315}}\,{v}^{4}-{\frac {3526}{14175}}\,{v }^{6}-{\frac {173788}{467775}}\,{v}^{8}-{\frac {354768808}{638512875}} \,{v}^{10}-\ldots\\ b_3 = \frac{1}{6}+\frac{2}{15}\,{v}^{2}+{\frac {25}{126}}\,{v}^{4}+{\frac {4201}{14175}}\,{v }^{6}+{\frac {207349}{467775}}\,{v}^{8}+{\frac {423287713}{638512875}} \,{v}^{10}+\ldots \end{array}$$ where $v=\omega h$, $\omega$ is a real number and indicates the dominant frequency of the problem and $h$ is the step-length of integration. Algebraic order of the new methods {#Order} ================================== The following 4 equations must be satisfied so that the new methods maintain the third algebraic order of the corresponding classical method (\[table\_classical\]). The number of stages is symbolized by $s$, where $s=4$. Then we are presenting the Taylor series expansions of the remainders of these equations, that is the difference of the right part minus the left part. $$\begin{array}{cc}\label{alg6_gen} \textbf{1st Alg. Order (1 equation)} & \textbf{3rd Alg. Order (4 equations)}\\ {\sum\limits_{i = 1}^{s} {b_{i}} } = 1 & {\sum\limits_{i = 1}^{s} {b_{i}} } c_{i} ^{2} = \frac{1}{3}\\ \textbf{2nd Alg. 
Order (2 equations)} & {\sum\limits_{i,j = 1}^{s} {b_{i}} } a_{ij} c_{j} = \frac{1}{6}\\ {\sum\limits_{i = 1}^{s} {b_{i}} } c_{i} = \frac{1}{2} \end{array}$$ Equations remainders for the first method ----------------------------------------- We are presenting ${\it Rem}$ which is the remainder for all four equations for the first method: $$\begin{array}{l} {\it Rem} = -\frac{1}{30}\,{v}^{4}-{\frac {4}{315}}\,{v}^{6}+{\frac {17}{2835}}\,{v}^{8}+{\frac {206}{31185}}\,{v}^{10}+\ldots \end{array}$$ Equations remainders for the second method ------------------------------------------ The four remainders of the equations for the second method are: $$\begin{array}{l} {\it Rem}_{{1}} = \frac{1}{30}\,{v}^{4}+\frac{1}{21}\,{v}^{6}+{\frac {113}{1575}}\,{v}^{8}+{\frac {7171}{66825}}\,{v}^{10}+\ldots\\ {\it Rem}_{{2}} = \frac{1}{15}\,{v}^{2}+{\frac {73}{630}}\,{v}^{4}+{\frac {2438}{14175}}\,{v}^{6}+{\frac {24091}{93555}}\,{v}^{8}+{\frac {245903309}{638512875}}\,{v}^{10}+\ldots\\ {\it Rem}_{{3}} = \frac{1}{10}\,{v}^{2}+{\frac {11}{70}}\,{v}^{4}+{\frac {2213}{9450}}\,{v}^{6}+{\frac {54634}{155925}}\,{v}^{8}+{\frac {37177279}{70945875}}\,{v}^{10}+\ldots\\ {\it Rem}_{{4}} = \frac{2}{15}\,{v}^{2}+{\frac {25}{126}}\,{v}^{4}+{\frac {4201}{14175}}\,{v}^{6}+{\frac {207349}{467775}}\,{v}^{8}+{\frac {423287713}{638512875}}\,{v}^{10}+\ldots\\ \end{array}$$ We see that the two optimized methods retain the third algebraic order, since the constant term of all the remainders is zero. Numerical results {#Numerical_results} ================= The inverse resonance problem ----------------------------- The efficiency of the two new constructed methods will be measured through the integration of problem (\[Schrodinger\]) with $l=0$ at the interval $[0,15]$ using the well known Woods-Saxon potential $$\begin{aligned} \label{Woods_Saxon} V(x) = \frac{u_{0}}{1+q} + \frac{u_{1}\,q}{(1+q)^2}, \quad\quad q = \exp{\left(\frac{x-x_{0}}{a}\right)}, \quad \mbox{where}\\ \nonumber u_{0}=-50, \quad a=0.6, \quad x_{0}=7 \quad \mbox{and} \quad u_{1}=-\frac{u_{0}}{a}\end{aligned}$$ and with boundary condition $y(0)=0$. The potential $V(x)$ decays more quickly than $\frac{l\,(l+1)}{x^2}$, so for large $x$ (asymptotic region) the Schrödinger equation (\[Schrodinger\]) becomes $$\label{Schrodinger_reduced} y''(x) = \left( \frac{l(l+1)}{x^{2}}-E \right) y(x)$$ The last equation has two linearly independent solutions $k\,x\,j_{l}(k\,x)$ and $k\,x\,n_{l}(k\,x)$, where $j_{l}$ and $n_{l}$ are the *spherical Bessel* and *Neumann* functions. When $x \rightarrow \infty$ the solution takes the asymptotic form $$\label{asymptotic_solution} \begin{array}{l} y(x) \approx A\,k\,x\,j_{l}(k\,x) - B\,k\,x\,n_{l}(k\,x) \\ \approx D[sin(k\,x - \pi\,l/2) + \tan(\delta_{l})\,\cos{(k\,x - \pi\,l/2)}], \end{array}$$ where $\delta_{l}$ is called *scattering phase shift* and it is given by the following expression: $$\tan{(\delta_{l})} = \frac{y(x_{i})\,S(x_{i+1}) - y(x_{i+1})\,S(x_{i})} {y(x_{i+1})\,C(x_{i}) - y(x_{i})\,C(x_{i+1})},$$ where $S(x)=k\,x\,j_{l}(k\,x)$, $C(x)=k\,x\,n_{l}(k\,x)$ and $x_{i}<x_{i+1}$ and both belong to the asymptotic region. Given the energy we approximate the phase shift, the accurate value of which is $\pi/2$ for the above problem. We will use three different values for the energy: i) $989.701916$, ii) $341.495874$ and iii) $163.215341$. 
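As a concrete illustration of the resonance computation, the sketch below evaluates the Woods-Saxon potential (\[Woods_Saxon\]) and extracts $\tan(\delta_{l})$ from two solution samples in the asymptotic region via SciPy's spherical Bessel and Neumann functions. The sample abscissae and solution values are placeholders standing in for the output of the integrator, not data from the text.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

u0, a, x0 = -50.0, 0.6, 7.0
u1 = -u0 / a

def woods_saxon(x):
    """Woods-Saxon potential V(x) with the parameter values quoted above."""
    q = np.exp((x - x0) / a)
    return u0 / (1.0 + q) + u1 * q / (1.0 + q) ** 2

def tan_phase_shift(E, l, x_i, y_i, x_ip1, y_ip1):
    """tan(delta_l) from two solution samples taken in the asymptotic region."""
    k = np.sqrt(E)
    S = lambda x: k * x * spherical_jn(l, k * x)   # k x j_l(k x)
    C = lambda x: k * x * spherical_yn(l, k * x)   # k x n_l(k x)
    return ((y_i * S(x_ip1) - y_ip1 * S(x_i)) /
            (y_ip1 * C(x_i) - y_i * C(x_ip1)))

# Placeholder inputs: in practice y_i and y_{i+1} come from the Runge-Kutta integration.
E, l = 989.701916, 0
x_i, x_ip1 = 14.0, 15.0
y_i, y_ip1 = 0.3, -0.2                             # illustrative values only
print(tan_phase_shift(E, l, x_i, y_i, x_ip1, y_ip1))
```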
As for the frequency $\omega$ we will use the suggestion of Ixaru and Rizea [@ix_ri]: $$\omega = \cases{ \sqrt{E-50} & $x\in[0,\,6.5]$ \cr \sqrt{E} &$x\in[6.5,\,15]$ \cr}$$ Nonlinear Problem ----------------- $y''=-100\, y+\sin(y),\;$ with $\; y(0)=0,\: y'(0)=1,\: t\in[0,20\,\pi]$, $y(20\pi)=$ $3.92823991 \,\cdot\, 10 ^{-4}$ and $\omega=10$ as frequency of this problem. Comparison ---------- We present the **accuracy** of the tested methods expressed by the $-\log_{10}$(error at the end point) when comparing the phase shift to the actual value $\pi/2$ versus the $\log_{10}$(total function evaluations). The **function evaluations** per step are equal to the number of stages of the method multiplied by two that is the dimension of the vector of the functions integrated for the Schrödinger ($y(x)$ and $z(x)$). In Figure \[fig\_resonance\_989\] we use $E = 989.701916$, in Figure \[fig\_resonance\_341\] $E = 341.495874$ and in Figure \[fig\_resonance\_163\] $E = 163.215341$. ![Efficiency for the Schrödinger equation using E = 989.701916[]{data-label="fig_resonance_989"}](E989.eps){width="\textwidth"} ![Efficiency for the Schrödinger equation using E = 341.495874[]{data-label="fig_resonance_341"}](E341.eps){width="\textwidth"} ![Efficiency for the Schrödinger equation using E = 163.215341[]{data-label="fig_resonance_163"}](E163.eps){width="\textwidth"} ![Efficiency for different eigenenergies[]{data-label="fig_energies"}](Energies.eps){width="\textwidth"} ![Efficiency for the Nonlinear Problem[]{data-label="fig_nonlinear"}](Nonlinear.eps){width="\textwidth"} Conclusions =========== We compare the two optimized methods and the corresponding classical explicit Runge-Kutta method for the integration of the Schrödinger equation and the Nonlinear problem. We see that the second method with the phase-lag and its first derivative nullified is the most efficient in all cases, followed in terms of efficiency by the optimized method with zero phase-lag and then by the corresponding classical method. [120]{} Ixaru L. Gr., Rizea M., A Numerov-like scheme for the numerical solution of the Schrödinger equation in the deep continuum spectrum of energies, Comp. Phys. Comm. 19, 23-27 (1980) J.C. Butcher, Numerical methods for ordingary differential equations, Wiley (2003) L.Gr. Ixaru and M. Micu, [*Topics in Theoretical Physics*]{}. Central Institute of Physics, Bucharest, 1978. L.D. Landau and F.M. Lifshitz: [*Quantum Mechanics*]{}. Pergamon, New York, 1965. I. Prigogine, Stuart Rice (Eds): Advances in Chemical Physics Vol. 93: New Methods in Computational Quantum Mechanics, John Wiley & Sons, 1997. G. Herzberg, [*Spectra of Diatomic Molecules*]{}, Van Nostrand, Toronto, 1950. T.E. Simos, Atomic Structure Computations in Chemical Modelling: Applications and Theory (Editor: A. Hinchliffe, UMIST), [*The Royal Society of Chemistry*]{} 38-142(2000). T.E. Simos, Numerical methods for 1D, 2D and 3D differential equations arising in chemical problems, [*Chemical Modelling: Application and Theory*]{}, The Royal Society of Chemistry, 2(2002),170-270. T.E. Simos and P.S. Williams, On finite difference methods for the solution of the Schrödinger equation, [*Computers & Chemistry*]{} [**23**]{} 513-554(1999). T.E. Simos: [*Numerical Solution of Ordinary Differential Equations with Periodical Solution*]{}. Doctoral Dissertation, National Technical University of Athens, Greece, 1990 (in Greek). A. Konguetsof and T.E. 
Simos, On the Construction of exponentially-fitted methods for the numerical solution of the Schrödinger Equation, [*Journal of Computational Methods in Sciences and Engineering*]{} [**1**]{} 143-165(2001). A.D. Raptis and A.C. Allison: Exponential - fitting methods for the numerical solution of the Schrödinger equation, *Computer Physics Communications*, [**14**]{} 1-5(1978). A.D. Raptis, Exponential multistep methods for ordinary differential equations, [*Bull. Greek Math. Soc.*]{} [**25**]{} 113-126(1984). L.Gr. Ixaru, Numerical Methods for Differential Equations and Applications, Reidel, Dordrecht - Boston - Lancaster, 1984. T. E. Simos, P. S. Williams: A New Runge-Kutta-Nystrom Method with Phase-Lag of Order Infinity for the Numerical Solution of the Schrödinger Equation, [*MATCH Commun. Math. Comput. Chem.*]{} [**45**]{} 123-137(2002). T. E. Simos, Multiderivative Methods for the Numerical Solution of the Schrödinger Equation, [*MATCH Commun. Math. Comput. Chem.*]{} [**45**]{} 7-26(2004). A.D. Raptis, Exponentially-fitted solutions of the eigenvalue Shrödinger equation with automatic error control, [*Computer Physics Communications*]{}, [**28**]{} 427-431(1983) A.D. Raptis, On the numerical solution of the Schrodinger equation, [*Computer Physics Communications*]{}, [**24**]{} 1-4(1981) Zacharoula Kalogiratou and T.E. Simos, A P-stable exponentially-fitted method for the numerical integration of the Schrödinger equation, [*Applied Mathematics and Computation*]{}, [**112**]{} 99-112(2000). J.D. Lambert and I.A. Watson, Symmetric multistep methods for periodic initial values problems, [*J. Inst. Math. Appl.*]{} [**18**]{} 189-202(1976). A.D. Raptis and T.E. Simos, A four-step phase-fitted method for the numerical integration of second order initial-value problem, [*BIT*]{}, [**31**]{} 160-168(1991). Peter Henrici, [*Discrete variable methods in ordinary differential equations*]{}, John Wiley & Sons, 1962. M.M. Chawla, Uncoditionally stable Noumerov-type methods for second order differential equations, [*BIT*]{}, [**23**]{} 541-542(1983). M. M. Chawla and P. S. Rao, A Noumerov-type method with minimal phase-lag for the integration of second order periodic initial-value problems, [*Journal of Computational and Applied Mathematics*]{} [**11(3)**]{} 277-281(1984) Liviu Gr. Ixaru and Guido Vanden Berghe, Exponential Fitting, Series on Mathematics and its Applications, Vol. 568, Kluwer Academic Publisher, The Netherlands, 2004. L. Gr. Ixaru and M. Rizea, Comparison of some four-step methods for the numerical solution of the Schrödinger equation, [*Computer Physics Communications*]{}, [**38(3)**]{} 329-337(1985) Z.A. Anastassi, T.E. Simos, A family of exponentially-fitted Runge-Kutta methods with exponential order up to three for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**41 (1)**]{} 79-100 (2007) T. Monovasilis, Z. Kalogiratou , T.E. Simos, Trigonometrically fitted and exponentially fitted symplectic methods for the numerical integration of the Schrödinger equation, [*J. Math. Chem*]{} [**40 (3)**]{} 257-267 (2006) G. Psihoyios, T.E. Simos, The numerical solution of the radial Schrödinger equation via a trigonometrically fitted family of seventh algebraic order Predictor-Corrector methods, [*J. Math. Chem*]{} [**40 (3)**]{} 269-293 (2006) T.E. Simos, A four-step exponentially fitted method for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**40 (3)**]{} 305-318 (2006) T. Monovasilis, Z. Kalogiratou , T.E. 
Simos, Exponentially fitted symplectic methods for the numerical integration of the Schrödinger equation [*J. Math. Chem*]{} [**37 (3)**]{} 263-270 (2005) Z. Kalogiratou , T. Monovasilis, T.E. Simos, Numerical solution of the two-dimensional time independent Schrödinger equation with Numerov-type methods [*J. Math. Chem*]{} [**37 (3)**]{} 271-279 (2005) Z.A. Anastassi, T.E. Simos, Trigonometrically fitted Runge-Kutta methods for the numerical solution of the Schrödinger equation [*J. Math. Chem*]{} [**37 (3)**]{} 281-293 (2005) G. Psihoyios, T.E. Simos, Sixth algebraic order trigonometrically fitted predictor-corrector methods for the numerical solution of the radial Schrödinger equation, [*J. Math. Chem*]{} [**37 (3)**]{} 295-316 (2005) D.P. Sakas, T.E. Simos, A family of multiderivative methods for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**37 (3)**]{} 317-331 (2005) T.E. Simos, Exponentially - fitted multiderivative methods for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**36 (1)**]{} 13-27 (2004) K. Tselios, T.E. Simos, Symplectic methods of fifth order for the numerical solution of the radial Shrodinger equation, [*J. Math. Chem*]{} [**35 (1)**]{} 55-63 (2004) T.E. Simos, A family of trigonometrically-fitted symmetric methods for the efficient solution of the Schrödinger equation and related problems [*J. Math. Chem*]{} [**34 (1-2)**]{} 39-58 JUL 2003 K. Tselios, T.E. Simos, Symplectic methods for the numerical solution of the radial Shrödinger equation, [*J. Math. Chem*]{} [**34 (1-2)**]{} 83-94 (2003) J. Vigo-Aguiar J, T.E. Simos, Family of twelve steps exponential fitting symmetric multistep methods for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**32 (3)**]{} 257-270 (2002) G. Avdelas, E. Kefalidis, T.E. Simos, New P-stable eighth algebraic order exponentially-fitted methods for the numerical integration of the Schrödinger equation, [*J. Math. Chem*]{} [**31 (4)**]{} 371-404 (2002) T.E. Simos, J. Vigo-Aguiar, Symmetric eighth algebraic order methods with minimal phase-lag for the numerical solution of the Schrödinger equation [*J. Math. Chem*]{} [**31 (2)**]{} 135-144 (2002) Z. Kalogiratou , T.E. Simos, Construction of trigonometrically and exponentially fitted Runge-Kutta-Nystrom methods for the numerical solution of the Schrödinger equation and related problems a method of 8th algebraic order, [*J. Math. Chem*]{} [**31 (2)**]{} 211-232 T.E. Simos, J. Vigo-Aguiar, A modified phase-fitted Runge-Kutta method for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**30 (1)**]{} 121-131 (2001) G. Avdelas, A. Konguetsof, T.E. Simos, A generator and an optimized generator of high-order hybrid explicit methods for the numerical solution of the Schrödinger equation. Part 1. Development of the basic method, [*J. Math. Chem*]{} [**29 (4)**]{} 281-291 (2001) G. Avdelas, A. Konguetsof, T.E. Simos, A generator and an optimized generator of high-order hybrid explicit methods for the numerical solution of the Schrödinger equation. Part 2. Development of the generator; optimization of the generator and numerical results, [*J. Math. Chem*]{} [**29 (4)**]{} 293-305 (2001) J. Vigo-Aguiar, T.E. Simos, A family of P-stable eighth algebraic order methods with exponential fitting facilities, [*J. Math. Chem*]{} [**29 (3)**]{} 177-189 (2001) T.E. Simos, A new explicit Bessel and Neumann fitted eighth algebraic order method for the numerical solution of the Schrödinger equation [*J. Math. 
Chem*]{} [**27 (4)**]{} 343-356 (2000) G. Avdelas, T.E. Simos, Embedded eighth order methods for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**26 (4)**]{} 327-341 1999, T.E. Simos, A family of P-stable exponentially-fitted methods for the numerical solution of the Schrödinger equation, [*J. Math. Chem*]{} [**25 (1)**]{} 65-84 (1999) T.E. Simos, Some embedded modified Runge-Kutta methods for the numerical solution of some specific Schrödinger equations, [*J. Math. Chem*]{} [**24 (1-3)**]{} 23-37 (1998) T.E. Simos, Eighth order methods with minimal phase-lag for accurate computations for the elastic scattering phase-shift problem, [*J. Math. Chem*]{} [**21 (4)**]{} 359-372 (1997) P. Amodio, I. Gladwell and G. Romanazzi, Numerical Solution of General Bordered ABD Linear Systems by Cyclic Reduction, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 5-12(2006) S. D. Capper, J. R. Cash and D. R. Moore, Lobatto-Obrechkoff Formulae for 2nd Order Two-Point Boundary Value Problems, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 13-25 (2006) S. D. Capper and D. R. Moore, On High Order MIRK Schemes and Hermite-Birkhoff Interpolants, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 27-47 (2006) J. R. Cash, N. Sumarti, T. J. Abdulla and I. Vieira, The Derivation of Interpolants for Nonlinear Two-Point Boundary Value Problems, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 49-58 (2006) J. R. Cash and S. Girdlestone, Variable Step Runge-Kutta-Nyström Methods for the Numerical Solution of Reversible Systems, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 59-80 (2006) Jeff R. Cash and Francesca Mazzia, Hybrid Mesh Selection Algorithms Based on Conditioning for Two-Point Boundary Value Problems, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 81-90 (2006) Felice Iavernaro, Francesca Mazzia and Donato Trigiante, Stability and Conditioning in Numerical Analysis, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 91-112 (2006) Felice Iavernaro and Donato Trigiante, Discrete Conservative Vector Fields Induced by the Trapezoidal Method, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 113-130 (2006) Francesca Mazzia, Alessandra Sestini and Donato Trigiante, BS Linear Multistep Methods on Non-uniform Meshes, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(1)**]{} 131-144 (2006) L.F. Shampine, P.H. Muir, H. Xu, A User-Friendly Fortran BVP Solver, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(2)**]{} 201-217 (2006) G. Vanden Berghe and M. Van Daele, Exponentially- fitted Störmer/Verlet methods, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**1(3)**]{} 241-255 (2006) L. Aceto, R. Pandolfi, D. Trigiante, Stability Analysis of Linear Multistep Methods via Polynomial Type Variation, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{} [**2(1-2)**]{} 1-9 (2007) G. Psihoyios, A Block Implicit Advanced Step-point (BIAS) Algorithm for Stiff Differential Systems, [*Computing Letters*]{} [**2(1-2)**]{} 51-58(2006) W.H. Enright, On the use of ’arc length’ and ’defect’ for mesh selection for differential equations, [*Computing Letters*]{} [**1(2)**]{} 47-52(2005) T.E. Simos, P-stable Four-Step Exponentially-Fitted Method for the Numerical Integration of the Schrödinger Equation, [*Computing Letter*]{} **1(1)** 37-45(2005). T.E. 
Simos, Stabilization of a Four-Step Exponentially-Fitted Method and its Application to the Schrödinger Equation, [*International Journal of Modern Physics C*]{} **18(3)** 315-328(2007). Zhongcheng Wang, P-stable linear symmetric multistep methods for periodic initial-value problems, [*Computer Physics Communications*]{} **171** 162–174(2005) T.E. Simos, A Runge-Kutta Fehlberg method with phase-lag of order infinity for initial value problems with oscillating solution, [*Computers and Mathematics with Applications*]{} [**25**]{} 95-101(1993). T.E. Simos, Runge-Kutta interpolants with minimal phase-lag, [*Computers and Mathematics with Applications*]{} [**26**]{} 43-49(1993). T.E. Simos, Runge-Kutta-Nyström interpolants for the numerical integration of special second-order periodic initial-value problems, [*Computers and Mathematics with Applications*]{} [**26**]{} 7-15(1993). T.E. Simos and G.V. Mitsou, A family of four-step exponential fitted methods for the numerical integration of the radial Schrödinger equation, [*Computers and Mathematics with Applications*]{} [**28**]{} 41-50(1994). T.E. Simos and G. Mousadis, A two-step method for the numerical solution of the radial Schrödinger equation, [*Computers and Mathematics with Applications*]{} [**29**]{} 31-37(1995). G. Avdelas and T.E. Simos, Block Runge-Kutta methods for periodic initial-value problems, [*Computers and Mathematics with Applications*]{} [**31**]{} 69- 83(1996). G. Avdelas and T.E. Simos, Embedded methods for the numerical solution of the Schrödinger equation, [*Computers and Mathematics with Applications*]{} [**31**]{} 85-102(1996). G. Papakaliatakis and T.E. Simos, A new method for the numerical solution of fourth order BVP’s with oscillating solutions, [*Computers and Mathematics with Applications*]{} [**32**]{} 1-6(1996). T.E. Simos, An extended Numerov-type method for the numerical solution of the Schrödinger equation, [*Computers and Mathematics with Applications*]{} [**33**]{} 67-78(1997). T.E. Simos, A new hybrid imbedded variable-step procedure for the numerical integration of the Schrödinger equation, [*Computers and Mathematics with Applications*]{} [**36**]{} 51-63(1998). T.E. Simos, Bessel and Neumann Fitted Methods for the Numerical Solution of the Schrödinger equation, [*Computers & Mathematics with Applications*]{} [**42**]{} 833-847(2001). A. Konguetsof and T.E. Simos, An exponentially-fitted and trigonometrically-fitted method for the numerical solution of periodic initial-value problems, [*Computers and Mathematics with Applications*]{} [**45**]{} 547-554(2003). Z.A. Anastassi and T.E. Simos, An optimized Runge-Kutta method for the solution of orbital problems, [*Journal of Computational and Applied Mathematics*]{} [**175(1)**]{} 1-9(2005) G. Psihoyios and T.E. Simos, A fourth algebraic order trigonometrically fitted predictor-corrector scheme for IVPs with oscillating solutions, [*Journal of Computational and Applied Mathematics*]{} [**175(1)**]{} 137-147(2005) D.P. Sakas and T.E. Simos, Multiderivative methods of eighth algrebraic order with minimal phase-lag for the numerical solution of the radial Schrödinger equation, Journal of Computational and Applied Mathematics [**175(1)**]{} 161-172(2005) K. Tselios and T.E. Simos, Runge-Kutta methods with minimal dispersion and dissipation for problems arising from computational acoustics, [*Journal of Computational and Applied Mathematics*]{} [**175(1)**]{} 173-181(2005) Z. Kalogiratou and T.E. 
Simos, Newton-Cotes formulae for long-time integration, [*Journal of Computational and Applied Mathematics*]{} [**158(1)**]{} 75-82(2003) Z. Kalogiratou, T. Monovasilis and T.E. Simos, Symplectic integrators for the numerical solution of the Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**158(1)**]{} 83-92(2003) A. Konguetsof and T.E. Simos, A generator of hybrid symmetric four-step methods for the numerical solution of the Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**158(1)**]{} 93-106(2003) G. Psihoyios and T.E. Simos, Trigonometrically fitted predictor-corrector methods for IVPs with oscillating solutions, [*Journal of Computational and Applied Mathematics*]{} [**158(1)**]{} 135-144(2003) Ch. Tsitouras and T.E. Simos, Optimized Runge-Kutta pairs for problems with oscillating solutions, [*Journal of Computational and Applied Mathematics*]{} [**147(2)**]{} 397-409(2002) T.E. Simos, An exponentially fitted eighth-order method for the numerical solution of the Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**108(1-2)**]{} 177-194(1999) T.E. Simos, An accurate finite difference method for the numerical solution of the Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**91(1)**]{} 47-61(1998) R.M. Thomas and T.E. Simos, A family of hybrid exponentially fitted predictor-corrector methods for the numerical integration of the radial Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**87(2)**]{} 215-226(1997) Z.A. Anastassi and T.E. Simos: Special Optimized Runge-Kutta methods for IVPs with Oscillating Solutions, International Journal of Modern Physics C, 15, 1-15 (2004) Z.A. Anastassi and T.E. Simos: A Dispersive-Fitted and Dissipative-Fitted Explicit Runge-Kutta method for the Numerical Solution of Orbital Problems, New Astronomy, 10, 31-37 (2004) Z.A. Anastassi and T.E. Simos: A Trigonometrically-Fitted Runge-Kutta Method for the Numerical Solution of Orbital Problems, New Astronomy, 10, 301-309 (2005) T.V. Triantafyllidis, Z.A. Anastassi and T.E. Simos: Two Optimized Runge-Kutta Methods for the Solution of the Schr?dinger Equation, MATCH Commun. Math. Comput. Chem., 60, 3 (2008) Z.A. Anastassi and T.E. Simos, Trigonometrically Fitted Fifth Order Runge-Kutta Methods for the Numerical Solution of the Schrödinger Equation, Mathematical and Computer Modelling, 42 (7-8), 877-886 (2005) Z.A. Anastassi and T.E. Simos: New Trigonometrically Fitted Six-Step Symmetric Methods for the Efficient Solution of the Schrödinger Equation, MATCH Commun. Math. Comput. Chem., 60, 3 (2008) G.A. Panopoulos, Z.A. Anastassi and T.E. Simos: Two New Optimized Eight-Step Symmetric Methods for the Efficient Solution of the Schrödinger Equation and Related Problems, MATCH Commun. Math. Comput. Chem., 60, 3 (2008) Z.A. Anastassi and T.E. Simos: A Six-Step P-stable Trigonometrically-Fitted Method for the Numerical Integration of the Radial Schrödinger Equation, MATCH Commun. Math. Comput. Chem., 60, 3 (2008) Z.A. Anastassi and T.E. Simos, A family of two-stage two-step methods for the numerical integration of the Schrödinger equation and related IVPs with oscillating solution, Journal of Mathematical Chemistry, Article in Press, Corrected Proof T.E. Simos and P.S. Williams, A finite-difference method for the numerical solution of the Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**79(2)**]{} 189-205(1997) G. Avdelas and T.E. 
Simos, A generator of high-order embedded P-stable methods for the numerical solution of the Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**72(2)**]{} 345-358(1996) R.M. Thomas, T.E. Simos and G.V. Mitsou, A family of Numerov-type exponentially fitted predictor-corrector methods for the numerical integration of the radial Schrödinger equation, [*Journal of Computational and Applied Mathematics*]{} [**67(2)**]{} 255-270(1996) T.E. Simos, A Family of 4-Step Exponentially Fitted Predictor-Corrector Methods for the Numerical-Integration of The Schrödinger-Equation, [*Journal of Computational and Applied Mathematics*]{} [**58(3)**]{} 337-344(1995) T.E. Simos, An Explicit 4-Step Phase-Fitted Method for the Numerical-Integration of 2nd-order Initial-Value Problems, [*Journal of Computational and Applied Mathematics*]{} [**55(2)**]{} 125-133(1994) T.E. Simos, E. Dimas and A.B. Sideridis, A Runge-Kutta-Nyström Method for the Numerical-Integration of Special 2nd-order Periodic Initial-Value Problems, [*Journal of Computational and Applied Mathematics*]{} [**51(3)**]{} 317-326(1994) A.B. Sideridis and T.E. Simos, A Low-Order Embedded Runge-Kutta Method for Periodic Initial-Value Problems, [*Journal of Computational and Applied Mathematics*]{} [**44(2)**]{} 235-244(1992) T.E. Simos amd A.D. Raptis, A 4th-order Bessel Fitting Method for the Numerical-Solution of the SchrÖdinger-Equation, [*Journal of Computational and Applied Mathematics*]{} [**43(3)**]{} 313-322(1992) T.E. Simos, Explicit 2-Step Methods with Minimal Phase-Lag for the Numerical-Integration of Special 2nd-order Initial-Value Problems and their Application to the One-Dimensional Schrödinger-Equation, [*Journal of Computational and Applied Mathematics*]{} [**39(1)**]{} 89-94(1992) T.E. Simos, A 4-Step Method for the Numerical-Solution of the Schrödinger-Equation, [*Journal of Computational and Applied Mathematics*]{} [**30(3)**]{} 251-255(1990) C.D. Papageorgiou, A.D. Raptis and T.E. Simos, A Method for Computing Phase-Shifts for Scattering, [*Journal of Computational and Applied Mathematics*]{} [**29(1)**]{} 61-67(1990) A.D. Raptis, Two-Step Methods for the Numerical Solution of the Schrödinger Equation, [*Computing*]{} [**28**]{} 373-378(1982). T.E. Simos. A new Numerov-type method for computing eigenvalues and resonances of the radial Schrödinger equation, International Journal of Modern Physics C-Physics and Computers, [**7(1)**]{} 33-41(1996) T.E. Simos, Predictor Corrector Phase-Fitted Methods for Y”=F(X,Y) and an Application to the Schrödinger-Equation, International Journal of Quantum Chemistry, [**53(5)**]{} 473-483(1995) T.E. Simos, Two-step almost P-stable complete in phase methods for the numerical integration of second order periodic initial-value problems, [*Inter. J. Comput. Math.*]{} [**46**]{} 77-85(1992). R. M. Corless, A. Shakoori, D.A. Aruliah, L. Gonzalez-Vega, Barycentric Hermite Interpolants for Event Location in Initial-Value Problems, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 1-16 (2008) M. Dewar, Embedding a General-Purpose Numerical Library in an Interactive Environment, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 17-26 (2008) J. Kierzenka and L.F. Shampine, A BVP Solver that Controls Residual and Error, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 27-41 (2008) R. Knapp, A Method of Lines Framework in Mathematica, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 43-59 (2008) N. S. Nedialkov and J. D. 
Pryce, Solving Differential Algebraic Equations by Taylor Series (III): the DAETS Code, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 61-80 (2008) R. L. Lipsman, J. E. Osborn, and J. M. Rosenberg, The SCHOL Project at the University of Maryland: Using Mathematical Software in the Teaching of Sophomore Differential Equations, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 81-103 (2008) M. Sofroniou and G. Spaletta, Extrapolation Methods in Mathematica, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 105-121 (2008) R. J. Spiteri and Thian-Peng Ter, pythNon: A PSE for the Numerical Solution of Nonlinear Algebraic Equations, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 123-137 (2008) S.P. Corwin, S. Thompson and S.M. White, Solving ODEs and DDEs with Impulses, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 139-149 (2008) W. Weckesser, VFGEN: A Code Generation Tool, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 151-165 (2008) A. Wittkopf, Automatic Code Generation and Optimization in Maple, [*JNAIAM J. Numer. Anal. Indust. Appl. Math*]{}, 3, 167-180 (2008) [^1]: e-mail: [email protected] [^2]: e-mail: [email protected] [^3]: Highly Cited Researcher, Active Member of the European Academy of Sciences and Arts, Address: Dr. T.E. Simos, 26 Menelaou Street, Amfithea - Paleon Faliron, GR-175 64 Athens, GREECE, Tel: 0030 210 94 20 091, e-mail: [email protected], [email protected]
--- abstract: 'The exact Newtonian potential with an extra space dimension compactified on a circle is derived and studied. It is found that a point mass located on one side of the circle can generate almost the same strength of gravitational force on the other side. Combined with brane-world scenarios, this means that although one cannot see a particle that is confined to another brane, one can feel its gravity explicitly as if it were located on our brane. This leads to the conclusion that matter on all branes may contribute equally to the curvature of our brane. Therefore, dark matter is probably nothing unusual but a ‘shadow’ cast on our brane by hidden ordinary matter of other branes. It is shown that the physical effect of this ‘shadow’ could be modeled by a perfect fluid with an effective mass density and a non-zero pressure. This fluid could serve as a dark matter candidate with an equation of state $p_{eff}=\omega \rho _{eff}$, where $\omega $ is a positive $r$-dependent function and $\omega \neq 0,1/3$, implying that it is neither cold nor hot. So, if the higher-dimensional universe contains several branes, the total amount of hidden matter would be able to provide the required amount of dark matter. Meanwhile, dark matter halos of galaxies might be just halos of the effective mass distribution yielded by hidden ordinary stars that are confined to other branes but captured by our galaxies. Some more properties and implications are also discussed.' author: - Hongya Liu title: Compactified Newtonian Potential and a Possible Explanation for Dark Matter --- The idea of extra compact spatial dimensions was first used in Kaluza-Klein theory [@Kaluza] to unify gravity with electromagnetism and was then developed in superstring, supergravity, and M-theories to unify all four fundamental forces in nature. It was first pointed out by Arkani-Hamed, Dimopoulos and Dvali (ADD) that the well-known Newtonian inverse square law could be violated if there exist extra space dimensions. They showed that if the $N$ extra dimensions are curled up with radius $R$, the Newtonian force between two point masses would undergo a transition from the familiar form $1/r^{2}$ at large distances ($r\gg R$) to a form $1/r^{2+N}$ at small distances ($r\ll R$) [@ADD98][@ADD99]. Thus $(4+N)$-dimensional gravity and the other interactions could be unified at the electroweak scale, and the hierarchy problem could be resolved. This has attracted much attention in recent years from both theoretical and experimental physicists. Nowadays, a dozen groups around the world have set up experiments to look for departures from the inverse square law at small distances [@Experi]. Therefore, it is of great interest to know in more detail how the force varies along the $r$-direction from $1/r^{2}$ to $1/r^{2+N}$. It was shown [@Flor99][@Kehag00] that as the separation $r$ decreases, a Yukawa term appears as a first correction to the Newtonian $1/r$ potential,$$V_{3+N}(r)\approx -\frac{G_{4}M}{r}\left( 1+\alpha e^{-r/\lambda }\right) \;, \label{Yukawa}$$where $\alpha =2N$ for compactification on an $N$-torus, and $\alpha =N+1$ for an $N$-sphere. However, this approximate expression cannot be used at small separations $r\sim \lambda $, at which the deviation might be very large. So a global exact solution for the Newtonian potential is of particular importance for both experimental calculations and theoretical analysis. In what follows, we study the simplest case with $N=1$.
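To get a feel for the size of this first correction, the following minimal numerical sketch (our own illustration, not part of the original derivation; it assumes arbitrary units $G_{4}M=1$ and $\lambda =R=1$, with $\alpha =2$ as for an $N=1$ torus) compares the Yukawa-corrected potential (\[Yukawa\]) with the plain Newtonian term:

```python
import numpy as np

# Illustrative sketch (not from the paper): relative size of the Yukawa
# correction in the potential V(r) = -(G4*M/r)*(1 + alpha*exp(-r/lam)).
# Arbitrary units: G4*M = 1, lam = R = 1; alpha = 2N = 2 for an N = 1 torus.
G4M, lam, alpha = 1.0, 1.0, 2.0

r = np.array([0.5, 1.0, 2.0, 5.0, 10.0])       # separations in units of R
V_newton = -G4M / r
V_yukawa = V_newton * (1.0 + alpha * np.exp(-r / lam))

for ri, vn, vy in zip(r, V_newton, V_yukawa):
    print(f"r/R = {ri:5.1f}   relative correction = {abs(vy - vn) / abs(vn):.3e}")
# The correction is of order one for r ~ lam and exponentially small for r >> lam,
# which is why the expansion cannot be trusted at separations r ~ lam.
```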
Here we should mention that for $N=1$ the Yukawa parameters in (\[Yukawa\]) are $\alpha =2$ and $\lambda =R$. Using the $\alpha -\lambda $ plot of Ref. [@Experi], we find that the experimental constraint on the $N=1$ case is $R<0.4$ mm. Therefore, $N=1$ is not experimentally excluded at all; it is excluded only by the ADD unification scheme, whose purpose is to resolve the hierarchy problem. The ($3+1$)-dimensional Newtonian potential $V_{3+1}$ satisfies the ($3+1$)-dimensional Poisson equation$$\nabla _{3+1}^{2}V_{3+1}=2\pi ^{2}G_{4+1}\sigma \label{Poisson}$$where $G_{4+1}$ is the Newtonian constant in ($4+1$)-dimensional gravity. Consider a point mass $M$ located at the origin ($r=y=0$) and suppose the extra dimension $y$ is compactified to a circle with radius $R$ and circumference $L=2\pi R$. To mimic a compact dimension, we follow ADD [@ADD98][@ADD99] and use a non-compact dimension with “mirror” masses placed periodically along the $y$-axis. Thus we obtain$$V_{3+1}=-\frac{G_{4+1}M}{2}\sum\limits_{n=-\infty }^{\infty }\frac{1}{r^{2}+\left( nL-y\right) ^{2}}\;, \label{phiSum}$$which satisfies the periodic identification $y\rightarrow L\pm y$. Then we can show that$$\frac{1}{r^{2}+\left( nL-y\right) ^{2}}=\frac{1}{2iLr}\left( \frac{1}{n+z}-\frac{1}{n+z^{\ast }}\right) ,\qquad z=\frac{-y-ir}{L}\;. \label{n+Z}$$Using this relation in (\[phiSum\]), we can express the Newtonian potential $V_{3+1}$ in terms of a meromorphic function. Then, using the identity $$\sum\limits_{n=-\infty }^{\infty }\frac{1}{z+n}=\pi \cot \pi z\; \label{meromorphic}$$in (\[phiSum\]), we finally obtain$$V_{3+1}=-\frac{G_{4+1}M}{4Rr}\left( \frac{\sinh \frac{r}{R}}{\cosh \frac{r}{R}-\cos \frac{y}{R}}\right) \;. \label{phiAnal}$$This is the global exact Newtonian potential for $N=1$, which was first derived by Floratos and Leontaris using the method of Fourier transforms [@Flor99]. Here we have derived it directly from the ADD model (\[phiSum\]). The periodicity in $y$ along the circle can be seen clearly from (\[phiAnal\]). One can check it by directly substituting (\[phiAnal\]) into the ($3+1$)-dimensional Laplace equation$$\left( \frac{\partial ^{2}}{\partial r^{2}}+\frac{2}{r}\frac{\partial }{\partial r}+\frac{\partial ^{2}}{\partial y^{2}}\right) V_{3+1}=0\;, \label{Laplas}$$which is found to be satisfied by (\[phiAnal\]) as expected. Thus we can define the Newtonian force along the $r$-direction as$$F_{3+1}\equiv -\frac{\partial }{\partial r}V_{3+1}=-\frac{G_{4+1}M}{4Rr^{2}}\left[ \frac{\sinh \frac{r}{R}}{\cosh \frac{r}{R}-\cos \frac{y}{R}}-\frac{r}{R}\frac{1-\cosh \frac{r}{R}\cos \frac{y}{R}}{\left( \cosh \frac{r}{R}-\cos \frac{y}{R}\right) ^{2}}\right] \;. \label{r-force}$$ Equations (\[phiAnal\]) and (\[r-force\]) are exact global expressions valid everywhere except at the origin ($r=y=0$), at which the point mass $M$ is located. At large distances $r\gg R$, these two equations lead to$$V_{3+1}=-\frac{G_{4+1}M}{4Rr}\left( 1+2e^{-\frac{r}{R}}\cos \frac{y}{R}+...\right) ,\;\text{for }r\gg R\;, \label{phi-r>>L}$$and$$F_{3+1}\equiv -\frac{G_{4+1}M}{4Rr^{2}}\left[ 1+2\left( 1+\frac{r}{R}\right) e^{-\frac{r}{R}}\cos \frac{y}{R}+...\right] ,\;\text{for }r\gg R\;.
\label{F-r>>L}$$At small distances $r,\left| y\right| \ll R$, we let $r/R\sim y/R\sim \varepsilon $ with $\varepsilon $ a small parameter; then the two equations (\[phiAnal\]) and (\[r-force\]) give$$V_{3+1}=-\frac{G_{4+1}M}{2\left( r^{2}+y^{2}\right) }\left[ 1+\frac{r^{2}+y^{2}}{12R^{2}}+O(\varepsilon ^{4})\right] ,\;\text{for }r,\left| y\right| \ll R\;, \label{V-<<R}$$and$$F_{3+1}=-\frac{G_{4+1}Mr}{\left( r^{2}+y^{2}\right) ^{2}}\left[ 1+O(\varepsilon ^{4})\right] ,\qquad \text{for\ }r,\left| y\right| \ll R\;. \label{F-<<R}$$Thus the 4-dimensional Newtonian constant $G_{4}$ is found to be$$G_{4}=\frac{G_{4+1}}{4R}\; \label{G_4}$$as is given by ADD [@ADD98][@ADD99]. In the brane-world scenarios [@Witten][@ADD98][@ADD99], our universe is a 3-brane embedded in a higher-dimensional space. While gravity can freely propagate in all dimensions, the standard matter particles and forces are confined to the 3-brane only. For $N=1$, it is natural to have two 3-branes located at $y=0$ and $y=\pi R$, respectively. Suppose we live on the visible $y=0$ brane. Then the Newtonian potential and force for a visible point mass $M$ located at $r=y=0$ are, respectively,$$\begin{aligned} V_{vis} &=&-\frac{G_{4}M}{r}\left( \frac{\sinh \frac{r}{R}}{\cosh \frac{r}{R}-1}\right) \;, \notag \\ V_{vis} &=&-\frac{G_{4}M}{r}\left( 1+2e^{-\frac{r}{R}}+...\right) ,\;\text{for }r\gg R\;.\; \label{V-vis}\end{aligned}$$and$$\begin{aligned} F_{vis} &\equiv &-\frac{\partial }{\partial r}V_{vis}=-\frac{G_{4}M}{r^{2}}\left( \frac{\sinh \frac{r}{R}+\frac{r}{R}}{\cosh \frac{r}{R}-1}\right) \;, \notag \\ F_{vis} &\equiv &-\frac{G_{4}M}{r^{2}}\left[ 1+2\left( 1+\frac{r}{R}\right) e^{-\frac{r}{R}}+...\right] ,\;\text{for }r\gg R\;, \notag \\ F_{vis} &\rightarrow &-\frac{G_{4+1}M}{r^{3}}\;,\qquad \text{for\ }r\rightarrow 0\;. \label{F-vis}\end{aligned}$$ Another case, in which a point mass $M$ is confined to the hidden side of the circle ($r=0$, $y=\pi R$), might be more interesting. For this case we should replace $y/R$ with $\left( \pi +y/R\right)$ in the general solution (\[phiAnal\]); then the Newtonian potential and force for a hidden point mass $M$ located at $r=0$, $y=\pi R$ are$$\begin{aligned} V_{hid} &=&-\frac{G_{4}M}{r}\left( \frac{\sinh \frac{r}{R}}{\cosh \frac{r}{R}+1}\right) \;, \notag \\ V_{hid} &=&-\frac{G_{4}M}{r}\left( 1-2e^{-\frac{r}{R}}+...\right) ,\;\text{for }r\gg R\;.\; \label{V-hid}\end{aligned}$$and$$\begin{aligned} F_{hid} &\equiv &-\frac{\partial }{\partial r}V_{hid}=-\frac{G_{4}M}{r^{2}}\left( \frac{\sinh \frac{r}{R}-\frac{r}{R}}{\cosh \frac{r}{R}+1}\right) \;, \notag \\ F_{hid} &\equiv &-\frac{G_{4}M}{r^{2}}\left[ 1-2\left( 1+\frac{r}{R}\right) e^{-\frac{r}{R}}+...\right] ,\;\text{for }r\gg R\;, \notag \\ F_{hid} &\rightarrow &-\frac{G_{4}M}{12R^{3}}r\rightarrow 0\;,\qquad \text{for\ }r\rightarrow 0\;. \label{F-hid}\end{aligned}$$Thus we see that $V_{hid}$ and $F_{hid}$ are regular everywhere, including at $r=0$. So one cannot see the particle itself because it is hidden on the other side of the extra dimension. However, one can “feel” its gravitational force, and this force is, to the leading term at large separation, of the same strength as if the particle $M$ were located on our side. Comparing (\[V-vis\]) and (\[V-hid\]) with (\[Yukawa\]), we also find that the difference between the two cases appears in the first-correction Yukawa term: if the source particle is on our side, the Yukawa parameter is $\alpha =2$; if it is on the hidden side, it is $\alpha =-2$.
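As a consistency check on these closed forms, here is a minimal numerical sketch (our own illustration, not from the paper; it assumes the units $G_{4+1}=M=R=1$) that compares (\[V-vis\]) and (\[V-hid\]) with a direct truncation of the mirror-image sum (\[phiSum\]):

```python
import numpy as np

# Sketch (illustrative units G_{4+1} = M = R = 1, so L = 2*pi*R and G_4 = 1/(4R)):
# the closed form (phiAnal) evaluated on the visible (y = 0) and hidden (y = pi*R)
# branes versus a truncated mirror-image sum (phiSum).
G5, M, R = 1.0, 1.0, 1.0
L = 2.0 * np.pi * R

def V_sum(r, y, nmax=20000):
    n = np.arange(-nmax, nmax + 1)
    return -0.5 * G5 * M * np.sum(1.0 / (r**2 + (n * L - y)**2))

def V_closed(r, y):
    return -G5 * M / (4 * R * r) * np.sinh(r / R) / (np.cosh(r / R) - np.cos(y / R))

for r in (0.3, 1.0, 3.0):
    vis = V_sum(r, 0.0) / V_closed(r, 0.0)                 # visible source: V_vis
    hid = V_sum(r, np.pi * R) / V_closed(r, np.pi * R)     # hidden source: V_hid
    print(f"r/R = {r:3.1f}   sum/closed (visible) = {vis:.6f}   (hidden) = {hid:.6f}")
# Both ratios are ~1 up to the truncation error of the mirror sum.
```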
Equations (\[V-vis\]) and (\[V-hid\]) enable us to use the superposition principle to write down the Newtonian potential for discrete particles,$$V(r)=-G_{4}\left[ \sum_{i}\frac{M_{vis}^{i}}{\left| \mathbf{r-r}_{i}\right| }\left( \frac{\sinh \frac{\left| \mathbf{r-r}_{i}\right| }{R}}{\cosh \frac{\left| \mathbf{r-r}_{i}\right| }{R}-1}\right) +\sum_{i}\frac{M_{hid}^{i}}{\left| \mathbf{r-r}_{i}\right| }\left( \frac{\sinh \frac{\left| \mathbf{r-r}_{i}\right| }{R}}{\cosh \frac{\left| \mathbf{r-r}_{i}\right| }{R}+1}\right) \right] , \label{V-suppos}$$where $M_{vis}$ and $M_{hid}$ are the masses of particles located on the visible and hidden branes, respectively. At large distances, i.e., $\left| \mathbf{r-r}_{i}\right| \gg R$ for all these particles, equation (\[V-suppos\]) approaches$$V(r)\approx -G_{4}\left( \sum_{i}\frac{M_{vis}^{i}}{\left| \mathbf{r-r}_{i}\right| }+\sum_{i}\frac{M_{hid}^{i}}{\left| \mathbf{r-r}_{i}\right| }\right) \;,\qquad \text{for }\left| \mathbf{r-r}_{i}\right| \gg R\;.\text{\ } \label{V-supAp}$$Similar results can be written down if matter is continuously distributed over the branes. Thus, from (\[V-supAp\]), we conclude that, *at large distances, particles on the hidden side of the extra dimension contribute to the Newtonian potential with the same strength as if they were on the visible side*. This is an important result, which implies that all particles, whether they are located on our brane or not, will contribute *equally* to the curvature of our brane. ADD and other authors have already discussed this case in more detail and for a variety of brane models [@ADD99][@Manyfold00]. Here we provide an exactly solvable model that exhibits this explicitly, as given above. Let us discuss it from a purely three-dimensional point of view. In (\[V-hid\]), although the source particle $M$ is hidden on the other side, it generates, in our 3D space, a Newtonian potential from which we can define an effective mass density $\rho _{eff}$ via$$\rho _{eff}\equiv \frac{1}{4\pi G_{4}}\nabla _{3}^{2}V_{hid}=\frac{M}{4\pi R^{2}}\frac{\sinh \frac{r}{R}}{r\left( \cosh \frac{r}{R}+1\right) ^{2}}\;. \label{rhoEff}$$This density is regular everywhere, including at $r=0$, at which it reaches a finite maximum $M(16\pi R^{3})^{-1}$. Using (\[rhoEff\]) we can calculate the total amount of effective mass inside a sphere of radius $r$, which is$$\mu (r)\equiv 4\pi \int_{0}^{r}\rho _{eff}r^{2}dr=M\frac{\sinh \frac{r}{R}-\frac{r}{R}}{\cosh \frac{r}{R}+1}\;. \label{mu(r)}$$We find $\mu (r)\rightarrow M$ as $r/R\rightarrow \infty $. So *the total amount of effective mass distributed over our brane is exactly equal to the mass of the hidden particle*; it does not depend on the size $R$ of the extra dimension at all, though the density $\rho _{eff}$ does depend on $R$. Thus we arrive at the conclusion that hidden particles contribute to our brane the same amount of effective mass. This conclusion is drawn from the special case with only one extra dimension; however, it is reasonable to expect it to hold for more extra compact dimensions, as discussed by ADD in Ref. \[3\]. So, as a whole, all matter hidden on other branes may yield an effective mass density distributed over our brane. This kind of mass density is “dark” because one cannot “see” it. But its gravity contributes to the curvature of our brane and can be measured without any problem. Thus the effective mass distribution provides us with a natural candidate for dark matter.
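The statement that the full mass is recovered can be checked directly; the short sketch below (our own illustration, in units $M=R=1$) integrates the density (\[rhoEff\]) numerically and compares with the closed form (\[mu(r)\]):

```python
import numpy as np
from scipy.integrate import quad

# Sketch (illustrative, M = R = 1): enclosed effective mass mu(r) obtained by
# integrating the induced density rho_eff, compared with the closed form (mu(r)).
M, R = 1.0, 1.0

def shell_integrand(s):
    # 4*pi*rho_eff(s)*s**2, written so that it is finite (= 0) at s = 0
    return M * s * np.sinh(s / R) / (R**2 * (np.cosh(s / R) + 1.0)**2)

def mu_closed(r):
    return M * (np.sinh(r / R) - r / R) / (np.cosh(r / R) + 1.0)

for r in (0.5, 2.0, 5.0, 20.0):
    mu_num, _ = quad(shell_integrand, 0.0, r)
    print(f"r/R = {r:5.1f}   mu_numerical = {mu_num:.6f}   mu_closed = {mu_closed(r):.6f}")
# mu(r) -> M = 1 as r/R grows: the hidden particle contributes its full mass to
# our brane, independently of the compactification radius R.
```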
Note that the amount of dark matter required by observations is about six times that of the visible ordinary matter [@Spergel03]. Meanwhile, the string-membrane theories require up to seven extra dimensions. So if there are several branes and each brane carries roughly the same amount of ordinary matter as ours, then the required amount of dark matter could be accounted for. If we accept this explanation, then *dark matter is probably nothing unusual but a ‘shadow’ cast on our brane by hidden ordinary matter of other branes*. Now let us go further and study the effective mass distribution $\rho _{eff}$ in Eq. (\[rhoEff\]). If we interpret $\rho _{eff}$ as a dark matter candidate, then this kind of dark matter has a significant property: as a whole, it exerts an attractive force on other particles, as seen from (\[F-hid\]); this can explain the dark halos of galaxies as well as the mass density of the universe. However, $\rho _{eff}$ is spread over our brane, implying that different parts of it cannot collapse. This distinguishes it from other dark matter candidates but does not contradict current observations. A natural way to account for this is to introduce an effective pressure $p_{eff}$. So we obtain a perfect fluid model for which we can write the hydrostatic equilibrium equation as$$\frac{dp_{eff}}{dr}=-\frac{G_{4}\mu (r)\rho _{eff}}{r^{2}}\;. \label{dp/dr}$$Substituting (\[mu(r)\]) and (\[rhoEff\]) in this equation, we obtain$$p_{eff}=\frac{G_{4}M^{2}}{4\pi R^{2}}\int\nolimits_{r}^{\infty }\frac{\sinh \frac{r}{R}\left( \sinh \frac{r}{R}-\frac{r}{R}\right) }{r^{3}\left( \cosh \frac{r}{R}+1\right) ^{3}}dr \label{p-eff-int}$$where we have chosen $p_{eff}$ such that $p_{eff}\rightarrow 0$ as $r\rightarrow \infty $. This corresponds to an equation of state $p_{eff}=\omega \rho _{eff}$ with $\omega $ being a function of the coordinate $r$, i.e., $\omega =\omega (r)$. We can show that $\omega $ is positive but not equal to zero or $1/3$. *So this kind of dark matter is neither cold (*$\omega =0$*) nor hot (*$\omega =1/3$*)*. In 5D relativity, the Newtonian potential and Newtonian force derived in this paper should be recovered as the corresponding Newtonian limit. For the exterior field of a point mass, the field equations should be the vacuum ones, $R_{ab}=0$ ($a,b=0,1,2,3;5$). It is known from the induced matter theory [@Wesson] [@Overduin] that the five-dimensional Ricci-flat equations $R_{ab}=0$ contain the four-dimensional Einstein equations, which in terms of the 4D Einstein tensor and an induced energy-momentum tensor are$$^{(4)}G_{\alpha \beta }=\kappa T_{\alpha \beta }^{eff}\;. \label{4DEinEqs}$$It is also known that this induced energy-momentum tensor $T_{\alpha \beta }^{eff}$ could be modeled by a perfect fluid with an effective mass density and an effective pressure for the 5D spherically symmetric static solutions [@LiuJMP92], the 5D cosmological solutions [@PoncedeLeon88] [@LiuMash95] [@LiuWessonAPJ] and some other kinds of solutions [@Wesson] [@Overduin]. This is guaranteed by Campbell’s theorem, which states that any solution of the Einstein equations in $N$ dimensions can be locally embedded in a Ricci-flat manifold of ($N+1$) dimensions [@Campbell]. Thus we conclude that the 4D effective energy-momentum tensor $T_{\alpha \beta }^{eff}$ appearing in (\[4DEinEqs\]) is actually the counterpart of the effective $\rho _{eff}$ and $p_{eff}$ of (\[rhoEff\]) and (\[p-eff-int\]) of the Newtonian model.
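Returning to the effective-fluid description, the equation-of-state claim above can be made concrete with a small numerical sketch (our own illustration, in units $G_{4}=M=R=1$; the upper limit of the pressure integral is truncated at $s=60$, where the integrand is exponentially negligible):

```python
import numpy as np
from scipy.integrate import quad

# Sketch (illustrative, G_4 = M = R = 1): the r-dependent equation-of-state
# parameter omega(r) = p_eff(r)/rho_eff(r) built from eqs. (rhoEff) and (p-eff-int).
def rho_eff(r):
    return np.sinh(r) / (4 * np.pi * r * (np.cosh(r) + 1.0)**2)

def p_eff(r):
    integrand = lambda s: np.sinh(s) * (np.sinh(s) - s) / (s**3 * (np.cosh(s) + 1.0)**3)
    val, _ = quad(integrand, r, 60.0)   # integrand ~ exp(-s), so s = 60 is ample
    return val / (4 * np.pi)

for r in (0.1, 1.0, 3.0, 10.0):
    print(f"r/R = {r:5.1f}   omega(r) = {p_eff(r) / rho_eff(r):.4f}")
# omega(r) is positive, varies with r, and is never 0 or 1/3, so the effective
# fluid is neither cold nor hot dark matter.
```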
We also conclude that the induced matter theory could occupy a central position in the description of dark matter in brane-world scenarios. From equations (\[V-supAp\]) and (\[V-suppos\]) we see clearly that a hidden particle located on another brane can generate almost the same strength of force on our brane. So, generally speaking, a visible star of our brane could be captured by a hidden star of another brane and thus form a special kind of binary. A visible galaxy of our brane can capture hidden stars from other branes, and this would provide us with a possible interpretation for the dark halos of galaxies. A hidden galaxy of another brane can also capture visible matter from our brane. Recently, it was reported [@NewSci] that Simon, Robishaw and Blitz observed a cloud of hydrogen gas rotating rapidly around an apparently empty region, and they called this region a dark galaxy. If this is true, the dark galaxy could be just the *‘shadow’* of a galaxy hidden on another brane. We should emphasize that all these speculations are based on the possible existence of extra compact space dimensions, which is required by Kaluza-Klein and by string-membrane theories. We hope that future observations may show whether these speculations are correct and whether there exist extra dimensions. From our 4D point of view, the net effect of matter hidden on all other branes could be described by an effective mass distribution or an effective energy-momentum tensor, which could serve as a candidate for dark matter. This candidate may provide us with the same mathematical results as other dark matter candidates, with the exception that the physical nature of this dark matter could be just the higher-dimensional gravitational field, or gravitons. We thank James Overduin for comments. This work was supported by National Natural Science Foundation (10273004) and National Basic Research Program (2003CB716300) of P. R. China. [99]{} T. Kaluza, Sitz. Preuss. Akad. Wiss. 33, 966 (1921); O. Klein, Z. Phys. 37, 895 (1926). N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, Phys. Lett. B **429**, 263 (1998), hep-th/9803315; I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, Phys. Lett. B **436**, 257 (1998), hep-ph/9804398. N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, Phys. Rev. D **59**, 086004 (1999), hep-ph/9807344. E.G. Adelberger, B.R. Heckel, and A.E. Nelson, Annu. Rev. Nucl. Part. Sci. **53** (2003) (in press), hep-ph/0307284. E.G. Floratos and G.K. Leontaris, Phys. Lett. B **465**, 95 (1999), hep-ph/9906238. A. Kehagias and K. Sfetsos, Phys. Lett. B **472**, 39 (2000), hep-ph/9905417. P. Horava and E. Witten, Nucl. Phys. B **460**, 506 (1996), hep-th/9510209; E. Witten, Nucl. Phys. B **471**, 135 (1996), hep-th/9602070; P. Horava and E. Witten, Nucl. Phys. B **475**, 94 (1996), hep-th/9603142. N. Arkani-Hamed, S. Dimopoulos, G. Dvali, and N. Kaloper, JHEP **0012**, 010 (2000), hep-ph/9911386. D.N. Spergel et al., Astrophys. J. Suppl. 148, 175 (2003), astro-ph/0302209. P.S. Wesson, *Space-Time-Matter* (Singapore, World Scientific, 1999). J.M. Overduin and P.S. Wesson, Phys. Rep. **283**, 303 (1997), gr-qc/9805018. H.Y. Liu and P.S. Wesson, J. Math. Phys. **33**, 3888 (1992). J. Ponce de Leon, Gen. Rel. Grav. **20**, 539 (1988). H.Y. Liu and B. Mashhoon, Ann. Physik **4**, 565 (1995). H.Y. Liu and P.S. Wesson, Astrophys. J. **562**, 1 (2001), gr-qc/0107093. J.E. Campbell, *A Course of Differential Geometry* (Oxford, Clarendon, 1926).
See: “Astronomers find first ‘dark galaxy’”, New Scientist, 20 October 2003.
--- abstract: 'Given an arithmetic function $g(n)$ write $M_g(x) := \sum_{n \leq x} g(n)$. We extend and strengthen the results of a fundamental paper of Halász in several ways by proving upper bounds for the ratio of $\frac{|M_g(x)|}{M_{|g|}(x)}$, for any strongly multiplicative, complex-valued function $g(n)$ under certain assumptions on the sequence $\{g(p)\}_p$. We further prove an asymptotic formula for this ratio in the case that $|\text{arg}(g(p))|$ is sufficiently small uniformly in $p$. In so doing, we recover a new proof of an explicit lower mean value estimate for $M_{f}(x)$ for any non-negative, multiplicative function satisfying $c_1 \leq |f(p)| \leq c_2$ for $c_2 \geq c_1 > 0$, by relating it to $\frac{x}{\log x}\prod_{p \leq x} \left(1+\frac{f(p)}{p}\right)$. As an application, we generalize our main theorem in such a way as to give explicit estimates for the ratio $\frac{|M_g(x)|}{M_{f}(x)}$, whenever $f: {\mathbb}{N} {\rightarrow}(0,\infty)$ and $g: {\mathbb}{N} {\rightarrow}{\mathbb}{C}$ are strongly multiplicative functions that are uniformly bounded on primes and satisfy $|g(n)| \leq f(n)$ for every $n \in {\mathbb}{N}$. This generalizes a theorem of Wirsing and extends recent work due to Elliott.' address: | Department of Mathematics\ University of Toronto\ Toronto, Ontario, Canada author: - 'Alexander P. Mangerel' bibliography: - 'bibWirsing.bib' title: A Strengthening of Theorems of Halász and Wirsing --- Introduction ============ Let $g : {\mathbb}{N} {\rightarrow}{\mathbb}{C}$ be a multiplicative function, i.e., such that for any coprime $m,n \in {\mathbb}{N}$, $g(mn) = g(m)g(n)$. Put $$M_g(x) := \sum_{n \leq x} g(n).$$ Many difficult problems in number theory can be recast into a problem about the rate of growth of $M_g(x)$, for $g$ multiplicative. For example, when $g = \mu$, the Möbius function, Landau proved that the Prime Number Theorem is equivalent to $M_{\mu}(x) = o(x)$ (see Theorem 8 in I.3 of [@Ten2]), and Littlewood showed that the Riemann hypothesis is equivalent to $M_{\mu}(x) = O_{{\epsilon}}\left(x^{\frac{1}{2}+{\epsilon}}\right)$ (see paragraph 14.25 in [@Tit]). In another direction, let $q \in {\mathbb}{N}$, $q \geq 2$ and suppose $\chi$ is a Dirichlet character modulo $q$, i.e., a periodic extension of a group homomorphism $\chi: \left({\mathbb}{Z}/q{\mathbb}{Z}\right)^{\ast} {\rightarrow}{\mathbb}{T}$, with $\chi(n) = 0$ for $(n,q) > 1$. Finding sharp upper and lower bounds for $M_{\chi}(x)$ that are uniform in the conductor $q$ is a notoriously difficult problem whose study goes back at least to Pólya. There are various important consequences of bounds for $M_{\chi}(x)$. For instance, if $\chi$ is a quadratic character then sharp upper bounds have implications in the study of the Class Number problem for real quadratic fields, whose study was initiated by Gau$\ss$ (see, for instance, [@Lam] and [@LamD]).\ In this vein, it is of interest to relate $M_g(x)$ to other arithmetic data that are known, and the most basic such approach is to consider its growth relative to $x$. We say that $g$ possesses a *mean value* if there exists a constant $\alpha \in {\mathbb}{C}$ such that $x^{-1}M_g(x) {\rightarrow}\alpha$ as $x {\rightarrow}\infty$. It is a classical topic in number theory to consider when a multiplicative function possesses a mean value. In the case where $g$ is non-negative this is simplest, and it is known that any such function taking values in $[0,1]$ possesses a mean value. 
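As a simple illustration of the last point (a sketch under stated assumptions, not taken from the paper), the non-negative multiplicative function $g(n)=\phi(n)/n$ takes values in $(0,1]$ and has mean value $\prod_p (1-p^{-2}) = 6/\pi^2$, which is easy to confirm numerically:

```python
import numpy as np

# Sketch (illustrative): x^{-1} M_g(x) for the non-negative multiplicative
# function g(n) = phi(n)/n, whose mean value is prod_p (1 - 1/p^2) = 6/pi^2.
def phi_over_n(x):
    g = np.ones(x + 1)
    for p in range(2, x + 1):
        if g[p] == 1.0:                  # p is prime iff no smaller prime has touched it
            g[p::p] *= (1.0 - 1.0 / p)   # multiply g(n) by (1 - 1/p) for every n with p | n
    return g[1:]                         # g(n) for n = 1, ..., x

for x in (10**3, 10**4, 10**5):
    print(x, phi_over_n(x).mean(), 6 / np.pi**2)   # the partial means approach 6/pi^2
```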
For real-valued functions, Wintner [@Win] proved that if $g$ is multiplicative with $|g(n)| \leq 1$ for all $n \in {\mathbb}{N}$ then it is sufficient that $\sum_p \frac{1-g(p)}{p}$ converges in order for $g$ to have a non-zero mean value, and in particular, $$\lim_{x {\rightarrow}\infty} x^{-1}M_g(x) = \prod_p \left(1-\frac{1}{p}\right)\sum_{k \geq 0} g(p^k)p^{-k}.$$ Wirsing, improving on this result and others due to Delange, proved a famous conjecture of Erdős that as long as $|g(n)| \leq 1$, $g$ possesses a mean value, and that this mean value is zero if, and only if, $\sum_p \frac{1-g(p)}{p}$ diverges [@Ell].\ It is known that this latter criterion is insufficient to describe the situation in the case that $g$ is complex-valued, e.g., when $g(n) = n^{i\alpha}$ (a so-called *archimedean character*). This example also demonstrates that the limit of $x^{-1}M_g(x)$ may not even exist, as a simple contour integration argument shows that $M_{n^{i\alpha}}(x) \sim x^{1+i\alpha}/(1+i\alpha)$. It turns out that the sole counterexamples to this criterion for complex-valued multiplicative functions are those $g$ that behave like $p^{i\alpha}$ times a real-valued multiplicative function at primes for some $\alpha \in {\mathbb}{R}$, and Halász [@Hal1] generalized Wirsing’s theorem, to complex-valued functions with $|g(n)| \leq 1$, accounting for this behaviour. In particular, he showed that $g$ has non-zero mean value if, and only if, $\sum_p \frac{1-\text{Re}(g(p)p^{-i\alpha})}{p}$ converges for some $\alpha$. It is conventional in the literature to say that if $g$ has a non-zero mean value and an $\alpha$ exists for which the latter series converges then $g$ ”pretends” to be the product of a real function and an archimedean character.\ While the above results classify those situations in which a function $g$ has mean-value zero, it does not provide a rate of convergence of $x^{-1}M_g(x)$ to its limiting value, an important consideration in many number-theoretic problems. Shortly after proving his mean value theorem, Halász ([@Hal2], [@Hal3]) proved in two separate papers *explicit mean-value estimates* for multiplicative functions. Of particular interest to us is [@Hal2], in which an estimate, uniform over a collection of complex-valued, multiplicative functions $g$, is given in the case that the values of $g$ at primes are not uniformly distributed in argument about 0.\ (Due to the multiplicativity of $g$, its values at primes are significant in determining the nature of mean-value estimates; these can therefore be improved if certain assumptions are made about the uniformity of distribution of values of the sequence $\{g(p)\}_p$. For further examples, see, for instance, [@Hall]).\ To be precise, suppose $g$ is a *completely multiplicative* function, i.e., $g(mn) = g(m)g(n)$ for all $m,n \in {\mathbb}{N}$, with the following properties. First, assume there exists $\delta > 0$ with $\delta \leq |g(p)| \leq 2-\delta$ for all primes $p$. Second, suppose that there exists an angle $\theta \in [-\pi,\pi)$ and $\beta > 0$ for which $|\text{arg}(g(p))-\theta| \geq \beta$, where $\text{arg}(z)$ is taken to be the branch of argument with cut line along the negative real axis. 
Then Halász proved the existence of a constant $c = c(\delta,\beta) > 0$ such that $$x^{-1}M_g(x) \ll_{\delta,\theta,\beta} \exp\left(\sum_{p \leq x} \frac{|g(p)|-1}{p} - c\sum_{p \leq x} \frac{|g(p)|-\text{Re}(g(p))}{p}\right) \label{HALLLL}.$$ In the specific situation in which $|g(p)-1|$ is uniformly small over all primes $p$ in a precise sense then $$M_g(x) \sim x\exp\left(\sum_{p \leq x} \frac{g(p)-1}{p}\right) =: Ax \label{HALLL} $$ This last result essentially implies that completely multiplicative functions $g$ whose Dirichlet series are sufficiently close to the $\zeta$ function for $s = {\sigma}+ i\tau$ with ${\sigma}> 1$ and $|\tau|$ small will possess a mean value. It should be emphasized that the first result just mentioned is presented in [@Ell2] with $c$ a constant multiple of $\delta \beta^3$. In particular, the upper bound is less advantageous in cases where $\delta$ is small.\ Thus far, all of the results that we have mentioned from the literature compare the summatory function $M_g(x)$ with $x$ (or rather, essentially $M_1(x)$), and in , the Dirichlet series $G(s) := \sum_{n \geq 1} g(n)n^{-s}$ is compared with $\zeta(s)$. A better standard of comparison for $G(s)$ than $\zeta(s)$, demonstrating similar nuances in the distribution of its coefficients and highlighting, instead, the effect of cancellation due to the arguments $\text{arg}(g(p))$ on the sum $M_g(x)$, is ${\mathcal}{G}(s) := \sum_{n \geq 1} |g(n)|n^{-s}$. With the broadened perspective of comparing the summatory functions of two multiplicative functions, neither of which the constant function 1, the more general problem of comparing $M_g(x)$ to $M_{f}(x)$ where $f$ is *any* non-negative multiplicative function satisfying $|g(n)| \leq f(n)$ for all $n \in {\mathbb}{N}$, is also natural. This general viewpoint is elaborated upon in the next section as well (see the discussion surrounding Theorem \[WIRSINGEXT\]). Besides a result of this nature extending a theorem of Wirsing, as discussed below, ratios of the form $M_{g}(x)/M_{f}(x)$ are useful in arithmetic applications. Indeed, non-negative multiplicative functions and their summatory functions are typically simpler to estimate, as properties such as positivity and monotonicity can be used to extract upper and lower bounds. One would therefore expect that evaluating $M_g$ indirectly through $M_{f}$, where $M_f$ is well-understood, may be more efficient than a direct attempt at evaluating $M_g$.\ In our main theorems below, we consider estimates of the above kind in case $g$ is either completely multiplicative or *strongly* multiplicative, i.e., $g(p^k) = g(p)$ for all $k \in {\mathbb}{N}$ and primes $p$. A consequence of Theorem \[WIRSINGEXT\] is the following. Suppose $f,g$ are strongly or completely multiplicative functions such that: i) $f$ satisfies $\delta \leq |f(p)| \leq B$ for all primes $p$ with $B,\delta > 0$, and does not ”pretend” to be a product of a real function and an archimedean character; ii) $g$ is real-valued such that $|\text{arg}(f(p)) + \beta g(p) -\tau \log p|$ is not too small (in a precise sense) for $p \leq x$ and any $|\tau| \leq \log^{O(1)} x$. Then for any $\alpha,\beta \in {\mathbb}{R}$ fixed, $$\sum_{n \leq x} f(n)^{\alpha}g(n)^{i\beta} \ll_B \left(\frac{B}{\delta}\right)^3\left(\sum_{n \leq x} |f(n)|^{\alpha}\right) \exp\left(-c\sum_{p \leq x} \frac{|f(p)|^{\alpha}-\text{Re}\left(f(p)^{\alpha}e^{i\beta\log g(p)}\right)}{p}\right),$$ where $c$ depends at most on the ratio $B/\delta$ (see Theorem \[HalGen\] below). 
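To fix ideas, the following numerical sketch (our own illustration, with parameter choices that are not those of the theorem) computes the two quantities being compared for the completely multiplicative function with $g(p)=e^{i\pi/3}$ at every prime, so that $g(n)=e^{i\pi\Omega(n)/3}$, $|g(n)|=1$ and $M_{|g|}(x)=\lfloor x \rfloor$; in this example both $|M_g(x)|/x$ and $\exp\left(-\sum_{p\le x}\frac{|g(p)|-\text{Re}(g(p))}{p}\right)$ decay like $(\log x)^{-1/2}$ up to constants.

```python
import numpy as np

# Sketch (illustrative, not the theorem's constants): both sides of a
# Halász-type comparison for the completely multiplicative g with
# g(p) = e^{i*pi/3} at every prime, i.e. g(n) = e^{i*pi*Omega(n)/3}.
X = 10**6
Omega = np.zeros(X + 1, dtype=np.int64)   # Omega(n): prime factors counted with multiplicity
primes = []
for p in range(2, X + 1):
    if Omega[p] == 0:                     # p is prime iff untouched so far
        primes.append(p)
        pk = p
        while pk <= X:
            Omega[pk::pk] += 1            # one factor of p for every multiple of p^k
            pk *= p
primes = np.asarray(primes, dtype=float)

Mg = np.cumsum(np.exp(1j * np.pi / 3 * Omega[1:]))    # M_g(x) for x = 1..X
for x in (10**4, 10**5, 10**6):
    lhs = abs(Mg[x - 1]) / x                          # |M_g(x)| / M_{|g|}(x)
    rhs = np.exp(-np.sum((1.0 - np.cos(np.pi / 3)) / primes[primes <= x]))
    print(x, round(lhs, 4), round(rhs, 4))
```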
In particular, if $f = g$ then our theorem gives an estimate for the summatory functions of complex moments of real-valued multiplicative functions. In certain cases, depending on the distribution of the values of $f(p)$, we can even find asymptotic formulae for such quantities. Estimates for moments of this type are relevant in the context of the moment method in the analysis of number-theoretic, probabilistic models (for a related example, see [@LamD]). For another example, see [@Res2]. Results ======= Definitions and Conventions --------------------------- \[REAS\] Given $g$ a multiplicative function and ${\boldsymbol}{E} := (E_1,\ldots,E_m)$ a partition of the primes, the pair $(g,{\boldsymbol}{E})$ is said to be *reasonable* if for any $t \geq 2$ sufficiently large and $\kappa > 0$ fixed there exists an index $1 \leq j_0 \leq m$ such that $$\begin{aligned} \sum_{t^{\kappa} < p \leq t \atop p \in E_{j_0}} \frac{1}{p} &\geq \frac{1}{m} \sum_{t^{\kappa} < p \leq t} \frac{1}{p} \label{CONDIT1}\\ \sum_{p \leq t \atop p \in E_{j_0}} \frac{|g(p)|}{p} &\geq \frac{1}{m}\sum_{p \leq t} \frac{|g(p)|}{p}.\label{CONDIT2}\end{aligned}$$ We also say that $(g,{\boldsymbol}{E})$ is *non-decreasing* if for each $1 \leq j \leq m$, $\{|g(p)|\}_{p \in E_j}$ is a non-decreasing sequence. It is an obvious consequence of the pigeonhole principle that for any fixed $t$ and $\kappa$ one can find two (possibly distinct) indices $1 \leq j_1,j_2 \leq m$ such that holds with $j_1$ in place of $j_0$ and holds with $j_2$ in place of $j_0$. However, it need not be the case that $j_1 = j_2$. Thus, the ”reasonable” property permits us to choose a common index satisfying both conditions simultaneously. Any partition containing even *one* set with Dirichlet density, i.e., such that $\log_2^{-1} x\left(\sum_{p \leq x} \frac{1}{p}\right) {\rightarrow}\lambda$ with $\lambda > 0$, and such that $|g(p)|$ is not too small on this set, will provide a reasonable pair $(g,{\boldsymbol}{E})$. There is therefore a wealth of natural cases in which $(g,{\boldsymbol}{E})$ is reasonable (even when $\frac{\delta}{B}$ is small). This definition is used in Section 6 alone.\ \[GOOD\] We say that a subset of the primes $E$ is *good* if there exists $\lambda_j > 0$ and a neighbourhood of $s = 1$ such that $\sum_{p \in E_j} \frac{1}{p^s} - \lambda_j \log\left(\frac{1}{s-1}\right)$ is holomorphic. ${\boldsymbol}{E}$ is a *good partition* if, for each $1 \leq j \leq m$, $E_j$ is good. As a consequence of the Prime Number Theorem and accompanying zero-free regions associated to $\zeta$, it is evident that the set of all primes give a trivial, good partition, as is the case for a partition of (unions of) arithmetic progressions modulo any $q \in {\mathbb}{N}$. For a different example, it can be checked that, for a fixed $\tau \in {\mathbb}{R}$ and $0\leq \alpha < \beta \leq 1$, the set of primes $$E := \left\{p : \left\{\frac{\tau \log p}{2\pi}\right\} \in (\alpha,\beta]\right\}$$ satisfies $\sum_{p \in E \atop p \leq x} \frac{1}{p} \sim (\beta-\alpha) \log_2 x + O(1)$ (see, for instance, Lemma 3.6 of [@prev]). Hence, taking ${\sigma}:= 1+\frac{1}{\log x}$ and letting $x {\rightarrow}\infty$, it is not hard to show (using and below) that $\sum_{p \in E_j} \frac{1}{p^{s}} - (\beta-\alpha) \log\left(\frac{1}{s-1}\right)$ is holomorphic near $s = 1$. 
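For the example just given, one can estimate the relative Dirichlet density of $E$ numerically; the sketch below (our own illustration, taking $\tau = 2\pi$, $(\alpha,\beta] = (0,\tfrac12]$ and a cutoff of $10^6$; the convergence is only $\log\log$-slow) is included only to make the definition concrete.

```python
import numpy as np

# Sketch (illustrative): relative density of E = {p : {tau*log(p)/(2*pi)} in (a, b]},
# measured by (sum_{p in E, p <= x} 1/p) / (sum_{p <= x} 1/p), with tau = 2*pi and
# (a, b] = (0, 0.5]; the cited lemma gives the limiting value b - a = 1/2.
def primes_up_to(x):
    sieve = np.ones(x + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.nonzero(sieve)[0]

tau, a, b = 2 * np.pi, 0.0, 0.5
p = primes_up_to(10**6).astype(float)
frac = (tau * np.log(p) / (2 * np.pi)) % 1.0
in_E = (frac > a) & (frac <= b)
print(np.sum(1.0 / p[in_E]) / np.sum(1.0 / p))   # drifts (log-log slowly) towards 0.5
```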
Thus, partitioning $[0,2\pi]$ into intervals $(\alpha_j,\beta_j]$ and choosing sets $E_j$ in analogy to the choice of $E$ above, we can generate a good partition.\ By a *squarefree-supported* multiplicative function, we mean a multiplicative function of the form $f(n) = \mu^2(n)g(n)$, where $g$ is multiplicative and $\mu$ is the Möbius function.\ For $x \geq 2$, $T > 0$, a set $E$ of primes and arithmetic functions $f,g: {\mathbb}{N} {\rightarrow}{\mathbb}{C}$ taking values in the unit disc we put $$\begin{aligned} D_E(g,n^{i\tau};x) &:= \sum_{p \in E \atop p \leq x} \frac{1-\text{Re}(g(p)\overline{f(p)})}{p} \\ \rho_E(x;g,T) &:= \min_{|\tau| \leq T} D_E(g,n^{i\tau};x). \end{aligned}$$ Unless stated otherwise, given a set of primes $E$, $E(t) := \sum_{p \in E \atop p \leq t} \frac{1}{p}$.\ We will frequently refer to a finite collection of parameters by a corresponding vector whose entries are those parameters. Thus, for instance, if $\{r_1,\ldots,r_k\}$ are positive real numbers then we refer to this collection with the shorthand ${\boldsymbol}{r}$, defined as the vector $(r_1,\ldots,r_k)$.\ \ Let us define the collection of strongly and completely multiplicative functions ${\mathcal}{C}$ as follows: if $g \in {\mathcal}{C}$ then there is a partition $\{E_1,\ldots,E_m\}$ of the set of primes, and a set of primes $S$ such that there exist non-decreasing and non-increasing positive functions $B_j= B_j(x)$ and $\delta_j = \delta_j(x)$, respectively, such that:\ i) for $p \in E_j {\backslash}S$, $\delta_j \leq |g(p)|\leq B_j$, and given $\delta := \min_{1 \leq j \leq m} \delta_j$, $|g(p)| < \delta$ for $p \in S$ (in case $g$ is completely multiplicative, we require that $B_j(x) < 2$ uniformly in $x$);\ ii) for $P_x := \prod_{p \in S \atop p \leq x} p$, $P_x \ll_{\alpha} x^{\alpha}$ for any $\alpha > 0$.\ We also define subcollections ${\mathcal}{C}_a$ and ${\mathcal}{C}_b$ of ${\mathcal}{C}$ as follows:\ $g \in {\mathcal}{C}_a$ if: iii) there are positive functions $\eta_j = \eta_j(x)$ such that $|\text{arg}(g(p))| \leq \eta_j$ for each $p \in E_j$;\ $g \in {\mathcal}{C}_b$ if: iv) ${\boldsymbol}{E}$ is a good partition (in the sense of Definition \[GOOD\]);\ v) there are angles $\{\phi_1,\ldots,\phi_m\}$ and $\{\beta_1,\ldots,\beta_m\} \subset (0,\pi)$ such that $|\text{arg}(g(p))-\phi_j| \geq \beta_j$ for each $p \in E_j$;\ vi) $D_{E_j}(g,n^{i\tau};x) \geq C\log_3 x$ for all $|\tau| \ll \log^A x$, where the implicit constant here is sufficiently large with respect to $r,A$ and ${\boldsymbol}{\beta}$.\ Main Result and Consequences ---------------------------- Our main result is the following. \[HalGen\] i) Let $x \geq 3$. Suppose $g\in {\mathcal}{C}$, and let $B := \max_{1 \leq j \leq m} B_j$, where ${\boldsymbol}{B}$ implicitly chosen for $g$. Set $\tilde{g}(p) := g(p)/B_j$ for $p \in E_j$, and extend $\tilde{g}$ as a strongly or completely multiplicative function. Then for any $\alpha,D > 2$, $T := \log^D x$, $$\label{UPPERGen} M_g(x) \ll_{\alpha,B,m} \frac{B^2}{\delta}\frac{P}{\phi(P)}M_{|g|}(x) \exp\left(-\sum_{1 \leq j \leq m} B_j\left(\rho_{E_j}(x;\tilde{g},T) - \sum_{p \in E_j \atop p \leq x} \frac{1-|\tilde{g}(p)|}{p}\right)\right).$$ Suppose $g \in {\mathcal}{C}_b$, and set $\gamma_{0,j} := \frac{27\delta_j}{1024 \pi B_j}\beta_j^3$. 
Then we have, more precisely, $$\label{UPPER} M_g(x) \ll_{B,m,{\boldsymbol}{\phi},{\boldsymbol}{\beta},r,\alpha} \frac{B^2}{\delta}\frac{P}{\phi(P)}M_{|g|}(x)\exp\left(-\sum_{1 \leq j \leq m}c_j\sum_{p \leq x \atop p \in E_j} \frac{|g(p)|-\text{Re}(g(p))}{p}\right),$$ where $$c_j := \begin{cases} \frac{1}{2}\min\left\{\frac{1}{4m^2},\frac{\gamma_{0,j}}{1+\gamma_{0,j}}\right\} &\text{ if $(g,{\boldsymbol}{E})$ is non-decreasing} \\ \frac{1}{2}\min_j\left\{\frac{\delta_j}{B_j}\right\}\min\left\{\frac{1}{4m^2},\frac{\gamma_{0,j}}{1+\gamma_{0,j}}\right\} &\text{ if $(g,{\boldsymbol}{E})$ is not non-decreasing}. \end{cases}$$ ii) Suppose now that $g \in {\mathcal}{C}_a$. Set $A := \exp\left(\sum_{p \leq x} \frac{g(p)-|g(p)|}{p}\right)$, and let $\eta := \max_{1 \leq j \leq m} \eta_j$. Then $$\label{ASYMP} M_g(x) = M_{|g|}(x)\left(A+O_{B,m,\alpha}\left({\mathcal}{R}\right)\right),$$ where, for $d_1 := \delta^{-1}\sqrt{\eta}$ if $B\eta \leq \delta \leq \eta^{\frac{1}{2}}$ and $d_1 := 1$ otherwise, and $\gamma_0 := \min_{1 \leq j \leq m} \gamma_{0,j}$, $${\mathcal}{R} := \frac{B^2}{\delta}\frac{P}{\phi(P)}\left(\eta^{\frac{1}{2}}|A|\left(d_1 + \log^{-\frac{\delta}{3}} x + \delta^{-\frac{1}{2}}e^{-\frac{d_1}{\sqrt{\eta}}}\right) + |A|^{\frac{\gamma_0}{4(1+\gamma_0)}}\left(\log^{-\frac{\delta\beta^3}{2}} x + \delta^{-1}e^{-\frac{d_1}{\sqrt{\eta}}}\right)\right).$$ (We have left an explicit dependence on $B$ in each of the above statements because, in the case that $B$ and $\delta$ are of the same order of magnitude and *small*, the relative sizes of $B$ and $\delta$ can be accounted for, mitigating the large factor $\delta^{-1}$.)\ \ Let us explain the motivations for our choices for the collections ${\mathcal}{C}$, ${\mathcal}{C}_a$ and ${\mathcal}{C}_b$. In the definition of ${\mathcal}{C}$, we introduce a partition ${\boldsymbol}{E}$ in order to deal with functions $g$ that may have vastly different behaviours in different sets. For an example of this, see [@Res2], in which we consider functions $g$ that are constant on each set of a partition, but where the values of these constants can have vastly different moduli. We are also allowing a fairly small set of primes $S$ on which $g$ is arbitrarily small, or possibly zero. Indeed, $S$ is small, because, as follows from the prime number theorem, $\prod_{p \leq x} p \sim e^{(1+o(1))x}$. Hence, polynomial growth of the product of primes from $S$ implies that $S$ is quite sparse. Our choice on the growth of $S$ is two-fold. Firstly, we would like to be able to handle twists of functions in ${\mathcal}{C}$ by characters in our main result. Thus, if $g \in {\mathcal}{C}$ is completely multiplicative and $\chi$ is a Dirichlet character modulo $q \leq x$, we would like to have access to estimates for the function $n \mapsto \chi(n)g(n)$, which, of course, is zero on primes $p|q$. In such a case, if $S$ is the associated set for $g$ and $S' := S \cup \{p: p|q\}$ then ii) applies to $\chi g$ as well and thus $\chi g \in {\mathcal}{C}$. Secondly, while we would like to admit the possibility for $g \in {\mathcal}{C}$ to have a zero set, it is more difficult to handle estimates for $M_{f}(x)$, for $f$ non-negative and 0 on $S$, if $f$ is supported on a set containing very few primes, e.g., $y$-smooth numbers, with $y$ quite small relative to $x$. For a discussion that highlights this difficulty, see [@Hil]. 
As is clear from our arguments below (and as is necessary in the proof of Theorem \[LOWERMV\]), sharp lower estimates on summatory functions of non-negative functions is crucial in our analysis.\ The choice of subcollection ${\mathcal}{C}_a$ is made to consider the specific case in which $g$ is ”close” to being real valued, and in which case asymptotic formulae exist for the ratio $M_g(x)/M_{|g|}(x)$ (as well as in a more general context; see the discussion surrounding Theorem \[WIRSINGEXT\]).\ Our choice of definition of ${\mathcal}{C}_b$ is motivated by a desire to give explicit estimates for the growth rate of the ratio $\frac{M_g(x)}{M_{|g|}(x)}$ in Theorem \[HalGen\] (see ). First, introducing multiple angular distribution conditions provides us with more leniency in our choice of functions than what is provided by Halász’ condition for . Moreover, our qualification of our partition as being good, in the sense of Definition \[GOOD\], also affords us a general context in which complex analytic arguments are available to us.\ \ Let us make a few remarks about Theorem \[HalGen\]. First, while we only prove the above theorem for strongly multiplicative functions, the reader will notice that these results also hold for completely multiplicative functions as well, as noted in the statement. We highlight the relevant modifications necessary to make our arguments valid for completely multiplicative functions in remarks following each result in which arithmetic conditions on $g$ are implicitly being used.\ When $P$ is large the factor $\frac{P}{\phi(P)}$ can be as large as $\log_2 P \sim \log_2 x$. If the series $\sum_{p \leq x} \frac{|g(p)|-\text{Re}(g(p))}{p}$ is not large, or if some of the ratios $\frac{\delta_j}{B_j}$ are small enough then we cannot beat the trivial bound $|M_g(x)|/M_{|g|}(x) \leq 1$ coming from the triangle inequality. When the first of the two scenarios just listed occurs, it is often true that $g \in {\mathcal}{C}_a$ (i.e., $|g(p)-|g(p)||$ is small uniformly in $p$), so in this case we can still get a sharp estimate.\ The reader will also note that, given a strongly (or completely) multiplicative function, one can always partition the primes into sets $E_j := \{p : \text{arg}(f(p)) \in I_j\}$, for some partition $\{I_1,\ldots,I_m\}$ of $[-\pi,\pi]$, with $\phi_j-\pi$ chosen as the midpoint of $I_j$ and $\beta_j$ the minimum distance from $\phi_j$ and the endpoints of $I_j$. Thus, our Theorem \[HalGen\] actually applies in general to *any* strongly (or completely) multiplicative function whose values at primes are bounded in absolute value below and above by positive real numbers. The explicit results particular to the subcollection ${\mathcal}{C}_b$ can be used furthermore when such a partition is good (e.g., if $\chi$ is a character with conductor $q$ then we can classify the values of $\chi$ at primes according to residue classes modulo $q$, so $E_j$ is a union of arithmetic progressions).\ We have given choices of constants for non-decreasing pairs $(g,{\boldsymbol}{E})$ as, in many natural cases (such as $\frac{\phi(n)}{n}$ or $\lambda$, i.e., *Liouville’s function*), $|g|$ is either constant or monotone in subsets of the primes. 
At any rate, the explicit estimates in Theorem \[HalGen\] turn out to be better in this case.\ The constant $\frac{27}{1024\pi}$ implicit in the definition of $\gamma_{0,j}$ is found as an admissible choice for $D$ in a pointwise estimate of the form $\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll \left|\frac{G({\sigma})}{{\mathcal}{G}({\sigma})}\right|^D$. We have not made an attempt to find the optimal constant.\ It was pointed out to the author that Theorem \[HalGen\] is hinted at in the concluding section of Chapter 21 of Elliott’s monograph [@Ell2] as an unpublished result of Halász, but the author has not been able to find an explicit proof of it in the literature. Furthermore, it is to be emphasized that the result here is substantially stronger and more general than what is suggested in [@Ell2] on several fronts. First, our estimate admits the potential to exploit *localized* behaviour of primes, i.e., to treat several sets of primes, on which $g$ behaves differently, separately. Second, the result mentioned in [@Ell2] assumes that $\delta$ is fixed independently of $x$, so the scope of the theorem above is somewhat broader than what is explicit from Halász’ result. Third, the constant in the exponential in is dependent only on the ratio $\frac{\delta_j}{B_j}$, meaning that the estimate is more flexible and more widely applicable in case $|g(p)|$ is frequently quite small. Finally, as mentioned above, our theorem allows us to consider *any* strongly or completely multiplicative function, provided its argument does not resemble that of an archimedean character, rather than those whose arguments do not belong to some distinguished, omitted sector of the unit disc, as in .\ \ The main thread of the proof of Theorem \[HalGen\] is to consider the function $h(n) := g(n)-A|g(n)|$, where $A$ is either $0$ in i) or the choice of $A$ given in ii), and transform the functions $M_g(x)$ (which arises even in the analysis of $M_h(x)$ for $A\neq 0$) and $M_{|g|}(x)$ into sums over primes by interchanging $\frac{M_g(x)}{M_{|g|}(x)}$ with a ratio of integral averages of $N_g(x) := \sum_{n \leq x} g(n)\log n$ and $N_{|g|}(x)$. The convolution $\log n = 1\ast \Lambda$ allows us to then use the bounds on primes directly, and translate this ostensibly arithmetic problem into an analytic problem involving Dirichlet series, in which case complex and harmonic analytic techniques are at our disposal. The themes in this treatment are all due originally to Halász, but as our goal differs from his, a number of natural distinctions and refinements in relation to his work must arise in our analysis.\ In proving the above result, we will deduce the following lower mean value estimate for a class of (not necessarily strongly or completely) multiplicative functions. While lower estimates of this kind are already known in a broader context (see, for instance, [@Hil]), our proof is substantially simpler and fits naturally into the framework of the rest of the proof of Theorem \[HalGen\]. \[LOWERMV\] Let $\lambda:{\mathbb}{N} {\rightarrow}[0,\infty)$ be a multiplicative function such that: a) there exists a set $S$ of primes for which $P_x:= \prod_{p \in S \atop p \leq x} p \ll_{\alpha} x^{\alpha}$ for each $\alpha > 0$; b) there exist $\delta, B > 0$ for which $\delta \leq \lambda(p) \leq B$ for $p \notin S$ and $\lambda(p) \leq \delta$ for $p \in S$; and c) $\sum_{p,k \geq 2} \frac{\lambda(p^k)}{p^k} \ll_B 1$. 
Then for $x$ sufficiently large and $P = P_x$, $$\sum_{n \leq x} \lambda(n) \gg_B \delta \frac{\phi(P_x)}{P_x}\frac{x}{\log x} \sum_{n \leq x} \frac{\lambda(n)}{n} \gg_B \delta \frac{\phi(P_x)}{P_x}\frac{x}{\log x} \prod_{p \leq x} \left(1+\frac{\lambda(p)}{p}\right).$$ As an application of Theorem \[HalGen\], we extend a theorem of Wirsing. Satz 1.2.2 in [@Wir] states the following (in slightly different notation). \[WIRSING\] Let $f(n)$ be a non-negative multiplicative function satisfying the asymptotic estimate $\frac{1}{\log x}\sum_{p \leq x} \frac{f(p)\log p}{p} \sim \tau$ for some $\tau > 0$, and suppose that $f(p) \leq C$ for all primes $p$. If $g:{\mathbb}{N} {\rightarrow}{\mathbb}{R}$ satisfies $|g(n)| \leq f(n)$ for all $n$ then $$\lim_{x {\rightarrow}\infty}\frac{M_g(x)}{M_{f}(x)} = \prod_p \left(1+\sum_{k \geq 1} p^{-k}g(p^k)\right)\left(1+\sum_{k \geq 1} p^{-k}f(p^k)\right)^{-1}, \label{LIMIT}$$ where the right side of the above limit is interpreted to be zero if the partial products over primes up to $x$ vanish as $x {\rightarrow}\infty$. It is natural to consider finding an explicit estimate for the rate of convergence to the limiting value of the ratio $\frac{M_g(x)}{M_{f}(x)}$ as $x {\rightarrow}\infty$. We can easily prove such an estimate in the case that the ratio converges to 0 as a direct application of Theorems \[LOWERMV\] and \[HalGen\] i) with $g$ even *complex-valued*. We can also adapt our proof of Theorem \[HalGen\] ii) in order to prove estimates of this type for more general limiting values. In *neither* of these results do we require the regularity assumption $\sum_{p \leq x} \frac{f(p)\log p}{p} \sim \tau \log x$. \[WIRSINGEXT\] i) Let $g \in {\mathcal}{C}$, and let $\{c_j\}_j$ be the collection of coefficients defined in Theorem \[HalGen\]. Suppose $f:{\mathbb}{N} {\rightarrow}[0,\infty)$ is a multiplicative function satisfying $\delta \leq |g(p)| \leq f(p)\leq B$, $|g(n)| \leq f(n)$ for all $n$, and such that the right side of is zero. Then, for $P = P_x$, $$\frac{|M_g(x)|}{M_{f}(x)} \ll_{B,\phi,\beta} \left(\frac{B}{\delta}\right)^3 \left(\frac{P}{\phi(P)}\right)^2e^{-{\mathcal}{F}(x;g)}\exp\left(-\sum_{p \leq x} \frac{f(p)-|g(p)|}{p}\right),$$ where, for $D > 2$ and $T := \log^D x$ and $\tilde{g}(p) := g(p)/B_j$ for $p \in E_j$ as above, $${\mathcal}{F}(x;g) := \begin{cases} \sum_{1 \leq j \leq m} c_j \sum_{p \in E_j \atop p \leq x} \frac{|g(p)|-\text{Re}(g(p))}{p} &\text{ if $g \in {\mathcal}{C}_b$} \\ \sum_{1 \leq j \leq m} B_j\left(\rho_{E_j}(x;\tilde{g},T) - \sum_{p \in E_j \atop p \leq x} \frac{1-|\tilde{g}(p)|}{p}\right) &\text{ otherwise}. \end{cases}$$ ii) Suppose either $f$ and $g$ are both strongly or both completely multiplicative, and $g$ additionally satisfies the hypothesis $|g(p)-f(p)| \leq \eta$ for each prime $p$. Then if $A := \exp\left(-\sum_{p \leq x} \frac{|g(p)|-g(p)}{p}\right)$ and $X := \exp\left(-\sum_{p \leq x} \frac{f(p)-|g(p)|}{p}\right)$ then $$M_{g} (x) = M_{f}(x)\left(\exp\left(-\sum_{p \leq x} \frac{f(p)-g(p)}{p}\right) + O_{B,m}\left(\frac{P}{\phi(P)}\left({\mathcal}{R}_1|A| + {\mathcal}{R}_2X\right)\right)\right),$$ where we have put $$\begin{aligned} {\mathcal}{R}_1 := \left(\frac{B}{\delta}\right)^2 \left(X\left(d_1\eta^{\frac{1}{2}} + \log^{-\frac{\delta\beta^3}{2}} x + \delta^{-1}e^{-\frac{d_1}{\sqrt{\eta}}}\right) + \log^{-\frac{2\delta}{3}}x + \delta^{-1}e^{-\frac{2d_1}{\sqrt{\eta}}}\right). 
\\ {\mathcal}{R}_2 := \left(\frac{B}{\delta}\right)^2 \left(d_1\eta^{\frac{1}{2}}|A| + \log^{-\frac{2\delta}{3}}x + \delta^{-1}e^{-\frac{2d_1}{\sqrt{\eta}}} + |A|^{\frac{\gamma_0}{4(1+\gamma_0)}}\left(\log^{-\frac{\gamma}{2}} x + \gamma^{-1}e^{-\frac{d_1}{\sqrt{\eta}}}\right)\right).\end{aligned}$$ Clearly, part ii) contains the case where the sum $\sum_p \frac{f(p)-g(p)}{p}$ (and therefore the ratio on the right side of ) converges.\ The case $m = 1$ *is* Halász’ theorem when $f \equiv 1$. Note that in this case, ${\boldsymbol}{E}$ is trivially good and, in the worst case, $c := c_1 = \frac{\delta}{B}$ in Theorem \[HalGen\], which improves on the coefficient for Halász’ theorem given in Elliott [@Ell2] when $B$ is small.\ It should be noted that Elliott [@Ell3] showed very recently a theorem of the above type that holds for *any* complex-valued multiplicative function $g$ and a multiplicative function $f$ such that $|g(n)| \leq f(n)$, with the lone assumptions that $f$ (and thus $g$) is uniformly bounded on primes and that $\sum_{p,k\geq 2} \frac{f(p^k)}{p^k} < \infty$. However, his results are *not* effective in that his asymptotic formulae do not have *explicit* error terms. In particular, in the situation of i) in Theorem \[WIRSINGEXT\] he does not provide a rate of convergence to the limiting value 0. His method is also completely different from ours.\ \ The structure of this paper is as follows. In Section 3 we prove some auxiliary results to be used in the remainder of the paper. In Section 4 we translate the problem of determining the ratio $|M_h(x)|/M_{|g|}(x)$ to a problem of relating $\int_{({\sigma})} |H'(s)/s|^2 |ds|$ (appearing in Section 6) to the Dirichlet series ${\mathcal}{G}({\sigma})$, a purely analytic quantity. In Section 4.1 specifically we derive a proof of Theorem \[LOWERMV\] as a step in this translation process, while in Section 4.2, we generalize arguments of Halász in a manner that is suitable for our purposes. In Section 5, we collect all of the analytic estimates we need to finish the proof of Theorem \[HalGen\]. In particular, we strengthen a uniform estimate due to Halász for $\left|G(s)/{\mathcal}{G}(s)\right|$ in terms of $\left|G({\sigma})/{\mathcal}{G}({\sigma})\right|$, for $s = {\sigma}+ i\tau$ that was mentioned earlier. In deriving this estimate we pay careful attention to treat each of the prime sets $E_j$ separately. In Section 6, we complete the proof of Theorem \[HalGen\], by implementing the estimates from Section 5 in the context of the results of the previous two sections. In Section 7 we prove Theorem \[WIRSINGEXT\]. Auxiliary Lemmata ================= We shall need the following easy estimate. \[OLD\] Let $x \geq 2$, ${\sigma}= 1+\frac{1}{\log x}$ and $s = {\sigma}+ i\tau_j$, with $\tau \in {\mathbb}{R}$. 
Then $$\sum_p \left|\frac{1}{p^{{\sigma}}}-\frac{1}{p^{s}}\right| \ll 1+ \left|\log\left|\frac{|\tau|}{{\sigma}-1}\right|\right|.$$ By partial summation with the prime number theorem with de la Vallée-Poussin error term, we have $$\begin{aligned} \label{TAIL} \sum_{p > u} \frac{1}{p^{{\sigma}}} &= \int_u^{\infty} v^{-{\sigma}}d\pi(v) = \int_u^{\infty} \frac{dv}{v\log v} e^{-({\sigma}-1)\log v} + O\left(\int_u^{\infty} \frac{dv}{v^{1+{\sigma}}}e^{-\sqrt{\log v}}\right) \\ &= \int_{({\sigma}-1)\log u}^{\infty} \frac{dv}{v}e^{-v} + O\left(e^{-\sqrt{\log u}}\right) \ll 1.\end{aligned}$$ Also, for any $\tau$ and $s ={\sigma}+ i\tau$, we have $$\label{HEAD} \sum_{p\leq e^{\frac{1}{|s-1|}}} \left|\frac{1}{p^s}-\frac{1}{p}\right| = \sum_{p\leq e^{\frac{1}{|s-1|}}} \frac{1}{p}\left|e^{-(s-1)\log p}-1\right| \leq |s-1|\sum_{p \leq e^{\frac{1}{|s-1|}}} \frac{\log p}{p} \ll 1,$$ by Mertens’ first theorem. Note that $e^{\frac{1}{|s-1}} \leq e^{\frac{1}{{\sigma}-1}}$ since $|s-1| \geq {\sigma}-1$. Applying to both of ${\sigma}$ and $s$, we have $$\begin{aligned} \sum_p \left|\frac{1}{p^{{\sigma}}}-\frac{1}{p^{s}}\right| &\leq \sum_{p \leq e^{\frac{1}{{\sigma}-1}}} \left|\frac{1}{p^{{\sigma}}} - \frac{1}{p^{s}}\right| + 2\sum_{p > e^{\frac{1}{{\sigma}-1}}} \frac{1}{p^{{\sigma}}} \\ &\leq \sum_{p \leq e^{\frac{1}{{\sigma}-1}}} \left|\frac{1}{p^{{\sigma}}}-\frac{1}{p}\right| + \sum_{p \leq e^{\frac{1}{|s-1|}}} \left|\frac{1}{p^{s}}-\frac{1}{p} \right| + 2\sum_{e^{\frac{1}{|s-1|}} < p \leq e^{\frac{1}{{\sigma}-1}}} \frac{1}{p} +O(1) \\ &=\left|\log\left|\frac{|s-1|}{{\sigma}-1}\right|\right| + O(1), $$ by applying Mertens’ second theorem (note that in the second last expression only one of the sums in brackets is non-empty). The second claim is immediate upon bounding $|s-1| \leq ({\sigma}-1) + |\tau|$. The following lemma is a simple consequence of the Prime Number Theorem, but we prove it for completeness. \[PRECMERTENS\] i) There is a constant $c \in {\mathbb}{R}$ such that for any $A > 0$ and $z$ sufficiently large, we have $$\sum_{n \leq z} \frac{\Lambda(n)}{n} = \log z + c + O\left(e^{-\theta\sqrt{\log z}}\right),$$ for some $\theta > 0$.\ ii) Let $P \in {\mathbb}{N}$. If $x$ is sufficiently large and $\theta' > 0$ sufficiently small, and $y > xe^{-\theta'\sqrt{\log x}}$ then $$\sum_{x-y < n \leq x \atop (n,P) = 1} \frac{\Lambda(n)}{n} = \frac{\phi(P)}{P}\frac{y}{x} + O\left(\left(\left(\frac{y}{x}\right)^2 + e^{-\theta'\sqrt{\log x}}\right)\log P\right)$$ i\) By the Prime Number theorem with de la Vallée-Poussin error term, $\psi(t) = t + {\mathcal}{E}(t)$ with $|{\mathcal}{E}(t)| \leq Cte^{-\theta\log^{\frac{1}{2}}t}$, for some $C$ sufficiently large and $\theta$ a positive absolute constant. Hence, applying partial summation, we have $$\begin{aligned} \sum_{n \leq z} \frac{\Lambda(n)}{n} &= \int_{2^-}^z \frac{d\psi(t)}{t} = \int_{1}^z \frac{dt}{t} + \int_{2^-}^z \frac{d{\mathcal}{E}(t)}{t} \\ &= \log z + \frac{{\mathcal}{E}(z)}{z} - \frac{{\mathcal}{E}(2)}{2} + \int_{2^-}^z \frac{{\mathcal}{E}(t)}{t^2} dt =: \log z + c + O\left(e^{-\frac{\theta}{2}\sqrt{\log z}}\right),\end{aligned}$$ where we set $c := \int_{2^-}^{\infty} \frac{{\mathcal}{E}(t)}{t^2} dt - \frac{{\mathcal}{E}(2)}{2}$. 
This estimate follows because $$\left|\int_z^{\infty} \frac{{\mathcal}{E}(t)}{t^2} dt\right| \ll e^{-\frac{\theta}{2} \log^{\frac{1}{2}} z} \int_z^{\infty} \frac{dt}{t}e^{-\frac{\theta}{2} \sqrt{\log t}} \ll e^{-\frac{\theta}{2}\sqrt{\log z}},$$ and this upper bound also clearly exceeds $\frac{{\mathcal}{E}(z)}{z}$ (this estimate also implies that the integral in the definition of $c$ exists, so $c$ is well-defined). This implies the estimate in i).\ ii) Applying i) first with $z = x-y$ and then with $z = x$, and subtracting the former result from the latter gives $$\sum_{x-y < n \leq x} \frac{\Lambda(n)}{n} = -\log\left(1-\frac{y}{x}\right) + O\left(e^{-\frac{\theta}{2}\sqrt{\log (x-y)}}\right) = \frac{y}{x} + O\left(\left(\frac{y}{x}\right)^2 + e^{-\theta'\sqrt{\log x}}\right).$$ Now, given that $\Lambda(md) = \Lambda(m)$ whenever $\Lambda(md) \neq 0$ and $m > 1$, we have $$\begin{aligned} \sum_{x-y < n \leq x \atop (n,P)=1} \frac{\Lambda(n)}{n} &= \sum_{d | P} \mu(d)\sum_{(x-y)/d < m \leq x/d} \frac{\Lambda(md)}{md} = \sum_{d|P} \frac{\mu(d)}{d}\sum_{(x-y)/d < m \leq x/d} \frac{\Lambda(m)}{m} \\ &= \frac{y}{x}\sum_{d|P} \frac{\mu(d)}{d} + O\left(\left(\sum_{d\leq P} \frac{1}{d}\right)\left(\left(\frac{y}{x}\right)^2 + e^{-\theta'\sqrt{\log x}}\right)\right) \\ &= \frac{\phi(P)}{P}\frac{y}{x} + O\left(\left(\left(\frac{y}{x}\right)^2 + e^{-\theta'\sqrt{\log x}}\right)\log P\right).\end{aligned}$$ This completes the proof. The following estimates, due to Selberg, will provide us with a maximal growth order for our mean values, and will also play a role in the proof of Lemma \[MAIN1\]. \[SELBERG\] Let $x$ be sufficiently large, $B > 0$ and $0 < \rho \leq B$. Then $$\label{SelSum} \sum_{n \leq x} \rho^{\omega(n)} = x(\log x)^{\rho-1}F(\rho)\left(1+O_B\left(\frac{1}{\log x}\right)\right),$$ where $F(\rho) := \Gamma(\rho)^{-1}\prod_p \left(1+\frac{\rho}{p-1}\right)\left(1-\frac{1}{p}\right)^{\rho}$. In particular, if $\delta > 0$ is fixed with $\delta \leq B$ and $g$ is a strongly multiplicative function satisfying $\delta \leq |g(p)| \leq B$ for each $p$ then $x\log^{\delta-1} x \ll_{\delta} M_{|g|}(x)$ and $\left|M_g(x)\right| \leq M_{|g|}(x) \ll_B x\log^{B-1} x$.\ The first estimate, due to Selberg, is proved, for instance, in Chapter II.6 of [@Ten2]. The second follows from the first via the triangle inequality, given that $\delta^{\omega(n)} \leq |g(n)| = \prod_{p|n} |g(p)| \leq B^{\omega(n)}$ whenever $g$ is strongly multiplicative. An analogous estimate exists when $\omega$ is replaced by $\Omega$ and $B < 2$, but for a different choice of $F(\rho)$. Consequently, an upper bound for $|M_g(x)|$ of the same shape as above holds for completely multiplicative functions as well (indeed, $|f(m)| = \prod_{p^k || m} |f(p)|^k \leq B^{\sum_{p^k||m} k} = B^{\Omega(m)}$ in this case). A crucial element of the proof of Theorem \[HalGen\] is a uniform bound in $\tau$ of $\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|$, for $s = {\sigma}+ i\tau$, by a (fractional) power of $\left|\frac{G({\sigma})}{{\mathcal}{G}({\sigma})}\right|$. As a first step in deriving such a bound, we will need to relate $\frac{G(s)}{{\mathcal}{G}({\sigma})}$ to a power of $\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}$. The following simple trigonometric inequality will bridge the gap between the first step and the desired estimate (see Lemma \[INTBOUND\] below). \[TrigProp\] Let $m \in {{\mathbb}{N}}$. 
Then for $a_1,\ldots,a_m \in {{\mathbb}{R}}$, $$\begin{aligned} \sin^2\left(\sum_{1 \leq j \leq m} a_j\right) \leq m\sum_{1 \leq j \leq m} \sin^2 a_j \label{PROP1}. $$ This follows by induction on $m\geq 1$ upon setting $A := a_1 + \ldots + a_{m-1}$ via the inequality $$\sin^2(A+a_m) \leq (|\sin(A)||\cos(a_m)| + |\cos(A)||\sin(a_m)|)^2 \leq (|\sin(A)| + |\sin(a_m)|)^2,$$ and the inequality $\left(\sum_{1 \leq j \leq m} r_j\right)^2 \leq m\sum_{1 \leq j \leq m} r_j^2$. In spite of our limited local knowledge of $g$ on short intervals, we can at least approximate the behaviour of $M_{|g|}$ on intervals of the form $(a,ca]$ with $c > 1$. We will use this in the proof of Theorem \[LOWERMV\] below. \[DobApp\] Let $a \geq 2$, $B \geq 1$ and $1 < c \ll 1$ as $a {\rightarrow}\infty$. Suppose $g:{\mathbb}{N} {\rightarrow}(0,\infty)$ is a strongly or squarefree-supported multiplicative function, such that $g(p) \leq B$ for all primes $p$. Furthermore, assume that $\frac{M_g(t) \log t}{t} {\rightarrow}\infty$ as $t {\rightarrow}\infty$. Then $M_g(ca)-M_g(a) \ll_c BM_g(a)$. By the Prime Number theorem, $$\begin{aligned} \sum_{a < p \leq ca} g(p) &\leq B\left(\pi(ca) - \pi(a)\right) = B\left(\frac{ca}{\log a + \log c} - \frac{a}{\log a} + O\left(\frac{a}{\log^2 a}\right)\right) \\ &= B(c-1)\frac{a}{\log a}\left(1 + O_c\left(\frac{1}{\log a}\right)\right) = o_{B,c}(M_g(a)).\end{aligned}$$ Thus, $M_g(ca)-M_g(a) = \sum_{a < n \leq ca \atop a \text{ not prime}} g(n) + o_{B,c}(M_{g}(a))$. Call the sum in this last expression $S(a)$. We split $S(a) = S_1(a) + S_2(a)$, where $S_1(a)$ is supported by integers $n \in (a,ca]$ satisfying $P^+(n) \leq c$, where $P^+(n)$ denotes the largest prime factor of $n$, and $S_2(a)$ is supported on the complement of the support of $S_1(a)$. Now, since $g$ is strongly multiplicative and each $c$-smooth number has at most $\pi(c)$ distinct prime factors, $$S_1(a) = \sum_{a < n \leq ca \atop a \text{not prime}, P^+(n) \leq c} g(n) \leq B^{\pi(c)} \sum_{a < n \leq ca \atop P^+(n) \leq c} 1 =: B^{\pi(c)}\left(\Psi(ca,c)-\Psi(a,c)\right),$$ where $\Psi(u,v) := |\{n \leq u : P^+(n) \leq v\}|$. By a theorem of Ennola (see Theorem III.5.2 in [@Ten2]), we have, for $ c \ll 1$, $$\psi(u,c) = \frac{1}{\pi(c)!}\left(1+O_c\left(\frac{1}{\log u}\right)\right)\prod_{p \leq c} \frac{\log u}{\log p},$$ whence we have $$\psi(ca,c)-\psi(a,c) = \left(1+O_c\left(\frac{1}{\log a}\right)\right)\left(\left(1+\frac{\log c}{\log a} \right)^{\pi(c)}-1\right)\left(\prod_{p \leq c} \frac{\log a}{\log p}\right) \ll_c \log^{\pi(c)} a.$$ Hence, we have $S_1(a) \ll_c \frac{a}{\log a}$ easily.\ Now, for each $n$ in the support of $S_2(a)$ we can write $n = p^k m$ with $p > c$, $(m,p) = 1$, and $\frac{a}{p^k} < m \leq \frac{ca}{p^k} \leq a$ (if $g$ is only supported on squarefrees then assume $k = 1$). For each such $m$, we let $n_a(m)$ denote the number of choices of $n \in (a,ca]$ composite for which $m = \frac{n}{p^k}$, with $p^k || n$. We claim that $n_a(m) \leq D$ uniformly in $a$, where $D\ll_c 1$. The statement of the lemma will then follow because $$\sum_{a < n \leq ca \atop n \text{ not prime}, P^+(n) > c} g(n) \leq B\sum_{m \leq a} n_a(m)g(m) \leq DBM_g(a).$$ Assume for the sake of contradiction that $\limsup_{a {\rightarrow}\infty} \left(\max_{m \leq a} n_a(m)\right) = \infty$. Thus, for any $N$ and $a$ sufficiently large we can select an integer $m \leq a$ such that $n_a(m) := R \geq 2N+1$. 
Accordingly, there exist $R$ prime powers $p_1^{k_1} < \ldots < p_R^{k_R}$ for which $p_j^{k_j}m \in (a,ca]$. Let $r_{ij} := p_i^{k_i}p_j^{-k_j}$, with $i > j$. There are $\frac{1}{2}R(R-1) \geq 2N^2+1$ such ratios, and $r_{ij} \in (1,c]$. We split this latter interval into $N^2$ intervals $(1+\frac{(c-1)l}{N^2}, 1+\frac{(c-1)(l+1)}{N^2}] =: I_N(l)$. By the pigeonhole principle, there exists some $l_0$ for which there are two distinct pairs $(i_1,j_1)$ and $(i_2,j_2)$ such that $r_{i_1j_1} < r_{i_2j_2} \in I_N(l_0)$. Set $r := \frac{r_{i_2j_2}}{r_{i_1j_1}} \in \left(1,1+\frac{c-1}{N^2}\right)$. Assume for the moment that one of $k_{i_1},k_{i_2},k_{j_1},k_{j_2}$ is not divisible by some fixed prime $q$ (if $g$ is supported on squarefree integers then this is trivial since they are all 1 whenever $g(m) \neq 0$). Thus, let $\alpha_q := r^{\frac{1}{q}}$, which is an algebraic integer of degree $q$. Note that ${\mathbb}{Q}(\alpha_q)$ is a Kummer extension of degree at least 2 with abelian Galois group corresponding to multiplication by $q$th roots of unity, with minimal polynomial $x^q - r$. It follows that $|{\sigma}(\alpha_q)| = |\alpha_q|$ for each ${\sigma}\in \text{Gal}({\mathbb}{Q}(\alpha_q)/{\mathbb}{Q})$, and the Mahler measure of $x^q-r$ is $M(\alpha_q) = r$; thus, the Weil height of $\alpha_q$ is $\log\alpha_q = \frac{1}{q}\log r$. Now, by Dobrowolski’s theorem (see Section 4.4 in [@BoG]), $\log \alpha_q \geq \frac{1}{4q} \left(\frac{\log_2 3q}{\log 3q}\right)^3$, whence it follows that $\log r \geq \frac{1}{4}\left(\frac{\log_2 3q}{\log 3q}\right)^3$. Hence, for $q \ll 1$ as $a {\rightarrow}\infty$, we see that we can choose $N$ large enough so that $\frac{1}{N^2} < \frac{1}{8}\left(\frac{\log_2 3q}{\log 3q}\right)^3$. On the other hand, $\log r \leq \log\left(1+\frac{1}{N^2}\right) \leq \frac{2}{N^2}$, contradicting the conclusion of Dobrowolski’s theorem.\ Assume now that no such $2 \leq q \ll 1$ exists. Then $x^2 - r$ generates a quadratic extension for which $\log r \geq \frac{1}{2}\left(\frac{\log_2 6}{\log 6}\right)^3$, by Dobrowolski’s theorem (which holds for extensions of degree at least 2). The same contradiction as above follows. Let $L_g(u) := \sum_{n \leq u} \frac{g(n)}{n}$ and $P_g(u) := \prod_{p \leq u} \left(1+\frac{g(p)}{p}\right)$. The following lemma relating $L_g(u)$ and $P_g(u)$ is standard, but we prove it for completeness. It will be necessary for us in conjunction with Theorem \[LOWERMV\] in order to relate $M_{|g|}(x)$ with ${\mathcal}{G}({\sigma})$. \[LOGSUM\] Let $B \geq 1$ and let $g : {\mathbb}{N} {\rightarrow}[0,\infty)$ be a multiplicative function with $|g(p)| \leq B$ for each prime $p$, and such that $\sum_{p^k, k \geq 2} \frac{g(p^k)}{p^k} \ll_B 1$. Then for each $u$ sufficiently large, $L_g(u) \asymp_B P_g(u)$. The hypotheses of this lemma are clearly satisfied by a strongly or completely multiplicative function that is uniformly bounded on primes. Set $S := \sum_{p, k \geq 2} \frac{g(p^k)}{p^k}$. The upper bound follows immediately from $$\begin{aligned} L_g(u) &\leq \sum_{P^+(n) \leq u} \frac{g(n)}{n} = \prod_{p \leq u} \left(1+\frac{g(p)}{p} + \sum_{k \geq 2} \frac{g(p^k)}{p^k}\right) \leq P_g(u)\prod_{p \leq u}\left(1+\sum_{k \geq 2} \frac{g(p^k)}{p^k}\right) \leq e^SP_g(u). \end{aligned}$$ To derive a lower bound we can no longer bound the set of $n \leq u$ by those with largest prime factor less than $u$. 
Instead, we let $\kappa$ be a parameter to be chosen, and bound $L_g(u)$ from below by $P_g(u^{\kappa})$, since for $\kappa > 0$ this should be of the same order as $P_g(u)$. Precisely, $$L_g(u) \geq \sum_{n \leq u \atop P^+(n) \leq u^{\kappa}} \mu^2(n)\frac{g(n)}{n} = \sum_{P^+(n) \leq u^{\kappa}} \mu^2(n)\frac{g(n)}{n} - \sum_{n > u \atop P^+(n) \leq u^{\kappa}} \mu^2(n)\frac{g(n)}{n} = P_g(u^{\kappa}) - \sum_{n > u \atop P^+(n) \leq u^{\kappa}} \mu^2(n)\frac{g(n)}{n}.$$ We will also bound the second sum above by a multiple $P_g(u^{\kappa})$, using the condition $n > u$ to save a constant factor. Indeed, by Rankin’s trick, for ${\epsilon}> 0$ a second parameter to be chosen, $$\sum_{n > u \atop P^+(n) \leq u^{\kappa}} \mu^2(n)\frac{g(n)}{n} \leq u^{-{\epsilon}}\sum_{P^+(n) \leq u^{\kappa}} \mu^2(n) \frac{g(n)}{n^{1-{\epsilon}}} = u^{-{\epsilon}} \prod_{p \leq u^{\kappa}} \left(1+\frac{g(p)}{p}e^{{\epsilon}\log p}\right).$$ When ${\epsilon}$ is sufficiently small then this last factor is approximately $P_g(u^{\kappa})$. Indeed, observe that for each $p \leq u^{\kappa}$, $$\left(1+\frac{g(p)}{p}e^{{\epsilon}\log p}\right)\left(1+\frac{g(p)}{p}\right)^{-1} = 1+\frac{g(p)\left(e^{{\epsilon}\log p}-1\right)}{p+g(p)} \leq 1+2B{\epsilon}\frac{\log p}{p}.$$ so, setting ${\epsilon}:= \frac{1}{\kappa B \log u}$, we have $$P_g(u^{\kappa})^{-1}\prod_{p \leq u^{\kappa}} \left(1+\frac{g(p)}{p^{1-{\epsilon}}}\right) \leq \prod_{p \leq u^{\kappa}} \left(1+2B{\epsilon}\frac{\log p}{p}\right) \leq \exp\left(2B{\epsilon}\sum_{p \leq u^{\kappa}} \frac{\log p}{p}\right) = e + o(1).$$ It therefore follows that $$\sum_{n > u \atop P^+(n) \leq u^{\kappa}} \mu^2(n)\frac{g(n)}{n} \leq u^{-{\epsilon}}eP_g(u) = e^{1-\frac{1}{\kappa B}}P_g(u)(1+o(1)).$$ We now select $\kappa = \frac{1}{2B}$ if $B \geq \frac{1}{2}$, and $\kappa = \frac{1}{2}$ otherwise. Then $$L_g(u) \geq P_g(u^{\kappa}) \left(1-2e^{1-\frac{1}{\kappa B}}\right)P_g(u^{\kappa}) \geq (1-2e^{-1})P_g(u^{\kappa}),$$ for $u$ sufficiently large. Finally, to complete the proof it suffices to show that $P_g(u) \asymp P_g(u^{\kappa})$. Indeed, when $u > B^{2B}$, $$\frac{P_g(u)}{P_g(u^{\kappa})} = \prod_{u^{\kappa} < p \leq u} \left(1+\frac{g(p)}{p}\right) \leq \exp\left(B\sum_{u^{\kappa} < p \leq u} \frac{1}{p}\right) \leq e^{(B+1)\log\left(\frac{1}{\kappa}\right)} \leq e^{(B+1)\log (4\max\{B,1/2\})}.$$ Hence, we have $$L_g(u) \geq (1-2e^{-1})P_g(u)\left(\frac{P_g(u^{\kappa})}{P_g(u)}\right) \geq (1-2e^{-1})e^{-(B+1)\log(4\max\{B,1/2\})}P_g(u),$$ and the proof is complete. As a consequence of the lower bound in the previous lemma (which is the non-trivial part of it), we have the following. \[DENOMPARS\] Let $u \geq 3$ be sufficiently large and ${\sigma}:= 1+\frac{1}{\log u}$. Let $g$ satisfy the hypotheses of the Lemma \[LOGSUM\], let $G(s)$ be its Dirichlet series and assume that this converges absolutely in the half-plane $\text{Re}(s) > 1$. Then $L_g(u) \gg_B G({\sigma})$. By Lemma \[LOGSUM\] we have $$\begin{aligned} L_g(u) &\gg_B \prod_{p \leq u} \left(1+\frac{g(p)}{p}\right) = \prod_{p} \left(1+\frac{g(p)}{p^{{\sigma}}}\right)\left(\prod_{p \leq u} \left(1+\frac{g(p)}{p}\right)\left(1+\frac{g(p)}{p^{{\sigma}}}\right)^{-1}\right)\prod_{p > u} \left(1+\frac{g(p)}{p}\right)^{-1} \\ &\geq \prod_p \left(1+\frac{g(p)}{p^{{\sigma}}}\right)\left(\prod_{p > u} \left(1+\frac{g(p)}{p}\right)^{-1}\right),\end{aligned}$$ the last estimate following because $g$ is non-negative and ${\sigma}> 1$. 
Now, $$\prod_{p > u} \left(1+\frac{g(p)}{p^{{\sigma}}}\right) \leq \exp\left(B\sum_{p > u} \frac{1}{p^{{\sigma}}}\right) \ll_B 1,$$ by . Also, $$G({\sigma})\prod_p \left(1+\frac{g(p)}{p^{{\sigma}}}\right)^{-1} \leq \prod_p \left(1+\sum_{\nu \geq 2} \frac{g(p^{\nu})}{p^{\nu{\sigma}}}\right) \ll_B 1.$$ It follows that $$L_g(u) \gg_B G({\sigma})\left(G({\sigma})\prod_p \left(1+\frac{g(p)}{p^{{\sigma}}}\right)^{-1}\right)^{-1} \gg_B G({\sigma}),$$ as claimed. Arithmetic Estimates ==================== Lower Bounds for $M_{|g|}$ -------------------------- In this section we bound $M_{|g|}(t)$ from below. Along the way, our estimate will essentially suffice to prove Theorem \[LOWERMV\].\ We will need to relate $N_{|g|}(t)$ to an integral average of itself on a short interval. This same method, used in the next section as well to deal with $N_h(t)$ with $h(n) := g(n)-A|g(n)|$, will permit us a passage towards harmonic analytic methods. The next lemma will be essential in this regard. \[STRONG\] Suppose $g$ is strongly multiplicative. Then $N_g(x) = \sum_{d \leq x} \Lambda(d)g(d)M_g(x/d)$ for any $x \geq 1$. By definition, we have $$N_g(x) = \sum_{n \leq x} g(n)\log(n) = \sum_{d \leq x} \Lambda(d) \sum_{m\leq x/d} g(md).$$ Write $d = p^l$. Then $$\sum_{m \leq x/p^l} g(p^lm) = \sum_{t \geq l} \sum_{p^tm \leq x \atop p\nmid m} g(p^tm) = \sum_{t \geq l} g(p^t)\sum_{p^tm \leq x \atop p\nmid m} g(m) = g(p)\sum_{t\geq l} \sum_{m \leq x/p^l \atop p^{t-l}||m} g(m) = g(p)\sum_{m \leq x/p^l} g(m),$$ so that $$\sum_{p^l \leq x} \Lambda(p^l) \sum_{m \leq x/p^l} g(mp^l) = \sum_{p^l \leq x} \Lambda(p^l) g(p)M_g(x/p^l).$$ The above proof is completely trivial when $g$ is completely multiplicative, because $g(p^tm) = g(p^t)g(m)$, regardless of $m$. \[COMPAVG\] Let $2 \leq y \leq t \leq x$ with $y = te^{-\log^c t}$ for $c < \frac{1}{2}$. For $A \in {\mathbb}{C}$ with $|A| \in [0,1]$ let $h(n) := g(n)-A|g(n)|$ and set $\mu:= \max_p |\text{arg}(g(p))|$. Then $$\begin{aligned} |N_h(t)| &= y^{-1}\int_{t-y}^t |N_h(u)| du + O_B\left(\frac{M_{|g|}(t)}{\log^2 t}\right)\\ &\leq y^{-1}\int_{t-y}^t du \left(\sum_{d \leq u} \Lambda(d)|g(d)||M_h(u/d)| + |A|\mu \sum_{d \leq u} \Lambda(d)|g(d)|M_{|g|}(u/d)\right) + O_B\left(\frac{M_{|g|}(t)}{\log^2 t}\right).\end{aligned}$$ When $A = 0$ then $h = g$, so this lemma reduces trivially to a statement about sums of $g$ as well. Observe that for $t-y < u \leq t$, Lemma \[SELBERG\] implies that $$\begin{aligned} ||N_h(u)|-|N_h(t-y)|| &\leq |N_h(u)-N_h(t-y)| = \left|\sum_{t-y < n \leq u} \left(g(n)-A|g(n)|\right)\log n\right| \\ &\leq 2(1+|A|)\left(\sum_{t-y < n \leq t} |g(n)| \right) \log t \\ &\asymp_B (1+|A|)\left(t\log^{B-1}t-(t-y)\log^{B-1}(t-y)\right)\log t \\ &\leq (1+|A|)\left(t\left(1+\frac{\log \left(1+\frac{y}{t-y}\right)}{\log t}\right)^{B-1} - t\right)\log^{B} t + y(1+|A|)\log^B t\\ &\leq (1+|A|)\left(t\left(1+2B\frac{y}{(t-y)\log t}\right) - t\right)\log^B t +y(1+|A|)\log^B t\ll y\log^B t, \end{aligned}$$ the second last inequality holding when $t$ is sufficiently large in terms of $B$. Hence, for each $u \in (t-y,t]$ and any $K > 2$, $$|N_h(u)| = |N_h(t)| + O\left(\frac{t}{\log^{K-1} t}\right) = |N_h(t)| + O\left(\frac{M_{|g|}(t)}{\log^2 t}\right),$$ upon taking $K = 4$ and using the lower bound from Lemma \[SELBERG\]. 
It then follows that $$\label{FIRST} |N_h(t)| = y^{-1}\int_{t-y}^{t} |N_h(u)| du + O\left(\frac{M_{|g|}(t)}{\log^2 t}\right).$$ Now, taking $u \in (t-y,t]$ and using $\log = 1 \ast \Lambda$, we have $$|N_h(u)| = \left|\sum_{d\leq u} \Lambda(d)\sum_{m \leq u/d} \left(g(md)-A|g(md)|\right) \right| = \left|\sum_{d \leq u} \Lambda(d) \sum_{m \leq u/d} g(md) - A\sum_{d\leq u} \Lambda(d)\sum_{m \leq u/d} |g(md)|\right|.$$ Note that both $g$ and $|g|$ are strongly multiplicative. Applying Lemma \[STRONG\], we get $$\begin{aligned} |N_h(u)| &= \left|\sum_{d \leq u} \Lambda(d) g(d) \sum_{m \leq u/d} g(m) - A\sum_{d\leq u} \Lambda(d)|g(d)|\sum_{m \leq u/d} |g(m)|\right| \\ &\leq \left|\sum_{d \leq u} \Lambda(d)g(d) \sum_{m \leq u/d} \left(g(m) -A |g(m)|\right)\right| + |A|\left|\sum_{d\leq u} \Lambda(d)(g(d)-|g(d)|)\sum_{m \leq u/d} |g(m)|\right| \nonumber\\ &\leq \sum_{d \leq u} \Lambda(d)|g(d)||M_h(u/d)| + |A|\sum_{d \leq u} \Lambda(d)|g(d)|\left|\frac{g(d)}{|g(d)|} - 1\right|M_{|g|}(u/d) \label{TWOSUMS}.\end{aligned}$$ As $\frac{g(p)}{|g(p)|} = e^{i\arg{g(p)}}$, the mean value theorem (of calculus) implies that if $d = p^k$ for some $k \in {\mathbb}{N}$, $$\left|\frac{g(d)}{|g(d)|} - 1\right| = \left|e^{i\arg(g(p))} - 1\right| = \left|\arg(g(p))\right| \leq \mu.$$ Inserting this estimate into , we get $$|N_h(u)| \leq \sum_{d \leq u} \Lambda(d)|g(d)||M_h(u/d)| + |A|\mu\sum_{d \leq u} \Lambda(d)|g(d)|M_{|g|}(u/d).$$ Inputting this estimate into for each $u \in (t-y,t]$ implies the claim. When $g$ is completely multiplicative, in which case $|g(d)/|g(d)| - 1| = |\text{arg}(g(p^k))|$ for $k$ possibly at least 2, we must split the sum as $$\label{COMPCHANGE} \sum_{d \leq u} \Lambda(d)|g(d)|\left|\frac{g(d)}{|g(d)|} - 1\right|M_{|g|}(u/d) \leq \mu\sum_{p \leq u} \Lambda(p) |g(p)|M_{|g|}(u/p) + \mu\sum_{k \geq 2} k\sum_{p^k \leq u} \Lambda(p^k)|g(p^k)|M_{|g|}(u/p^k). $$ We claim that we can bound the second sum by $\ll_B \mu M_{|g|}(u)\log u$. When $B \leq 1$, this can easily be verified since, by Lemma \[SELBERG\], $$\sum_{k \leq 2} k\sum_{p^k \leq u} \Lambda(p^k)|g(p^k)|M_{|g|}(u/p^k) \leq B^2u \sum_p \frac{\log p}{p^2} \sum_{k \geq 0}\frac{(k+2)B^k}{p^k} \ll B^2 u \ll_{\delta} B^2 M_{|g|}(u)\log^{1-\delta u}.$$ For $1 < B < 2$ we must work a little harder. Let $Q \geq 2$ be a parameter to be chosen. We split the second sum into sums over $[1,Q]$ and $[Q,x]$. In the first interval, using $M_{|g|}(u/d)|g(d)| \leq M_{|g|}(u)$, we have $$\sum_{p^k \leq Q \atop 2 \leq k \leq \log Q/\log 2} k\Lambda(p^k) |g(p^k)| M_{|g|}(u/p^k) \ll \left(\sum_{a \leq Q^{\frac{1}{2}}} \Lambda(a)\right)M_{|g|}(u)\log^2 Q \ll M_{|g|}(u) Q^{\frac{1}{2}} \log^2 Q.$$ In the second interval, we further split the sum over $Q < p^k \leq u$, according as $p \leq P_0$ or not, for $P_0 \geq 2$ a parameter to be chosen. In the first case, we have $$\sum_{Q < p^k \leq u \atop p \leq P_0, k \geq 2} k\Lambda(p^k) |g(p^k)| M_{|g|}(u/p^k) = \sum_{n \leq u} |g(n)| \sum_{p^k | n}^{\ast} k\log p,$$ where the asterisk indicates that the support of the sum consists of $p^k | n$ such that $Q < p^k \leq u$, $p \leq P_0$ and $k \geq 2$. 
Since $k \log p > \log Q$, each $n$ has at most $\frac{\log n}{\log Q}$ factors $p^k$ of the latter shape and it follows that $$\label{SECONDONE} \sum_{n \leq u} |g(n)| \sum_{p^k | n}^{\ast} k\log p \leq \frac{\log u}{\log Q} M_{|g|}(u).$$ In the remaining sum, we apply Lemma \[SELBERG\] and the elementary identity $\sum_{l \geq 1} lt^l = t/(1-t)^2$ to get $$\begin{aligned} &\sum_{Q < p^k \leq u \atop p > P_0, k \geq 2} k\Lambda(p^k) B^k M_{|g|}(u/p^k) \ll_B Bu\log^{B-1}(u/Q)\sum_{p} \frac{\log p}{p^2}\sum_{k \geq 1 \atop p^k > Q} k\frac{B^{k-1}}{p^{k-1}} \nonumber\\ &\ll Bu\log^{B-1}(u/Q)\sum_{p > P_0} \frac{\log p}{p}\sum_{k \geq \log Q/\log p} k\frac{B^k}{p^k} \nonumber\\ &\ll B^2 u \log^{B-1}(u/Q) \sum_{p > P_0} \frac{\log p}{(p-B)^2}\left(\frac{B}{p}\right)^{\frac{\log Q}{\log p}} \nonumber.\end{aligned}$$ The last expression is $$e^{-\frac{\log Q}{\log p}\left(\log p - \log B\right)} \leq e^{-\left(1-\frac{\log B}{\log P_0}\right)\log Q}.$$ Thus, $$\sum_{Q < p^k \leq u \atop k \geq 2, p > P_0} k\Lambda(p^k) B^k M_{|g|}(u/p^k) \ll_B B^2 u \log^{B-1}(u/Q)e^{-(1-\frac{\log B}{\log P_0})\log Q}.$$ Write $Q = \log^{2-r} u$, for $0 < r < 2$. Choosing $r > 0$ and $P_0$ such that $1+(2-r)\left(1-\frac{\log B}{\log P_0}\right) \geq B-\delta$, which is possible since $B-\delta < 2-\delta$, it follows that this last bound is $\ll B^2u \log^{1-\delta} u \ll B^2 N_{|g|}(u)\log^{-\delta}u$, as above (when we considered $B \leq 1$). Of course, with this choice of $Q$, we have $M_{|g|}(u) Q^{\frac{1}{2}}\log^2 Q \ll M_{|g|}(u)\log^{1-r/2 +o(1)}u$, and we may take $r = 1/2$, $P_0 = B^3$, and then the bound for the sum is $\ll_B M_{|g|}(u) \log u/\log_2 u$. All told, this indeed shows that $$\sum_{p^k \leq Q \atop k \geq 2} k\Lambda(p^k) |g(p^k)| M_{|g|}(u/p^k) \ll B^2 \mu M_{|g|}(u) \log u.$$ Thus, in the case of completely multiplicative functions we only add the term $\mu M_{|g|}(t)\log t$ when $u = t$. Once divided by $\log t$ in transitioning from $|M_{h}(t)|$ to $|N_h(t)|$ (see Lemma \[NUM\]) and by $M_{|g|}(t)$ when calculating the ratio $|M_h(t)/M_{|g|}(t)|$, this error term has the same order of magnitude $\mu$ as what results in the strongly multiplicative case (see Proposition \[STARTER\]). We next bound $M_{|g|}$ from below by an integral of itself on a longer interval. It turns out that this latter integral, in turn, is bounded below by ${\mathcal}{G}({\sigma})$, which will be of use in later sections. \[DENOM\] Let $t$ be sufficiently large, $g: {\mathbb}{N} {\rightarrow}{\mathbb}{C}$ and given $S := \{p : g(p) = 0\}$, let $P_t := \prod_{p \leq t \atop p \in S} p$. Suppose that $P_t \ll_{\alpha} t^{\alpha}$ for any $\alpha > 0$. Then $$M_{|g|}(t) \gg_{B,r} \delta \frac{\phi(P_t)}{P_t}\frac{t}{\log t} \int_1^{t} \frac{M_{|g|}(u)}{u^2} du. $$ Write $\Delta_{f}(u) := M_{f}(u)\log u - N_{f}(u)$, for $u \in (1,t]$ and an arithmetic function $f$. By definition, we have $$\Delta_{|g|}(u) = \sum_{n \leq u} |g(n)|\log(u/n) = \sum_{n \leq u} |g(n)|\int_n^u \frac{dv}{v} = \int_1^u \frac{M_{|g|}(v)}{v}dv. \label{EXACT}.$$ Now, for $u \leq t$ fixed but arbitrary and $u_1 \leq \frac{u}{\log^{K_1}u}$ for some $K_1 > 0$ to be chosen, we have $$\Delta_{|g|}(u) \leq M_{|g|}(u_1)\int_1^{u_1} \frac{dv}{v} + M_{|g|}(u) \int_{u_1}^u \frac{dv}{v} \leq M_{|g|}(u)\log u \left(\frac{M_{|g|}(u_1)}{M_{|g|}(u)} + \left(1-\left(\frac{\log u_1}{\log u}\right)\right)\right). 
\label{DELTBOUND}$$ By Lemma \[SELBERG\], it follows that $$\frac{M_{|g|}(u_1)}{M_{|g|}(u)} \ll_{B,\delta} \frac{u_1\log^{B-1}u_1}{u}\log^{1-\delta} u \ll \log^{B-\delta-K_1} u.$$ Also, $1-\frac{\log u_1}{\log u} \leq K_1\frac{\log_2 u}{\log u}$. Hence, choosing $K_1 := B+2$, we have $$N_{|g|}(u) \leq M_{|g|}(u)\log u\left(1+ 2\frac{\log_2 u}{\log u}\right) \leq 2M_{|g|}(u) \log u$$ for $u > e^{5}$.\ Furthermore, given $y = te^{-\log^{c} t}$ for any $0 < c < \frac{1}{2}$ then by Lemma \[COMPAVG\] we have, uniformly in $t$, $$M_{|g|}(t) \gg \frac{1}{\log t}N_{|g|}(t) = \frac{1}{y\log t}\int_{t-y}^{t} N_{|g|}(u) du +O\left(\frac{M_{|g|}(t)}{\log^3 t}\right)\geq \frac{t^2}{y\log t} \int_{t-y}^t \frac{N_{|g|}(u)}{u^2} du +O\left(\frac{M_{|g|}(t)}{\log^3 t}\right), \label{INTAVGLOW}$$ the second estimate coming from with $A = 0$ and $|g|$ in place of $g$. We exploit this integral average as follows. Applying Lemma \[STRONG\], we get $$\begin{aligned} \int_{t-y}^t \frac{N_{|g|}(u)}{u^2} du &= \int_{t-y}^t\frac{du}{u^2}\left(\sum_{m \leq u} |g(m)|\log m\right) = \int_{t-y}^t \frac{du}{u^2} \left(\sum_{a \leq u} \Lambda(a)|g(a)| \sum_{m \leq u/a} |g(m)|\right) \\ &= \sum_{a \leq t} |g(a)|\frac{\Lambda(a)}{a}\int_{(t-y)/a}^{t/a} \frac{du}{u^2}M_{|g|}(u) \geq \delta \int_1^{t} \frac{M_{|g|}(u)}{u^2} du \left(\sum_{(t-y)/u < a \leq t/u \atop (a,P_t) = 1} \frac{\Lambda(a)}{a}\right),\end{aligned}$$ the last inequality following by reordering summation and integration (noting that $(t-y)/a < u \leq t/a$ if, and only if, $(t-y)/v < a \leq t/v$) and using $|g(n)|\Lambda(n) \geq \delta \Lambda(n)$. Now, as $y = te^{-\log^{c}t}$ with $c < \frac{1}{2}$ we may apply ii) of Lemma \[PRECMERTENS\] (noting that $\log P_te^{-\theta \sqrt{\log t}} = o_r(1)$ , so that $$M_{|g|}(t) \gg_{B,r} \frac{\phi(P_t)}{P_t}\frac{t^2}{y \log t}\left(\int_{t-y}^t \frac{N_{|g|}(u)}{u^2} du\right)\gg \delta \frac{t}{\log t}\int_{1}^t \frac{M_{|g|}(u)}{u^2} du. $$ \[REMSQFSUPP\] For the purposes of the proof of Theorem \[LOWERMV\], we require a version of Lemma \[DENOM\] that holds when $g$ is squarefree-supported. In this case, Lemma \[STRONG\] no longer holds, and our argument must change slightly at the juncture at which this lemma is quoted. Indeed, we have $$\begin{aligned} \int_{t-y}^t \frac{N_{|g|}(u)}{u^2} du &= \int_{t-y}^t\frac{du}{u^2}\left(\sum_{m \leq u} |g(m)|\log m\right) = \int_{t-y}^t \frac{du}{u^2} \left(\sum_{a \leq u} \Lambda(a)|g(a)| \sum_{m \leq u/a \atop (m,a) = 1} |g(m)|\right) \\ &= \sum_{a \leq t} |g(a)|\frac{\Lambda(a)}{a}\int_{(t-y)/a}^{t/a} \frac{du}{u^2}\sum_{m \leq u \atop (m,a) = 1} |g(m)| \geq \delta \int_1^{t} \frac{du}{u^2} \sum_{m \leq u} |g(m)|\left(\sum_{(t-y)/u < a \leq t/u \atop (a,P_tm) = 1} \frac{\Lambda(a)}{a}\right).\end{aligned}$$ Proceeding as above and using the inequality $\frac{\phi(ab)}{ab} \geq \frac{\phi(a)}{a}\frac{\phi(b)}{b}$ for $a = P_t$ and $b = m$ and each $m \leq u$, it follows that $$\int_{t-y}^t \frac{N_{|g|}(u)}{u^2} du \gg \delta \frac{\phi(P_t)}{P_t}\frac{y}{t}\int_1^{t} \frac{M_{g'}(u)}{u^2} du,$$ where we have set $g'(m) := g(m)\phi(m)/m$. Note that this argument actually works for any multiplicative $g$. \[CHEAP\] Let $t \geq 2$ and ${\sigma}:= 1+\frac{1}{\log t}$. Then $\int_1^t \frac{M_{|g|}(u)}{u^2} du \gg_B L_{|g|}(t) \gg_B {\mathcal}{G}({\sigma})$. 
By partial summation, we have $$\sum_{n \leq t} \frac{|g(n)|}{n} = \frac{M_{|g|}(t)}{t} + \int_1^t \frac{M_{|g|}(v)}{v^2}dv.$$ Now, observe that $$t\int_{t/2}^t \frac{M_{|g|}(v)}{v^2} dv \geq \int_{t/2}^t \frac{M_{|g|}(v)}{v} dv \geq M_{|g|}(t/2)\log 2 \geq \frac{\log 2}{1+DB} M_{|g|}(t),$$ the last estimate following by Lemma \[DobApp\] with $c = 2$ and some $D > 0$. Thus, $$\sum_{n \leq t} \frac{|g(n)|}{n} \geq \frac{M_{|g|}(t)}{t} + \frac{\log 2}{1+DB} \frac{M_{|g|}(t)}{t} = \left(1+\frac{\log 2}{1+DB}\right)\frac{M_g(t)}{t},$$ and thus $$\int_1^t \frac{M_{|g|}(v)}{v^2}dv \geq \left(1-\frac{1+DB}{1+DB+\log 2}\right)L_{|g|}(t). \label{LOWLg}$$ Since $L_{|g|}(t) \gg_B {\mathcal}{G}({\sigma})$ by Lemma \[DENOMPARS\], the claim follows. Note that the proofs of the last lemma above work equally well when $g$ is a squarefree-supported multiplicative function since Lemma \[DobApp\] holds. With this observation and Remark \[REMSQFSUPP\], Theorem \[LOWERMV\] follows immediately. Let $\lambda$ be a non-negative, multiplicative function satisfying $\delta \leq \lambda(p) \leq B$ for all primes $p$. Let $\lambda'(m) := \mu^2(m)\lambda(m)\phi(m)/m$. Combining Lemma \[CHEAP\] with Remark \[REMSQFSUPP\] gives $$\begin{aligned} M_{\lambda}(t) &\geq \sum_{n \leq x} \mu^2(n)\lambda(n) \gg_B \delta \frac{\phi(P_t)}{P_t}\frac{t}{\log t} \int_1^t \frac{M_{\lambda'}(u)}{u^2} du \gg_B \delta \frac{\phi(P_t)}{P_t}\frac{t}{\log t} \sum_{n \leq t} \frac{\lambda'(n)}{n} \\ &\gg_B \delta \frac{\phi(P_t)}{P_t}\frac{t}{\log t} \prod_{p \leq t} \left(1+\frac{\lambda(p)(p-1)}{p^2}\right) \geq \delta \frac{\phi(P_t)}{P_t}\frac{t}{\log t} \prod_{p \leq t} \left(1+\frac{\lambda(p)}{p}\right)\left(1-\frac{\lambda(p)}{p^2}\right) \\ &\gg_B \delta \frac{\phi(P_t)}{P_t}\frac{t}{\log t} \prod_{p \leq t} \left(1+\frac{\lambda(p)}{p}\right),\end{aligned}$$ the last estimate coming from Lemma \[LOGSUM\] and the convergence of the product $\prod_{p} \left(1-\frac{\lambda(p)}{p^2}\right)$. Upper Bounds for $|M_h|$ ------------------------ The following constitutes an analogue of Lemma \[DENOM\] for $M_h(t)$. Our upper bound integral will instead be dealt with via Parseval’s theorem in Section 6. \[NUM\] Let $t$ be sufficiently large, let $|A| \in [0,1]$, and put $h(n):= g(n)-A|g(n)|$ and $\mu := \max_{p} \text{arg}(g(p))$. Set $R_h(\beta) := \max_{2 \leq u \leq x} \frac{|M_h(u)|}{M_{|g|}(u)} \log^{\beta} u$, for $\beta > 0$. Then for any $\lambda > 0$ and $\kappa \in (0,1)$, $$\begin{aligned} |M_h(t)| &\ll_B B\frac{t}{\log t}(\frac{1}{(1-\kappa)\log t}\int_{t^{\kappa}}^{t} \frac{|M_h(u)|\log u}{u^{2}}du +R_h(\lambda)\left(\int_1^{t^{\kappa}} \frac{M_{|g|}(u)}{u^2\log^{\lambda}(3u)} du +M_{|g|}(t)\frac{\log_2 t}{\log^{\lambda} t}\right) \\ &+ |A|\mu\int_1^{t} \frac{M_{|g|}(u)}{u^{2}} du +\frac{M_{|g|}(t)}{\log^2 t}). $$ First, using , we can write $$\begin{aligned} |\Delta_{h}(t)|&= \left|\int_1^t \frac{M_{h}(u)}{u}du\right| \leq \int_1^t \frac{|M_h(u)|}{u}du \leq R_h(\lambda)\int_1^t du\frac{M_{|g|}(u)}{u\log^{\lambda} u} du.\end{aligned}$$ Arguing as in , $$\int_1^t \frac{M_{|g|}(u) }{u\log^{\lambda} u} du \ll (B+1)M_{|g|}(t)\frac{\log_2 t}{\log^{\lambda} t}, \label{REMAINDER}$$ whence follows the estimate $|\Delta_{h}(t)| \leq (B+1)R_h(\lambda)M_{|g|}(t)\frac{\log_2 t}{\log^{\lambda} t}$.\ We now consider $M_h(t)$. Using the above observation, $$|M_h(t)| = \frac{1}{\log t} |N_h(t)-\Delta_{h}(t)| \leq \frac{1}{\log t}|N_h(t)| + (B+1)R_h(\lambda)M_{|g|}(t)\frac{\log_2 t}{\log^{1+\lambda} t}. 
\label{GAPMN}$$ Now, as before, take $y = te^{-\log^{c} t}$ with $0 < c < \frac{1}{2}$. Applying Lemma \[COMPAVG\], we have $$\begin{aligned} &|N_{h}(t)| \leq y^{-1}\int_{t-y}^{t} |N_{h}(u)| du + O\left(\frac{M_{|g|}(t)}{\log^2 t}\right) \leq \frac{t^2}{y} \int_{t-y}^{t} \frac{|N_{h}(u)|}{u^2} du + O\left(\frac{M_{|g|}(t)}{\log^2 t}\right) \nonumber\\ &\leq \frac{t^2}{y}\left(\sum_{a \leq t} \Lambda(a)|g(a)|\int_{t-y}^{t} \frac{du}{u^2} |M_h(u/a)| + |A|\mu \sum_{a \leq t} \Lambda(a)|g(a)|\int_{t-y}^t \frac{du}{u^2} M_{|g|}(u/a)\right) + O\left(\frac{M_{|g|}(t)}{\log^2 t}\right) \nonumber\\ &=: \frac{t^2}{y}(I_1+|A|\mu I_2) + O\left(\frac{M_{|g|}(t)}{\log^2 t}\right) \label{SETUP1} $$ Let $Y := t^{\kappa}$. We divide the sum over $a$ in $I_1$ into the segments $[1,Y]$ and $[Y,t]$. Over the first segment, we make the substitution $v := u/a$ and insert a logarithmic factor, giving $$\begin{aligned} &\sum_{a \leq Y} \Lambda(a)|g(a)| \int_{t-y}^t \frac{du}{u^2}|M_h(u/a)| \leq B\sum_{a \leq Y} \frac{\Lambda(a)}{a}\int_{(t-y)/a}^{t/a} \frac{du}{u^2}|M_h(u)| \label{COMPGEN}\\ &\leq B\sum_{a \leq Y} \frac{\Lambda(a)}{a\log((t-y)/a)}\int_{(t-y)/a}^{t/a} \frac{du}{u^2}|M_h(u)|\log u \nonumber\\ &\ll B\frac{1}{\log t}\int_{(t-y)/Y}^{t} \frac{du}{u^2}|M_h(u)|\log u \sum_{(t-y)/v < a \leq t/v} \frac{\Lambda(a)}{a}\left(1-\frac{\log a}{\log t}\right)^{-1} \nonumber\\ &\leq B\frac{1}{(1-\kappa)\log t}\int_{(t-y)/Y}^{t} \frac{du}{u^2}|M_h(u)|\log u \sum_{(t-y)/v < a \leq t/v} \frac{\Lambda(a)}{a} \nonumber\\ &\ll B\frac{y}{(1-\kappa)t\log t} \int_{t^{\kappa}}^t \frac{du}{u^2}|M_h(u)| \log u, \nonumber\end{aligned}$$ the last estimate following from Lemma \[PRECMERTENS\]. Over the second segment, we simply have $$\begin{aligned} &\sum_{Y< a \leq t} \Lambda(a)|g(a)| \int_{t-y}^t \frac{du}{u^2}|M_h(u/a)| \leq B\sum_{Y < a \leq t} \frac{\Lambda(a)}{a}\int_{(t-y)/a}^{t/a} \frac{du}{u^2}|M_h(u)| \\ &= B\int_1^Y \frac{du}{u^2}|M_h(u)| \sum_{(t-y)/v < n \leq t/v} \frac{\Lambda(a)}{a} \ll B\frac{y}{t}\int_1^Y \frac{du}{u^2}|M_h(u)|.\end{aligned}$$ We estimate $I_2$ in the same as with the second segment above, thus $$\begin{aligned} I_2 &\leq B\sum_{a \leq t}\frac{\Lambda(a)}{a} \int_{(t-y)/a}^{t/a} \frac{dv}{v^2}M_{|g|}(v) = B\int_{1}^{t} \frac{dv}{v^2}M_{|g|}(v)\left(\sum_{(t-y)/v<a \leq t/v} \frac{\Lambda(a)}{a}\right) \asymp B \frac{y}{t}\int_1^{t} \frac{dv}{v^2}M_{|g|}(v).\end{aligned}$$ It follows that $$\begin{aligned} \int_{t-y}^t \frac{|N_{h}(u)|}{u^2} du &\ll B \frac{y}{t}(\frac{1}{(1-\kappa)\log t}\int_{t^{\kappa}}^{t} \frac{|M_{h}(u)|\log u}{u^2} du+R_h(\lambda)\int_1^{t^{\kappa}} \frac{M_{|g|}(u)}{u^2\log^{\lambda}(3u)}du \nonumber\\ &+ |A|\mu \int_1^{t} \frac{M_{|g|}(u)}{u^2}du) \label{SETUP2}, $$ which, when combined with , gives the claim. \[NUMCOMP\] The above argument can be modified to deal with completely multiplicative functions $g$ as well, provided we take $B < 2$. The lone emendment must be made in the inequality in and the corresponding estimate on the interval $[Y,t]$ (which proceeds similarly), and these modifications can be made in the following way. 
Splitting the sum into pieces with $a$ prime and $a$ a prime power with multiplicity at least 2, and applying Cauchy-Schwarz to the second, we have $$\begin{aligned} &\sum_{a \leq Y} \frac{\Lambda(a)}{a}|g(a)|\int_{(t-y)/a}^{t/a} \frac{du}{u^2} |M_h(u)| \leq B\sum_{p \leq y} \frac{\Lambda(p)}{p}\int_{(t-y)/p}^{t/p} \frac{du}{u^2}|M_h(u)| + \sum_{p^k \leq y \atop k \geq 2} \Lambda(p) \left(\frac{B}{p}\right)^k \int_{(t-y)/p^k}^{t/p^k} \frac{du}{u^2}|M_h(u)| \\ &\leq B\sum_{a \leq Y} \frac{\Lambda(a)}{a}\int_{(t-y)/a}^{t/a} \frac{du}{u^2}|M_h(u)| + \left(\sum_{p, k\geq 2} \log^2 p \left(\frac{B}{p}\right)^k\right)^{\frac{1}{2}} \left(\sum_{p^k \leq Y \atop k \geq 2} \frac{\Lambda(p)^2}{p^k}B^k \left(\int_{(t-y)/p^k}^{t/p^k} \frac{du}{u^2}|M_h(u)|\right)^2 \right)^{\frac{1}{2}} \\ &\leq B\sum_{a \leq Y} \frac{\Lambda(a)}{a}\int_{(t-y)/a}^{t/a} \frac{du}{u^2}|M_h(u)| + 2B^2\left(\sum_{p} \frac{\log^2 p}{p(p-B)}\right)^{\frac{1}{2}} \sum_{a \leq Y} \frac{\Lambda(a)}{a}\left(\int_{(t-y)/a}^{t/a} \frac{du}{u^2}|M_h(u)|\right),\end{aligned}$$ where, in the last inequality, we used the fact that if $k/2$ is a half-integer (necessarily strictly greater than 1) then the corresponding term in $p^{k}$ is smaller than that for $p^{2{\left\lfloor}k/2{\right\rfloor}}$ (since the integral is longer in the latter case, and $(B/p)^k \leq (B/p)^{2{\left\lfloor}k/2 {\right\rfloor}}$ for all $p$). \[REMGEN\] It should be noted that all of the arguments in this section remain valid if we replace the function $h(n) := g(n)-A|g(n)|$ with the function $h(n) := a(n)-Ab(n)$, where $a$ and $b$ are real-valued, strongly or completely multiplicative functions such that $|a(n)| \leq b(n)$, $\delta \leq |a(p)| \leq b(p) \leq B$ (with $B < 2$ in case $g$ is completely multiplicative) and $\left|\frac{a(p)}{b(p)} -1 \right| \leq \eta$. This will be of value in the proof of ii) in Theorem \[WIRSINGEXT\]. Analytic Estimates ================== In this section, we collect estimates for several line integrals related to the integral bounds in the previous section. Throughout, for ${\sigma}> 1$ the operator $\int_{({\sigma})}$ denotes integration on the line $\{s \in {\mathbb}{C} : \text{Re}(s) = {\sigma}\}$.\ On occasion, we will find it convenient to estimate $G(s)$ directly in terms of the value of $g$ at primes. To this end, we prove the following simple result. It appears in [@Ell] in a different form, so we prove it in our context for completeness. \[StrongComp\] Let $g : {\mathbb}{N} {\rightarrow}{\mathbb}{C}$ be a strongly multiplicative function for which there is a $B \in (0,\infty)$ such that $|g(p)| \leq B$ for all primes $p$, and let $G$ be its Dirichlet series, valid for $\text{Re}(s) > 1$. Then we can write $$G(s) = G_0(s)\exp\left(\sum_p g(p)p^{-s}\right), \label{USESTRONG}$$ where $G_0(s)$ is a Dirichlet series converging absolutely and uniformly in the half-plane ${\sigma}> \frac{1}{2}$, and such that, uniformly in $\tau \in {\mathbb}{R}$ and ${\sigma}> 1$, $|G_0(s)| \leq 2e^{B(B+1)}\log(1+B)^{4B}$.\ In several lemmata to follow, we employ a factorization of the type $A'(s) = \frac{A'(s)}{A(s)}\cdot A(s)$, where $A = G$ or ${\mathcal}{G}$. Since we can always exclude measure-zero sets from our integrals, we may ignore each case in which $G(s)$ and/or ${\mathcal}{G}(s)$ is non-zero, since $G$ and ${\mathcal}{G}$ are holomorphic on the half-plane $\text{Re}(s) > 1$, and thus their zero sets are discrete. 
It will, of course, still be necessary to show that the resulting factored expression is still bounded in neighbourhoods of these zeros, and we shall do this in what follows. Set $G_0(s) := G(s)\exp\left(-\sum_p g(p)p^{-s}\right)$ and estimate $G_0$. First, for ${\sigma}> 1$ and $s = {\sigma}+ i\tau$ fixed, we write $$A(s) := \prod_{p \leq B^{\frac{1}{{\sigma}}} + 1} \left(1+\frac{g(p)}{p^s-1}\right)\exp\left(-g(p)p^{-s}\right),$$ so that by definition, $$G_0(s) = A(s) \left(\prod_{p > B^{\frac{1}{{\sigma}}}+1} \left(1+\frac{g(p)}{p^s-1}\right)e^{-g(p)p^{-s}}\right) = A(s) \exp\left(\sum_{p > B^{\frac{1}{{\sigma}}}+1} \left(\log\left(1+\frac{g(p)}{p^s-1}\right) - g(p)p^{-s}\right)\right).$$ In particular, when $p > B^{\frac{1}{{\sigma}}} + 1$ we have $|\frac{g(p)}{p^s-1}| \leq \frac{|g(p)|}{p^{{\sigma}}-1} < 1$, whence, Taylor expanding the factor $\log\left(1+\frac{g(p)}{p^s-1}\right)$ for each $p$, we get $$\begin{aligned} \log\left(1+\frac{g(p)}{p^s-1}\right) - g(p)p^{-s} &= \left(\log\left(1+\frac{g(p)}{p^s-1}\right) - \frac{g(p)}{p^{s}-1}\right) + \left(\frac{g(p)}{p^s-1} - \frac{g(p)}{p^s}\right) \label{Str}\\ &= -\frac{g(p)^2}{(p^{s}-1)^2}\sum_{l \geq 2} (-1)^{l} \frac{1}{l}\left(\frac{g(p)}{p^s-1}\right)^{l-2} + \frac{g(p)}{p^s(p^s-1)} \nonumber.\end{aligned}$$ Summing all of these terms and estimating the result gives $$\begin{aligned} \left|\sum_{p > B^{\frac{1}{{\sigma}}}+1} \left(\log\left(1+\frac{g(p)}{p^s-1}\right) - g(p)p^{-s}\right)\right| &\leq B(B+1)\sum_{p > B^{\frac{1}{{\sigma}}}+1} \frac{1}{p^{{\sigma}}(p^{{\sigma}}-1)}. $$ This estimate implies immediately that the second factor in the definition of $G_0(s)$ converges absolutely and uniformly for ${\sigma}> \frac{1}{2}$. Clearly, as $A(s)$ is a finite product of functions that are holomorphic in the half-plane $\text{Re}(s) > 0$, $G_0(s)$ must also converge absolutely and uniformly in the half-plane ${\sigma}> \frac{1}{2}$, and thus be holomorphic there as well. Moreover, for ${\sigma}> 1$, $\sum_p \frac{1}{p^{{\sigma}}(p^{{\sigma}}-1)} \leq \sum_{n \geq 2} \frac{1}{n(n-1)} = 1$, so that $\left|G_0(s)\right| \leq e^{B(B+1)}|A(s)|$. In the expression $A(s)$, we can simply bound trivially via $$\left|1+\frac{g(p)}{p^s-1}\right| \leq 1+\frac{B}{p^{{\sigma}}-1}\leq 1+\frac{2B}{p^{{\sigma}}},$$ which gives $$|A(s)| \leq \exp\left(2B\sum_{p \leq B^{\frac{1}{{\sigma}}}+1} \frac{1}{p^{{\sigma}}}\right) \leq 2\log(1+B)^{4B}.$$ Combining these two estimates implies the claim. A similar argument holds for $g$ completely multiplicative, with $B < 2$. While the form of $G_0(s)$ differs, the analytic estimates are essentially identical (dealing with factors $(1-g(p)/p^s)^{-1}$ instead of $1+g(p)/(p^s-1)$). It will be helpful to treat integrals in $G(s)$ in terms of those same integrals in ${\mathcal}{G}(s)$ without losing the effect of $\tau$ in the integral, since the latter has non-negative coefficients. The following tool is helpful in this regard. \[MODMONT\] Let $A(s) := \sum_{n \geq 1} a_nn^{-s}$ be a Dirichlet series such that there exists a non-negative real sequence $\{b_n\}_n$ such that $|a_n| \leq b_n$ for all $n \in {\mathbb}{N}$. Furthermore, let $B(s) := \sum_{n \geq 1} b_nn^{-s}$ and suppose it converges absolutely and uniformly in the half-plane ${\sigma}> {\sigma}_0$. Then for any $T \geq 0$ and ${\sigma}> {\sigma}_0$, $$\int_{-T}^T |A({\sigma}+ i\tau)|^2 |ds| \leq 3\int_{-T}^{T} |B({\sigma}+i\tau)|^2 |ds|.$$ This result, due essentially to Montgomery, is Lemma 6.1 in III.4 of [@Ten2]. 
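To illustrate the way in which we intend to use this majorant principle, we record the following special case; it is given only for orientation and is not the precise form in which the lemma is applied below. Let $g$ be as in Lemma \[StrongComp\] and let $G$ and ${\mathcal}{G}$ denote the Dirichlet series of $g$ and $|g|$, respectively. Taking $a_n := g(n)$ and $b_n := |g(n)|$ (so that $|a_n| \leq b_n$ trivially), and noting that ${\mathcal}{G}$ converges absolutely and uniformly in each half-plane ${\sigma}> {\sigma}_0$ with ${\sigma}_0 > 1$, Lemma \[MODMONT\] gives, for every ${\sigma}> 1$ and $T \geq 0$, $$\int_{-T}^T |G({\sigma}+ i\tau)|^2 d\tau \leq 3\int_{-T}^{T} |{\mathcal}{G}({\sigma}+ i\tau)|^2 d\tau.$$ Bounds of this shape allow us to pass from moments of the Dirichlet series of the complex-valued function $g$ on vertical segments to moments of the Dirichlet series of $|g|$, whose coefficients are non-negative.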
In Lemma \[SHARPH\] below we will need to know that $G_0(s)$ and ${\mathcal}{G}_0(s)$ are non-zero for $|\tau|$ small on the line $\text{Re}(s) = {\sigma}$. To this end, we have the following. \[ZEROLEM\] Let $B > 0$. Suppose $g$ is strongly multiplicative satisfying $|g(p)| \leq B$ for each $p$. Suppose, moreover, that $|\text{arg}(g(p))| < 1$ for each $p$. If $B \leq 1$ then $G({\sigma}+ i\tau) \neq 0$ for every ${\sigma}> 1$, $\tau \in {{\mathbb}{R}}$. If $B > 1$ and $G({\sigma}+ i\tau) = 0$ then $|\tau| \geq \frac{{\sigma}}{\log B}$, and the same is true of the Dirichlet series ${\mathcal}{G}(s) := \sum_{n \geq 1} \frac{|g(n)|}{n^s}$. As $G(s) = \prod_p \left(1+\frac{g(p)}{p^s-1}\right)$, this can only be zero if there exists some $p$ such that $1-g(p) = p^s$, i.e., $p^{{\sigma}} \leq |g(p)|+1 \leq B+1$; it thus suffices to consider $p < B^{\frac{1}{{\sigma}}} + 1$. Clearly, when $B\leq 1$ this is impossible. Thus, consider $B > 1$, and suppose that $|\tau| < \frac{{\sigma}}{\log B}$. Then, for $p$ such that $g(p) = 1-p^s$, we must have $$1-\text{Re}(g(p)) = p^{{\sigma}}\cos(|\tau| \log p) \geq p^{{\sigma}}\cos\left(\frac{{\sigma}\log p}{\log B}\right) \geq \frac{1}{\sqrt{2}}p^{{\sigma}}.$$ Clearly, since $|\arg(g(p))| < 1$, $\text{Re}(g(p)) > 0$, which implies that $\frac{1}{\sqrt{2}} \leq \frac{1}{p}$ for some prime $p$, a clear contradiction. The first claim follows. The second one is immediate by applying the first claim to $|g|$. When $g$ is completely multiplicative and $|g(p)| < 2$ uniformly, $1-g(p)/p^s$ can never be zero and always has bounded norm, so no such lemma is needed in this case. In the proof of part i) of Theorem \[HalGen\], we will need to compute the ratio of $\int_{({\sigma})} |G'(s)|^2 \frac{|ds|}{|s|^2}$ with ${\mathcal}{G}({\sigma})^2$, coming from Lemma \[CHEAP\]. An essential ingredient in this computation is an estimate in $\tau$, uniform over a large range, for the ratio of $\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|$. This provides the main term in the estimate in and . We establish an estimate for this ratio in the following two lemmata. The first is geared towards , the second towards .\ We recall that $\tilde{g}$ is defined on primes by $\tilde{g}(p) := g(p)/B_j$ in case $|g(p)| \leq B_j$ whenever $p \in E_j$, and that $$\rho_{E_j}(x;\tilde{g},T) := \min_{|\tau| \leq T} D_{E_j}(\tilde{g},p^{i\tau};x) := \min_{|\tau| \leq T} \left(\sum_{p \in E_j \atop p \leq u} \frac{1-\text{Re}\left(\tilde{g}(p)p^{-i\tau}\right)}{p}\right),$$ for $T > 0$ and $x \geq 2$. \[EASYHalDecay\] Let ${\sigma}:= 1+ \frac{1}{\log t}$, and suppose $g \in {\mathcal}{C}$. For $D > 1$, $u \geq 2$ and $T \ll ({\sigma}-1)^{-D}$, we have, uniformly in $|\tau| \leq T$, $$\left|\frac{G({\sigma}+ i\tau)}{{\mathcal}{G}({\sigma})}\right| \ll \exp\left(-\sum_{1 \leq j \leq m} B_j\left(\rho_{E_j}(x;\tilde{g},t) + \sum_{p \in E_j \atop p \leq x} \frac{1 -|\tilde{g}(p)|}{p}\right)\right).$$ By Lemma \[StrongComp\], whenever $s$ is not a zero of $G$ we have $$\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll_B \exp\left(\sum_p\left(\frac{g(p)}{p^s}-\frac{|g(p)|}{p^{{\sigma}}}\right)\right) = \exp\left(-\sum_p \frac{|g(p)|-\text{Re}(g(p)p^{-i\tau}}{p^{{\sigma}}}\right). 
\label{ONE}$$ By and , we have $$\begin{aligned} \sum_{p} \frac{|g(p)|-\text{Re}(g(p)p^{-i\tau})}{p^{{\sigma}}} = \sum_{p \leq x} \frac{|g(p)|-\text{Re}(g(p)p^{-i\tau})}{p} + O(B).\end{aligned}$$ Decomposing the sum over primes into the sums over each set $E_j$ in the partition, we have $$\begin{aligned} \sum_{p \leq x} \frac{|g(p)|-\text{Re}(g(p)p^{-i\tau})}{p} &= \sum_{1 \leq j \leq m} \left(\sum_{p \in E_j \atop p \leq x} \frac{|g(p)|-B_j}{p} + B_j\sum_{p \in E_j \atop p \leq x} \frac{1-\text{Re}(g(p)p^{-i\tau}/B_j)}{p}\right) \\ &= \sum_{1 \leq j \leq m} \left(B_j\rho_{E_j}(g;x) - \sum_{p \in E_j \atop p \leq x} \frac{|g(p)|-B_j}{p}\right).\end{aligned}$$ The proof of the claim follows upon reinserting this expression into and using the definition of $\tilde{g}$. Define ${\mathcal}{G}_j(s) := \prod_{p \in E_j} \left(1+\frac{|g(p)|}{p^{s}-1}\right)$ for each $1 \leq j \leq m$, which is well-defined and non-zero for ${\sigma}> 1$ provided that ${\mathcal}{G}(s) \neq 0$ (indeed, since each ${\mathcal}{G}_j(s)$ converges absolutely and uniformly on compact subsets of $\text{Re}(s) > 1$, they are all pole-free, so any zero of ${\mathcal}{G}(s)$ corresponds to a zero of (at least) one of the factors ${\mathcal}{G}_j$), and define the functions $G_j(s)$ analogously. Note in particular that $\prod_{1 \leq j \leq m} {\mathcal}{G}_j(s) = {\mathcal}{G}(s)$ by the partition property. Also, set $\beta := \min_{1 \leq j \leq m} \beta_j$. \[HalDecay\] Let $t \geq 3$, ${\sigma}= 1 + \frac{1}{\log t}$ and suppose ${\mathcal}{G}$ is non-zero on the line $\text{Re}(s) = {\sigma}$. Then for any $s = {\sigma}+ i\tau$, $$\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll_{m,B,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \left(\prod_{1 \leq j \leq m} \left|\frac{{\mathcal}{G}_j(s)}{{\mathcal}{G}_j({\sigma})}\right|^{\frac{\delta_j\beta_j^3}{B_j}}\right)^{\frac{27}{1024\pi}}. $$ \[REMDECAY\] Note that the trivial bound on $|g(p)|$ shows, using this estimate and Lemma \[StrongComp\], that $$\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll_{m,B,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \exp\left(-\frac{27}{1024\pi}\beta^3\delta\sum_p \frac{1}{p^{{\sigma}}}\left(1-\cos\tau \log p\right)\right) \ll_B \left|\frac{\zeta(s)}{\zeta({\sigma})}\right|^{\frac{27}{1024\pi}\delta\beta^3}.$$ This recovers (up to a constant factor) the coefficient for the exponential in [@Ell2], as mentioned in Section 2. When $|\tau| \leq 2$, the Laurent series expansion of $\zeta$ around $s = 1$ gives $$\left|\frac{\zeta(s)}{\zeta({\sigma})}\right| \ll \left|\frac{{\sigma}-1}{{\sigma}-1+i\tau}\right| \leq \left|1+i\frac{\tau}{{\sigma}-1}\right|^{-1} \leq \left(1+\left(\frac{|\tau|}{{\sigma}-1}\right)^2\right)^{-\frac{1}{2}} = e^{-\frac{1}{2}\log\left(1+\left(\frac{|\tau|}{{\sigma}-1}\right)^2\right)} \ll e^{-\log\left(1+\frac{|\tau|}{{\sigma}-1}\right)},$$ while if $2 < |\tau| < ({\sigma}-1)^{-D}$ for $D > 2$, using $|\zeta(s)| \ll \log |\tau|$ (see, for instance, Theorem 7 in Chapter II.3 of [@Ten2]) $$\left|\frac{\zeta(s)}{\zeta({\sigma})}\right| \ll ({\sigma}-1)\log|\tau| \leq \exp\left(-\frac{1}{2}\log\left(1+\frac{1}{{\sigma}-1}\right)\right),$$ for ${\sigma}$ sufficiently close to 1.
Thus, $$\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll_{m,B,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \begin{cases} \exp\left(-\frac{27}{1024 \pi} \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)\right) &\text{ if $|\tau| \leq 2$} \\ \exp\left(-\frac{27}{1024\pi}\log\left(1+\frac{1}{{\sigma}-1}\right)\right) &\text{ if $2 < |\tau| \leq ({\sigma}-1)^{-D}$}. \end{cases}$$ We will use this estimate repeatedly in this section and the next. We begin with an idea of Halász. By Lemma \[StrongComp\], whenever $s$ is not a zero of $G$ we have $$\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll_B \exp\left(\sum_p\left(\frac{g(p)}{p^s}-\frac{|g(p)|}{p^{{\sigma}}}\right)\right) = \exp\left(-\sum_p \frac{|g(p)|}{p^{{\sigma}}}(1-\cos(\xi_p))\right),$$ where, for $\theta_p := \text{arg}(g(p))$, put $\xi_p := \theta_p - \tau \log p$. Let $c \in (0,1)$ be a constant to be chosen. For each $j$, if we restrict to those primes for which $|p^{i\tau} - e^{i\phi_j}| \leq c\beta_j$ then, in light of the condition $|e^{i\theta_p}-e^{i\phi_j}| = |\theta_p-\phi_j| \geq \beta_j$, it follows that $$|\xi_p| = |1-e^{i\xi_p}| = |p^{i\tau}-e^{i\theta_p}| \geq |e^{i\theta_p}-e^{i\phi_j}| - |e^{i\phi_j}-p^{i\tau}| \geq (1-c)\beta_j.$$ We note that the collection of $p$ for which this condition holds is non-empty by our assumption on $\rho_{E_j}(x;g,T)$. In particular, $1-\cos \xi_p \geq \frac{1}{2}(1-c)^2\beta_j^2$ whenever $p \in E_j$.\ Thus, we will choose functions $h_j:{\mathbb}{T} {\rightarrow}[0,1]$ such that $h_j(e^{i\theta}) \leq \frac{1}{2}(1-c)^2\beta_j^2$ uniformly when $|\theta-\phi_j| \leq c\beta_j$. Specifically, we let $h_j(e^{i\theta}) = \frac{1}{2}(1-c)^2\beta_j^2$ for each $|\theta-\phi_j| \leq c(1-c)\beta_j$, $h_j(e^{i\theta}) = 0$ for $|\theta - \phi_j| \geq c\beta_j$, and interpolate in the intervals $[\phi_j-c\beta_j,\phi_j-c(1-c)\beta_j]$ and $[\phi_j+c(1-c)\beta_j,\phi_j+c\beta_j]$ with twice-continuously differentiable functions. Since $h_j \in L^2({\mathbb}{T})$ for each $j$, we write $h_j(e^{i\theta}) = \sum_{n \in {\mathbb}{Z}} a_{j,n}e^{in\theta}$, with $a_{j,n} := \frac{1}{2\pi}\int_{-\pi}^{\pi} h_j(e^{i\theta})e^{-in\theta} d\theta$. Since $h_j \in {\mathcal}{C}^2({\mathbb}{T})$, integration by parts, periodicity of $h_j$ and the extreme value theorem imply that for $l \neq 0$, $$2\pi a_{j,l} = \frac{1}{l}\int_{-\pi}^{\pi} h_j'(e^{i\theta})e^{-il\theta} d\theta = \frac{1}{l^2}\int_{-\pi}^{\pi} h_j''(e^{i\theta})e^{-il\theta} d\theta \ll \frac{1}{l^2+1}.$$ We will use this condition later.\ It follows that, for $s_n := {\sigma}-in\tau$, $$\begin{aligned} \sum_{p\in E_j} \frac{|g(p)|}{p^{{\sigma}}}(1-\cos \xi_p) &\geq \sum_{p\in E_j} \frac{|g(p)|}{p^{{\sigma}}}h_j(p^{i\tau}) = \sum_{p \in E_j} \frac{|g(p)|}{p^{{\sigma}}} \sum_{n \in {\mathbb}{Z}} a_{j,n}p^{in\tau} \nonumber\\ &= \sum_{n \in {\mathbb}{Z}} a_{j,n}\sum_{p \in E_j} |g(p)|p^{-s_n}, \label{RATBOUND}\end{aligned}$$ this last interchange justified by the convergence of $\sum_{p \in E_j} \frac{1}{p^{{\sigma}}}$ for ${\sigma}> 1$ fixed.
Since $\sum_{n \in {\mathbb}{Z}} a_{j,n} \ll 1$, it follows by Lemma \[StrongComp\] that $$\label{FOURIERDECOMP} \sum_{n \in {\mathbb}{Z}} a_{j,n}\sum_{p\in E_j} |g(p)|p^{-s_n} = \sum_{n \in {\mathbb}{Z}} a_{j,n} \log {\mathcal}{G}_j(s_n) + O_B(1).$$ Observe that when $n = 0$, $$a_{j,0} = \frac{1}{2\pi} \int_{-\pi}^{\pi} h_j(e^{i\theta}) d\theta \geq \frac{1}{4\pi}(1-c)^2\beta_j^2\int_{\phi_j-c(1-c)\beta_j}^{\phi_j+c(1-c)\beta_j} d\theta = \frac{1}{2\pi}c(1-c)^3\beta_j^3.$$ Choosing the optimal $c = \frac{1}{4}$ gives $\frac{27}{512\pi} \geq \frac{1}{20\pi}$ as the constant in front of $\beta_j^3$. (Note that replacing $(1-c)^3$ with any higher power of $(1-c)$ gives an optimal value when $c$ is small, so the resulting constant here is most advantageous.) Of course, $h_j$ being non-negative, we can bound each term $|g(p)| \geq \delta_j$. Now, observe that since $\log u = \log|u| + O(1)$ for any non-zero complex number (and an appropriate branch of logarithm) and any $k \in {\mathbb}{N}$, $$\begin{aligned} &\sum_{n \in {\mathbb}{Z}} a_{j,n} \log {\mathcal}{G}_{j}(s_n) \geq \delta_j\left(a_{j,0}\log\zeta_{E_j}(s_0) + \sum_{n \neq 0} a_{j,n}\log \zeta_{E_j}(s_n)\right) \\ &=\delta_j\left(a_{j,0}\log\zeta_{E_j}(s_0) + \sum_{n \neq 0} a_{j,n}\log \left(\zeta_{E_j}(s_n)/\zeta_{E_j}(s_k)\right) + \sum_{n \neq 0} a_{j,n} \log\zeta_{E_j}(s_k)\right)\\ &= \delta_j\left(a_{j,0}\log |\zeta_{E_j}(s_0)| + \sum_{n \neq 0} a_{j,n}\log \left|\zeta_{E_j}(s_n)/\zeta_{E_j}(s_k)\right| + (h_j(1)-a_{j,0})\log|\zeta_{E_j}(s_k)| + O_{\phi_j,\beta_j}(1)\right) \\ &=\delta_j\left(a_{j,0}\log\left|\zeta_{E_j}(s_0)/\zeta_{E_j}(s_k)\right| + \sum_{n \neq 0} a_{j,n}\log \left|\zeta_{E_j}(s_n)/\zeta_{E_j}(s_k)\right| + h_j(1)\log|\zeta_{E_j}(s_k)| +O_{\phi_j,\beta_j}(1)\right).\end{aligned}$$ Since $E_j$ is good, there is some $\lambda_j \in (0,1)$ such that if $f_j(s) := \log\zeta_{E_j}(s)-\lambda_j \log\left(\frac{1}{s-1}\right)$ then $f_j$ is holomorphic in a neighbourhood of ${\sigma}= 1$. Moreover, $f_j$ extends by convergence to $|\tau| \leq 2$ with ${\sigma}> 1$. Now, if $|n\tau|,|k\tau| \leq 2$, we have $$\log\left|\zeta_{E_j}(s_n)/\zeta_{E_j}(s_k)\right| = \lambda_j\log\left|\frac{s_k-1}{s_n-1}\right| + O(1) \leq \log\left(1+|n|/|k|\right) \leq \log(1+|n|).$$ If at least one of $|n\tau|$ or $|k\tau|$ exceeds 2 but $|\tau| \leq 2$, upon noting that $\zeta_{E_j}$ is non-zero in zero-free regions of $\zeta$ and applying Theorem 16 in II.3 of [@Ten2], we have $$\log\left|\zeta_{E_j}(s_n)/\zeta_{E_j}(s_k)\right| \leq 2\log_2\max\{|k\tau|,|n\tau|\} \leq 2\log_2(1+\max\{2k,2n\}).$$ Hence, when $|\tau| \leq 2$, we have $$\begin{aligned} &a_{j,0}\log\left|\zeta_{E_j}(s_0)/\zeta_{E_j}(s_k)\right| + \sum_{n \neq 0} a_{j,n}\log \left|\zeta_{E_j}(s_n)/\zeta_{E_j}(s_k)\right| + h_j(1)\log|\zeta_{E_j}(s_k)| +O_{\phi_j,\beta_j}(1) \\ &= a_{j,0}\log\left|\zeta_{E_j}(s_0)/\zeta_{E_j}(s_k)\right| + O\left(k \log_2(2+k) + \sum_{n \geq 1} \frac{\log (1+n)}{1+n^2}\right) = a_{j,0} \log\left|\zeta_{E_j}(s_0)/\zeta_{E_j}(s_k)\right| + O(k\log_2(2+k)),\end{aligned}$$ where we used $h_j(1)\log\left|\zeta_{E_j}(s_k)\right| = O\left(\log(1+2k)\right)$.
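For completeness, we make the numerology behind the choice of $c$ above explicit (this is elementary calculus and is recorded only so that the constants can be traced): $$\frac{d}{dc}\left(c(1-c)^3\right) = (1-c)^3 - 3c(1-c)^2 = (1-c)^2(1-4c),$$ so $c(1-c)^3$ is maximized on $(0,1)$ at $c = \frac{1}{4}$, where it equals $\frac{1}{4}\cdot\left(\frac{3}{4}\right)^3 = \frac{27}{256}$. Hence $a_{j,0} \geq \frac{1}{2\pi}\cdot\frac{27}{256}\,\beta_j^3 = \frac{27}{512\pi}\beta_j^3$, and the replacement of $a_{j,0}$ by $a_{j,0}/2$ carried out below is what produces the constant $\frac{27}{1024\pi}$ in the exponents.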
On the other hand, if $|\tau| \geq 2$ then, again by this last argument, $$\sum_{n \neq 0} a_{j,n}\log\left|\zeta_{E_j}(s_n)\right| \leq \sum_{n \neq 0} a_{j,n}\log_2 |n\tau| = O\left(\log_2(1+|\tau|)\right).$$ Hence, using this same bound for $\log\left|\zeta_{E_j}(s_k)\right|$, it follows that in all cases, $$\sum_{n \in {\mathbb}{Z}} a_{j,n} \log {\mathcal}{G}_{j}(s_n) \geq \delta_ja_{j,0}\log\left|\zeta_{E_j}({\sigma})/\zeta_{E_j}(s_k)\right| + O_k\left(\log_2 |\tau| + 1\right) \geq \frac{\delta_j}{B_j} \log\left|{\mathcal}{G}_j({\sigma})/{\mathcal}{G}_j(s_k)\right| + O_k\left(\log_2(1+|\tau|)\right).$$ Note that this bound is non-trivial because $|\tau| \leq \log^D x$ and $D_{E_j}(g/|g|,p^{i\tau},{\sigma}) \gg \frac{B_j}{\delta_j}\log_3 x$, with some sufficiently large implicit constant by assumption. At any rate, replacing $a_{j,0}$ by $a_{j,0}/2$ to compensate, it follows that $$\begin{aligned} \left|\frac{G_j(s)}{{\mathcal}{G}_j({\sigma})}\right| &\ll_{B, \phi_j,\beta_j,k} \exp\left(-\frac{\delta_j}{2B_j}a_{j,0}\sum_{p \in E_j} |g(p)|\left(\frac{1}{p^{{\sigma}}}-\frac{1}{p^{s_k}}\right)\right) \ll_B \left|\frac{{\mathcal}{G}_j(s_k)}{{\mathcal}{G}_j({\sigma})}\right|^{\frac{\delta_ja_{j,0}}{2B_j}} \leq \left|\frac{{\mathcal}{G}_j(s_k)}{{\mathcal}{G}_j({\sigma})}\right|^{\frac{27\delta_j}{1024 \pi B_j}\beta_j^3},\end{aligned}$$ the second last estimate applying Lemma \[StrongComp\] again and the last resulting from our earlier choice of $c$. Set $\gamma_{0,j} := \frac{27\delta_j}{1024\pi B_j}\beta_j^3$, and $\gamma_0 := \min_{1 \leq j \leq m} \gamma_{0,j}$. The major contribution to the upper bound in Theorem \[HalGen\] comes from establishing a bound for the maximum of $\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|$ for $|\tau| \leq T$. We deal with this as follows. \[INTBOUND\] Let ${\sigma}= 1+\frac{1}{\log t}$ for $t \geq 2$ sufficiently large. Then for each $\tau \in {\mathbb}{R}$ and each $j$, $$\left|\frac{G_j(s)}{{\mathcal}{G}_j({\sigma})}\right| \ll_{B,\phi_j,\beta_j} \left|\frac{G_j({\sigma})}{{\mathcal}{G}_j({\sigma})}\right|^{\frac{\gamma_{0,j}}{2(1+\gamma_{0,j})}}.$$ As a corollary, we have the weaker bound $$\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll_{B,m,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \left|\frac{G({\sigma})}{{\mathcal}{G}({\sigma})}\right|^{\frac{\gamma_0}{2(1+\gamma_0)}}.$$ Applying Lemma \[TrigProp\] with $k = 1$, observe that for each $j$, $$\sum_{p \in E_j} \frac{|g(p)|}{p^{{\sigma}}}(1-\cos \theta_p) \leq 2\left(\sum_{p \in E_j} \frac{|g(p)|}{p^{{\sigma}}}(1-\cos (\theta_p-\tau\log p)) + \sum_{p \in E_j} \frac{|g(p)|}{p^{{\sigma}}}(1-\cos \tau \log p)\right),$$ with $\theta_p := \text{arg}(g(p))$ as above. Combining this with Lemma \[StrongComp\] gives $$\begin{aligned} \left|\frac{G_j({\sigma})}{{\mathcal}{G}_j({\sigma})}\right| &\gg_B \exp\left(-\sum_{p\in E_j} \frac{|g(p)|}{p^{{\sigma}}}(1-\cos \theta_p)\right) \\ &\geq \exp\left(-2\sum_{p \in E_j} \frac{|g(p)|}{p^{{\sigma}}}(1-\cos (\theta_p-\tau\log p))\right) \cdot \exp\left(-2\sum_{p \in E_j} \frac{|g(p)|}{p^{{\sigma}}}(1-\cos \tau \log p)\right)\\ &\gg_B \left(\left|\frac{G_j(s)}{{\mathcal}{G}_j({\sigma})}\right|\left|\frac{{\mathcal}{G}_j(s)}{{\mathcal}{G}_j({\sigma})}\right|\right)^2 \gg_{B,\phi_j,\beta_j} \left|\frac{G_j(s)}{{\mathcal}{G}_j({\sigma})}\right|^{2\left(1+\frac{1}{\gamma_{0,j}}\right)},\end{aligned}$$ the last estimate coming by Lemma \[HalDecay\]. The first claim of the lemma follows immediately.
The second comes from observing that the map $u \mapsto \frac{u}{2(1+u)}$ is increasing and that $\left|\frac{G_j({\sigma})}{{\mathcal}{G}_j({\sigma})}\right| \leq 1$ by the triangle inequality. In what follows, given a Dirichlet series $F(s)$ that is absolutely and uniformly convergent in the half-plane $\text{Re}(s) > 1$, we will write $$J_F({\sigma}) := \int_{({\sigma})} |F'(s)|^2 \frac{|ds|}{|s|^2},$$ for any ${\sigma}> 1$. In order to derive Theorem \[HalGen\] it will be essentially sufficient to derive sharp bounds for ${\mathcal}{G}({\sigma})^{-2}J_{F}({\sigma})$ in terms of ${\sigma}$ when $F = G$ and $H$. Therefore, in the next several lemmata we estimate $J_F({\sigma})$ for $F = G$ and $H$. \[MAIN1\] Let $k \geq 1$. Let $D > 2$ be fixed and set $T := ({\sigma}-1)^{-D}$. Then $$J_{G}({\sigma}) \ll_{m,B,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} {\mathcal}{G}({\sigma})^2\left(B^2({\sigma}-1)^{-1}\prod_{1 \leq j \leq m} \left|\frac{G_j({\sigma})}{{\mathcal}{G}_j({\sigma})}\right|^{\frac{\gamma_{0,j}}{1+\gamma_{0,j}}} + 1 \right).$$ By definition, $$\begin{aligned} {\mathcal}{G}({\sigma})^{-2}J_{G}({\sigma})&= \int_{({\sigma})} \left|\frac{G'(s)}{G(s)}\right|^2\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} \nonumber\\ &\leq \left(\int_{|\tau| \leq T} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} + \int_{|\tau| > T} \left|\frac{G'(s)}{G(s)}\right|^2 \frac{|ds|}{|s|^2}\right), \label{GOODONE}\end{aligned}$$ upon using $|G(s)| \leq {\mathcal}{G}({\sigma})$ in the second integral. Factoring $G'(s) = \frac{G'(s)}{G(s)} \cdot \frac{G(s)}{{\mathcal}{G}({\sigma})} \cdot {\mathcal}{G}({\sigma})$ for $s$ not a zero of $G$, Lemma \[StrongComp\] implies that $$\begin{aligned} \frac{G'(s)}{{\mathcal}{G}({\sigma})}&= \frac{G(s)}{{\mathcal}{G}({\sigma})}\frac{d}{ds}\log G(s) = \frac{G(s)}{{\mathcal}{G}({\sigma})}\left(\frac{d}{ds}\left(\sum_p g(p)p^{-s}\right) + \frac{d}{ds}\log G_0(s)\right) \nonumber\\ &= -\frac{G(s)}{{\mathcal}{G}({\sigma})}\sum_p \frac{g(p)\log p}{p^s} + \frac{G_0'(s)}{{\mathcal}{G}_0({\sigma})}\exp\left(-\sum_p \frac{|g(p)|}{p^{{\sigma}}}(1-\cos(\xi_p(\tau)))\right). \label{BALANCE2}\end{aligned}$$ Now, since $G_0(s)$ is uniformly convergent on ${\sigma}> \frac{1}{2}$, $\left|G_0'(s)\right| \ll_B 1$ uniformly on the line $\text{Re}(s) = {\sigma}$. Similarly, ${\mathcal}{G}_0({\sigma}) \gg_B 1$. Thus, the second term above in is $\ll_B 1$, even in neighbourhoods of zeros of $G$. Taking absolute values and squaring, and bounding $\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \leq 1$ trivially in the first term, the second integral in becomes $$\begin{aligned} \int_{|\tau| > T} \left|\frac{G'(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} &\ll_B \int_{|\tau| > T} \left(\left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 + 1\right)\frac{|ds|}{|s|^2} \ll \left(B^2\left(\sum_p \frac{\log p}{p^{{\sigma}}}\right)^2 + 1\right)\int_{|\tau| > T} \frac{|ds|}{|s|^2} \\ &\ll B^2 ({\sigma}-1)^{-2}T^{-1} = B^2({\sigma}-1)^{D-2},\end{aligned}$$ the second last estimate following from the estimates $$\sum_p \frac{\log p}{p^{{\sigma}}} = \sum_{n \geq 1} \frac{\Lambda(n)}{n^{{\sigma}}} + O(1) = -\frac{\zeta'({\sigma})}{\zeta({\sigma})} + O(1) = \frac{1}{{\sigma}-1} + O(1),$$ from the Laurent series of $\zeta$.\ For the first integral above, we ignore the (measure zero) set of $s$ for which $G(s)=0$.
Using the factorization mentioned earlier and , and interchanging $\sum_p \frac{g(p)\log p}{p^s}$ with $\sum_{n \geq 1} g(n)\Lambda(n)n^{-s}$ as above, we have $$\begin{aligned} \int_{|\tau| \leq T} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} &\ll \left(\max_{|\tau| \leq T} \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|\right)^2\int_{|\tau| \leq T} \left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 \frac{|ds|}{|s|^2} + O_B(1) \\ &\leq \left(\max_{|\tau| \leq T} \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|\right)^2\int_{({\sigma})} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 \frac{|ds|}{|s|^2} + O_B(1).\end{aligned}$$ Now, applying Parseval’s theorem, the integral in this last expression is $$\label{PARS} \int_{({\sigma})} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 \frac{|ds|}{|s|^2} = 2\pi\int_0^{\infty} e^{-2v{\sigma}} \left|\sum_{n \leq e^v} g(n)\Lambda(n) \right|^2 dv \leq B^2 \int_0^{\infty} e^{-2v({\sigma}-1)}\left(e^{-v} \sum_{n \leq e^v} \Lambda(n)\right)^2 dv.$$ As in the proof of the last lemma, this integral is $\ll ({\sigma}-1)^{-1}$. It thus follows that $$\int_{|\tau| \leq T} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} \ll B^2({\sigma}-1)^{-1}\left(\max_{|\tau| \leq T} \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|\right)^2 + O_B(1).$$ Inserting the estimates for our two integrals into , we get $$\begin{aligned} {\mathcal}{G}({\sigma})^{-2}J_{G}({\sigma})&\ll_B B^2({\sigma}-1)^{-1}\left(\max_{|\tau| \leq T} \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|\right)^2 + B^2({\sigma}-1)^{D-2} + 1\ll B^2({\sigma}-1)^{-1}\left(\max_{|\tau| \leq T} \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|\right)^2 + 1,\end{aligned}$$ since $D > 2$. Appealing to Lemma \[INTBOUND\] completes the proof. In the case that $A = A_t := \exp\left(\sum_{p \leq t} \frac{g(p)-|g(p)|}{p}\right)$ and $|\theta_p| \leq \eta_j$ for all $p \in E_j$ and all $j$, we will need to take advantage of the small argument condition to derive an asymptotic formula. In order to account for this condition, therefore, it will be helpful to know that when $\text{Im}(s)$ is sufficiently small, $G(s)$ is approximated well by $A{\mathcal}{G}(s)$. In this spirit, we prove the following lemma. \[SHARPH\] Let $g$ be as above, such that for each $1\leq j \leq m$ and $p \in E_j$, we have $|\text{arg}(g(p))| \leq \eta_j < 1$, and let $\eta := \max_{1\leq j \leq m} \eta_j$. Assume also that $B\eta < 1$. For $t \leq x$ sufficiently large and fixed, let $h(n) := g(n)-A|g(n)|$ and $H(s) := \sum_{n \geq 1} \frac{h(n)}{n^s}$, for $\text{Re}(s) > 1$, where $A = A_t$ is as above. Let ${\sigma}= 1+\frac{1}{\log t}$ and $s = {\sigma}+ i\tau$. Then for any $K> 0$ such that $B\eta \log(1+K) < 1$ and $K({\sigma}-1)\log B < 1$, $$\int_{|\tau| \leq K({\sigma}-1)} |H'({\sigma}+ i\tau)|^2 \frac{d\tau}{{\sigma}^2+\tau^2} \ll_B B^2(1+B^2)\eta^2|A|^2\min\left\{\delta^{-2},\log^2(1+K)(1+K)^{-2\delta}\right\}({\sigma}-1)^{-1}.$$ Observe that if $s$ is not a zero of $G(s)$ or ${\mathcal}{G}(s)$ then $$\begin{aligned} H'(s) &= G'(s)-A{\mathcal}{G}'(s) = G(s)\frac{d}{ds}\log G(s) - A{\mathcal}{G}(s) \frac{d}{ds}\log {\mathcal}{G}(s) \\ &= H(s)\frac{d}{ds}\left(\sum_p \frac{g(p)}{p^s}\right) + G(s)\cdot \frac{G_0'(s)}{G_0(s)} + A{\mathcal}{G}(s) \frac{d}{ds}\left(\sum_p \frac{g(p)-|g(p)|}{p^s}\right) -A{\mathcal}{G}(s) \cdot \frac{{\mathcal}{G}_0'(s)}{{\mathcal}{G}_0(s)},\end{aligned}$$ by Lemma \[StrongComp\].
Using the fact that $\frac{G_0'(s)}{G_0(s)}\cdot \frac{G(s)}{{\mathcal}{G}({\sigma})}$ and $\frac{{\mathcal}{G}_0'(s)}{{\mathcal}{G}_0(s)}\cdot \frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}$ are both $O_B(1)$ as in Lemma \[MAIN1\], we have (at non-zeros of $G$ and ${\mathcal}{G}$) $$\begin{aligned} H'(s) &= -H(s)\sum_{p} \frac{g(p)\log p}{p^s} + A{\mathcal}{G}(s)\sum_p \frac{(|g(p)|-g(p))\log p}{p^s} + O_B({\mathcal}{G}({\sigma})).\end{aligned}$$ Write ${\mathcal}{G}({\sigma})^{-2}\int_{|\tau| \leq K({\sigma}-1)} |H'(s)|^2 \frac{|ds|}{|s|^2} \leq 3(J_1 + J_2 + O_B(1))$, where $$\begin{aligned} J_1 &:= {\mathcal}{G}({\sigma})^{-2}\int_{|\tau| \leq K({\sigma}-1)} |H(s)|^2 \left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 \frac{|ds|}{|s|^2}, \\ J_2 &:= |A|^2\int_{|\tau| \leq K({\sigma}-1)} \left|\sum_p \frac{(|g(p)|-g(p))\log p}{p^s}\right|^2\left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2}.\end{aligned}$$ It follows from Lemma \[OLD\] that $\left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right| \ll e^{-\delta \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)}$. We will either use this upper bound, or the trivial bound $1$.\ We begin by computing a uniform estimate for $|H(s)|$. Since $K({\sigma}-1) \log B < 1$, by Lemma \[ZEROLEM\] $G$ and ${\mathcal}{G}$ are non-zero and holomorphic for $|\tau| \leq K({\sigma}-1)$, i.e., each of the factors $1+\frac{g(p)}{p^s-1}$ and $1+\frac{|g(p)|}{p^s-1}$ is non-zero, provided ${\sigma}-1$ is sufficiently small (or equivalently, if $t$ is sufficiently large). Observe, therefore, that $$\begin{aligned} \left|\frac{G({\sigma})}{{\mathcal}{G}({\sigma})}A^{-1}\right| &= \left|A^{-1} \prod_p \left(1+\frac{g(p)}{p^{{\sigma}}-1}\right)\left(1+\frac{|g(p)|}{p^{{\sigma}}-1}\right)^{-1}\right| \\ &= \left|\prod_{p \leq t} \left(1+\frac{g(p)-|g(p)|}{p^{{\sigma}}-1 + |g(p)|}\right)e^{-\frac{g(p)-|g(p)|}{p}}\right| \prod_{p > t}\left|1+\frac{g(p)-|g(p)|}{p^{{\sigma}}-1+|g(p)|}\right| \\ &= \exp\left(\text{Re}\left(\sum_{p \leq t} \left(\log\left(1+\frac{g(p)-|g(p)|}{p^{{\sigma}}-1+|g(p)|}\right)-\frac{g(p)-|g(p)|}{p}\right)\right)\right) \\ &\cdot \exp\left(\sum_{p > t} \log\left|1+\frac{g(p)-|g(p)|}{p^{{\sigma}}-1+|g(p)|}\right|\right) \\ &=: e^{P_1+P_2}.\end{aligned}$$ Estimating $P_2$ first, we note that $|g(p)-|g(p)|| \leq B\eta$ for each $p$, whence by , $$P_2 \leq \sum_{p > t} \log\left(1+\frac{B\eta}{p^{{\sigma}}-1}\right) \leq 2B\eta \sum_{p > t} \frac{1}{p^{{\sigma}}} \ll_B B\eta.$$ Next, as $B\eta < 1$ we have $$\begin{aligned} P_1 &= \sum_{p \leq t} \text{Re}\left(\left(\frac{g(p)-|g(p)|}{p^{{\sigma}}-1+|g(p)|} - \frac{g(p)-|g(p)|}{p}\right) + \sum_{l \geq 2} \frac{(-1)^{l-1}}{l} \frac{(g(p)-|g(p)|)^l}{(p^{{\sigma}}-1+|g(p)|)^{l}}\right) \\ &\leq B\eta\sum_{B^{\frac{1}{{\sigma}}}+1< p \leq t} \frac{|g(p)| + 1}{p(p-1+|g(p)|)} + O(B^2\eta) \ll B(1+B)\eta.\end{aligned}$$ Furthermore, setting $R(\tau) := \frac{G(s)}{{\mathcal}{G}(s)}\left(\frac{G({\sigma})}{{\mathcal}{G}({\sigma})}\right)^{-1}$, $$\begin{aligned} |R(\tau)| &= \left|\frac{G_0(s)}{G_0({\sigma})}\right|\left|\frac{{\mathcal}{G}_0({\sigma})}{{\mathcal}{G}_0(s)}\right|\exp\left(\text{Re}\left(\sum_p (g(p)-|g(p)|)\left(\frac{1}{p^s}-\frac{1}{p^{{\sigma}}}\right)\right)\right) \\ &= \left|\frac{G_0(s)}{G_0({\sigma})}\right|\left|\frac{{\mathcal}{G}_0({\sigma})}{{\mathcal}{G}_0(s)}\right| \exp\left(O\left(B\eta \sum_p \left|\frac{1}{p^s}-\frac{1}{p^{{\sigma}}}\right|\right)\right) =
\left|\frac{G_0(s)}{G_0({\sigma})}\right|\left|\frac{{\mathcal}{G}_0({\sigma})}{{\mathcal}{G}_0(s)}\right|e^{O\left(B\eta \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)\right)},\end{aligned}$$ the second last estimate following from Lemmas \[OLD\] and \[StrongComp\]. It follows that $$\begin{aligned} \left|\frac{G_0(s)}{{\mathcal}{G}_0(s)}\right| &= \left|\prod_p e^{-\frac{g(p)-|g(p)|}{p^s}}\left(1+\frac{g(p)-|g(p)|}{p^s-1+|g(p)|}\right)\right| \\ &= \exp\left(\text{Re}\left(\sum_p \left(\log(1+\frac{g(p)-|g(p)|}{p^s-1+|g(p)|})-\frac{g(p)-|g(p)|}{p^s}\right)\right)\right) \\ &= \exp\left(\text{Re}\left(\sum_p \log\left(1+\frac{g(p)-|g(p)|}{p^s}\right)- \frac{g(p)-|g(p)|}{p^s}\right) + O(B(1+B)\eta)\right),\end{aligned}$$ the error term above owing to $$\begin{aligned} &\left|\log\left(1+\frac{g(p)-|g(p)|}{p^s-1+|g(p)|}\right)-\log\left(1+\frac{g(p)-|g(p)|}{p^s}\right)\right| \\ &= \left|\log\left(1+(g(p)-|g(p)|)\frac{|g(p)|-1}{(p^s-1+|g(p)|)(p^s+g(p)-|g(p)|)}\right)\right| \\ &\leq \frac{|g(p)-|g(p)|||B-1|}{|p^s-1+|g(p)||(p^{{\sigma}}-|g(p)-|g(p)||)} \leq \frac{B(1+B)\eta}{(p^{{\sigma}}-B\eta)|p^s-1+|g(p)||},\end{aligned}$$ the sum of which over all primes is $\ll B(1+B)\eta$, as $B\eta < 1$. For the remaining expression, $$\begin{aligned} &\text{Re}\left(\sum_p \log\left(1+\frac{g(p)-|g(p)|}{p^s}\right) - \frac{g(p)-|g(p)|}{p^s}\right) \leq \sum_{l \geq 2} \frac{1}{l}\left|\frac{g(p)-|g(p)|}{p^s}\right|^l \\ &\leq \frac{B^2\eta^2}{p^{2{\sigma}}}\left(1-\frac{1}{p^{{\sigma}}}\right)^{-1} = \frac{B^2\eta^2}{p^{{\sigma}}(p^{{\sigma}}-1)},\end{aligned}$$ and summing over all $p$ gives $\ll B^2 \eta^2$. It follows that $\left|\frac{G_0(s)}{{\mathcal}{G}_0(s)}\right| = e^{O(B(1+B)\eta)}$. The same argument holds in estimating $\left|\frac{{\mathcal}{G}_0({\sigma})}{G_0({\sigma})}\right|$ upon replacing $g(p)-|g(p)|$ by $|g(p)|-g(p)$ and $s$ by ${\sigma}$. Hence, $$|R(\tau)| = e^{O\left(B\eta\left(\log\left(1+\frac{|\tau|}{{\sigma}-1}\right) + (1+B)\right)\right)} = 1+O\left(B\eta \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)\right),$$ by assumption on $K$, since $\log\left(1+\frac{|\tau|}{{\sigma}-1}\right) \leq \log(1+K)$. Hence, whenever $B\eta \log(1+K) < 1$, $$G(s) = R(\tau){\mathcal}{G}(s)\cdot\frac{G({\sigma})}{{\mathcal}{G}({\sigma})} = A{\mathcal}{G}(s)\left(1+O_B\left(B\eta \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)\right)\right),$$ and thus $$|H(s)| = |G(s)-A{\mathcal}{G}(s)| \ll_B B\eta|A||{\mathcal}{G}(s)|\log\left(1+\frac{|\tau|}{{\sigma}-1}\right).$$ Inserting this into $J_1$, bounding $\left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right|$ by $e^{-\delta \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)}$ and applying Lemma \[MODMONT\], $$\begin{aligned} J_1 &\ll_B B^2\eta^2|A|^2\left(\int_{-K({\sigma}-1)}^{K({\sigma}-1)} \log^2\left(1+\frac{|\tau|}{{\sigma}-1}\right)\left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right|^2\left|\sum_{n \geq 1} g(n)\Lambda(n)n^{-s}\right|^2 d\tau + K({\sigma}-1)\log^2(1+K)\right)\\ &\leq B^2\eta^2|A|^2\left(B^2\int_{-K({\sigma}-1)}^{K({\sigma}-1)} \log^2\left(1+\frac{|\tau|}{{\sigma}-1}\right)e^{-2\delta \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)}\left|\sum_{n \geq 1} \Lambda(n)n^{-s}\right|^2 d\tau + K({\sigma}-1)\log^2(1+K)\right).\end{aligned}$$ Observe that the map $ue^{-\delta u}$ is maximized in $u$ when $u = \frac{1}{\delta}$. Hence, if $u = \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)$, this quantity is largest once $|\tau| = \left(e^{\frac{1}{\delta}}-1\right)({\sigma}-1) =: \tau_0$.
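(This is an elementary observation, recorded only to make the appearance of the factors $\delta^{-2}$ and $\log^2(1+K)(1+K)^{-2\delta}$ transparent: since $$\frac{d}{du}\left(ue^{-\delta u}\right) = (1-\delta u)e^{-\delta u},$$ the function $ue^{-\delta u}$ increases on $[0,\frac{1}{\delta}]$ and decreases thereafter, so on an interval $[0,U]$ its maximum is $\frac{1}{e\delta} \leq \frac{1}{\delta}$ when $U \geq \frac{1}{\delta}$, and $Ue^{-\delta U}$ when $U < \frac{1}{\delta}$; taking $U = \log(1+K)$ and squaring yields the two alternatives used below.)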
Now, if $K \geq \tau_0$ then we can bound the product $\log\left(1+\frac{|\tau|}{{\sigma}-1}\right)e^{-\delta \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)}$ by $\frac{1}{\delta}$, while if $K < \tau_0$ then our bound is $\log(1+K)(1+K)^{-\delta}$. Allowing either choice, we have $$\begin{aligned} J_1&\ll B^2\eta^2|A|^2\left(B^2\min\left\{\delta^{-2},\frac{\log^2(1+K)}{(1+K)^{2\delta}}\right\} \int_{|\tau| \leq K({\sigma}-1)} \left|\sum_{n \geq 1} \Lambda(n)n^{-s}\right|^2d\tau + K({\sigma}-1)\log^2(1+K)\right).\end{aligned}$$ As before, since $\sum_{n \geq 1} \Lambda(n)n^{-s} = -\frac{\zeta'(s)}{\zeta(s)} = \frac{1}{s-1} + O(1)$, $$\begin{aligned} J_1 &\ll B^2\eta^2|A|^2\left(B^2\min\left\{\delta^{-2},\frac{\log^2(1+K)}{(1+K)^{2\delta}}\right\} \int_{|\tau| \leq K({\sigma}-1)} \frac{d\tau}{|s-1|^2} + K({\sigma}-1)\log^2(1+K)\right)\\ &\ll B^2\eta^2|A|^2\left(B^2\min\left\{\delta^{-2},\frac{\log^2(1+K)}{(1+K)^{2\delta}}\right\}({\sigma}-1)^{-1} \int_{|u| \leq K} (1+u)^{-2} du + K({\sigma}-1)\log^2(1+K)\right) \\ &\ll B^2\eta^2|A|^2\left(B^2\min\left\{\delta^{-2},\frac{\log^2(1+K)}{(1+K)^{2\delta}}\right\}({\sigma}-1)^{-1} + K({\sigma}-1)\log^2(1+K)\right).\end{aligned}$$ Similarly, we can apply Lemma \[MODMONT\] to $J_2$, noting that $(|g(n)|-g(n))\Lambda(n) \leq B\eta\Lambda(n)$ in this case, to get $$J_2 \ll B^2\eta^2|A|^2\int_{|\tau| \leq K({\sigma}-1)} \left|\sum_{n \geq 1} \Lambda(n)n^{-s}\right|^2 d\tau \ll B^2\eta^2 |A|^2\int_{|\tau| \leq K({\sigma}-1)} \frac{d\tau}{|s-1|^2} \ll B^2\eta^2|A|^2({\sigma}-1)^{-1}.$$ Collecting the estimates for $J_1$ and $J_2$, $$\int_{|\tau| \leq K({\sigma}-1)} |H'(s)|^2 \frac{|ds|}{|s|^2} \ll_B B^2(1+B^2)\eta^2 |A|^2\min\left\{\delta^{-2},\log^2(1+K)(1+K)^{-2\delta}\right\}({\sigma}-1)^{-1}.$$ This completes the proof. Similar estimates can be had if $g$ is completely multiplicative. In such a case any factor of the form $1+g(p)/(p^s-1)$ is replaced by $(1-g(p)/p^s)^{-1}$, and a Taylor expansion of the corresponding logarithm is available whenever $|g(p)| < 2$. \[SHARPAPP\] Let $K > 0$ be such that $B\eta \log(1+K) < 1$ and $T \geq 1$ be fixed such that $K({\sigma}-1) < 2$ and $({\sigma}-1)^{D-1} < ({\sigma}-1)^{-1}T^{-1} < 1$ for some $D> 1$. Furthermore, set $\gamma := \min_{1 \leq j \leq m} \beta_j^3 \delta_j$. Then $$\begin{aligned} J_{H,1}({\sigma}) &\ll_{B,D} B^2{\mathcal}{G}({\sigma})^2({\sigma}-1)^{-1}(\eta^2|A|^2\min\left\{\delta^{-2},\log^2(1+K)(1+K)^{-2\delta}\right\} + ({\sigma}-1)^{-1}T^{-1} \\ &+ ({\sigma}-1)^{\gamma}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}} + |A|^2({\sigma}-1)^{2\delta/3} + \gamma^{-1}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}} (1+K)^{-(1+\gamma)} + \frac{|A|^2}{\delta} (1+K)^{-(1+2\delta)}).\end{aligned}$$ By definition, $$\begin{aligned} {\mathcal}{G}({\sigma})^{-2}J_{H,1}({\sigma}) &= {\mathcal}{G}({\sigma})^{-2}\left(\int_{|\tau| \leq K({\sigma}-1)} + \int_{K({\sigma}-1) < |\tau| \leq 2} + \int_{2 < |\tau| \leq T} +\int_{|\tau| > T} \right) \left|H'(s)\right|^2 \frac{|ds|}{|s|^2} \\ &=: {\mathcal}{G}({\sigma})^{-2}\left(I_1 + I_2 + I_3 +I_4\right).\end{aligned}$$ We deal with each of the $I_j$ in turn. The content of Lemma \[SHARPH\] shows that $$I_1 \ll_B B^2(1+B^2)|A|^2\eta^2({\sigma}-1)^{-1}\max\left\{\delta^{-2},\log^2(1+K)(1+K)^{-2\delta}\right\}.$$ Next, we consider $I_4$.
Note that $|H'(s)|^2 \ll |G'(s)|^2 + |A|^2|{\mathcal}{G}'(s)|^2$, thus $$I_4 \ll {\mathcal}{G}({\sigma})^2\left(\int_{|\tau| > T} \left|\frac{G'(s)}{{\mathcal}{G}({\sigma})}\right|^2\frac{|ds|}{|s|^2} + |A|^2\int_{|\tau|>T} \left|\frac{{\mathcal}{G}'(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2}\right).$$ The first integral is implicitly contained in the proof of Lemma \[MAIN1\], where we showed that it was bounded above by $B^2 ({\sigma}-1)^{-2}T^{-1}$. The proof of this estimate can be carried out in precisely the same way to get the same bound for the integral involving ${\mathcal}{G}'(s)$ as well. Hence, as $|A| \leq 1$, $$I_4 \ll B^2(1+|A|^2){\mathcal}{G}({\sigma})^2({\sigma}-1)^{-2}T^{-1} \ll B^2{\mathcal}{G}({\sigma})^2({\sigma}-1)^{-2}T^{-1}.$$ Consider now $I_3$. Upon using Lemma \[HalDecay\] and applying the arguments of Lemma \[MAIN1\] to the first term, we have $$\begin{aligned} \int_{2 < |\tau| \leq T} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} &\leq \left(\max_{|\tau| \leq T} \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|\right)\int_{2 < |\tau| \leq T}e^{-\gamma \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)} \left(\left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 + O_B(1)\right)\frac{|ds|}{|s|^2} \\ &\leq \left|\frac{G({\sigma})}{{\mathcal}{G}({\sigma})}\right|^{\frac{\gamma_0}{2(1+\gamma_0)}}({\sigma}-1)^{\gamma} \int_{({\sigma})}\left(\left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 + O_B(1)\right)\frac{|ds|}{|s|^2}\\ &\ll_B B^2|A|^{\frac{\gamma_0}{2(1+\gamma_0)}}({\sigma}-1)^{-1+\gamma},\end{aligned}$$ where, in the last inequality, we used that $\left|\frac{G({\sigma})}{{\mathcal}{G}({\sigma})}\right| \asymp_B |A|$, as in Lemma \[SHARPH\]. In the ${\mathcal}{G}$ integral we use the pointwise estimate $\left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right| \ll_B \left|\frac{\zeta(s)}{\zeta({\sigma})}\right|^{\delta}$ that we invoked earlier. As a corollary of the Korobov-Vinogradov zero-free region of $\zeta$ it can be deduced that when $|\tau| \geq 2$, $|\zeta({\sigma}+ i\tau)| \ll \log^{\frac{2}{3}} |\tau|$ for ${\sigma}> 1$ (see Note 3.9 in Chapter II.3 of [@Ten2]). As a result, $$\begin{aligned} \int_{2 < |\tau| \leq T} \left|\frac{{\mathcal}{G}'(s)}{{\mathcal}{G}(s)}\right|^2 \left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} &\leq ({\sigma}-1)^{2\delta}\int_{2 < |\tau| \leq T}\log^{\frac{2}{3}\delta}|\tau|\left(\left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 + O_B(1)\right)\frac{|ds|}{|s|^2} \\ &\leq \left(({\sigma}-1) \log^{\frac{2}{3}}T\right)^{2\delta} \int_{({\sigma})}\left(\left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 + O_B(1)\right)\frac{|ds|}{|s|^2} \\ &\ll_D ({\sigma}-1)^{\frac{2\delta}{3}} \int_{({\sigma})}\left(\left|\sum_p \frac{g(p)\log p}{p^s}\right|^2 + O_B(1)\right)\frac{|ds|}{|s|^2} \\ &\ll_B B^2({\sigma}-1)^{-1+\frac{2\delta}{3}},\end{aligned}$$ where, in the last estimate, we used Parseval’s theorem as in . In sum, we have $$I_3 \ll_{B,D} B^2 {\mathcal}{G}({\sigma})^2({\sigma}-1)^{-1}\left(({\sigma}-1)^{\gamma}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}} + |A|^2({\sigma}-1)^{\frac{2\delta}{3}}\right).$$ It remains to estimate $I_2$. Consider first the $G$ integral. We decompose the interval $(K({\sigma}-1),2]$ dyadically and let $L$ be the number of subintervals thus produced. Set $\psi_l := 2^lK({\sigma}-1)$ for each $0 \leq l \leq L-1$.
Applying Lemma \[HalDecay\] and arguing that we can ignore the effect of any of the zeros of $G$ as before (from $\frac{G_0'(s)}{G_0(s)}G(s) \ll_B {\mathcal}{G}({\sigma})$), $$\begin{aligned} &\int_{K({\sigma}-1) < |\tau| \leq 2} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} \\ &\leq |A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\int_{K({\sigma}-1)<|\tau| \leq 2} e^{-\gamma \log\left(1+\frac{|\tau|}{{\sigma}-1}\right)} \left(\left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 + O_B(1)\right)\frac{|ds|}{|s|^2}\\ &\leq 2|A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\sum_{0 \leq l \leq L-1} e^{-\gamma \log\left(1+2^lK\right)}\left( \int_{\psi_l}^{\psi_{l+1}} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 \frac{|ds|}{|s|^2} + O_B(1)\right)\\ &= 2|A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\sum_{0 \leq l \leq L-1} (1+2^lK)^{-\gamma} \left(\int_{\psi_l}^{\psi_{l+1}} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 \frac{|ds|}{|s|^2} + O_B(1)\right).\end{aligned}$$ Let $q > 1$ be a parameter to be chosen, and let $p$ be its Hölder conjugate (i.e., $\frac{1}{q}+\frac{1}{p} =1$). By Hölder’s inequality, $$\begin{aligned} \int_{\psi_l}^{\psi_{l+1}} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 \frac{|ds|}{|s|^2} &\leq \left(\int_{\psi_{l}}^{\psi_{l+1}} \frac{d\tau}{({\sigma}^2 + \tau^2)^q} \right)^{\frac{1}{q}} \left(\int_{\psi_{l}}^{\psi_{l+1}} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^{2p} d\tau\right)^{\frac{1}{p}} \\ &\leq \psi_l^{\frac{1}{q}} \int_{\psi_{l}}^{\psi_{l+1}} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 d\tau.\end{aligned}$$ For convenience, for each $0 \leq l \leq L-1$ let $\alpha_l := (1+2^lK)^{-\gamma}$ and, given an arithmetic function $a$, set $J_l(a) := \int_{\psi_{l}}^{\psi_{l+1}} \left|\sum_{n \geq 1} \frac{a(n)\Lambda(n)}{n^s}\right|^2 d\tau$. 
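Before proceeding, we isolate, purely for clarity, the elementary absorption principle that will be used to rearrange the resulting dyadic inequality: if non-negative finite quantities satisfy $x \leq a + \omega x$ with $0 \leq \omega < 1$, then $$x \leq \frac{a}{1-\omega}.$$ Below this is applied with $x = \sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}}\iota_l$, $a = 2\sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}}J_l(1)$ and $\omega = 2^{-\frac{\gamma}{3\log 2}}$.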
With these notations, we have $$\int_{K({\sigma}-1) < |\tau| \leq 2} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} \leq 2|A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}} J_l(g).$$ By Lemma \[MODMONT\], as $|g(n)|\Lambda(n) \leq B\Lambda(n)$ for each $n$, $$\begin{aligned} 2J_l(g) &\leq \int_{|\tau| \leq \psi_{l+1}} \left|\sum_{n \geq 1} \frac{g(n)\Lambda(n)}{n^s}\right|^2 d\tau \leq 3B^2\int_{|\tau| \leq \psi_{l+1}} \left|\sum_{n \geq 1} \frac{\Lambda(n)}{n^s}\right|^2 d\tau = 3B^2 \int_{|\tau| \leq \psi_{l+1}} \left|\frac{\zeta'(s)}{\zeta(s)}\right|^2 d\tau =: 3B^2\iota_l\end{aligned}$$ Now, we clearly have $\iota_l = 2J_l(1) + \iota_{l-1}$, so that $$2\sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}}J_l(g) \leq 3B^2 \sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}} \iota_l = 3B^2\left(2\sum_{0 \leq l \leq L-1} \alpha_l \psi_l^{\frac{1}{q}}J_l(1) + \sum_{1 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}} \iota_{l-1}\right).$$ Observe now that for any $1 \leq l \leq L-1$ and $K > 1$, we have $\frac{2^{l-1}K}{1+2^lK} \geq \frac{1}{3}$, whence $$\begin{aligned} \frac{\alpha_l\psi_l^{\frac{1}{q}}}{\alpha_{l-1}\psi_{l-1}^{\frac{1}{q}}} &= 2^{\frac{1}{q}} \left(1-\frac{2^{l-1}K}{1+2^lK}\right)^{2\gamma} \leq 2^{\frac{1}{q}} \exp\left(-2\gamma \frac{2^{l-1}K}{1+2^lK}\right) \leq 2^{\frac{1}{q}}e^{-\frac{2\gamma}{3}} = 2^{\frac{1}{q}-\frac{2\gamma}{3\log 2}}.\end{aligned}$$ Choosing $q := \frac{3\log 2}{\gamma}$ yields, with $\omega := 2^{-\frac{\gamma}{3\log 2}} < 1$, the inequality $\alpha_l \psi_l^{\frac{1}{q}} \leq \omega \alpha_{l-1}\psi_{l-1}^{\frac{1}{q}}$ uniformly in $1 \leq l \leq L-1$. It follows that $$\begin{aligned} \sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}} \iota_l &\leq 2\sum_{0 \leq l \leq L-1} \alpha_l \psi_l^{\frac{1}{q}}J_l(1) + \omega\sum_{1 \leq l \leq L-1} \alpha_{l-1}\psi_{l-1}^{\frac{1}{q}} \iota_{l-1} \\ &\leq 2\sum_{0 \leq l \leq L-1} \alpha_l \psi_l^{\frac{1}{q}}J_l(1) + \omega\sum_{0 \leq l \leq L-1} \alpha_{l}\psi_{l}^{\frac{1}{q}} \iota_{l}.\end{aligned}$$ Rearranging this inequality, we get $$\sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}} \iota_l \leq \frac{2}{1-\omega} \sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}}J_l(1).$$ We now have $$\begin{aligned} \int_{K({\sigma}-1) < |\tau| \leq 2} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} &\leq \frac{6B^2}{1-\omega}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}} J_l(1) \ll \frac{B^2}{\gamma} \sum_{0 \leq l \leq L-1} \alpha_l\psi_l^{\frac{1}{q}} J_l(1) \\ &\ll \frac{B^2}{\gamma} |A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\sum_{0 \leq l \leq L-1} \alpha_l \int_{\psi_l}^{\psi_{l+1}} \left|\frac{\zeta'(s)}{\zeta(s)}\right|^2 d\tau \\ &\ll \frac{B^2}{\gamma} |A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\int_{K({\sigma}-1) < |\tau| \leq 2} \left(1+\frac{|\tau|}{{\sigma}-1}\right)^{-2\gamma} \left|\frac{\zeta'(s)}{\zeta(s)}\right|^2 d\tau,\end{aligned}$$ the second last estimate following from $\psi_l \leq 2$ for all $l$, and the last being obvious from $\alpha_l \leq 2^{2\gamma} \left(1+2^{l+1}K\right)^{-2\gamma}$. 
Observe that by the Laurent series representation of $\zeta$, $$\begin{aligned} &\int_{K({\sigma}-1) < |\tau| \leq 2} \left(1+\frac{|\tau|}{{\sigma}-1}\right)^{-2\gamma} \left|\frac{\zeta'(s)}{\zeta(s)}\right|^2 d\tau \ll \int_{K({\sigma}-1) < |\tau| \leq 2} \left(1+\frac{|\tau|}{{\sigma}-1}\right)^{-2\gamma} \frac{d\tau}{|s-1|^2} \\ &\leq ({\sigma}-1)^{-1} \int_K^{\infty} (1+u)^{-2\gamma} (1+u^2)^{-1} du \ll ({\sigma}-1)^{-1}\int_K^{\infty} (1+u)^{-2(1+\gamma)} du \\ &\ll ({\sigma}-1)^{-1}(1+K)^{-(1+\gamma)}.\end{aligned}$$ As such, $$\int_{K({\sigma}-1) < |\tau| \leq 2} \left|\frac{G'(s)}{G(s)}\right|^2 \left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} \ll \frac{B^2}{\gamma}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}}({\sigma}-1)^{-1}(1+K)^{-(1+\gamma)}.$$ For the ${\mathcal}{G}$ integral, we use $\left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right| \ll \left(1+\frac{|\tau|}{{\sigma}-1}\right)^{-2\delta}$, so that our preceding arguments for the $G$ integral apply with $2\delta$ here in place of $\gamma$ (of course, without appealing to Lemma \[INTBOUND\] in this case). It then follows that $$\int_{K({\sigma}-1) < |\tau| \leq 2} \left|\frac{{\mathcal}{G}'(s)}{{\mathcal}{G}(s)}\right|^2 \left|\frac{{\mathcal}{G}(s)}{{\mathcal}{G}({\sigma})}\right|^2 \frac{|ds|}{|s|^2} \ll \frac{B^2}{\delta}({\sigma}-1)^{-1}(1+K)^{-(1+2\delta)}.$$ Thus, $$\begin{aligned} I_2 &\ll B^2{\mathcal}{G}({\sigma})^2({\sigma}-1)^{-1}\left(\gamma^{-1}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}}(1+K)^{-(1+\gamma)} + \delta^{-1}|A|^2(1+K)^{-(1+2\delta)}\right).\end{aligned}$$ Combining all of our integral estimates gives $$\begin{aligned} J_{H,1}({\sigma}) &\ll B^2(1+B^2){\mathcal}{G}({\sigma})^2({\sigma}-1)^{-1} (|A|^2\eta^2\max\left\{\delta^{-2},\log^2(1+K)(1+K)^{-2\delta}\right\} + ({\sigma}-1)^{-1}T^{-1} \\ &+ ({\sigma}-1)^{\gamma}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}} + |A|^2({\sigma}-1)^{2\delta/3} + \frac{1}{\gamma} |A|^{\frac{\gamma_0}{2(1+\gamma_0)}}(1+K)^{-(1+\gamma)} + \frac{|A|^2}{\delta} (1+K)^{-(1+2\delta)}),\end{aligned}$$ as claimed. \[REMGEN2\] In the same vein as Remark \[REMGEN\], we may observe that Lemmas \[SHARPH\] and \[SHARPAPP\] are equally valid upon replacing $h(n)$ by $|g(n)|-Xf(n)$, in which case $X = X_t := \exp\left(-\sum_{p \leq t} \frac{f(p)-|g(p)|}{p}\right)$, $|g(n)| \leq f(n)$, $|f(p)| \leq B$ and $\left|\frac{|g(p)|}{f(p)}-1\right| \leq \eta$. To replace Lemma \[HalDecay\] and its corollary, Lemma \[INTBOUND\], putting $F(s) := \sum_{n \geq 1} \frac{f(n)}{n^s}$ we simply use $$\left|\frac{{\mathcal}{G}(s)}{F({\sigma})}\right| \leq \frac{{\mathcal}{G}({\sigma})}{F({\sigma})} \ll X,$$ the last estimate following by arguments similar to those in the proof of Lemma \[SHARPH\]. From Arithmetic to Analysis =========================== In this section we bridge the gap between the results of Section 4 and those of Section 5. Our first lemma in this direction allows us to estimate with precision one of the terms arising in Lemma \[NUM\]. For the definitions of *reasonable* and *non-decreasing* pairs $(g,{\boldsymbol}{E})$, see Definition \[REAS\].
\[WITHR\] There exists $\lambda = \lambda(t) > 0$ such that for all $\kappa \in (0,1)$, $$\int_1^{t^{\kappa}} \frac{M_{|g|}(u)}{u^2 \log^{\lambda} u} \ll_B \left(\int_1^t \frac{M_{|g|}(u)}{u^2} du\right) \kappa^{-\lambda}e^{-\frac{1}{2}S_{\kappa}(t)}\log^{-\lambda} t ,\label{INTRAT}$$ where $S_{\kappa}(t) = \sum_{t^{\kappa} < p \leq t} \frac{g(p)}{p}$.\ In particular, for any $c_j > 0$, and $\gamma_{0,j}$ as defined in Lemma \[HalDecay\] we can take $$\lambda(t) := \frac{1}{\log_2 t}\sum_{1 \leq j \leq m} c_jd_j\frac{\gamma_{0,j}}{1+\gamma_{0,j}}\sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p},$$ where $d_j = 1$ if $(g,{\boldsymbol}{E})$ is non-decreasing, and $d_j := \frac{\delta_j}{B_j}$ otherwise. \[REMWITHR\] Note that, provided that $\log(1/\kappa) < \log_2 t$, which we will show to be true (see Lemma \[EXPONENTS\] below and the remarks following its statement), the right side of is decreasing in $\lambda$. Thus, we may always decrease $\lambda$ if necessary and the bound still holds with this smaller value of $\lambda$.\ We should also remark that it follows from partial summation and the condition $\log P_t = \sum_{p \in S \atop p \leq t} \log p \ll_r r \log t$, that $$\sum_{p \in S \atop s < p \leq t} \frac{1}{p} = \int_s^t d\left\{\sum_{p \in S \atop p \leq u} \log p\right\} \frac{1}{u \log u} \ll_r \frac{1}{s} + \int_s^t \left(\sum_{p \in S \atop p \leq u} \log p\right) \frac{du}{u^2 \log u} \ll_r \frac{1}{s}.$$ Thus, for all intents and purposes, sums over primes from $S$ give a negligible contribution. By partial summation, for any $\lambda > 0$ we have $$\sum_{n \leq u} \frac{|g(n)|}{n\log^{\lambda}(3n)} = \frac{M_{|g|}(u)}{u\log^{\lambda}(3u)} + \int_1^u \left(1+\frac{\lambda}{\log u}\right)\frac{M_{|g|}(u)}{u^2 \log^{\lambda}(3u)}du \geq \int_1^u \frac{M_{|g|}(u)}{u^2 \log^{\lambda}(3u)}du.$$ Also, by Lemmas \[LOGSUM\] and \[CHEAP\], the integral over $[1,t]$ in the statement of the lemma is bounded from below by $P_{|g|}(t)$. Thus, it suffices to estimate $P_{|g|}(t)^{-1}\sum_{n \leq t^{\kappa}} \frac{|g(n)|}{n\log^{\lambda}(3n)}$.\ Set $\lambda$ as in the statement of the lemma (depending on whether or not $\{|g(p)|\}_{p \in E_j}$ is non-decreasing for each $j$). Define $\alpha > 0$ to be such that if $Z := e^{\log^{\alpha} t}$ then $$\frac{1}{2\log_2 t}\sum_{Z < p \leq t} \frac{|g(p)|}{p} = \lambda.$$ Take $L := {\left\lfloor}\frac{\log \left(\kappa \log^{1-\alpha} t\right)}{\log 2} {\right\rfloor}$, and if $L \geq 1$ and $0 \leq k \leq L-1$ set $D_k := (Z^{2^k},Z^{2^{k+1}}]$ (if $L = 0$ then the sum over $k$ below is empty). We can suppose that the expression in the floor function brackets defining $L$ is an integer by perturbing $t$ or $\kappa$ slightly (which will not change the order of magnitude of the bounds). Let $\psi_k := \sum_{n \leq Z^{2^{k}}} \frac{|g(n)|}{n}$ (not to be confused with the $\psi_l$ in Lemma \[SHARPAPP\]). 
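We note in passing (this is implicit in the choices just made) that, once the quantity defining $L$ is an integer, $$Z^{2^L} = \exp\left(2^L \log^{\alpha} t\right) = \exp\left(\kappa\log^{1-\alpha}t\cdot\log^{\alpha}t\right) = t^{\kappa},$$ so the blocks $D_0,\dots,D_{L-1}$ exactly cover the range $(Z,t^{\kappa}]$ arising in the decomposition below.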
Thus, we have $$\begin{aligned} &P_{|g|}(t)^{-1}\sum_{n \leq t} \frac{|g(n)|}{n\log^{\lambda}(3n)} \leq P_{|g|}(t)^{-1}\sum_{n \leq Z} \frac{|g(n)|}{n} + P_{|g|}(t)^{-1}\sum_{0 \leq k \leq L-1} \sum_{n \in D_k} \frac{|g(n)|}{n\log^{\lambda}(3n)} \\ &\leq P_{|g|}(t)^{-1} \prod_{p \leq Z} \left(1+\frac{|g(p)|}{p}\right) + P_{|g|}(t)^{-1}\log^{-\lambda} Z \sum_{0 \leq k \leq L-1} 2^{-k\lambda} (\psi_{k+1}-\psi_k) \\ &\leq P_{|g|}(t)^{-1} \prod_{p \leq Z} \left(1+\frac{|g(p)|}{p}\right) + P_{|g|}(t)^{-1}(1-2^{-\lambda})\log^{-\lambda} Z \sum_{0 \leq k \leq L-1} 2^{-k\lambda} \psi_{k+1}+ 2^{-\lambda L}\log^{-\lambda}Ze^{-S_k(t)},\end{aligned}$$ where the last estimate follows by partial summation, noting that in the last term on the right, $L_{|g|}(t^{\kappa})$ is at most $P_{|g|}(t^{\kappa})$, and for $t^{\kappa}$ sufficiently large, $$\prod_{t^{\kappa} < p \leq t} \left(1+\frac{|g(p)|}{p}\right)^{-1} = \exp\left(-\sum_{t^{\kappa} < p \leq t} \log\left(1+\frac{|g(p)|}{p}\right)\right) \ll_B e^{-S_{\kappa}(t)}.$$ Since $1-2^{-\lambda} \asymp \lambda$, it follows that $$\begin{aligned} P_{|g|}(t)^{-1}\sum_{n \leq t} \frac{|g(n)|}{n\log^{\lambda}(3n)} &\ll e^{-\frac{1}{2}S_{\kappa}(t)} e^{-\frac{1}{2}\sum_{Z < p \leq t} \frac{|g(p)|}{p}}\left(1 + \lambda \kappa^{-\lambda} \sum_{0\leq k \leq L-1} \frac{\kappa^{\lambda}}{2^{k\lambda}\log^{\lambda} Z} \exp\left(\frac{1}{2} \sum_{Z < p \leq Z^{2^{k+1}}} \frac{|g(p)|}{p}\right)\right) \\ &+ 2^{-\lambda L}e^{-S_{\kappa}(t)}\log^{-\lambda }Z. \label{DYAD}\end{aligned}$$ By choice, the first factor is precisely $e^{-\frac{1}{2}S_{\kappa}(t)}\log^{-\lambda} t$. Now, observe that for each $0 \leq j \leq L-1$, there is some $\xi_j \in [\delta,B]$ such that $\sum_{Z^{2^{j}} < p \leq Z^{2^{j+1}}} \frac{|g(p)|}{p} = \xi_j\log 2 + O_B\left(\frac{1}{2^j \log Z}\right)$. In particular, we can write $$\begin{aligned} \lambda &= \frac{1}{2\log_2 t} \sum_{Z < p \leq t} \frac{|g(p)|}{p} =\frac{\log 2}{2\log_2 t}\left(\sum_{0 \leq l \leq L-1} \xi_l +\frac{1}{\log 2} S_{\kappa}(t)+ O_B\left(\frac{1}{\log Z}\right)\right).\end{aligned}$$ Additionally, by definition we have $$\log^{-\lambda} Z = \exp\left(-\frac{1}{2}\alpha \sum_{Z<p \leq t} \frac{|g(p)|}{p}\right) \ll \exp\left(-\frac{1}{2}\alpha \log 2 \sum_{0 \leq l \leq L-1} \xi_l\right).$$ Hence, it follows that each term in the sum in takes the form $$2^{-k\lambda} \kappa^{\lambda}\exp\left(\frac{1}{2}\sum_{Z < p \leq Z^{2^{k+1}}} \frac{|g(p)|}{p}\right)\log^{-\lambda} Z \ll 2^{-\frac{1}{2}(k+1){\epsilon}_k},$$ where, for $\Sigma_j := \sum_{0 \leq l \leq j} \xi_l$, we have set $$\begin{aligned} {\epsilon}_k &:= \lambda\left( 1+ \frac{2}{(k+1)\log 2}\log(1/\kappa)\right) + \frac{1}{(k+1)\log 2}\left(\log_2 Z - \log 2\sum_{0 \leq l \leq k} \xi_l\right) \nonumber\\ &= \frac{1}{k+1}\left(\frac{\log 2}{\log_2 t}\left((k+1)\Sigma_{L-1} + \frac{S_{\kappa}(t)}{\log 2} + \frac{2\log(1/\kappa)}{\log 2}\left(\Sigma_{L-1} + S_{\kappa}(t)\right)\right) + \alpha\Sigma_{L-1} - \Sigma_k\right). 
\label{CANCEL}\end{aligned}$$ By choice, we have $$\frac{\log 2}{\log_2 t} = \frac{1-\alpha}{L}\left(1-\frac{\log(1/\kappa)}{\log_2 t} \right),$$ so that, as $\frac{1-\alpha}{L}\frac{(k+1)\log(1/\kappa)}{\log_2 t}$ is canceled by the $\log(1/\kappa)$ contribution in uniformly in $k \leq L-1$, we have $$\begin{aligned} {\epsilon}_k &\geq \frac{1}{k+1}\left(\frac{1-\alpha}{L}\left((k+1)\Sigma_{L-1} + \frac{S_{\kappa}(t)}{\log 2}\right) + \alpha\Sigma_{L-1} - \Sigma_k\right)\\ &\geq \frac{1}{k+1}\left(\Sigma_{L-1}\left(\frac{k+1}{L}(1-\alpha) + \alpha\right) - \Sigma_k\right) = \frac{1}{k+1}\Sigma_{L-1}\left(\frac{k+1}{L} + \frac{L-k-1}{L}\alpha-\frac{\Sigma_k}{\Sigma_{L-1}}\right).\end{aligned}$$ Consider when $(g,{\boldsymbol}{E})$ is non-decreasing for each $j$. We claim that $\frac{\Sigma_k}{\Sigma_{L}} \leq \frac{k+1}{L}$. To see this, write $\xi_{l,j}$ to denote the contribution to $\xi_l$ coming from primes in $E_j$. Then $$\frac{1}{k+1}\sum_{0 \leq l \leq k} \xi_{l,j} \leq \frac{1}{L}\sum_{0 \leq l \leq L-1} \xi_{l,j}.$$ It follows that $$\frac{1}{k+1}\Sigma_k = \sum_{1 \leq j \leq m} \left(\frac{1}{k+1}\sum_{0\leq l \leq k} \xi_{l,j}\right) \leq \frac{1}{L}\sum_{1 \leq j \leq m} \sum_{0 \leq l \leq L-1} \xi_{l,j} = \frac{1}{L}\Sigma_L.$$ Hence, we have $(k+1){\epsilon}_k \geq (L-k-1)\alpha\Sigma_{L-1}/L =: (L-k-1)\tilde{\lambda}$. Inserting this into the geometric series in , reindexing and summing, we get $$\begin{aligned} P_{|g|}(t)^{-1}\sum_{n \leq t} \frac{|g(n)|}{n\log^{\lambda}(3n)}&\ll_B e^{-\frac{1}{2}S_{\kappa}(t)}\log^{-\lambda} t\left(1 + \lambda\kappa^{-\lambda}\sum_{0\leq k \leq L-1} 2^{-k\tilde{\lambda}}\right) + 2^{-\lambda L}e^{-S_{\kappa}(t)}\log^{-\lambda }Z \\ &\ll e^{-\frac{1}{2}S_{\kappa}(t)}\log^{-\lambda} t\left(1 + \frac{\lambda}{\tilde{\lambda}}\kappa^{-\lambda}\right) + \kappa^{-\lambda}e^{-S_{\kappa}(t)}\log^{-\lambda} t,\end{aligned}$$ where, in the last line we used the definition of $L$. Now, we saw earlier that $$\lambda = \frac{1-\alpha}{L}\Sigma_{L-1}\left(1 + \frac{S_{\kappa}(t)}{\Sigma_{L-1}\log 2}\right) = \frac{1-\alpha}{\alpha}\tilde{\lambda}\left(1+ \frac{S_{\kappa}(t)}{\Sigma_{L-1}\log 2}\right).$$ Hence, $\lambda/\tilde{\lambda} \ll \frac{1-\alpha}{\alpha}$, and we claim that $\alpha > \frac{1}{2}$. Indeed, by monotonicity, $$\sum_{p \leq e^{\sqrt{\log t}} \atop p \in E_j} \frac{|g(p)|}{p} \leq \sum_{e^{\sqrt{\log t}} < p \leq t \atop p \in E_j} \frac{|g(p)|}{p},$$ and since $\lambda < \frac{1}{4}$ as can be checked by the definition given, if $\alpha = \frac{1}{2}$ then $$\lambda \log_2 t = \sum_{1 \leq j \leq m} \frac{c_j\gamma_{0,j}}{1+\gamma_{0,j}} \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} < \frac{1}{4}\sum_{1 \leq j \leq m} \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} \leq \frac{1}{2} \sum_{Z < p \leq t} \frac{|g(p)|}{p} = \lambda \log_2 t,$$ a contradiction. Since the sum on the interval $[Z,t]$ decreases monotonically as $\alpha$ increases, $\alpha > \frac{1}{2}$, as required. This completes the proof of the claim in this case.\ If $(g,{\boldsymbol}{E})$ is not non-decreasing for some $j$, bound $|g(p)|$ from below trivially by the function $g_0$ defined on primes such that $g_0(p) := \delta_j$ for each $p \in E_j$ and each $j$. 
Then applying the foregoing analysis with $|g(p)|$ replaced by $g_0(p)$, we can replace $\lambda$ by $$\sum_{1 \leq j \leq m} \frac{\gamma_{0,j}}{1+\gamma_{0,j}}\delta_jE_j(t) +O_B(1) \geq \sum_{1 \leq j \leq m} \frac{\delta_j}{B_j}\frac{\gamma_{0,j}}{1+\gamma_{0,j}} \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} + O_B(1)$$ (see Remark \[REMWITHR\]). This completes the proof. \[EXPONENTS\] Let $\lambda = \lambda(t) > 0$ be defined as in Lemma \[WITHR\].\ i) If $(g,{\boldsymbol}{E})$ is reasonable then $$\lambda = \begin{cases} \frac{1}{\log_2 t} \sum_{1 \leq j \leq m} \min\left\{\frac{1}{4m^2}, \frac{\gamma_{0,j}}{1+\gamma_{0,j}}\right\}\sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} &\text{ if $(g,{\boldsymbol}{E})$ is non-decreasing} \\ \frac{1}{\log_2 t} \left(\min_{1 \leq j \leq m}\frac{\delta_j}{B_j}\right)\sum_{1 \leq j \leq m} \min\left\{\frac{1}{4m^2},\frac{\gamma_{0,j}}{1+\gamma_{0,j}}\right\}\sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} & \text{ otherwise.}\end{cases}$$ ii) If $(g,{\boldsymbol}{E})$ is not reasonable then $$\lambda = \begin{cases} \frac{1}{\log_2 t} \frac{\delta}{B}\sum_{1 \leq j \leq m} \frac{\gamma_{0,j}}{1+\gamma_{0,j}} \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} &\text{ if $(g,{\boldsymbol}{E})$ is non-decreasing} \\ \frac{\delta}{\log_2 t}\sum_{1 \leq j \leq m} \frac{\gamma_{0,j}}{1+\gamma_{0,j}}E_j(t)& \text{ otherwise.}\end{cases}$$ In each of these cases, for any constant $C > 0$ we may choose $C' > 0$ a constant depending at most on $B$ and $C$ such that if $\kappa =\kappa(C) := \left(\frac{\delta}{2C'B}\right)^{\frac{4}{\delta}}$ then $$C \frac{B}{\delta}\kappa^{-\lambda}e^{-\frac{1}{2}S_{\kappa}(t)} \leq \frac{1}{2}.$$ \[REMEXPONENTS\] Note that in the case where ${\boldsymbol}{E}$ is good, $(g,{\boldsymbol}{E})$ is necessarily reasonable. We prove the result in greater generality in part to motivate the utility of the ``good'' condition.\ Choosing $\delta_j \gg \log_2^{-1+{\epsilon}} t$ for ${\epsilon}> 0$ fixed, $1/\kappa$ is at most $e^{\log_2^{1-{\epsilon}} t}$ (in the worst case where $B$ is much larger than $\delta_j$), so the assumed inequality $\log(1/\kappa) < \log_2 t$ holds easily here. Suppose first that $(g,{\boldsymbol}{E})$ is reasonable. Thus, choose $1 \leq j_0 \leq m$ such that $$\begin{aligned} E_{j_0}(t)-E_{j_0}\left(t^{\kappa}\right) &\geq \frac{1}{m}\log(1/\kappa), \\ \sum_{p \leq t \atop p \in E_{j_0}} \frac{|g(p)|}{p} &\geq \frac{1}{m}\sum_{p \leq t} \frac{|g(p)|}{p}.\end{aligned}$$ If $(g,{\boldsymbol}{E})$ is non-decreasing then clearly $$\begin{aligned} &E_j(t)\sum_{t^{\kappa} < p \leq t \atop p \in E_j} \frac{|g(p)|}{p} - (E_j(t)-E_j(t^{\kappa})) \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} = \sum_{t^{\kappa} < p \leq t \atop p \in E_j} \sum_{q \leq t \atop q \in E_j} \frac{1}{pq}\left(|g(p)|-|g(q)|\right) \\ &= \sum_{t^{\kappa} < p,q \leq t \atop p,q \in E_j} \frac{1}{pq}\left(|g(p)|-|g(q)|\right) + \sum_{t^{\kappa} < p \leq t \atop p \in E_j} \sum_{q \leq t^{\kappa}\atop q \in E_j } \frac{1}{pq}\left(|g(p)|-|g(q)|\right).\end{aligned}$$ By symmetry, the first sum is 0, while by monotonicity, the second sum is non-negative. Hence, it follows that $$\sum_{t^{\kappa} < p \leq t \atop p \in E_j} \frac{|g(p)|}{p} \geq \frac{E_j(t)-E_j(t^{\kappa})}{E_j(t)} \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p}.
\label{MONO1}$$ Applying this for each $j$ and using the ``reasonable'' property, we have $$\begin{aligned} \sum_{t^{\kappa} < p \leq t} \frac{|g(p)|}{p} &\geq \sum_{t^{\kappa} < p \leq t \atop p \in E_{j_0}} \frac{|g(p)|}{p} \geq \frac{E_{j_0}(t)-E_{j_0}(t^{\kappa})}{E_{j_0}(t)} \sum_{p \leq t \atop p \in E_{j_0}} \frac{|g(p)|}{p} \geq \frac{1}{m} \frac{\log(1/\kappa)}{E_{j_0}(t)}\sum_{p \leq t \atop p \in E_{j_0}} \frac{|g(p)|}{p} +o(1) \nonumber\\ &\geq \frac{1}{m^2} \frac{\log(1/\kappa)}{\log_2 t} \sum_{p \leq t} \frac{|g(p)|}{p} + O_B(1), \label{MONO2}\end{aligned}$$ upon using the trivial bound $E_j(t) \leq \log_2 t + O(1)$. Thus, setting $c_j :=\min\left\{\frac{1}{4m^2},\frac{\gamma_{0,j}}{1+\gamma_{0,j}}\right\}$ in the definition of $\lambda$, we have $$\kappa^{-\lambda} e^{-\frac{1}{2}S_{\kappa}(t)} = \exp\left(-\frac{1}{2}\sum_{t^{\kappa} < p \leq t} \frac{|g(p)|}{p} + \frac{\log(1/\kappa)}{\log_2 t} \sum_{1 \leq j \leq m} c_j\sum_{p \leq t \atop p \in E_j}\frac{|g(p)|}{p}\right) \ll_B e^{-\frac{1}{4}S_{\kappa}(t)}. \label{KAPDEF}$$ If $(g,{\boldsymbol}{E})$ is not non-decreasing then we instead have (using our observation regarding $S$ from Remark \[REMWITHR\]) $$\begin{aligned} \sum_{t^{\kappa} < p \leq t} \frac{|g(p)|}{p} &\geq \sum_{1 \leq j \leq m} \delta_j((E_j{\backslash}S)(t)-(E_j{\backslash}S)(t^{\kappa})) \geq \frac{1}{m}\delta_{j_0}\log(1/\kappa) +o(\delta_{j_0})\\ &\geq \frac{1}{m}\frac{\delta_{j_0}}{B_{j_0}} \frac{\log(1/\kappa)}{E_{j_0}(t)}\sum_{p \leq t \atop p \in E_{j_0}} \frac{|g(p)|}{p} + o(\delta_{j_0}) \geq \left(\min_{1 \leq j \leq m} \frac{\delta_j}{B_j}\right)\frac{1}{m^2\log_2 t}\sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p} + O_B(1),\end{aligned}$$ and holds again with $c_j := \frac{\delta}{B}\min\left\{\frac{1}{4m^2},\frac{\gamma_{0,j}}{1+\gamma_{0,j}}\right\}$ in this case.\ Suppose now that $(g,{\boldsymbol}{E})$ is not reasonable. Again, if $(g,{\boldsymbol}{E})$ is non-decreasing then still applies for each $j$, and instead of we simply note that if $c = \frac{\delta}{B}$ in the definition of $\lambda$ then we have $$\sum_{t^{\kappa} < p \leq t} \frac{|g(p)|}{p} \geq \delta \log\left(1/\kappa\right) +o(\delta)= Bc\log\left(1/\kappa\right) +o(\delta)\geq \frac{c\log(1/\kappa)}{\log_2 t}\sum_{p \leq t} \frac{|g(p)|}{p} +O_B(1) \geq 2\lambda \log\left(1/\kappa\right),$$ the last inequality following because $\frac{\gamma_{0,j}}{1+\gamma_{0,j}} \leq \frac{1}{2}$ for each $j$. With this value of $c$, we may take $\kappa$ as before.\ Finally, suppose $(g,{\boldsymbol}{E})$ is not reasonable or non-decreasing. Then we simply bound $|g(p)|$ in the definition of $\lambda$ from below by $\delta$.\ In each of these scenarios we may select $\kappa$ such that, with $C' > 0$ a constant larger than $C$ depending only on $B$, we have $$C'\frac{B}{\delta}e^{-\frac{\delta}{4}\log(1/\kappa)}= \frac{1}{2},$$ i.e., $\kappa = \left(\frac{\delta}{2C' B}\right)^{\frac{4}{\delta}}$. This completes the proof. The above estimates culminate in the following.
\[STARTER\] For $t$ sufficiently large there exists a $C = C(B) > 0$, a function $\lambda: [1,x] {\rightarrow}(0,\infty)$ and $\kappa = \kappa(C)$ as in Lemma \[EXPONENTS\] such that $$\frac{|M_h(t)|}{M_{|g|}(t)} - \frac{R_h(\lambda)}{2\log^{\lambda} t} \leq C_1\left(\frac{B}{\delta}\frac{P_t}{\phi(P_t)}\left(\frac{1}{(1-\kappa) \log^{\frac{1}{2} }t}\left({\mathcal}{G}({\sigma})^{-2}J_{H,1}({\sigma})\right)^{\frac{1}{2}} + |A|\mu\right)+ R_h(\lambda)\frac{B\log_2 t}{\kappa \log^{1+\lambda} t} +\frac{1}{\log^2 t}\right). $$ Recall that $\mu := \max_p |\theta_p|$. Take $\lambda$ as in Lemma \[WITHR\]. Applying Lemmas \[NUM\] and \[DENOM\] combined with Lemma \[CHEAP\], $$\begin{aligned} \frac{|M_h(t)|}{M_{|g|}(t)} &\ll_B \frac{B}{\delta}\frac{P_t}{\phi(P_t)}\left(\frac{{\mathcal}{G}({\sigma})^{-1}}{\log t}\int_{t^{\kappa}}^{t} \frac{|M_h(u)|\log u}{u^{2}} du+R_h(\lambda)\frac{\int_1^{t^{\kappa}} \frac{M_{|g|}(u)}{u^2\log^{\lambda}(3u)} du}{\int_{1}^t \frac{M_{|g|}(u)}{u^2}du}+ A\mu\right)\\ &+ R_h(\lambda)\frac{B\log_2 t}{\kappa \log^{1+\lambda} t} + \frac{1}{\log^2 t}. \end{aligned}$$ Now, using the definition of $\Delta_h(u)$, which was introduced in Section 4, we have $$\begin{aligned} &\int_{t^{\kappa}}^{t} \frac{|M_h(u)|\log u}{u^{2}} du \leq \int_1^{t} \frac{|N_h(u)|}{u^{2}} du+ \int_{t^{\kappa}}^{t}\frac{|\Delta_{h}(u)|}{u^{2}} du, \end{aligned}$$ whence we have the estimate $$\frac{|M_h(t)|}{M_{|g|}(t)} \ll_B \frac{B}{\delta}\frac{P_t}{\phi(P_t)}\left(\frac{1}{(1-\kappa)\log t}{\mathcal}{G}({\sigma})^{-1}(M_1 + M_2) +M_3 + A\mu\right) + R_h(\lambda)\frac{B\log_2 t}{\kappa \log^{1+\lambda} t}$$ where we have defined $M_1 := \int_1^t \frac{|N_h(u)|}{u^2}du$ and $$\begin{aligned} M_2 &:= \left(\int_{1}^{t} \frac{M_{|g|}(u)}{u^{2}}du\right)^{-1}\int_{t^{\kappa}}^t \frac{|\Delta_h(u)|}{u^2} du, \\ M_3 &:= R_h(\lambda)\left(\int_1^t \frac{M_{|g|}(u)}{u^2}du\right)^{-1}\left(\int_1^{t^{\kappa}} \frac{M_{|g|}(u)}{u^2\log^{\lambda}(3u)} du\right).\end{aligned}$$ By Lemma \[WITHR\], $M_3 \ll R_h(\lambda)\kappa^{-\lambda}e^{-\frac{1}{2}S_{\kappa}(t)}\log^{-\lambda} t$. Next, using the pointwise bound for $\Delta_h(u)$, we have $$\begin{aligned} M_2\left(\int_{1}^{t} \frac{M_{|g|}(u)}{u^{2}}du\right) &\ll BR_h(\lambda)\int_{t^{\kappa}}^{t} \frac{M_{|g|}(u)\log_2 u}{u^{2}\log^{1+\lambda}u}du \leq BR_h(\lambda)\frac{\log_2 t}{\kappa \log^{1+\lambda} t}\left(\int_{1}^{t} \frac{M_{|g|}(u)}{u^{2}}du\right). \end{aligned}$$ Therefore, $M_2 \ll BR_h(\lambda)\frac{\log_2 t}{\kappa \log^{1+\lambda} t}$. Consider now $M_1$. Note that $t^{{\sigma}-1} = e$. 
Applying Cauchy-Schwarz and then invoking Parseval’s theorem, we get $$\begin{aligned} M_1 &\leq \left(\int_1^{t} \frac{du}{u}\right)^{\frac{1}{2}}\left(\int_1^{t} \frac{|N_h(u)|^2}{u^{3}}\,du\right)^{\frac{1}{2}} \leq \log^{\frac{1}{2}} t\left(e^2\int_1^{\infty} \frac{|N_{h}(u)|^2}{u^{1+2{\sigma}}} du\right)^{\frac{1}{2}} = e\log^{\frac{1}{2}} t \left(\int_0^{\infty} |N_{h}(e^v)|^2e^{-2v{\sigma}} dv\right)^{\frac{1}{2}} \\ &= 2\pi e \log^{\frac{1}{2}} t \left(\int_{({\sigma})} |H'(s)|^2\frac{|ds|}{|s|^2}\right)^{\frac{1}{2}} \ll J_{H,1}({\sigma})^{\frac{1}{2}}\log^{\frac{1}{2}} t.\end{aligned}$$ Combining these estimates, we can find some constant $C_1$ depending at most on $B$ such that $$\frac{|M_h(t)|}{M_{|g|}(t)} \leq C_1\left(\frac{B}{\delta}\frac{P_t}{\phi(P_t)}\left(\frac{\left({\mathcal}{G}({\sigma})^{-2}J_{H,1}({\sigma})\right)^{\frac{1}{2}}}{(1-\kappa)\log^{\frac{1}{2}} t} + |A|\mu\right)+ R_h(\lambda)\left(\kappa^{-\lambda}e^{-\frac{1}{2}S_{\kappa}(t)}\log^{-\lambda} t + B\frac{\log_2 t}{\kappa \log^{1+\lambda} t}\right) + \frac{1}{\log^2 t}\right).$$ We now choose $\kappa > 0$ as in Lemma \[EXPONENTS\] for $C = C_1$. In sum, we have $$\frac{|M_h(t)|}{M_{|g|}(t)} - \frac{1}{2}\frac{R_h(\lambda)}{\log^{\lambda} t} \leq C_1\left(\frac{B}{\delta}\frac{P_t}{\phi(P_t)}\left(\frac{1}{\log^{\frac{1}{2}} t}\left({\mathcal}{G}({\sigma})^{-2}J_{H,1}({\sigma})\right)^{\frac{1}{2}} + |A|\mu\right) + R_h(\lambda)\frac{\log_2 t}{\kappa \log^{1+\lambda} t} + \frac{1}{\log^2 t}\right).$$ This gives the claim. An immediate corollary of the above proposition is an upper bound for $R_h(\lambda)$ upon taking maxima. \[CORSTARTER\] For each $2 \leq t \leq x$ let ${\sigma}_t := 1+\frac{1}{\log t}$. Suppose $C > 0$ is sufficiently small, $\kappa = \kappa(C)$ and $\lambda$ are as in Lemma \[WITHR\] and Lemma \[EXPONENTS\]. Then there exists a constant $C_2 = C_2(\lambda,\delta) > 0$ and a $t_0 \geq 2$ sufficiently large such that $$R_h(\lambda) \leq \max\left\{1+|A|,C_2\max_{t_0 \leq t \leq x} \left\{\frac{B}{\delta}\frac{P_t}{\phi(P_t)}\log^{\lambda} t\left(\frac{1}{\log^{\frac{1}{2}} t}\left({\mathcal}{G}({\sigma})^{-2}J_{H,1}({\sigma}_t)\right)^{\frac{1}{2}}+ |A|\mu\right) + \frac{1}{\log^2 t}\right\}\right\}. $$ We must choose $t_0$ sufficiently large so that every $t \geq t_0$ satisfies the conclusion of Proposition \[STARTER\]. For the remaining values $2 \leq t < t_0$ we only obtain $O(1)$ terms, and since the triangle inequality gives $\frac{|M_h(t)|}{M_{|g|}(t)} \leq 1+|A|$, the first bound follows.\ Otherwise, for $t \geq t_0$, the claim follows immediately upon multiplying each side of the estimate in Proposition \[STARTER\] by $\log^{\lambda} t$, taking the maximum of both sides and then coalescing all of the terms in $R_h(\lambda)$ together on the left side. This gives a fixed positive multiple of $R_h(\lambda)$ when $C$ is sufficiently small. We now turn to the proof of Theorem \[HalGen\]. Assume first that $A = 0$. 
Then $H = G$, and by Lemmas \[INTBOUND\] and \[MAIN1\], we have $$\begin{aligned} {\mathcal}{G}({\sigma})^{-2}J_{G,1}({\sigma}_t) &\ll_{m,B,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} B^2({\sigma}_t-1)^{-1}\prod_{1 \leq j \leq m} \left|\frac{G_j({\sigma}_t)}{{\mathcal}{G}_j({\sigma}_t)}\right|^{\frac{\gamma_{0,j}}{1+\gamma_{0,j}}} +1 \\ &\ll_B B^2({\sigma}_t-1)^{-1}\exp\left(-\sum_{1 \leq j \leq m} \frac{\gamma_{0,j}}{1+\gamma_{0,j}} \sum_{p\in E_j} \frac{|g(p)|-\text{Re}(g(p))}{p^{{\sigma}_t}}\right) + 1.\end{aligned}$$ By Corollary \[CORSTARTER\], we need only show that if ${\sigma}_t := 1+\frac{1}{\log t}$ then $$\label{INCR} h(t) := \frac{B^2}{\delta}\frac{P_t}{\phi(P_t)}\prod_{1 \leq j \leq m} \left|\frac{G_j({\sigma}_t)}{{\mathcal}{G}_j({\sigma}_t)}\right|^{c_j'\frac{\gamma_{0,j}}{2(1+\gamma_{0,j})}}\log^{\lambda} t $$ where $c_j' >0$, is maximized when $t = x$, after which the theorem will immediately follow. Indeed, if the function is maximized at $t = x$ then we have $$\begin{aligned} \frac{|M_g(x)|}{M_{|g|}(x)}\log^{\lambda(x)} x = R_g(\lambda) \ll_B \frac{B^2}{\delta}\frac{P_x}{\phi(P_x)}\exp\left(-\sum_{1 \leq j \leq m} \frac{c_j'\gamma_{0,j}}{1+\gamma_{0,j}} \sum_{p\in E_j} \frac{|g(p)|-\text{Re}(g(p))}{p^{{\sigma}_x}}\right)\log^{\lambda(x)} x,\end{aligned}$$ in case $g \in {\mathcal}{C}_b$, and a similar argument holds when $g \in {\mathcal}{C}{\backslash}{\mathcal}{C}_b$. Now, implies that $\sum_{p > x} |g(p)|\left(\frac{1}{p^{{\sigma}_x}} + \frac{1}{p^{{\sigma}'}}\right) \ll_B 1$, and $$\begin{aligned} \exp\left(\sum_{p \leq x} \left(\frac{1}{p^{{\sigma}}}-\frac{1}{p}\right)\right) &= \exp\left(\sum_{p \leq x} \frac{1}{p}\left(e^{-({\sigma}-1)\log p} - 1\right)\right) \leq \exp\left(-({\sigma}-1)\sum_{p \leq x} \frac{\log p}{p}\right) \ll 1.\end{aligned}$$ Thus, $$\label{MAX} \frac{|M_g(x)|}{M_{|g|}(x)}\ll_B \frac{B^2}{\delta}\frac{P_x}{\phi(P_x)}\exp\left(-\sum_{1 \leq j \leq m} c_j'\frac{\gamma_{0,j}}{2(1+\gamma_{0,j})} \sum_{p\leq x \atop p \in E_j} \frac{|g(p)|-\text{Re}(g(p))}{p}\right).$$ Now, $\frac{B^2}{\delta}$ is non-decreasing in $t$, and thus to show that $t = x$ maximizes $h(t)$, it suffices to show that the remaining factors defining it are maximal at $t = x$. Of course, the maximal value of $\frac{P_t}{\phi(P_t)}$ is $O\left(\log_2 t\right)$ (see, for instance, Theorem 4 in I.5 of [@Ten2]). We consider two cases, according to whether $g \in {\mathcal}{C}_b$ or not. Suppose first that $g \in {\mathcal}{C}_b$. 
Since $\log^{\lambda(t)} t = \exp\left(\sum_{1 \leq j \leq m} \frac{c_j\gamma_{0,j}}{1+\gamma_{0,j}}\sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p}\right)$, where $c_j > 0$, we see that it suffices that $c_j' \leq c_j$, for in this case, $$\begin{aligned} &\log^{\lambda(t)}t\exp\left(-\sum_{1 \leq j \leq m} \frac{c_j'\gamma_{0,j}}{1+\gamma_{0,j}} \sum_{p \leq t \atop p \in E_j} \frac{g(p)-\text{Re}(g(p))}{p}\right) \\ &\geq \exp\left(\sum_{1 \leq j \leq m}\frac{\gamma_{0,j}}{2(1+\gamma_{0,j})}\left((2c_j-c_j') \sum_{p \leq t \atop p \in E_j} \frac{g(p)}{p} +\sum_{p \leq t \atop p \in E_j} \frac{|\text{Re}(g(p))|}{p}\right)\right)\\ &\geq \exp\left(\sum_{1 \leq j \leq m}c_j'\frac{\gamma_{0,j}}{2(1+\gamma_{0,j})}\sum_{p \leq t \atop p \in E_j} \frac{|g(p)|+\text{Re}(g(p))}{p}\right)\end{aligned}$$ uniformly in $t_0 \leq t \leq x$, and the expression in the exponential is non-decreasing in $t$ since $|g(p)|+\text{Re}(g(p)) \geq 0$ for all $p$, provided we replace $c_j$ by $\frac{1}{2}c_j$, if necessary, to account for the $\log_2 t$ factor from $\frac{P_t}{\phi(P_t)}$. Thus, depending on the choice of $c_j$ emerging from Lemma \[EXPONENTS\], the right side of with $t$ in place of $x$ is largest when $t = x$ when $c_j' := \frac{1}{2}c_j$, and the proof of the theorem is complete in this case.\ On the other hand, if $g \in {\mathcal}{C} {\backslash}{\mathcal}{C}_b$ by Lemma \[EASYHalDecay\] we have $$\max_{|\tau| \leq ({\sigma}-1)^{-D}}\left|\frac{G(s)}{{\mathcal}{G}({\sigma})}\right| \ll_B \exp\left(-\sum_{1 \leq j \leq m} B_j\left(\rho_{E_j}(t;\tilde{g},T) + \sum_{p \in E_j \atop p \leq t} \frac{1-|\tilde{g}(p)|}{p}\right)\right).$$ Inserting this instead into the estimate from Lemma \[MAIN1\] and then applying Corollary \[CORSTARTER\] as above, we need to show that $$\begin{aligned} &\lambda(t)\log_2 t +B_j\left(\sum_{p \in E_j \atop p \leq t} \frac{1}{p} - \rho_{E_j}(t;\tilde{g},T)\right) - \sum_{p \in E_j \atop p \leq t} \frac{|g(p)|}{p} \\ &= \sum_{p \in E_j \atop p \leq t} (c_j-1)\sum_{p \in E_j \atop p \leq t} \frac{|g(p)|}{p} + B_j\left(\sum_{p \in E_j \atop p \leq t} \frac{1}{p} - \rho_{E_j}(t;\tilde{g},T)\right)\end{aligned}$$ is increasing in $t$. Of course, $2\sum_{p \in E_j \atop p \leq t}\frac{1}{p}-\rho_{E_j}(t;\tilde{g},T)$ is non-decreasing in $t$ by definition, and thus, picking $c_j = B_j+1$ proves the claim in this case as well. In this case, $\mu = \max_p |\theta_p| < \eta$. Applying Corollary \[CORSTARTER\] with $$A := \exp\left(-\sum_{p \leq t} \frac{|g(p)|-g(p)}{p}\right)$$ instead, in conjunction with Lemma \[SHARPAPP\] gives, in the non-trivial case, $$R_h(\lambda) \ll_B \max_{t_0 \leq t \leq x} \left\{\frac{B}{\delta}\log^{\lambda} t\left(\frac{1}{\log^{\frac{1}{2}} t}\left({\mathcal}{G}({\sigma})^{-2}J_{H,1}({\sigma}_t)\right)^{\frac{1}{2}}+ A\eta\right) + \frac{1}{\log^2 t}\right\},$$ where, for $T = ({\sigma}-1)^D$, $D > 2$ and $K > 0$ satisfying $K({\sigma}-1) < 1$ and $B\eta \log(1+K) < 1$, $$\begin{aligned} {\mathcal}{G}({\sigma}_t)^{-2}J_{H,1}({\sigma}_t) &\leq B^2({\sigma}_t-1)^{-1}(\eta^2|A|^2\min\{\delta^{-2},\log^2(1+K)(1+K)^{-2\delta}\} + ({\sigma}_t-1)^{-2}T^{-1} \\ &+ ({\sigma}_t-1)^{\gamma}|A|^{\frac{\gamma_0}{2(1+\gamma_0)}} + |A|^2({\sigma}_t-1)^{2\delta/3} + \frac{1}{\gamma} (1+K)^{-(1+\gamma)} + \frac{|A|^2}{\delta} (1+K)^{-(1+\delta)}).\end{aligned}$$ If $B\eta \leq \delta < \eta^{\frac{1}{2}}$, take $K := e^{\frac{1}{\delta}}-1$. Otherwise, take $K := e^{\frac{1}{\sqrt{\eta}}}-1$. 
Set $d_1 := \delta^{-1}\sqrt{\eta}$ in the first case and $d_1 := 1$ in the second case. Then $$\begin{aligned} {\mathcal}{G}({\sigma}_t)^{-2}J_{H,1}({\sigma}_t) &\ll_{B,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \frac{B^2}{{\sigma}_t-1}\left(d_1^2\eta|A|^2+ ({\sigma}_t-1)^{2\delta/3} + \delta^{-1}e^{-\frac{2d_1}{\sqrt{\eta}}} + |A|^{\frac{\gamma_0}{2(1+\gamma_0)}}\left(({\sigma}_t-1)^{\gamma} + \frac{e^{-\frac{2d_1}{\sqrt{\eta}}}}{\gamma}\right)\right). $$ It therefore follows, upon taking termwise square-roots of the above expression in the non-trivial bound in Corollary \[CORSTARTER\] and using the definition of ${\sigma}= {\sigma}_t$, $$\begin{aligned} R_h(\lambda) &\ll_{B,m,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \frac{B^2}{\delta}\log^{\lambda} t \left(d_1\eta^{\frac{1}{2}}|A| + \log^{-\frac{\delta}{3}} t + \delta^{-\frac{1}{2}}e^{-\frac{d_1}{\sqrt{\eta}}} + |A|^{\frac{\gamma_0}{4(1+\gamma_0)}}\left(\log^{-\frac{\gamma}{2}}t + e^{-\frac{d_1}{\sqrt{\eta}}}\right) +\log^{-2} t + |A|\eta\right).\end{aligned}$$ Now, we saw in Lemmas \[HalDecay\] and \[WITHR\] that $\gamma_{0,j} = C_j\beta_j^3$, where $C_j:= \frac{27\delta_j}{1024\pi B_j}$. Since $|\theta_p| \leq \eta_j$ for each $p \in E_j$ by assumption, we can take $\phi_j = \pi$ so that $\beta_j = \pi-\eta_j > 1$ is admissible. If it happens that $\gamma_{0,j} \geq 1$ then $\frac{\gamma_{0,j}}{2(1+\gamma_{0,j})} \geq \frac{1}{4}$. On the other hand, if $\gamma_{0,j} < 1$, we have $\frac{\gamma_{0,j}}{2(1+\gamma_{0,j})} \geq \frac{C_j}{8}\beta_j^3 > \frac{C_j}{8}$. Thus, provided that $\eta_j < \frac{C_j}{16}c_j$ for each $j$, we have $\frac{\gamma_{0,j}c_j}{2(1+\gamma_{0,j})} > \alpha\eta_j$, for any $\alpha \in (0,1]$. It therefore follows that for such $\alpha$, $$\begin{aligned} \log^{\lambda} t|A|^{\alpha} &\geq \exp\left(\sum_{1 \leq j\leq m} \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p}\left(\frac{c_j\gamma_{0,j}}{2(1+\gamma_{0,j})} - \alpha\left|1-e^{i\theta_p}\right|\right)\right) \\ &\geq \exp\left(\sum_{1 \leq j \leq m} \sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p}\left(\frac{c_j\gamma_{0,j}}{2(1+\gamma_{0,j})} - \alpha\eta_j\right)\right) \\ &\geq \exp\left(\sum_{1 \leq j \leq m} \frac{c_j\gamma_{0,j}}{4(1+\gamma_{0,j})}\sum_{p \leq t \atop p \in E_j} \frac{|g(p)|}{p}\right),\end{aligned}$$ which is increasing in $t$. The other terms depending on $t$ contribute strictly smaller contributions. Hence, for this choice of $\lambda$, the upper bound for $R_h(\lambda)$ is maximized at $t = x$. It follows that, in the non-trivial case, $$\begin{aligned} \frac{|M_h(x)|}{M_{|g|}(x)}\log^{\lambda(x)}x = R_h(\lambda) &\ll_{B,m,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \frac{B^2}{\delta}\log^{\lambda(x)} x ((\eta^{\frac{1}{2}}|A|\left(d_1 + \log^{-\frac{\delta}{3}} x + \delta^{-\frac{1}{2}}e^{-\frac{d_1}{\sqrt{\eta}}}\right) \\ &+ |A|^{\frac{\gamma_0}{4(1+\gamma_0)}}\left(\log^{-\gamma/2}x + e^{-\frac{d_1}{\sqrt{\eta}}}\right) +\log^{-2} t) + |A|\eta).\end{aligned}$$ The last term being irrelevant, since $\eta < 1$ and $\frac{\gamma_0}{4(1+\gamma_0)} < 1$ we have $$\begin{aligned} |M_h(x)| &\ll_{B,m,{\boldsymbol}{\phi},{\boldsymbol}{\beta}} \frac{B^2}{\delta} M_{|g|}(x)\left(\eta^{\frac{1}{2}}|A|\left(d_1 +\log^{-\frac{\delta}{3}} x + \delta^{-\frac{1}{2}}e^{-\frac{d_1}{\sqrt{\eta}}}\right)+ |A|^{\frac{\gamma_0}{4(1+\gamma_0)}}\left(\log^{-\frac{\gamma}{2}} x + \gamma^{-1}e^{-\frac{d_1}{\sqrt{\eta}}}\right)\right).\end{aligned}$$ The upper bound just given is precisely the error term in ii) of Theorem \[HalGen\]. 
Proof of Theorem \[WIRSINGEXT\] =============================== In this section, we apply Theorem \[HalGen\] to prove Theorem \[WIRSINGEXT\]. We first need an estimate for the ratio $\frac{M_{|g|}(x)}{M_{f}(x)}$, which we can directly determine via Theorem \[LOWERMV\]. \[BASICLOW\] Let $\delta, B > 0$, and let $S$ be a set of primes for which $P_x := \prod_{p \in S \atop p \leq x} p \ll_r x^r$ for $r > 0$. Suppose $g$ is a strongly multiplicative function satisfying $|g(p)| \geq \delta$ for each prime $p \notin S$, and $f$ is a non-negative multiplicative function satisfying $|g(n)| \leq f(n)$ for all $n$ and $f(p) \leq B$. Then for any sufficiently large $x$, $$M_{|g|}(x) \ll_{B} \frac{B}{\delta} \frac{P_x}{\phi(P_x)}\exp\left(-\sum_{p \leq x} \frac{f(p)-|g(p)|}{p}\right)M_{f}(x).$$ By assumption, $\delta \leq |g(p)| \leq f(p) \leq B$, so by Theorem \[LOWERMV\], we have $$M_{f}(x) \gg_B \delta \frac{\phi(P_x)}{P_x}\frac{x}{\log x} \prod_{p \leq x} \left(1+\frac{f(p)}{p}\right). \label{LAMBDAEST}$$ On the other hand, by the argument of Lemma \[NUM\] and by Lemma \[LOGSUM\], $$\label{GESTFORRAT} M_{|g|}(x) \ll_B B\frac{x}{\log x} \int_1^x \frac{M_{|g|}(u)}{u^2}du = B\frac{x}{\log x} \left(\sum_{n \leq x}\frac{|g(n)|}{n} - \frac{M_{|g|}(x)}{x}\right) \ll_B B\frac{x}{\log x} \prod_{p \leq x} \left(1+\frac{|g(p)|}{p}\right).$$ Dividing by , we get $$\begin{aligned} M_{|g|}(x) &\ll_{B} \frac{B}{\delta}\frac{P_x}{\phi(P_x)}M_{f}(x)\prod_{p \leq x} \left(1+\frac{|g(p)|}{p}\right)\left(1+\frac{f(p)}{p}\right)^{-1} = \frac{B}{\delta}M_{f}(x)\prod_{p \leq x} \left(1-\frac{f(p)-|g(p)|}{p+f(p)}\right) \\ &\ll \frac{B}{\delta}\frac{P_x}{\phi(P_x)}M_{f}(x)\exp\left(-\sum_{p \leq x} \frac{f(p)-|g(p)|}{p+f(p)}\right) \ll_B \frac{B}{\delta}\frac{P_x}{\phi(P_x)}\exp\left(-\sum_{p \leq x} \frac{f(p)-|g(p)|}{p}\right),\end{aligned}$$ as claimed. Part i) of the theorem is immediate upon making the factorization $\frac{|M_g(x)|}{M_{f}(x)} = \frac{|M_g(x)|}{M_{|g|}(x)}\cdot \frac{M_{|g|}(x)}{M_{f}(x)}$, and applying Theorem \[HalGen\] to the first factor and Lemma \[BASICLOW\] to the second one. Note that the right side of the estimate in the statement must vanish as $x {\rightarrow}\infty$. Indeed, if $\sum_{p \leq x} \frac{f(p)-|g(p)|}{p}$ diverges as $x {\rightarrow}\infty$ then this is obvious; otherwise, $|f(p)-|g(p)|| < \frac{1}{2}|f(p)-\text{Re}(g(p)p^{i\tau})|$ for $p$ sufficiently large, by virtue of $$\prod_{p \leq x} \left(1+\sum_{k \geq 1} \frac{g(p)p^{-ik\tau}}{p^k}\right)^{-1}\left(1+\sum_{k \geq 1} \frac{f(p^k)}{p^k}\right) \ll_B \exp\left(\sum_{p \leq x} \frac{f(p)-\text{Re}(g(p)p^{-i\tau})}{p}\right),$$ for any $|\tau| \leq T$, and the left side of this last estimate tends to infinity as $x$ gets large by assumption (a similar estimate holds for $g$ completely multiplicative, with $g(p)^k$ in place of $g(p)$). Therefore, $$||g(p)|-\text{Re}(g(p)p^{i\tau})| \geq |f(p)-\text{Re}(g(p))| - |f(p)-|g(p)|| \geq \frac{1}{2}|f(p)-\text{Re}(g(p)p^{i\tau})|,$$ whence it follows that $\sum_{p \leq x} \frac{|g(p)|-\text{Re}(g(p)p^{-i\tau})}{p}$ tends to infinity as $x {\rightarrow}\infty$ uniformly in compact intervals of $\tau$. Consequently, for some $E_j$ in a given partition ${\boldsymbol}{E}$, the corresponding sum in the exponential in tends to $\infty$, and the exponential itself tends to 0.\ For part ii), we first make some observations related to Remarks \[REMGEN\] and \[REMGEN2\], which suggest that the conclusions of Sections 4 and 5 all apply more generally. 
Indeed, set $h(n) := |g(n)|-X_tf(n)$, where $X_t := \exp\left(-\sum_{p \leq t} \frac{f(p)-|g(p)|}{p}\right)$, $f(n)$ is a strongly multiplicative function satisfying $|g(n)| \leq f(n)$, $\delta \leq |g(p)| \leq f(p) \leq B$, and $\left|f(p)-|g(p)|\right| \leq B\eta$. We indicated in the aforementioned remarks that the arguments in Sections 4 and 5 still apply with these choices, as do the arguments in Section 6 (in which we must instead define $R_h(\lambda) := \max_{2 \leq t \leq x} \frac{M_{|g|}(t)}{M_{f}(t)}\log^{\lambda} t$). With the appropriate translations (which we leave to the reader, as they are straightforward restatements of our above arguments), the statement of Corollary \[CORSTARTER\] gives (with $X := X_x$) $$R_{|g|-Xf}(\lambda) \leq \max\left\{1+X,D\max_{t_0 \leq t \leq x} \frac{B}{\delta}\frac{P_t}{\phi(P_t)}\log^{\lambda} t \left(\log^{-\frac{1}{2}} t (F({\sigma}_t)^{-2} J_{H,1}({\sigma}_t))^{\frac{1}{2}} + X\mu\right) + \frac{1}{\log^2 t}\right\},$$ where $F(s)$ is the Dirichlet series of $f$ and $D$ a constant depending at most on $B$. Moreover, for ${\sigma}_t = 1 + \frac{1}{\log t}$ as above, $$F({\sigma}_t)^{-2} J_{H,1}({\sigma}_t) \ll_B B^2({\sigma}_t-1)^{-1}\left(X^2\left(c_1^2\eta + ({\sigma}_t-1)^{\delta\beta^3} + \frac{e^{-\frac{2d_1}{\sqrt{\eta}}}}{\delta}\right)+({\sigma}_t-1)^{\frac{2\delta}{3}} + \delta^{-1}e^{-\frac{2d_1}{\sqrt{\eta}}}\right) +({\sigma}_t-1)^2.$$ It follows by the same argument as at the end of Section 6 that $$\label{MOREGEN} M_{|g|}(x) = M_{f}(x)\left(X + O_{B,m}\left({\mathcal}{R}_1\right)\right),$$ where we have set $${\mathcal}{R}_1 := \left(\frac{B}{\delta}\right)^2 \frac{P_t}{\phi(P_t)}\left(X\left(d_1\eta^{\frac{1}{2}} + \log^{-\frac{\delta\beta^3}{2}} x + \delta^{-1}e^{-\frac{d_1}{\sqrt{\eta}}}\right) + \log^{-\frac{2\delta}{3}}x + \delta^{-1}e^{-\frac{2d_1}{\sqrt{\eta}}}\right).$$ With these observations, consider the modified hypotheses that $f$ is a strongly multiplicative function satisfying $|g(n)| \leq f(n)$ for each positive integer $n$, $\delta \leq |g(p)| \leq f(p) \leq B$ and $|g(p)-f(p)| \leq B\eta$ (rather than $||g(p)|-f(p)| \leq B\eta$) uniformly over all primes $p$. By the triangle inequality, we have $||g(p)|-f(p)| \leq \eta$, as well as $$\left||g(p)|-g(p)\right| \leq |g(p)-f(p)| + ||g(p)|-f(p)| \leq 2\eta.$$ We may therefore apply Theorem \[HalGen\] ii) with $g$, giving $$M_g(x) = M_{|g|}(x)\left(A + O_{B,m}\left({\mathcal}{R}_2\right)\right),$$ where $${\mathcal}{R}_2 := \left(\frac{B}{\delta}\right)^2 \left(d_1\eta^{\frac{1}{2}}|A| + \log^{-\frac{2\delta}{3}}x + \delta^{-1}e^{-\frac{2d_1}{\sqrt{\eta}}} + |A|^{\frac{\gamma_0}{4(1+\gamma_0)}}\left(\log^{-\frac{\gamma}{2}} x + \gamma^{-1}e^{-\frac{d_1}{\sqrt{\eta}}}\right)\right).$$ Inputting the asymptotic in \[MOREGEN\], we get $$M_g(x) = M_{f}(x)\left(AX + O_{B,m}(|A|{\mathcal}{R}_1+X{\mathcal}{R}_2)\right).$$ Now clearly, $AX = \exp\left(-\sum_{p \leq x} \frac{f(p)-g(p)}{p}\right)$, so the proof of the theorem is complete. Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to thank his Ph.D. supervisor Dr. J. Friedlander for his ample patience and encouragement during the period in which this paper was written, and Dr. D. Koukoulopoulos for helpful comments.
--- abstract: 'We study the large deviation function for the entropy production rate in two driven one-dimensional systems: the asymmetric random walk on a discrete lattice and Brownian motion in a continuous periodic potential. We compare two approaches: the Donsker-Varadhan theory and the Freidlin-Wentzell theory. We show that the wings of the large deviation function are dominated by a single optimal trajectory: either in forward (positive rate) or in backward direction (negative rate). The joining of both branches at zero entropy production implies a non-differentiability and thus the appearance of a “kink”. However, around zero entropy production many trajectories contribute and thus the kink is smeared out.' address: - 'Institut für Theoretische Physik II, Heinrich-Heine-Universität Düsseldorf, Universitätsstraße 1, 40225 Düsseldorf, Germany' - 'Universität Oldenburg, Institut für Physik, 26111 Oldenburg, Germany' - '[II.]{} Institut für Theoretische Physik, Universität Stuttgart, Pfaffenwaldring 57, 70550 Stuttgart, Germany' author: - Thomas Speck - Andreas Engel - Udo Seifert title: 'Large deviation function for the entropy production: Optimal trajectory and role of fluctuations' --- Introduction ============ Driving a system away from thermal equilibrium implies a non-vanishing entropy production rate in the surrounding environment due to heat dissipation. For mesoscopic systems, the energy thus exchanged with the environment is a stochastic quantity, and so is the entropy ${s_\mathrm{m}}$ produced along a single trajectory. This observation forms the basis of stochastic thermodynamics [@seif12]. For a system driven into a non-equilibrium steady state, the long-time limit of the corresponding probability distribution $p({s_\mathrm{m}},t)$ defines a large deviation function (LDF) $$\label{eq:ldf} h({\sigma}) \equiv \lim_{t{\rightarrow}\infty} -\frac{\ln p({s_\mathrm{m}},t)}{{{\langle \dot s_\mathrm{m}\rangle}}t}$$ with normalized entropy production rate $$\label{eq:sig} {\sigma}\equiv \frac{{s_\mathrm{m}}}{{{\langle \dot s_\mathrm{m}\rangle}}t}.$$ Under very general assumptions, this large deviation function obeys the Gallavotti-Cohen symmetry [@gall95; @kurc98; @lebo99] $$\label{eq:gc} h(-{\sigma}) = h({\sigma}) + {\sigma}.$$ Experimental tests of this relation have been performed in a variety of systems, e.g., a single trapped colloidal particle [@spec07; @andr07] and a mechanical oscillator [@doua06]. Other large deviation functions also play an important role for the study of non-equilibrium systems [@pani97; @saha11; @nemo11; @touc13]. We study two paradigmatic models for driven systems sketched in Fig. \[fig:sys\]: the asymmetric random walk on a discrete lattice and Brownian motion of a colloidal particle in a one-dimensional periodic potential. The latter system has been studied extensively in the context of the fluctuation-dissipation theorem [@spec06; @blic07; @seif10; @mehl10; @gome11]. In order to obtain explicit expressions for Eq. [(\[eq:ldf\])]{}, two main approaches are discussed in the literature (for a comprehensive review and references, see Ref. [@touc09]). 
The first is the Donsker-Varadhan theory, stating that the scaled cumulant generating function $$\label{eq:al} {\alpha}({\lambda}) \equiv \lim_{t{\rightarrow}\infty}\frac{1}{t}\ln{\langle e^{{\lambda}{s_\mathrm{m}}}\rangle}$$ is related to the LDF through the Fenchel-Legendre transformation $$\label{eq:legendre} h({\sigma}) = \sup_{{\lambda}} \{ {\lambda}{\sigma}- {\alpha}({\lambda})/{{\langle \dot s_\mathrm{m}\rangle}}\}.$$ This method has been used previously to obtain the LDF for the two systems we study here: the asymmetric random walk [@lebo99] and the continuous periodic potential [@mehl08]. The second approach due to Freidlin and Wentzell [@freidlin] focusses on paths instead of the scaled cumulant generating function ${\alpha}({\lambda})$. The probability of a path $x(\tau)$ obeys a large deviation principle $$\label{eq:fw} P[x(\tau)] \sim e^{-J[x(\tau)]/{\varepsilon}}$$ in the limit of small noise ${\varepsilon}{\rightarrow}0$. The large deviation function $h({\sigma})$ can then be obtained from the optimal trajectory $x_\ast(\tau)$ minimizing $J[x(\tau)]$ using the contraction principle. While the Donsker-Varadhan theory makes no assumptions on the noise strength and is thus more general, it is also less intuitive. The Freidlin-Wentzell theory, on the other hand, is more appealing to physical intuition since it allows a more direct insight. ![The two systems studied: (a) the asymmetric random walk of a particle on a lattice with discrete positions. Different forward rate $k^+$ and backward rate $k^-$ can be interpreted as an effective driving force $f=\ln(k^+/k^-)$. (b) The continuous analog is a particle moving in a periodic ring potential $V(x)$ and driven by a force $f$.[]{data-label="fig:sys"}](systems.pdf){width=".8\linewidth"} The purpose of this work is two-fold: First, we provide a detailed study comparing for two specific models the two approaches to the calculation of the large deviation function. Based on the Freidlin-Wentzell approach, the second purpose is to study the origin of a particular feature of the LDF: The emergence of a “kink” around zero entropy production, an abrupt albeit differentiable change of the slope [@mehl08]. This “kink” has been observed in distributions of the velocity (which is closely related to the entropy production) in models for molecular motors [@laco08], and experimentally for a tagged granular particle [@kuma11]. Of course, this “kink” does not indicate a phase transition. Dynamic phase transitions are indicated by a jump discontinuity of either the first or second derivative of the scaled cumulant generating function ${\alpha}({\lambda})$. Such transitions have been found in interacting systems, e.g., for the totally asymmetric exclusion process [@gori11], the symmetric exclusion process [@leco12], and for models with kinetic constraints [@spec11]. Basically two theoretical explanations for the physical origin of the kink have been proposed so far. Pleimling and coworkers [@doro11; @avra11] have claimed that the kink is a generic feature of LDFs for the entropy production that follows quite generally from the Gallavotti-Cohen symmetry Eq. [(\[eq:gc\])]{}. But the relation Eq. [(\[eq:gc\])]{} only restricts the anti-symmetric part, i.e., it allows us to write $h({\sigma})={h_\mathrm{s}}({\sigma})-{\sigma}/2$ with ${h_\mathrm{s}}(-{\sigma})={h_\mathrm{s}}({\sigma})$. Clearly, from Eq. 
[(\[eq:gc\])]{} no statement can be inferred about ${h_\mathrm{s}}$, which is a non-universal function containing the details of the dynamics and the driving. On the other hand, the anti-symmetric part is universal and originates from breaking time-reversal symmetry. As an alternative explanation, Budini [@budi11] has attributed the kink to “intermittency”, i.e., the random switching between different dynamic regimes. Here we show explicitly that the optimal trajectory minimizing the stochastic action indeed corresponds to two different regimes depending on the sign of ${\sigma}$: forward (${\sigma}>0$) or backward (${\sigma}<0$). Considering only the optimal trajectories, therefore, implies a non-differentiability in the large deviation function at ${\sigma}=0$, which, however, is smeared out by fluctuations. Asymmetric random walk ====================== We first consider a particle on an infinite one-dimensional lattice. The particle can hop to the left with rate $k^-$, or hop to the right with rate $k^+$. A single trajectory of length $t$ is characterized by the number of hops to the left, $n^-$, and to the right, $n^+$. Travelling the distance $n\equiv n^+-n^-$, the entropy produced in the surrounding medium is $$\label{eq:arw:sm} {s_\mathrm{m}}(n^+,n^-) = n\ln\frac{k^+}{k^-} \equiv nf,$$ where $$\label{eq:arw:smm} {{\langle \dot s_\mathrm{m}\rangle}}= f(k^+-k^-) = fk^+(1-e^{-f})$$ is the mean entropy production rate. For $k^+=k^-$ the rates obey detailed balance and clearly no entropy is produced on average. Donsker-Varadhan theory ----------------------- In a seminal paper on the fluctuation theorem for stochastic dynamics [@lebo99], Lebowitz and Spohn have obtained the LDF for the asymmetric random walk (ARW) from the cumulant generating function. The normalized function $h({\sigma})$ can be cast in the form $$\label{eq:arw:h} h({\sigma}) = (fz)^{-1} \left\{\cosh(f/2) + z{\sigma}[{\mathrm{arsinh}}(z{\sigma})-f/2] - \sqrt{(z{\sigma})^2+1} \right\}$$ with $z\equiv\sinh(f/2)$. It is straightforward to check that the function $h({\sigma})$ has the following properties: (i) it is convex and non-negative, $h({\sigma})\geqslant0$, with minimum $h(1)=0$, (ii) it obeys the Gallavotti-Cohen symmetry Eq. [(\[eq:gc\])]{}, and (iii), following from this symmetry, $h(-1)=1$ holds. To make our introductory comments concrete about a kink in the large deviation function, for $|z{\sigma}|\gg1$ we expand the inverse hyperbolic sine leading to $$h({\sigma}) \approx (fz)^{-1}\left\{\cosh(f/2)+z{\sigma}[\pm\ln z|{\sigma}|-f/2] - z|{\sigma}|\right\}.$$ The plus sign holds for ${\sigma}>0$ while the minus sign holds for ${\sigma}<0$. Here and in the following we assume $f>0$. Considering the case $f\gg1$ of large asymmetry we can further simplify $$\frac{\cosh(f/2)}{z} \approx 1, \qquad \ln z \approx -\frac{f}{2}$$ yielding the limiting form $$\label{eq:arw:hasym} h({\sigma}) \approx \frac{1}{f} \left\{ \begin{array}{ll} 1+{\sigma}(\ln{\sigma}-1) & ({\sigma}>0) \\ 1+|{\sigma}|(\ln|{\sigma}|-1+f) & ({\sigma}<0) \end{array} \right.$$ of the large deviation function in agreement with Ref. [@doro11]. In particular, the first derivative $$h'({\sigma}) \approx \frac{1}{f} \left\{ \begin{array}{ll} \ln{\sigma}& ({\sigma}>0) \\ -\ln|{\sigma}| -f & ({\sigma}<0) \end{array} \right.$$ jumps at ${\sigma}=0$. However, this jump is only apparent since for the expansion leading to the limiting expression Eq. [(\[eq:arw:hasym\])]{} we used $|{\sigma}|\gg1/z$, i.e., we exclude a region of size $1/z$ around ${\sigma}=0$. 
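The behaviour near ${\sigma}=0$ is easily explored numerically. The following short script is an illustrative sketch of ours (it is not part of the original analysis, and the force values are arbitrary): it evaluates Eq. [(\[eq:arw:h\])]{} together with its derivative $h'({\sigma})=[{\mathrm{arsinh}}(z{\sigma})-f/2]/f$ (obtained by differentiating Eq. [(\[eq:arw:h\])]{}), checks the Gallavotti-Cohen symmetry Eq. [(\[eq:gc\])]{} as well as $h(1)=0$ and $h(-1)=1$, and prints the slope inside a window of width $\sim 1/z$ around ${\sigma}=0$, where it changes steeply but continuously.

```python
import numpy as np

def h_arw(sigma, f):
    """Exact ARW large deviation function, Eq. (arw:h)."""
    z = np.sinh(f / 2.0)
    zs = z * np.asarray(sigma, dtype=float)
    return (np.cosh(f / 2.0) + zs * (np.arcsinh(zs) - f / 2.0)
            - np.sqrt(zs**2 + 1.0)) / (f * z)

def h_arw_prime(sigma, f):
    """h'(sigma) = [arsinh(z*sigma) - f/2]/f, by differentiating Eq. (arw:h)."""
    z = np.sinh(f / 2.0)
    return (np.arcsinh(z * np.asarray(sigma, dtype=float)) - f / 2.0) / f

if __name__ == "__main__":
    for f in (4.0, 10.0):
        z = np.sinh(f / 2.0)
        sig = np.linspace(-2.0, 2.0, 9)
        # Gallavotti-Cohen symmetry: h(-sigma) - h(sigma) = sigma
        assert np.allclose(h_arw(-sig, f) - h_arw(sig, f), sig)
        # minimum at sigma = 1, and h(-1) = 1 by the symmetry
        assert abs(h_arw(1.0, f)) < 1e-8 and abs(h_arw(-1.0, f) - 1.0) < 1e-8
        # the slope varies steeply but continuously in a window of width ~ 1/z
        window = np.linspace(-5.0 / z, 5.0 / z, 5)
        print(f, np.round(h_arw_prime(window, f), 3))
```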
With increasing $f$ this region becomes smaller but it does not vanish. Hence, for any finite $f$ the function $h({\sigma})$ is differentiable. In Fig. \[fig:arw\], the large deviation function and its first derivative are plotted for different forces $f$. ![Asymmetric random walk: (a) Large deviation function $h({\sigma})$ for two forces $f=4$ and $f=10$ (solid lines). Also shown is the limiting form Eq. [(\[eq:arw:hasym\])]{} (dashed lines). (b) The derivative becomes steeper at ${\sigma}=0$ for higher driving force $f$. (c) The forward (solid line) and backward (dashed line) fluxes \[Eq. [(\[eq:arw:flx\])]{}\] *vs.* the scaled rate $z{\sigma}$, where $z=\sinh(f/2)$.[]{data-label="fig:arw"}](arw.pdf) Alternative derivation {#sec:arw:alt} ---------------------- We now give an alternative derivation of Eq. [(\[eq:arw:h\])]{} in the spirit of Freidlin-Wentzell. The path weight of a single trajectory is $$\label{eq:arw:P} P(n^+,n^-;t) = {m\choose n^+} \frac{t^m}{m!} (k^+)^{n^+}(k^-)^{n^-} e^{-(k^++k^-)t}$$ with $m\equiv n^++n^-$ total jumps. This weight fulfills the normalization condition $$\sum_{m=0}^\infty\sum_{n^+=0}^m P(n^+,n^-;t) = 1.$$ Introducing dimensionless fluxes $\nu^\pm\equiv n^\pm/(k^+t)$ and expanding the factorials using Stirling’s approximation we obtain the rate function $$\begin{aligned} \nonumber J(\nu^+,\nu^-) &\equiv& -\frac{1}{t}\ln P \\ &=& \label{eq:arw:ht} k^+[1 + e^{-f} - (\nu^+ + \nu^-) + f\nu^- + \nu^+\ln\nu^+ + \nu^-\ln\nu^-],\end{aligned}$$ which does not depend on time explicitly. For long times $t$ the path weight Eq. [(\[eq:arw:P\])]{} is sharply peaked around the minimum of $J$. We will call trajectories that contribute to this minimum “optimal” trajectories. Note that the small parameter ${\varepsilon}$ in Eq. [(\[eq:fw\])]{} is now given by ${\varepsilon}=1/t$, i.e., here the time limits the fluctuations around the minimum. For fixed ${\sigma}$, forward and backward fluxes are not independent but related through $$\label{eq:nu} \nu^+ = \nu^- + (1-e^{-f}){\sigma}.$$ From $\partial J/\partial\nu^-=0$ we then find that the minimum of $J$ is reached for fluxes $$\label{eq:arw:flx} \nu_\ast^\pm = e^{-f/2}\left[\sqrt{(z{\sigma})^2+1} \pm z{\sigma}\right], \qquad z\equiv\sinh(f/2).$$ Using that $$\ln\nu_\ast^\pm = -\frac{f}{2}\pm{\mathrm{arsinh}}(z{\sigma})$$ it is straightforward to show that $J(\nu_\ast^+,\nu_\ast^-)/{{\langle \dot s_\mathrm{m}\rangle}}=h({\sigma})$ yields the result Eq. [(\[eq:arw:h\])]{} obtained from the Donsker-Varadhan theory. The advantage of the alternative derivation is that it becomes transparent which trajectories contribute to $h({\sigma})$ for a given normalized entropy production rate ${\sigma}$. We now use this insight to discuss the physical origin of the apparent kink. What does the optimal trajectory to reach a specified entropy production, i.e., the trajectory that minimizes the rate function $J$, look like? As becomes clear from Eq. [(\[eq:arw:flx\])]{}, for a given positive ${\sigma}\gg1/z$ the optimal trajectory consists almost entirely of forward jumps. Conversely, for negative ${\sigma}$ the optimal trajectory consists almost entirely of backward jumps even though these trajectories are extremely unlikely. This might seem surprising but any forward jump has to be compensated by an unlikely backward jump. Setting $n^-=0$ for ${\sigma}>0$ and $n^+=0$ for ${\sigma}<0$ leads to the two branches of the limiting form given in Eq. [(\[eq:arw:hasym\])]{}. 
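As a quick numerical cross-check on this minimization (again an illustrative sketch rather than the authors' code; the force is arbitrary and $k^+=1$ fixes the unit of time), one can insert the fluxes of Eq. [(\[eq:arw:flx\])]{} into the rate function Eq. [(\[eq:arw:ht\])]{}, normalize by ${{\langle \dot s_\mathrm{m}\rangle}}$ from Eq. [(\[eq:arw:smm\])]{}, and compare with Eq. [(\[eq:arw:h\])]{}:

```python
import numpy as np

def h_arw(sigma, f):
    """Exact ARW large deviation function, Eq. (arw:h)."""
    z = np.sinh(f / 2.0)
    zs = z * sigma
    return (np.cosh(f / 2.0) + zs * (np.arcsinh(zs) - f / 2.0)
            - np.sqrt(zs**2 + 1.0)) / (f * z)

def rate_function(nu_p, nu_m, f, kp=1.0):
    """Rate function J(nu+, nu-) of Eq. (arw:ht); the forward rate kp sets the time unit."""
    return kp * (1.0 + np.exp(-f) - (nu_p + nu_m) + f * nu_m
                 + nu_p * np.log(nu_p) + nu_m * np.log(nu_m))

def optimal_fluxes(sigma, f):
    """Minimizing fluxes nu_*^{+/-} of Eq. (arw:flx)."""
    z = np.sinh(f / 2.0)
    root = np.sqrt((z * sigma)**2 + 1.0)
    return np.exp(-f / 2.0) * (root + z * sigma), np.exp(-f / 2.0) * (root - z * sigma)

if __name__ == "__main__":
    f, kp = 6.0, 1.0
    sm_rate = f * kp * (1.0 - np.exp(-f))     # <\dot s_m>, Eq. (arw:smm)
    for sigma in (-1.5, -0.5, 0.0, 0.5, 1.0, 2.5):
        nu_p, nu_m = optimal_fluxes(sigma, f)
        # the two printed columns agree to machine precision
        print(sigma, rate_function(nu_p, nu_m, f, kp) / sm_rate, h_arw(sigma, f))
```

The agreement simply restates that the Legendre-transform result Eq. [(\[eq:arw:h\])]{} and the minimization of Eq. [(\[eq:arw:ht\])]{} describe the same object.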
The reason why nevertheless $h({\sigma})$ is differentiable is that for $|{\sigma}|\lesssim 1/z$ many trajectories contribute to the large deviation function. In particular, for ${\sigma}=0$ the forward and backward fluxes become equal, $\nu^+=\nu^-$. All trajectories with the same number of forward and backward jumps contribute since they have the same weight. Their number is given by the binomial coefficient in Eq. [(\[eq:arw:P\])]{}. In contrast, there is only one optimal combination $(n^+,n^-)$ in the wings of the distribution. Driven Brownian motion ====================== In this section we consider a single colloidal particle moving in a closed ring geometry with circumference $L$ and external periodic potential $V(x+L)=V(x)$. The overdamped dynamics are described by the Langevin equation $$\label{eq:lang} \dot x = \mu_0[-V'(x)+f] + \eta,$$ where the prime denotes the derivative with respect to $x$. The particle is driven out of equilibrium through the force $f$ resulting in a non-vanishing current through the ring. The noise $\eta$ describes the effective interactions between the solvated particle and the solvent at temperature $T$ with correlations $$\label{eq:corr} {\langle \eta(\tau)\eta(\tau')\rangle} = 2D_0\delta(\tau-\tau').$$ Through $D_0=\mu_0 T$ we assume that the solvent remains at equilibrium even though the particle is driven. To simplify the notation we will use dimensionless quantities $$x \mapsto Lx, \quad V(x) \mapsto TV(x), \quad f \mapsto \frac{T}{L}f, \quad \tau \mapsto \frac{L^2}{D_0}\tau$$ throughout the remainder of this paper. Moreover, these units make transparent the connection with the ARW. In particular, the number of periods traversed is $n$ and the dimensionless force $f$ corresponds to the force defined in Eq. [(\[eq:arw:sm\])]{}. We will also need the notion of a critical force ${f_\mathrm{c}}$ such that for $f>{f_\mathrm{c}}$ the deterministic force $-V'(x)+f>0$ is positive for all $x$ and, therefore, in the absence of noise running solutions exist. The rate of entropy production is ${\dot s}_\mathrm{m}=f\dot x$ with ${s_\mathrm{m}}=f\Delta x$ \[cf. Eq. (\[eq:arw:sm\]) for the ARW\], where $\Delta x$ is the distance the particle has traveled during the time $t$. The mean velocity of the particle can be calculated from Stratonovich’s formula [@risken] $${{\langle \dot x\rangle}}= {\kappa}(1-e^{-f})$$ with $$\begin{aligned} {\kappa}^{-1} \equiv {\int_{0}^{1}{\rmd}x\;}{\int_{0}^{1}{\rmd}z\;} \exp\left\{V(x)-V(x-z)-fz\right\}.\end{aligned}$$ The mean velocity (and also the mean entropy production) of ARW and ring potential agree if we set $k^+={\kappa}$, cf. Eq. [(\[eq:arw:smm\])]{}. However, in general fluctuations will be different, leading to different shapes for the LDFs. For large forces $f$ the influence of the potential on the particle motion becomes negligible and ${{\langle \dot x\rangle}}\approx f$. Donsker-Varadhan theory ----------------------- The time evolution of the probability distribution of $x$ is governed through the Fokker-Planck operator $$\label{eq:fp} \hat L_0 \equiv {\frac{\partial }{\partial x}}\left[(V'-f) + {\frac{\partial }{\partial x}}\right].$$ In particular, the steady state distribution obeys $\hat L_0p_\mathrm{ss}(x)=0$. To calculate ${\alpha}({\lambda})$ \[Eq. 
[(\[eq:al\])]{}\] consider the two-sided Laplace transform $$\label{eq:ld:joint} \tilde p(x,{\lambda},t) = {\int_{-\infty}^{+\infty}{\rmd}{s_\mathrm{m}}\;} e^{{\lambda}{s_\mathrm{m}}} p(x,{s_\mathrm{m}},t)$$ of the joint probability $p(x,{s_\mathrm{m}},t)$ to find the particle at position $x$ with an amount of entropy ${s_\mathrm{m}}$ produced up to time $t$. The time evolution of the transformed joint probability obeys $\partial_t\tilde p=\hat L_{\lambda}\tilde p$, where the operator $$\label{eq:OL} \hat L_{\lambda}= \hat L_0 + \left[2{\frac{\partial }{\partial x}}+V'-f\right](f{\lambda}) + (f{\lambda})^2$$ has been determined in Ref. [@mehl08]. We expand the joint probability in eigenfunctions of the time evolution operator $$\tilde p(x,{\lambda},t) = \sum_{i=0}^\infty c_i e^{{\alpha}_i({\lambda})t}\psi_i(x,{\lambda}),$$ where the coefficients $c_i$ quantify the overlap with the initial distribution $\tilde p(x,{\lambda},0)=p_\mathrm{ss}(x)$. The eigenvalue equations read $\hat L_{\lambda}\psi_i(x,{\lambda})={\alpha}_i({\lambda})\psi_i(x,{\lambda})$. In the limit of long times, the joint probability is dominated by the largest eigenvalue ${\alpha}_0$. Integrating Eq. [(\[eq:ld:joint\])]{} over the position $x$ thus leads to $${\langle e^{{\lambda}{s_\mathrm{m}}}\rangle} \sim e^{{\alpha}_0({\lambda})t},$$ and we conclude that the scaled cumulant generating function ${\alpha}({\lambda})={\alpha}_0({\lambda})$ is indeed given by the largest eigenvalue of the time evolution operator (\[eq:OL\]). It can be calculated solving either the eigenvalue problem directly [@mehl08] or through the Ritz variational method [@laco09]. The function $h({\sigma})$ then follows from [(\[eq:legendre\])]{}. Here we use the first method and refer to Ref. [@mehl08] for further details. Stochastic action and effective potential ----------------------------------------- To continue we employ the well-established machinery of path integrals for stochastic processes [@kleinert] that has been developed to deal, e.g., with diffusion in bistable potentials [@caro81] and escape problems [@lehm00]. Imagine that the particle starts at position $x_0$ and travels a distance $\Delta x$ during the time $t$. The probability for such a transition $$\label{eq:P} P(x_0+\Delta x|x_0;t) = \int_{x_0}^{x_0+\Delta x}[{\rmd}x(\tau)] e^{-{\mathcal S}[x(\tau)]}$$ is obtained through summing over all possible paths with stochastic action $$\label{eq:S} {\mathcal S}[x(\tau)] = {\int_{0}^{t}{\rmd}\tau\;} \left\{\frac{1}{4}(\dot x+V'-f)^2 - \frac{1}{2}V'' \right\}$$ calculated along a single path $x(\tau)$. The last term $V''/2$ is the Jacobian arising from the change of variables $\eta{\rightarrow}x$. The stochastic action Eq. [(\[eq:S\])]{} can be rearranged to give $$\label{eq:S:L} {\mathcal S}[x(\tau)] = \frac{1}{2}\Delta V - \frac{1}{2}f\Delta x + \frac{1}{4} f^2t + {\int_{0}^{t}{\rmd}\tau\;} {\mathcal L}(\tau).$$ The last term $$\label{eq:L} {\mathcal L}(x,\dot x) = \frac{1}{4}\dot x^2 - U(x)$$ takes on the form of a Lagrangian for a classical particle with mass $m=1/2$ moving in the effective potential $$\label{eq:U} U(x) \equiv \frac{1}{2}V''(x) - \frac{1}{4}[V'(x)]^2 + \frac{1}{2}fV'(x).$$ It is straightforward to check that for a periodic (and twice differentiable) potential $V(x)$ the effective potential $U(x)$ is periodic, too. For completeness, we note that the same effective potential arises in the symmetrization of the Fokker-Planck operator \[Eq. 
[(\[eq:fp\])]{}\], $$e^{V(x)/2}\hat L_0e^{-V(x)/2} = {\frac{\partial ^2}{\partial x^2}} + U(x),$$ see, e.g., the textbook treatment in Ref. [@risken]. ![(a) Original potential $V(x)=v_0\cos(2\pi x)$ and (b) the mapped effective potential $U(x)$ from Eq. [(\[eq:U\])]{} plotted for $v_0=3$ and forces $f=0$, $f<{f_\mathrm{c}}$, and at the critical force $f={f_\mathrm{c}}=6\pi$.[]{data-label="fig:sketch"}](sketch.pdf) In Fig. \[fig:sketch\] the effective potential $U(x)$ is plotted for the cosine potential $V(x)=v_0\cos(2\pi x)$ with critical force ${f_\mathrm{c}}=2\pi v_0$. It will turn out useful to define the reduced force $f_\ast\equiv f/{f_\mathrm{c}}$. In the following we will use this specific potential for all illustrations. In equilibrium, the principal feature of the effective potential is that the minimum of $V(x)$ becomes the global maximum of $U(x)$. For $v_0>1$, the minimum of $U(x)$ splits into two minima, which are degenerate at equilibrium ($f=0$). Turning on the driving force ($f>0$) the global maximum of $U(x)$ is shifted, the depth of the left minimum increases, and the depth of the right minimum decreases until for forces $f_\ast\geqslant(1-v_0^{-2/3})^{3/2}$ only one global minimum remains. For $f_\ast\geqslant1$ the upper arc becomes concave. Optimal trajectory ------------------ We now determine the trajectory $x_\ast(\tau)$ that minimizes the action \[Eq. [(\[eq:S:L\])]{}\] with ${\mathcal S}_\ast\equiv{\mathcal S}[x_\ast(\tau)]$. Clearly, this trajectory is the solution of the Euler-Lagrange equation $$\label{eq:EL} -{\frac{\delta {\mathcal S}}{\delta x(\tau)}} = {\frac{{\rmd}}{{\rmd}\tau}}{\frac{\partial {\mathcal L}}{\partial \dot x}} - {\frac{\partial {\mathcal L}}{\partial x}} = \frac{1}{2}\ddot x + U'(x) = 0$$ subject to the boundary conditions $x_\ast(0)=x_0$ and $x_\ast(t)=x_0+\Delta x$. Since the effective potential is periodic, the solution $x_\ast(\tau+{\tau_0})=x_\ast(\tau)$ of the Euler-Lagrange Eq. [(\[eq:EL\])]{} will also be periodic with period $${\tau_0}= \frac{1}{|{\sigma}|{\langle \dot x\rangle}}.$$ The sign of ${\sigma}$ determines the sign of the initial velocity of the particle. Note that the solution of Eq. [(\[eq:EL\])]{} is time-reversal symmetric. Before we proceed we shift the effective potential $$U(x) = U_\ast + W(x), \qquad U_\ast = \max_x U(x)$$ such that $W(x)\leqslant0$ is non-positive. Although no small parameter appears in the stochastic action, we nevertheless follow the Freidlin-Wentzell route and assume that the transition probability is dominated by the optimal trajectory. We thus obtain from Eq. [(\[eq:S:L\])]{} the unnormalized, naive approximation $$\label{eq:h:0} \tilde h_\ast({\sigma}) \equiv \lim_{t{\rightarrow}\infty}\frac{{\mathcal S}_\ast}{t} = -\frac{1}{2}{\sigma}{{\langle \dot s_\mathrm{m}\rangle}}+ \frac{1}{4}f^2 + \frac{A}{{\tau_0}} - U_\ast$$ with optimal stochastic action for a single period $$\label{eq:A} A \equiv {\int_{0}^{{\tau_0}}{\rmd}\tau\;} \left\{ \frac{1}{4}[\dot x_\ast(\tau)]^2 - W(x_\ast(\tau)) \right\}.$$ Since the potential $V(x)$ is bounded, the term $\Delta V/t$ drops out in the long time limit. We stress that, since no small parameter appears in the stochastic action, it cannot be expected in general that $h_\ast({\sigma})$ corresponds to $h({\sigma})$ as obtained by the operator method. The time integral in Eq. [(\[eq:A\])]{} is evaluated most conveniently by changing the integration variable from $\tau$ to $x$. 
To this end we note that $$\label{eq:E} E = \frac{1}{4}[\dot x_\ast(\tau)]^2 + W(x_\ast(\tau))$$ is a constant of motion. The time to traverse a single period is $$\label{eq:tau0} {\tau_0}(E) = \frac{1}{2} {\int_{0}^{1}{\rmd}x\;} [E-W(x)]^{-1/2}.$$ Rearranging Eq. [(\[eq:A\])]{} using Eq. [(\[eq:E\])]{}, the action for a single period reads $$\label{eq:A:E} A(E) = {\int_{0}^{1}{\rmd}x\;} \frac{E-2W(x)}{2\sqrt{E-W(x)}} = {\int_{0}^{1}{\rmd}x\;} \sqrt{E-W(x)} - E{\tau_0}(E).$$ We thus solve Eq. [(\[eq:tau0\])]{} for a given time ${\tau_0}$ in order to obtain $E$, from which we determine the optimal action $A$. The resulting normalized function $$\label{eq:h:norm} h_\ast({\sigma})=\tilde h_\ast({\sigma})/{{\langle \dot s_\mathrm{m}\rangle}}$$ is plotted in Fig. \[fig:ring\] for different driving forces $f$ below and above the critical force ${f_\mathrm{c}}$. The calculation breaks down for small ${\sigma}$ as the initial velocity, and therefore $E$, become of the order of the machine precision. We find that for large ${\sigma}$ the optimal path solution agrees with the full solution $h({\sigma})$ but deviates for small ${\sigma}$, where the disagreement becomes more pronounced for small driving forces. ![Large deviation function for the periodic potential $V(x)=v_0\cos(2\pi x)$ with $v_0=2$ (top row) and $v_0=6$ (bottom row) for three reduced driving forces $f_\ast$. Shown are the exact solutions $h({\sigma})$ (thick lines) obtained from the operator method together with the naive approximations $h_\ast({\sigma})$ \[Eq. [(\[eq:h:norm\])]{}, thin lines\]. For small ${\sigma}$ the numerical calculation breaks down and the dashed lines show the linear extrapolation towards the known value $h_\ast(0)$ \[see Eq. [(\[eq:h0\])]{}\].[]{data-label="fig:ring"}](ring.pdf) We now consider two special cases. For large $E$ the influence of the potential is negligible and the velocity $\dot x_\ast\approx\pm2\sqrt{E}$ is approximately constant, with $\dot x_\ast=\pm1/{\tau_0}={\sigma}{{\langle \dot x\rangle}}$. Together with $A=E{\tau_0}$ we obtain from Eq. [(\[eq:h:0\])]{} $$\tilde h_\ast({\sigma}) = \frac{1}{4}({\sigma}{{\langle \dot x\rangle}}-f)^2 - U_\ast,$$ i.e., the asymptotic form of the large deviation function for large rates $|{\sigma}|$ is a parabola. In particular, for a flat potential $V(x)=0$ with ${{\langle \dot x\rangle}}=f$ we recover the exact result $h({\sigma})=(1/4)({\sigma}-1)^2$. ![Optimal trajectory $x_\ast(\tau)$ plotted over three periods (parameters: $v_0=5$, $f=8\pi$, ${\sigma}=1$). The potentials $U(x)$ (red/bright) and $V(x)-fx$ (black) are shown in the right panel.[]{data-label="fig:traj"}](trajectory.pdf) For $E{\rightarrow}0$ the period ${\tau_0}$ increases dramatically and the particle spends a long time at the barriers of the effective potential \[corresponding to the valleys of the original potential $V(x)$\], see Fig. \[fig:traj\]. Up to linear order in $E$ the optimal action Eq. [(\[eq:A:E\])]{} becomes $$A \simeq A_0 \equiv A(0) = {\int_{0}^{1}{\rmd}x\;} \sqrt{-W(x)}.$$ This is of course the action of an *instanton*, the classical solution for $E=0$ that connects two adjacent barriers in infinite time. 
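The procedure just described (solve Eq. [(\[eq:tau0\])]{} for $E$ at the prescribed period ${\tau_0}$, then insert the result into Eqs. [(\[eq:A:E\])]{} and [(\[eq:h:0\])]{}) can be sketched in a few lines. The script below is our own illustrative implementation, not the code behind Fig. \[fig:ring\]: it assumes the cosine potential with the arbitrary choices $v_0=1$ and $f=0.8\,{f_\mathrm{c}}$, evaluates ${{\langle \dot x\rangle}}$ from Stratonovich's formula by numerical quadrature, and obtains $E$ by bracketed root finding; the lower end of the bracket keeps $E$ away from the small-$|{\sigma}|$ regime where, as noted above, the calculation degenerates.

```python
import numpy as np
from scipy.integrate import quad, dblquad
from scipy.optimize import brentq

# illustrative parameters: cosine potential with f < f_c = 2*pi*v0
v0 = 1.0
f = 0.8 * 2.0 * np.pi * v0

V   = lambda x: v0 * np.cos(2 * np.pi * x)
Vp  = lambda x: -2 * np.pi * v0 * np.sin(2 * np.pi * x)
Vpp = lambda x: -(2 * np.pi)**2 * v0 * np.cos(2 * np.pi * x)
U   = lambda x: 0.5 * Vpp(x) - 0.25 * Vp(x)**2 + 0.5 * f * Vp(x)   # Eq. (U)

U_star = U(np.linspace(0.0, 1.0, 20001)).max()                     # grid estimate of max U
W = lambda x: U(x) - U_star                                        # shifted potential, W <= 0

# mean velocity from Stratonovich's formula (double quadrature for kappa^{-1})
kappa_inv, _ = dblquad(lambda z, x: np.exp(V(x) - V(x - z) - f * z), 0.0, 1.0, 0.0, 1.0)
v_mean = (1.0 - np.exp(-f)) / kappa_inv
s_rate = f * v_mean                                                # <\dot s_m>

def tau0_of_E(E):
    """Period of the optimal trajectory at 'energy' E, Eq. (tau0)."""
    return 0.5 * quad(lambda x: (E - W(x))**-0.5, 0.0, 1.0, limit=200)[0]

def A_of_E(E):
    """Optimal action for a single period, Eq. (A:E)."""
    return quad(lambda x: np.sqrt(E - W(x)), 0.0, 1.0, limit=200)[0] - E * tau0_of_E(E)

def h_star(sigma):
    """Naive optimal-path estimate h_*(sigma), Eqs. (h:0) and (h:norm)."""
    tau0 = 1.0 / (abs(sigma) * v_mean)
    # tau0(E) decreases with E; the lower bracket avoids the E -> 0 (instanton) regime
    E = brentq(lambda E: tau0_of_E(E) - tau0, 1e-6, 1e8)
    h_tilde = -0.5 * sigma * s_rate + 0.25 * f**2 + A_of_E(E) / tau0 - U_star
    return h_tilde / s_rate

if __name__ == "__main__":
    for sigma in (-1.0, -0.5, 0.5, 1.0, 2.0):
        print(sigma, h_star(sigma))
    # Gallavotti-Cohen symmetry holds by construction: h_*(-1) - h_*(1) = 1
    print(h_star(-1.0) - h_star(1.0))
```

For $|{\sigma}|$ of order one the bracket above is ample; for much smaller $|{\sigma}|$ the required $E$ drops below it and the root search fails, mirroring the breakdown of the numerical calculation mentioned above.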
To approximately calculate the period ${\tau_0}$ we expand $W(x)$ to second order, $$\frac{{\tau_0}}{2} \approx \frac{1}{2}{\int_{0}^{\frac{1}{2}}{\rmd}x\;} [E-U_\mathrm{b}''x^2/2]^{-1/2},$$ where, without loss of generality, we have shifted the potential along the $x$-axis such that $W(0)=0$, and $U_\mathrm{b}''<0$ is the curvature of the effective potential at the barrier. To leading order we find the well-known result [@caro81] $$\label{eq:E:small} E \simeq \frac{|U_\mathrm{b}''|}{2} \exp\left\{-\sqrt{2|U_\mathrm{b}''|}{\tau_0}\right\}.$$ While the time ${\tau_0}$ to traverse a single period diverges for ${\sigma}{\rightarrow}0$, the action of an instanton remains finite and $A_0/{\tau_0}{\rightarrow}0$. The approximated large deviation function at vanishing entropy production ${\sigma}=0$ thus reads $$\label{eq:h0} \tilde h_\ast(0) = \frac{1}{4}f^2 - U_\ast.$$ The linear extrapolation of the numerically determined $h_\ast({\sigma})$ towards this value is shown in Fig. \[fig:ring\]. Summarizing, the results of naively applying the Freidlin-Wentzell approach to the calculation of the large deviation function are: (i) The optimal path is not sufficient to reproduce the large deviation function over the full range of rates ${\sigma}$. (ii) Only accounting for the contribution from the optimal path results in a function $h_\ast({\sigma})$ that is non-differentiable at ${\sigma}=0$, i.e., the approximated large deviation function exhibits a genuine kink. (iii) The Gallavotti-Cohen symmetry [(\[eq:gc\])]{} is fulfilled by both $h_\ast({\sigma})$ and $h({\sigma})$. The rounding of the kink in the full solution $h({\sigma})$ points to the importance of fluctuations around the optimal path even in the limit $t{\rightarrow}\infty$. Small-noise limit ----------------- As mentioned in the introduction, the Freidlin-Wentzell method is strictly valid only in the limit ${\varepsilon}{\rightarrow}0$ of vanishing noise. In our dimensionless units, the small parameter ${\varepsilon}=1/v_0$ is the ratio of thermal energy to the potential depth. Fixing $f_\ast$ with ${f_\mathrm{c}}\propto v_0$ we see that the first term in Eq. [(\[eq:U\])]{} is of order $v_0$ and the other two terms are of order $v_0^2$. We thus drop the first term (the Jacobian). Rearranging terms we obtain for the effective potential the simplified expression $$U(x) = \frac{1}{4}f^2 - \frac{1}{4}[-V'(x)+f]^2.$$ In the following we focus on the case $f<{f_\mathrm{c}}$. The maximum of the potential is then reached at points $x$ where the deterministic force $-V'(x)+f$ is zero with $$U_\ast = \frac{1}{4}f^2 \qquad (f<{f_\mathrm{c}}).$$ From Eq. [(\[eq:h0\])]{} we then know that $h_\ast(0)=0$. Since $h_\ast({\sigma})$ is a convex function with $h_\ast(1)=0$ we can infer that the shape in the range $-1\leqslant{\sigma}\leqslant1$ is a “wedge”. The approximated LDF is universal in this range and does not depend on $f$ (as long as $f<{f_\mathrm{c}}$) nor $v_0$. ![Small-noise limit: (a) Simplified effective potential $U(x)$ dropping the Jacobian in Eq. [(\[eq:U\])]{} \[cf. Fig. \[fig:sketch\](b)\]. (b) Approximated LDF $h_\ast({\sigma})$ for $f_\ast<1$ (thick line). Also shown is the LDF $h({\sigma})$ for $v_0=7$ and $f=6\pi$ ($f_\ast=3/7$) together with the ARW solution Eq. [(\[eq:arw:h\])]{} for the same force.[]{data-label="fig:smnoise"}](smnoise.pdf) Specifically, for the cosine potential $V(x)=v_0\cos(2\pi x)$ the effective potential $$U(x) = \frac{{f_\mathrm{c}}^2}{4}\left\{ f_\ast^2 - [\sin(2\pi x)+f_\ast]^2 \right\}$$ is shown in Fig. 
\[fig:smnoise\](a), whereas the “wedge”-shaped $h_\ast({\sigma})$ is depicted in Fig. \[fig:smnoise\](b). The action of an instanton is easily evaluated to give $$\label{eq:sm:A} A_0 = {\int_{0}^{1}{\rmd}x\;} \frac{{f_\mathrm{c}}}{2}\left|\sin(2\pi x)+f_\ast\right| = \frac{{f_\mathrm{c}}}{\pi}\left[\sqrt{1-f_\ast^2}+f_\ast\arcsin f_\ast\right].$$ Also shown in Fig. \[fig:smnoise\](b) is that the actual LDF $h({\sigma})$ agrees with the ARW solution for large $v_0$. Since $f_\ast$ is fixed the force $f$ increases as we increase the potential depth $v_0$. Both the ARW and the actual solution $h({\sigma})$ then approach the limiting “wedge” shape as $v_0{\rightarrow}\infty$. Role of fluctuations -------------------- At this point it seems natural to inquire what the nature is of the fluctuations contributing to the large deviation function. Specifically, we consider two types of fluctuations: small Gaussian perturbations around the optimal path, and fluctuations of the “jumps” times $\tau_l$. In the periodic optimal trajectory $x_\ast(\tau)$ these jumps occur regularly with period ${\tau_0}$, see Fig. \[fig:traj\]. ### Gaussian fluctuations. For finite $t$, we expand the action Eq. [(\[eq:S:L\])]{} $$\label{eq:S:Q} {\mathcal S}[x_\ast(\tau)+\xi(\tau)] \approx {\mathcal S}_\ast + \frac{1}{2} {\int_{0}^{t}{\rmd}\tau\;} \xi(\tau)\hat D(\tau)\xi(\tau)$$ to second order with small perturbations $\xi(\tau)$ of the optimal path, which obey the boundary conditions $\xi(0)=\xi(t)=0$. Here we have introduced the symmetric second variation operator $$\label{eq:SL} \hat D(\tau) \equiv -\frac{1}{2}{\frac{{\rmd}^2}{{\rmd}\tau^2}} - U''(x_\ast(\tau)),$$ which is formally equivalent to a Schrödinger equation with potential energy $-U''(x_\ast(\tau))$ (positive curvatures of the effective potential correspond to low “energies”). We, therefore, can immediately state that the eigenvalues ${\lambda}_i$ of $\hat D$ are real and can be ordered, ${\lambda}_0<{\lambda}_1<\cdots$. Moreover, eigenfunctions $\xi_i(\tau)$ corresponding to different eigenvalues ${\lambda}_i$ are orthogonal, $$\label{eq:xi} {\int_{0}^{t}{\rmd}\tau\;} \xi_i(\tau)\xi_j(\tau) = \delta_{ij}, \qquad \xi_i(0) = \xi_i(t) = 0.$$ We expand the generic perturbation $\xi(\tau)$ in the basis spanned by the normalized eigenfunctions $\{\xi_i\}$, $$\label{eq:pert} \xi(\tau) = \sum_{i=1}^\infty c_i\xi_i(\tau),$$ with coefficients $c_i$. The path measure is $$[{\rmd}x(\tau)] \propto \prod_{i=1}^\infty {\rmd}c_i$$ and we perform the Gaussian integration of Eq. [(\[eq:P\])]{} with result $$\label{eq:P:gauss} P(x_0+\Delta x|x_0;t) \simeq \frac{e^{-{\mathcal S}_\ast}}{\sqrt{\det\hat D}}, \qquad \det\hat D=\prod_{i=1}^\infty{\lambda}_i.$$ Of course, this result only holds if all eigenvalues are positive. We numerically calculate the determinant using Floquet theory as outlined in the appendix. We find that small Gaussian fluctuation around the optimal path $x_\ast(\tau)$ do not contribute to the large deviation function. ### Dilute instanton gas. For $f\ll{f_\mathrm{c}}$ the particle in the periodic continuous potential $V(x)$ effectively undergoes a hopping motion resembling the discrete random walk. For such thermally activated jumps, the corresponding rate is $$k^+ \approx \exp\left\{-\Delta V + f(x^+-x^-) \right\},$$ where $x^-$ is the position of the minimum, $x^+$ the position of the barrier, and $\Delta V\equiv V(x^+)-V(x^-)$ is the barrier height. 
For the cosine potential we find $$2\pi x^- = \pi + \arcsin f_\ast, \quad 2\pi x^+ = 2\pi - \arcsin f_\ast.$$ The rate thus reads $$\label{eq:sm:k} k^+ \approx e^{-A_0+f/2},$$ where $A_0$ is the action \[Eq. [(\[eq:sm:A\])]{}\] of an instanton in the small noise limit and we have ignored a sub-exponential prefactor. The optimal trajectory $x_\ast(\tau)$ is strictly periodic. However, Fig. \[fig:traj\] suggests that shifting the positions of the “jumps” $\tau_l=l{\tau_0}$ does only minimally change the action as long as the particle spends a quasi-infinite time on top of the barrier (of the effective potential $U$). In this case some eigenvalues ${\lambda}_i$ become exponentially small (and therefore contribute to the large deviation function) or even vanish, in which case the calculation of the previous subsection breaks down. Such zero-modes need to be treated in a different way known as the method of collective coordinates. In effect, integrations over zero-modes are replaced by integrations over the corresponding jump time. Suppose a trajectory is composed of $n^+$ instantons and $n^-$ anti-instantons jumping against the driving force. We use again $n=n^+-n^-$ and $m=n^++n^-$. If we neglect all “interactions” between instantons we can approximate the action as $${\mathcal S}\approx -\frac{1}{2}fn + mA_0$$ independent of the actual jump times. Integration over the jump times then leads to a path weight $$P(n^+,n^-;t) = \frac{1}{Z}{m\choose n^+} \frac{t^m}{m!} e^{-{\mathcal S}}$$ with normalization factor $$Z = \exp\left\{2t\cosh(f/2)e^{-A_0}\right\}.$$ We now follow the alternative derivation for the asymmetric random walk presented in Sec. \[sec:arw:alt\]. Introducing dimensionless fluxes $\nu^\pm\equiv n^\pm/({\kappa}t)$ \[related by Eq. [(\[eq:nu\])]{}\], we obtain the rate function $$\begin{aligned} \nonumber J(\nu^+,\nu^-) &=& 2\cosh(f/2)e^{-A_0} + {\kappa}\left\{\nu^+\ln\nu^+ + \nu^-\ln\nu^- \right. \\ && \left. - (f/2)(\nu^+-\nu^-) - (\nu^++\nu^-)(1-A_0-\ln{\kappa}) \right\}.\end{aligned}$$ The minimum is attained at fluxes $$\nu^\pm_\ast = \frac{e^{-A_0}}{{\kappa}} \left[\sqrt{(\zeta{\sigma})^2+1} \pm \zeta{\sigma}\right]$$ with $$\zeta \equiv \frac{1}{2}{\kappa}e^{A_0}(1-e^{-f}).$$ Plugging in the rate Eq. [(\[eq:sm:k\])]{} for ${\kappa}=k^+$ we find $\zeta\approx z$ and, moreover, we recover the form Eq. [(\[eq:arw:h\])]{} of the LDF for the ARW. Conclusions =========== We have studied the large deviation function for the entropy production in two simple one-dimensional systems: the asymmetric random walk of a particle on a discrete lattice and the continuous motion of a driven particle in an external potential. For both systems we have calculated the large deviation function using the path approach of Freidlin and Wentzell. However, while for the ARW the solution thus obtained agrees with the Donsker-Varadhan theory, for the continuous case the solution is only an approximation. In both approaches, the wings of the large deviation function are well described by a single trajectory composed of only forward (${\sigma}>0$) or only backward (${\sigma}<0$) jumps. Changing the prescribed rate ${\sigma}$, the transition between these two regimes appears as a kink at ${\sigma}=0$, which in the true large deviation function is smeared out by fluctuations. For the ARW the reason is that there are many combinations of forward and backward jumps (and therefore many trajectories) around ${\sigma}=0$ as described by the combinatorial factor in Eq. [(\[eq:arw:P\])]{}. 
For the continuous system we have verified numerically that small perturbations of the optimal path do not contribute to the large deviation function. For large $v_0$ and small driving forces $f_\ast\ll1$ trajectories are dominated by discrete barrier crossings and can be decomposed into instanton solutions. Summing these trajectories, we recover the large deviation function of the ARW. We thank Hugo Touchette for valuable discussions. TS gratefully acknowledges financial support by the Alexander-von-Humboldt foundation during the initial stage of this work. Floquet theory ============== We estimate the effect of small fluctuations through calculating the full determinant $\det\hat D$ of the operator Eq. [(\[eq:SL\])]{}. To this end we make use of the following relation $$\frac{\det\hat D}{\det\hat D_0} = \frac{\phi(t)}{\phi_0(t)}$$ with $\hat D_0$ a suitable reference operator [@kirs03]. In the simplest case $\hat D_0=-\partial_\tau^2$ but the exact form of $\hat D_0$ does not play a role. Here, $\phi(t)$ is the solution of $\hat D\phi=0$ whereas $\phi_0(t)$ is the solution of $\hat D_0\phi_0=0$. Defining the vector ${\mathbf x}\equiv(\phi,\dot\phi)$ we obtain a linear, periodic differential equation $$\label{eq:floq} \dot{{\mathbf x}} = {\mathsf M}(x_\ast(\tau)){\mathbf x}, \qquad {\mathsf M}(x) \equiv \left( \begin{array}{cc} 0 & 1 \\ -2U''(x) & 0 \end{array}\right).$$ From Floquet theory it is well-known that the solution can be written $$\label{eq:floq:sol} {\mathbf x}(\tau) = c_+e^{+\mu\tau}{\mathbf p}(\tau) + c_-e^{-\mu\tau}{\mathbf p}(-\tau)$$ with characteristic, or Floquet, exponent $\mu$. Hence, in the limit of long times the determinant behaves as $\det\hat D\sim\phi(t)\sim|e^{\mu t}|$ to leading order. We again change variables from $\tau$ to $x$ and solve the ordinary differential equation $${\frac{\partial {\mathsf X}}{\partial x}} = \frac{1}{\dot x_\ast}{\mathsf M}(x){\mathsf X} = \frac{1}{2}[E-U(x)]^{-1/2} {\mathsf M}(x){\mathsf X}$$ for a matrix ${\mathsf X}$ with the matrix ${\mathsf M}(x)$ given in Eq. [(\[eq:floq\])]{}. The initial condition is ${\mathsf X}(0)=\mathbf 1$, and we integrate a single period. The resulting matrix ${\mathsf X}(1)$ has two eigenvalues $\rho_\pm=e^{\pm\mu}$. Numerically we find $|\rho_\pm|\simeq1$ for the values of $E$ that can be obtained numerically. Therefore, small Gaussian fluctuations around the optimal path $x_\ast(\tau)$ do not contribute to the large deviation function in the long time limit. References {#references .unnumbered} ========== [10]{} U. Seifert, Rep. Prog. Phys. (in press) arXiv:1205.4176 (2012). G. Gallavotti and E. G. D. Cohen, Phys. Rev. Lett. [**74**]{}, 2694 (1995). J. Kurchan, J. Phys. A: Math. Gen. [**31**]{}, 3719 (1998). J. L. Lebowitz and H. Spohn, J. Stat. Phys. [**95**]{}, 333 (1999). T. Speck, V. Blickle, C. Bechinger, and U. Seifert, EPL [**79**]{}, 30002 (2007). D. Andrieux, P. Gaspard, S. Ciliberto, N. Garnier, S. Joubaud, and A. Petrosyan, Phys. Rev. Lett. [**98**]{}, 150601 (2007). F. Douarche, S. Joubaud, N. B. Garnier, A. Petrosyan, and S. Ciliberto, Phys. Rev. Lett. [**97**]{}, 140603 (2006). M. Paniconi and Y. Oono, Phys. Rev. E [**55**]{}, 176 (1997). A. Saha, J. K. Bhattacharjee, and S. Chakraborty, Phys. Rev. E [**83**]{}, 011104 (2011). T. Nemoto and S.-i. Sasa, Phys. Rev. E [**84**]{}, 061113 (2011). H. Touchette and R. J. Harris, in [*Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond*]{}, edited by R. Klages, W. Just, and C. 
Jarzynski (Wiley-VCH, Berlin, 2013), Chap. Large deviation approach to nonequilibrium systems. T. Speck and U. Seifert, Europhys. Lett. [**74**]{}, 391 (2006). V. Blickle, T. Speck, C. Lutz, U. Seifert, and C. Bechinger, Phys. Rev. Lett. [**98**]{}, 210601 (2007). U. Seifert and T. Speck, EPL [**89**]{}, 10007 (2010). J. Mehl, V. Blickle, U. Seifert, and C. Bechinger, Phys. Rev. E [**82**]{}, 032401 (2010). J. R. Gomez-Solano, A. Petrosyan, S. Ciliberto, and C. Maes, J. Stat. Mech.: Theor. Exp. [**2011**]{}, P01008 (2011). H. Touchette, Phys. Rep. [**478**]{}, 1 (2009). J. Mehl, T. Speck, and U. Seifert, Phys. Rev. E [**78**]{}, 011123 (2008). M. Freidlin and A. Wentzell, [*Random Perturbations of Dynamical Systems*]{}, Vol. 260 of [*Grundlehren der Mathematischen Wissenschaften*]{} (Springer-Verlag, New York, 1984). D. Lacoste, A. W. Lau, and K. Mallick, Phys. Rev. E [**78**]{}, 011915 (2008). N. Kumar, S. Ramaswamy, and A. K. Sood, Phys. Rev. Lett. [**106**]{}, 118001 (2011). M. Gorissen and C. Vanderzande, J. Phys. A: Math. Theor. [**44**]{}, 115005 (2011). V. Lecomte, J. P. Garrahan, and F. van Wijland, J. Phys. A: Math. Theor. [ **45**]{}, 175001 (2012). T. Speck and J. Garrahan, Eur. Phys. J. B [**79**]{}, 1 (2011). S. Dorosz and M. Pleimling, Phys. Rev. E [**83**]{}, 031107 (2011). D. ben Avraham, S. Dorosz, and M. Pleimling, Phys. Rev. E [**84**]{}, 011115 (2011). A. A. Budini, Phys. Rev. E [**84**]{}, 061118 (2011). H. Risken, [*The Fokker-Planck Equation*]{}, 2nd ed. (Springer-Verlag, Berlin, 1989). D. Lacoste and K. Mallick, Phys. Rev. E [**80**]{}, 021923 (2009). H. Kleinert, [*Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets*]{}, 5th edition ed. (World Scientific, Singapore, 2009). B. Caroli, C. Caroli, and B. Roulet, J. Stat. Phys. [**26**]{}, 83 (1981). J. Lehmann, P. Reimann, and P. Hänggi, Phys. Rev. E [**62**]{}, 6282 (2000). K. Kirsten and A. J. McKane, Ann. Phys. [**308**]{}, 502 (2003).
--- abstract: 'We report on microwave optomechanics measurements performed on a nuclear adiabatic demagnetization cryostat, whose temperature is determined by accurate thermometry from below 500$~\mu$K to about 1$~$Kelvin. We describe a method for accessing the on-chip temperature, building on the blue-detuned parametric instability and a standard microwave setup. The capabilities and sensitivity of both the experimental arrangement and the developed technique are demonstrated with a very weakly coupled silicon-nitride doubly-clamped beam mode of about 4$~$MHz and a niobium on-chip cavity resonating around 6$~$GHz. We report on an unstable intrinsic driving force in the coupled microwave-mechanical system acting on the mechanics that appears below typically 100$~$mK. The origin of this phenomenon remains unknown, and deserves theoretical input. It prevents us from performing reliable experiments below typically 10-30$~$mK; however [*no evidence*]{} of thermal decoupling is observed, and we propose that the same features should be present in [*all devices*]{} sharing the microwave technology, at different levels of strengths. We further demonstrate empirically how most of the unstable feature can be annihilated, and speculate how the mechanism could be linked to atomic-scale two level systems. The described microwave/microkelvin facility is part of the EMP platform, and shall be used for further experiments within and [*below*]{} the millikelvin range.' address: | (\*) Univ. Grenoble Alpes, Institut Néel - CNRS UPR2940, 25 rue des Martyrs, BP 166, 38042 Grenoble Cedex 9, France\ (\*\*) Since 01/10/2017: IEMN, Univ. Lille - CNRS UMR8520, Av. Henri Poincaré, Villeneuve d’Ascq 59650, France author: - 'X. Zhou$^{*,**}$, D. Cattiaux$^{*}$, R. R. Gazizulin$^{*}$, A. Luck$^{*}$, O. Maillet$^{*}$, T. Crozes$^{*}$, J-F. Motte$^{*}$, O. Bourgeois$^{*}$, A. Fefferman$^{*}$ and E. Collin$^{*,\dag}$' title: 'On-chip thermometry for microwave optomechanics implemented in a nuclear demagnetization cryostat' --- Introduction ============ Advances in clean-room technologies within the last decades make it possible to create mechanical elements with one (or more) dimensions below a micron, namely NEMS (Nano-Electro-Mechanical Systems) [@roukescleland]. These objects can be embedded in electronic circuits, and today even within [*quantum*]{} electronic circuits [@cleland2010; @quantelecsimmonds; @quantelec2]. Indeed a breakthrough has been made with microwave optomechanics, essentially shifting the concepts of optomechanics into the microwave domain [@lehnert2008]. These quantum-[*mechanical*]{} devices can then be thought of as a new resource for quantum electronics and quantum information processing, with the realization of e.g. quantum-limited optical-photon/microwave-photon converters and non-reciprocal microwave circuits [@convOskar; @cindy; @clerknonrec; @simmondsamplifnonrecip; @kippenbergamplifnonrecip; @jfink]. Profound quantum concepts are also under study, with for instance mechanical motion squeezing [@schwavsciencesqueeze] and recently mechanical entanglement of [*separate*]{} objects [@sillanpaaintrique]. In order to operate at the quantum limit, the mechanical mode in use has to be initially in its quantum ground state; for experiments on microwave optomechanics based on megahertz motion, this shall rely on a strong red-detuned pump signal that [*actively*]{} cools the mode, with an environment remaining “hot” [@kippenbergamplifnonrecip; @sillanpaamultimode; @schwabtones]. 
For most applied issues this is not a problem, even though it brings additional complexity: a microwave tone has to be kept on, which can lead to heating of the circuit and to mixing with other signals involved. However, for all experiments aiming at a careful characterization of the mechanical decoherence due to the thermal bath, this is impractical: the whole system is required to be in thermodynamic equilibrium. Accordingly, theoretical proposals testing the foundations of quantum mechanics have been put forward in the literature, both for microwave setups using quantum circuits [@ArmourBlencowe] and for conventional optomechanics (i.e. with lasers) [@bouwmeesterNJP]. Indeed, the first attempt to use nuclear adiabatic demagnetization for optomechanics is due to Dirk Bouwmeester, building on the technology of Leiden Cryogenics [@KlecknerThesis]. This ambitious setup was aimed at cooling a [*macroscopic*]{} moving mirror actuated/detected by optical means. On the other hand, a microwave-based optomechanical system hosting a NEMS is much less demanding; the heat loads from both the photon field and the internal heat release of materials are much smaller. As such, these setups have already proven to be compatible with dilution (millikelvin) technology [@cleland2010; @quantelecsimmonds; @quantelec2; @lehnert2008; @kippenbergamplifnonrecip; @schwavsciencesqueeze; @sillanpaaintrique]. ![image](setupII) We have thus built a microwave platform for optomechanics on a nuclear adiabatic demagnetization cryostat. The microwave circuitry has been kept as basic as possible so far for demonstration purposes; no JPA (Josephson Parametric Amplifier) has been used, and we rely only on [*intrinsic*]{} properties of optomechanics for the measurement [@AKMreview]. The cryostat reaches temperatures below 500$~\mu$K, and is equipped with accurate thermometry from the lowest temperatures up to about 1$~$K: using a noise SQUID-based (Superconducting QUantum Interference Device) thermometer [@noiseJohn] plus a $^3$He-fork thermometer [@Rob3Hefork]. We report on measurements realized on this platform with a very weakly coupled doubly-clamped beam flexural mode embedded in an on-chip cavity. Beam-based devices are indeed a tool of choice for applications like ultimate sensing (e.g. single molecule mass spectrometry [@roukesmass]), while the smallness of the photon-phonon coupling enables demonstrating the sensitivity of the methods used: we build on the optomechanical blue-detuned pumping instability to extract the [*on-chip*]{} temperature. On the other hand, popular drumhead devices routinely achieve much higher couplings [@schwavsciencesqueeze; @quantelecsimmonds] (from $\times 10$ to $\times 100$); the very same methods can then be applied with much less power, therefore limiting heating problems. We report not only on the thermalization of the mechanical mode itself, but also on that of the bulk of the mechanical element, through the temperature of its constitutive TLSs (Two-Level Systems). Below about 100$~$mK, the mechanical device starts to be self-driven out-of-equilibrium by a strong stochastic force of unknown origin. We report on this phenomenon with a detailed account of the observed features, which have been confirmed in different laboratories but never documented to our knowledge [@sillanpaPrivCom; @schwabPHD; @contactLehnert]. This thorough description calls for theoretical understanding, and we speculate on the possible ingredients that could underlie this effect. 
We demonstrate that the strongest events can be canceled by applying a DC voltage onto the transmission line. However, for beam-based devices the remaining (small) events seem to limit the measurement capabilities to about 10$~$mK, roughly the lowest achievable temperature for dilution cryostats. This is actually an order of magnitude better than what has been reported so far for doubly-clamped beams [@sillanpaamultimode; @schwabrocheleau; @TeufelBEAM]. Looking also at the spread in the reported thermalization temperatures of drum-like devices, obtained in [*completely similar*]{} conditions, we suspect that this genuine phenomenon should be present in all devices sharing the same microwave readout scheme and constitutive materials, at different levels of expression. In this respect, the present description is extremely relevant for the community, and calls for further experiments on other types of devices [*below*]{} 10$~$mK. By carefully characterizing the self-heating due to the [*continuously*]{} injected microwave power, we infer that for the extremely weakly coupled mode used here the technique is suitable down to temperatures of the order of $1-2~$milliKelvin; for much larger couplings (like in drum structures) where less power is needed, in the [*absence*]{} of any uncontrolled intrinsic drive the technique should be functional down to the lowest achievable temperatures. As a comparison, similar experiments in the millikelvin range using optics could only be performed in pulsed mode [@DavisTLS]. Experiment ========== The microwave setup that we have chosen for this demonstration is relatively basic, and resembles the one described in Ref. [@lehnert2008]. Details on the microwave wiring can be found in Appendix \[setups\]. The measurement is performed in transmission, with a coplanar transmission line coupled on-chip to microfabricated microwave cavities (see Fig. \[fig\_1\] left). The practical aspect of this design is multiplexing: in Fig. \[fig\_1\] one cavity hosts our on-chip thermometer while the others can be used for more complex experiments. The microwave cavities are realized through laser lithography and RIE (Reactive Ion Etching) of a 120$~$nm thick layer of niobium (Nb). A typical resonance curve at about $\omega_c/(2 \pi) \approx 6~$GHz is shown in Fig. \[fig\_1\] center, displaying a (bi-directional) coupling rate of $\kappa_{ext} /(2 \pi)\approx 100~$kHz and a total damping rate of about $\kappa_{tot}/(2 \pi) \approx 150~$kHz. The NEMS mechanical element we use as on-chip thermometer is made from 80$~$nm thick high-stress silicon-nitride (SiN, 0.9$~$GPa), grown on top of silicon. It is a 50$~\mu$m long doubly-clamped beam of width 300$~$nm. It is covered by a 30$~$nm layer of aluminum (Al), capacitively coupled to the cavity through a 100$~$nm gap. The aluminum part has been patterned using standard e-beam lithography and lift-off, while the beam was released through RIE etching of the silicon-nitride followed by a selective XeF$_2$ silicon etching [@KunalsJLTP]. The silicon-nitride has not been removed below the niobium layer. The mechanical resonance of the first flexural mode we use is shown in Fig. \[fig\_1\] right, around $\Omega_m/(2 \pi) \approx 4~$MHz, with damping rate $\gamma_m/(2 \pi)$ of order 10$~$Hz. Such a mechanical mode still hosts about $n_{thermal} \approx 6~$phonons around 1$~$mK, large enough to be considered in the [*classical*]{} limit and thus well adapted to mode thermometry. 
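This last estimate follows directly from the Bose–Einstein occupation of the mode, $n_{thermal}=[\exp(\hbar\Omega_m/k_B T)-1]^{-1}$; the minimal Python sketch below (illustrative only) uses the 4$~$MHz frequency quoted above, yields a few phonons at 1$~$mK in line with the figure given in the text, and crosses over to the classical scaling $n_{thermal}\approx k_B T/(\hbar\Omega_m)$ at higher temperatures.

```python
import numpy as np

hbar = 1.054571817e-34  # J s
kB   = 1.380649e-23     # J / K

def n_thermal(freq_hz, T_kelvin):
    """Bose-Einstein occupation of a mode of frequency freq_hz at temperature T."""
    x = hbar * 2.0 * np.pi * freq_hz / (kB * T_kelvin)
    return 1.0 / np.expm1(x)

for T in (1e-3, 10e-3, 100e-3, 1.0):
    print(f"T = {T:7.3f} K  ->  n_thermal ~ {n_thermal(4e6, T):10.1f}")
```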
The microwave setup has been carefully calibrated by electronic means; we therefore quote injected powers in Watts applied on-chip, and detected spectra in Watts/Hz (expressed in [*number of 6$~$GHz photons*]{}) at the output port of the chip. These units are adapted to both issues of heat leaks (cryogenics) and signal-to-noise (electronics) discussed in this Article. Two setups have been used: a dry commercial BlueFors$^{\textregistered}\,$ (BF) machine for preliminary experiments (base temperature 7.5$~$mK), and then the home-made Grenoble nuclear adiabatic demagnetization cryostat (operated down to 400$~\mu$K) [@paperYuriSatge]. Particular care has been taken in the construction of the two cryostats’ temperature scales, and details are given in Appendix \[setups\]. ![ Optomechanical schemes used (drawing not to scale, frequency domain). (a): “in-cavity” pumping: a strong tone (in green) is applied at the cavity resonance and the two (equivalent) sidebands are measured. (b) Red-detuned pumping: a strong microwave signal is applied (in red) at a frequency detuned from the cavity $\omega_c$ by $\Delta = -\Omega_m$, and we measure the so-called anti-Stokes peak at $\omega_c$. (c) Blue-detuned pumping: a strong pump tone (in blue) is applied at frequency $\omega_c+\Omega_m$, while we measure the Stokes peak.[]{data-label="fig_2"}](stokesantistokes){width="8.5cm"} The opto-mechanical interaction arises from the force exerted by light onto movable objects [@AKMreview]. It corresponds to the transfer of momentum carried by light (i.e. photon particles) to the surfaces on which it reflects. In a cavity design with one movable mirror, the retarded nature of this so-called radiation pressure force (due to the finite lifetime of light inside the cavity) leads to damping or anti-damping of the motion, depending on the frequency detuning of the input light with respect to the cavity [@AKMreview]. Our moving end mirror is the NEMS beam capacitively coupled to the microwave resonator of Fig. \[fig\_1\] (the cavity) [@lehnert2008]. The standard schemes we use are illustrated in Fig. \[fig\_2\], with an input wave of power $P_{in}$ at frequency $\omega_c+\Delta$. We are in the so-called resolved-sideband regime, with $\gamma_m, \kappa_{tot} \ll \Omega_m$ [@AKMreview]. When we pump power in the system exactly at the frequency of the cavity $\omega_c$ (detuning $\Delta=0$), the mechanical motion leads to a phase shift of the light [@AKMreview]. The Brownian motion of the mechanical element thus imprints two equivalent sidebands in the spectrum that we can measure. For a red-detuned pump (frequency detuning $\Delta = -\Omega_m$, Fig. \[fig\_2\]), energy quanta from the mechanics (phonons) can be transferred to the optical field (photons). This leads to the well-known sideband cooling technique [@AKMreview; @firstopticsSBcool; @microwaveSBCoolLehnertPRL]. The so-called anti-Stokes sideband peak in the spectrum is favored, and the mechanical mode is [*damped*]{} by light. On the other hand, for a blue-detuned pump ($\Delta = +\Omega_m$, Fig. \[fig\_2\]), the optical field generates a parametric instability in the mechanics [@AKMreview; @clerkmarquardtharris2010]. The Stokes sideband peak is favored, and the mechanics is amplified through [*anti-damping*]{}. 
Eventually one can reach at high pumping powers the self-oscillation regime [@kippenbergOscillVahalaPRL2005; @delftsteeleOscill].\ ![ Effective damping/anti-damping $\gamma_{ef\!f}$ measured for the blue and the red-detuned pumping schemes as a function of power (at 210$~$mK, blue and red squares respectively). The slope of the fit (black line) leads to the definition of $g_0$, while the $P_{in} \approx 0$ corresponds to $\gamma_m$. The arrow indicates the position of the threshold $P_{thr}$ towards self-sustained oscillations, which is simply proportional to $\gamma_m$ (see text).[]{data-label="fig_3"}](gammaeff){width="11cm"} We start by calibrating the optomechanical interaction. The optical damping and anti-damping are linear in applied power $P_{in}$, see Fig. \[fig\_3\]. From a fit \[Eq. (\[eq1\]) below\], we can infer the so-called single photon coupling strength $g_0 = \frac{1}{2} \omega_c \frac{1}{C} \frac{d C}{d x} x_{zpf}$ with $x_{zpf}$ the zero-point-motion [@AKMreview]. This is essentially a [*geometrical*]{} parameter, arising from the modulation $\frac{d C}{d x}$ of the microwave mode capacitance $C$ by the beam motion [@lehnert2008]. We find $g_0/(2 \pi) \approx 0.55 \pm 0.1~$Hz, corresponding to the out-of-plane flexure. This coupling is particularly small, the idea being to take advantage of that to demonstrate the sensitivity of our method. The magnitude of the output power is fit to theory [@AKMreview], leading to a calibration of the measured phonon mode population/temperature \[performed at 210$~$mK, parameter $\cal M$ in Eq. (\[eq2\]) and Fig. \[fig\_fit\]\]. More details on the optomechanics measurements can be found in Appendix \[optomechs\]. Method ====== The method we propose builds on the parametric instability of the blue-detuned pumping scheme. When the pump tone is applied at $\omega_c$, the size of the two equivalent sideband peaks (their measured area ${\cal A}_0$, in photons/s) is simply proportional to injected power $P_{in}$ and mode temperature $T_{mode}$ [@AKMreview]: $${\cal A}_0= {\cal M} \, P_{in} T_{mode} \label{eq2}.$$ This optomechanics scheme alters neither the measured position of the sideband peaks (detuned by $\pm \Omega_m[T_{beam}]$), nor their linewidth $\gamma_m(T_{beam})$: both are determined by mechanical properties, which depend on the beam temperature $T_{beam}$. The lineshapes are Lorentzian. We introduce the [*number of stored photons*]{} in the cavity $n_{cav}$, function of both $P_{in}$ and $\Delta$: $$n_{cav}\left(P_{in}\right) = \frac{P_{in} \, \kappa_{ext}/2}{\hbar \left( \omega_c + \Delta \right) \left(\Delta^2 + \kappa_{tot}^2/4 \right) } \label{eq0}.$$ ![ Main: gain of the parametric amplification method based on the blue-detuned pumping scheme (210$~$mK data, brown squares), as a function of $P_{in}$. As in Fig. \[fig\_3\], the arrow indicates the position of the threshold $P_{thr}$ towards self-sustained oscillations. Inset: resonance line (blue trace) measured at very large gains, demonstrating its lorentzian lineshape (linewidth of order 0.9$~$Hz, close to instability). The black lines are fits, and we report about 20$~$dB amplification of the Brownian signal. Finite error from both statistics and fluctuations in mechanical parameters (see text). []{data-label="fig_4"}](gain){width="11cm"} On the other hand for blue-detuned pumping, as we increase the injected power $P_{in}$ (but keep it below the instability threshold), the area $\cal A$ of the Stokes peak is amplified. 
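For orientation, Eq. [(\[eq0\])]{} can be evaluated directly with the cavity parameters quoted above ($\omega_c/(2 \pi) \approx 6~$GHz, $\kappa_{ext}/(2 \pi) \approx 100~$kHz, $\kappa_{tot}/(2 \pi) \approx 150~$kHz); the sketch below is a minimal illustration only, the chosen input power being an arbitrary example rather than a value used in the experiment.

```python
import numpy as np

hbar = 1.054571817e-34  # J s

# cavity and mechanical parameters quoted in the text (angular frequencies, rad/s)
omega_c   = 2 * np.pi * 6e9
Omega_m   = 2 * np.pi * 4e6
kappa_ext = 2 * np.pi * 100e3
kappa_tot = 2 * np.pi * 150e3

def n_cav(P_in, delta):
    """Intra-cavity photon number, Eq. (eq0)."""
    return (P_in * kappa_ext / 2.0) / (
        hbar * (omega_c + delta) * (delta**2 + kappa_tot**2 / 4.0)
    )

P_in = 1e-12  # 1 pW on-chip, arbitrary example
for delta, label in [(0.0, "in-cavity"), (+Omega_m, "blue-detuned"), (-Omega_m, "red-detuned")]:
    print(f"{label:12s}: n_cav ~ {n_cav(P_in, delta):.3g}")
```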
The blue/red-detuned pumping expressions read [@AKMreview]: $$\begin{aligned} \gamma_{ef\!f}\left(P_{in}\right) & = & \gamma_m - \mbox{Sign}\left(\Delta\right) \frac{4 g_0^2 \, n_{cav}\left(P_{in}\right) }{\kappa_{tot}} \label{eq1} , \\ {\cal A} & = & {\cal A}_0 \times \frac{\gamma_m}{\gamma_{ef\!f}\left(P_{in}\right)} \label{eqgain} ,\end{aligned}$$ in the limit of negligible cavity thermal population. For $\Delta>0$, the last term in Eq. (\[eqgain\]) after the $\times$ sign is a gain, illustrated in Fig. \[fig\_4\]. It arises from the anti-damping, with $\gamma_{ef\!f}$ the linewidth of the Lorentzian peak. Since the applied power $P_{in}$ is controlled, the knowledge of the system parameters allows one to straightforwardly recalculate the value of ${\cal A}_0$, and thus of $T_{mode}$ (i.e. the temperature of the mode [*in the absence of*]{} optomechanical pumping). In Fig. \[fig\_4\] we demonstrate 18.5$~$dB gain, which is greater than the previously reported maximum for a similar setup using a graphene device [@delftsteeleOscill]. Essentially [*only*]{} $\gamma_m$ depends on temperature, and has to be known to apply Eq. (\[eqgain\]). It can easily be obtained from a measurement of the mechanical effective damping (the linewidth of the Lorentzian Stokes peak, Fig. \[fig\_3\]), by either extrapolating to $P_{in} \rightarrow 0$ or defining the position of the threshold $P_{thr} \propto \gamma_m$ \[with Eq. (\[eq1\]) at $\gamma_{ef\!f}=0$, see Figs. \[fig\_3\], \[fig\_4\] and \[fig\_oscill\]\]. More details are given in Appendix \[optomechs\]. Obviously, the main requirement for this $T_{mode}$ estimate is the [*stability*]{} of experimental parameters. The mechanical mode itself happens to be the limiting element (see Fig. \[fig\_3\] and Fig. \[fig\_5\] below), leading to finite error bars at large gains in Fig. \[fig\_4\]; fluctuations are further discussed in Sections \[inequil\] and \[unstable\], see Fig. \[fig\_8\]. ![ Main: mechanical damping parameter $\gamma_m$ as a function of cryostat temperature $T_{cryo}$. Inset: resonance frequency shift from $\Omega_m(T^{\ast})$ of the flexural mode in the same conditions. The black lines are fits following the TLS model ($\alpha \approx 0.65$, $T^{\ast} \approx 1.1~$K, see text). Note the scatter in the data (also in Fig. \[fig\_3\]), further discussed in Section \[unstable\].[]{data-label="fig_5"}](tls){width="11cm"} The measured mechanical damping rates $\gamma_m$ and resonance frequencies $\Omega_m$ are shown in Fig. \[fig\_5\]. The displayed dependencies are characteristic of NEMS devices in the millikelvin range: a damping $\gamma_m \propto T^\alpha$ with $0.3 < \alpha < 2.5$ and a logarithmic frequency shift $\propto \ln \left( T/T^{\ast} \right)$ with $T^{\ast}$ a characteristic temperature (see fits in Fig. \[fig\_5\]). For all materials (from monocrystalline to amorphous) this behavior is understood as a signature of TLSs (Two-Level Systems) [@TLSMohanty; @TLSPashkin; @KunalPRB; @KunalPRL; @FaveroTLSPRL; @DavisTLS; @Painter]: either defects (e.g. in monocrystalline Si or polycrystalline Al), or states constitutive of the atomic arrangement itself (for amorphous SiN). Direct coupling of the first flexural mode to the phonon bath (i.e. clamping losses) [@RoukesLifshitzClamp] is negligible for these structures at millikelvin temperatures. Within the TLS model the mechanical mode is coupled to the two-level systems, which are themselves coupled to the external bath: the electrons and the (thermal) phonons present in the moving structure [@KunalPRL]. 
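As a numerical aside on the calibration above, Eqs. [(\[eq1\])]{}-[(\[eqgain\])]{} can be evaluated with the quoted parameters ($g_0/(2 \pi) \approx 0.55~$Hz, $\gamma_m/(2 \pi) \approx 10~$Hz, $\kappa_{tot}/(2 \pi) \approx 150~$kHz); the Python sketch below is an order-of-magnitude illustration only, the fractions of the threshold photon number being arbitrary examples.

```python
import numpy as np

# illustrative parameters taken from the text (angular frequencies in rad/s)
g0        = 2 * np.pi * 0.55     # single-photon coupling
gamma_m   = 2 * np.pi * 10.0     # mechanical damping
kappa_tot = 2 * np.pi * 150e3    # total cavity damping

def gamma_eff(n_cav, sign):
    """Eq. (eq1): sign = -1 for red-detuned (damping), +1 for blue-detuned (anti-damping)."""
    return gamma_m - sign * 4.0 * g0**2 * n_cav / kappa_tot

# threshold photon number where the blue-detuned anti-damping cancels gamma_m
n_thr = gamma_m * kappa_tot / (4.0 * g0**2)
print(f"n_cav at threshold ~ {n_thr:.2e}")

# parametric gain on the Stokes-peak area, Eq. (eqgain), below threshold
for frac in (0.5, 0.9, 0.99):
    gain = gamma_m / gamma_eff(frac * n_thr, +1)   # area (power) ratio
    print(f"n_cav = {frac:.2f} n_thr -> gain = {10 * np.log10(gain):.1f} dB (power ratio)")
```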
For superconducting materials, the electronic contribution is negligible and the TLSs temperature should reflect the phononic temperature in the beam (i.e. the temperature of high frequency modes well-coupled to the clamping ends), which we simply define as $T_{beam}$. By inverting the fits in Fig. \[fig\_5\] it is thus straightforward to extract $T_{beam}$. ![ Main: microwave heating as a function of $n_{cav}$ performed at 210$~$mK, with temperatures recalculated from the measured width and position (corresponding to $T_{beam}$) and peak area ($T_{mode}$). The line is a linear fit, leading to the microwave-heating coefficient $\sigma$. Inset: heating coefficient $\sigma$ versus cryostat temperature $T_{cryo}$; the dashed line is a power law guide to the eye. []{data-label="fig_6"}](heat){width="11cm"} The aim of our work is thus to compare the temperature of the cryostat $T_{cryo}$ to $T_{beam}$ and $T_{mode}$. These results are analyzed in Section \[inequil\]; however it is mandatory to quantify beforehand the impact of the microwave pump power on the defined temperatures. For this purpose we use the “in-cavity” pumping scheme (Fig. \[fig\_2\]). We measure, at a given temperature $T_{cryo}$, the mechanical characteristics $\gamma_m$, $\Omega_m$ and the area ${\cal A}_0$ of the two sideband peaks as a function of injected microwave power $P_{in}$. Using respectively the fits of Fig. \[fig\_5\] and Eq. (\[eq2\]), we can recalculate the expected temperatures $T_{beam}$ and $T_{mode}$ under microwave irradiation. Since the local heating should be proportional to the local electric field squared confined onto the NEMS, we discuss these results as a function of $n_{cav}$. The properties of the cavity itself as a function of the pump settings are discussed in Appendix \[optomechs\]. ![ (a): mode temperature $T_{mode}$ as a function of cryostat temperature $T_{cryo}$. (b): beam temperature $T_{beam}$ inferred from the TLS bath as a function of $T_{cryo}$. The conditions of the measurements are defined in the legend. The line is the $y=x$ function. []{data-label="fig_7"}](tmode "fig:"){width="11cm"} ![ (a): mode temperature $T_{mode}$ as a function of cryostat temperature $T_{cryo}$. (b): beam temperature $T_{beam}$ inferred from the TLS bath as a function of $T_{cryo}$. The conditions of the measurements are defined in the legend. The line is the $y=x$ function. []{data-label="fig_7"}](tbeam "fig:"){width="11cm"} A typical result obtained at 210$~$mK is shown in Fig. \[fig\_6\] (main graph). Both $T_{beam}$ (obtained equivalently from damping and frequency shift) and $T_{mode}$ display [*the same*]{} linear dependence on $n_{cav}$, and the two sidebands are equivalent: this demonstrates that the effect is indeed thermal. Defining the slope of the fit as $\sigma$, we can extract this coefficient as a function of $T_{cryo}$ (Fig. \[fig\_6\] inset). This temperature-dependence is non-trivial, and no heating model is provided here: such a model should take into account the microwave absorption in the materials, the energy flow in the beam [*plus*]{} the clamping zone slab (suspended by the fabrication undercut), and finally the anchoring to the bulk of the chip. Nonetheless, we can use this graph to estimate the NEMS heating for a given $T_{cryo}$ and $n_{cav}$ in the blue-detuned pumped scheme. As a result, we extrapolate that applying a power of order $P_{thr}$ at 1$~$mK would heat the beam by about 1$~$mK; above 10$~$mK, the heating is essentially negligible (see error bars on $T$ axis of Fig. 
\[fig\_oscill\]). Knowing the smallness of the coupling $g_0$ employed here, this demonstrates the capabilities of the method. Furthermore, because of this microwave-heating it is obviously meaningless to report experiments below about $T_{cryo} \approx 1~$mK for this first “ultimate” cooling attempt. In-equilibrium Results {#inequil} ====================== From fits to Eq. (\[eqgain\]) of the power-dependent Stokes peak area, we thus extract $T_{mode}$. Reversing the fits of the mechanical parameters $\gamma_m$, $\Omega_m$ (Fig. \[fig\_5\]) we obtain $T_{beam}$. Both are displayed as a function of $T_{cryo}$ in Fig. \[fig\_7\], for our two experimental setups. We demonstrate a thermalization from about 10$~$mK to 1$~$K of the mode and of the whole beam; the device [*is in thermal equilibrium*]{} over 2 orders of magnitude in $T_{cryo}$. Reported lowest thermodynamic temperatures in the literature lie all within the range 10 - 30$~$mK [@cleland2010; @quantelecsimmonds; @quantelec2; @lehnert2008; @kippenbergamplifnonrecip; @schwavsciencesqueeze; @sillanpaaintrique; @microwaveSBCoolLehnertPRL; @delftsteeleOscill; @jfink]; however one work reports a potential mode temperature for an Al-drum of order 7$~$mK, consistent with base temperature of dry dilution cryostats [@schwab7mK]. Similarly, a lowest temperature of 7$~$mK is reported for a gigahertz phononic crystal [@Painter]; but obviously such a mode cannot be used for phonon thermometry at millikelvin temperatures. As far as thermalization between [*measured*]{} $T_{cryo}$ and $T_{mode}, T_{beam}$ is concerned, Fig. \[fig\_7\] reproduces the state-of-the-art, but does not go yet beyond even though the cryostat cools well below 7$~$mK. The reason for this is discussed in Section \[unstable\]: below typically 100$~$mK, the system displays [*huge amplitude fluctuations*]{} which hinder the measurements (Fig. \[fig\_8\]). These features have been seen by other groups for [*beam-based microwave optomechanical devices containing an Aluminum layer*]{}, but never reported so far [@sillanpaPrivCom; @schwabPHD; @contactLehnert; @jfinkpriv]. Until recently, this essentially prevented experiments from being performed on these types of devices below physical temperatures of order 100$~$mK [@sillanpaamultimode; @schwabrocheleau; @TeufelBEAM]; remarkably however, (Al covered) ladder-type Si beams [@jfink; @jfinkpriv] seem to be less susceptible to this problem than simple doubly-clamped beams. On the other hand, large signal fluctuations [*have not*]{} been observed to date for (Al) drum-like structures [@sillanpaPrivCom; @contactLehnert], and do not show up in schemes which [*do not*]{} involve microwaves (e.g. magnetomotive measurements of SiN and Al beams [@OlivierPhD; @TLSPashkin], or laser-based measurements of Si beams [@DavisTLS]). This is what enabled nano-mechanical experiments to be conducted at base temperature of dilution cryostats. Up to now, the only possibility to deal with these large events was [*post-selection*]{}, which is extremely time-consuming and even stops being useable at all at the lowest temperatures. The origin of this phenomenon remains unknown, and we can only speculate on it in Section \[unstable\] hereafter. Note that there is [*no evidence*]{} of thermal decoupling in Fig. \[fig\_7\]. More conventional frequency $\Omega_m$ and damping $\gamma_m$ fluctuations [@ACSnanoUs] are also present (see e.g. Fig. \[fig\_8\]). 
These features have been reported for essentially all micro/nano mechanical devices, as soon as they were looked for; their nature also remains unexplained, and their experimental magnitude is much greater than all theoretical expectations [@RoukesHentzNatNanotech]. Frequency noise essentially leads to inhomogeneous broadening [@OliveNJP]. It [*does not*]{} alter the area $\cal A$ measurement, but does corrupt both frequency and linewidth estimates. This noise comes in with a $1/f$-type component [@RoukesHentzNatNanotech; @ACSnanoUs], plus [*telegraph-like jumps*]{} [@MartialPhD]. It leads to the finite error bars in Fig. \[fig\_5\] inset; below 10$~$mK, the mechanical parameters $\Omega_m$ and $\gamma_m$ cannot be measured accurately. Similar damping fluctuations [@ACSnanoUs] are more problematic, since the amplification gain Eq. (\[eqgain\]) depends on $\gamma_m$. The error bars of Fig. \[fig\_5\] (main graph) are essentially due to this; they translate into a finite error for the estimate of the gain, Fig. \[fig\_4\], which itself limits the resolution on $T_{mode}$ (Fig. \[fig\_7\], top). ![ Stokes resonance peak (amplitude in color scale, frequency from $\Omega_m$ on the left) as a function of time at about 1$~$mK ( 0.6$~$nW applied power, blue-detuned pumping scheme). Huge amplitude jumps are seen, together with frequency (and damping) fluctuations (see text).[]{data-label="fig_8"}](fig1mk){width="9.5cm"} unstable drive force features {#unstable} ============================= In Fig. \[fig\_8\] we show a typical series of spectral acquisitions as a function of time, around 1$~$mK. We see very large amplitude fluctuations which start to appear around 100$~$mK, and get worse for lower temperatures (regardless of the scheme used): the “spikes” grow even larger, but more importantly [*their occurrence*]{} increases. We studied these events in the whole temperature range accessible to our experiment. Their statistics seems to be rather complex, and shall be the subject of another Article. Key features are summarized in this Section. For blue-detuned pumping the spikes worsen as pump power increases, while for red-detuned pumping it is the opposite, suggesting that the effective damping of the mode plays an important role. With in-cavity pumping, spiky features are also present at very low powers, but not at high powers when the NEMS physical temperature exceeds about 100$~$mK. The recorded heights can be as large as equivalent mode temperatures in the Kelvin range. Around $10-30~$mK, post-selection becomes impossible. ![ Comparing measurements with/without applied DC voltage in similar conditions. (a): resonance lines (blue) obtained at 5$~$mK with 0.6$~$nW drive; 2 hours acquisition shown in 560 averaged traces, with [*no*]{} DC voltage bias. (b): resonance lines obtained at 7$~$mK with 0.8$~$nW drive; 18 hours in 770 averaged traces, with +3$~$V DC applied on the transmission line (see text). The scheme used for both data sets is blue-detuned pumping, and the thick black line is a fit of the average curve (in red). Note the $10^4$ difference in the vertical axes. []{data-label="fig_9"}](figspike "fig:"){width="8cm"} ![ Comparing measurements with/without applied DC voltage in similar conditions. (a): resonance lines (blue) obtained at 5$~$mK with 0.6$~$nW drive; 2 hours acquisition shown in 560 averaged traces, with [*no*]{} DC voltage bias. 
(b): resonance lines obtained at 7$~$mK with 0.8$~$nW drive; 18 hours in 770 averaged traces, with +3$~$V DC applied on the transmission line (see text). The scheme used for both data sets is blue-detuned pumping, and the thick black line is a fit of the average curve (in red). Note the $10^4$ difference in the vertical axes. []{data-label="fig_9"}](figvolt "fig:"){width="8cm"} With the aim of searching for the origin of this effect, we have characterized it in various situations. We first realized that cycling the system from the lowest temperatures to above 100$~$mK was producing a sort of “reset”. But very quickly (a matter of hours) after cooling down again, the large spikes come back to dominate the signal. We then tried to apply a small magnetic field to the system; this was not very conclusive. However, applying a DC voltage had a drastic effect on these random features. This is illustrated in Fig. \[fig\_9\]: with a few volts on the chip’s coplanar transmission line [*all the large features disappear*]{}. The averaged signal (deeply buried in the noise) recovers a reasonable Lorentzian lineshape (see fit in Fig. \[fig\_9\]), while the shape of the spikes is not resolved (Fig. \[fig\_8\]). Details on the DC voltage biasing are given in Appendices \[setups\] and \[optomechs\]. In discussing the source of this feature, a few comments have to be made. What is shown in Fig. \[fig\_8\] is primarily fluctuations of the [*output*]{} optical field. These are detected [*only*]{} on the Stokes and Anti-Stokes peaks, for any of the schemes shown in Fig. \[fig\_2\]. Furthermore, the threshold to self-oscillation in the blue-detuned pumping scheme displays a large hysteresis (certainly due to nonlinearities in the system, see Fig. \[fig\_oscill\], Appendix \[optomechs\]). We have noticed that when the microwave power applied (at frequency $\omega_c+\Omega_m$) lies within this hysteresis, the spiky events seem to be able to trigger the self-oscillation. This would not be possible if the amplitude fluctuations measured were only in the detected signal, at the level of the HEMT. We thus have to conclude that we see genuine [*mechanical*]{} amplitude fluctuations. However, these cannot be due to damping fluctuations alone (which could trigger self-oscillations), since we do see the same type of features when pumping red-detuned or in-cavity. If these fluctuations were due to the input field itself, from Fig. \[fig\_6\] we would reasonably conclude that the NEMS beam would be heated to rather high temperatures, leading to broad and strongly frequency-shifted (see Fig. \[fig\_5\]) Stokes/Anti-Stokes peaks. This is not compatible with the measurement of Fig. \[fig\_8\]. The only reasonable conclusion thus seems to be that we do suffer from a [*genuine extra stochastic force*]{} acting on the mechanical element. This is consistent with a stronger sensitivity to the phenomenon when the effective damping of the mode is small (blue-detuned scheme). Since a DC voltage applied [*only to the cell*]{} can drastically modify the measured features, the source has to be on-chip. Noting furthermore that with in-cavity pumping it disappears when the beam temperature exceeds 100$~$mK, we conclude that it [*should even be within the mechanical element*]{}. 
But the mechanism remains mysterious: citing only documented effects in other areas of research, is it linked to vortex motion in the superconductor [@vortexNoise], trapped charges [@chargeNoise], adsorbed molecules [@adsorbate] or to atomic-size Two-Level-Systems in dielectrics (beyond the standard friction model) [@dielecTLS]? The low-temperature properties of NEMS are described within the tunneling model of Two-Level-Systems (TLSs): for damping, frequency shifts, and phase fluctuations [@DavisTLS; @fongPRB; @ACSnanoUs]. It is thus natural to consider strongly coupled individual TLSs as the most probable source of our problems. Besides, while the actual nature of these microscopic defects remains elusive in most systems, they could be generated in many ways beyond the standard atomic configuration argument [@ustinov]; an electron tunneling between nearby traps would be a TLS strongly coupled to its electromagnetic environment, among other possibilities [@adsorbate]. For Al-based NEMS, these would create (only a few) defects present in (or [*on*]{}) the Al layer; they should carry a dipole moment, which couples them to the microwave drive as well as to the electric field generated by the applied DC voltage. This field distorts their potentials, such that they could get locked in one state and “freeze”. Furthermore, our results seem to be very similar to those of Ref. [@ParpiaGamma] obtained with a [*macroscopic*]{} mechanical glass sample, where “spiky” events were demonstrated to originate from the interaction with low-level radioactivity (gamma rays). These results suggest a parametric coupling to TLSs at gigahertz frequencies mediated by the microwave drive, but where the energy corresponding to the large peaks would be provided by the external radiation. The reason why the mechanism should be dependent on the low phononic dimensionality or size of the device (typ. width about 100$~$nm, much smaller than the phonon wavelength at 10$~$mK) is nontrivial. One simple argument could be that the spring constants of the modes under consideration are very different: about 1$~$N/m for megahertz beams and 100$~$N/m for drumheads. This could justify why beam-based structures are more reactive to external force fluctuations; an immediate consequence of this argument is then that membrane-based Al devices [*are not*]{} truly immune to force fluctuations, but are just less sensitive: if that is true, cooling them to low enough temperatures would eventually revive the same features as for beam-based NEMS. ![ (a): area of the peak extracted from a sliding average performed with a window of 26$~$minutes at 23$~$mK, with applied power 0.8$~$nW (blue-detuned pumping). (b): same measurement performed at 7$~$mK. The horizontal dashed lines are the expected thermal-population values, matched at 23$~$mK in the stable zone (middle of graph). At the lowest temperatures, we observe very large amplitude fluctuations which [*cannot be*]{} of thermal origin; the measured area always remains larger than the expected value (see text).[]{data-label="fig_10"}](sliding){width="12cm"} To conclude, let us concentrate on the measurements performed at ultra-low temperatures with a DC voltage bias (of the type of Fig. \[fig\_9\], bottom). Even with the help of the in-built parametric amplification, the signal is very small and requires decent averaging, typically here about 30$~$minutes for reasonable error bars. 
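The sliding-average analysis used for Fig. \[fig\_10\] can be summarized by a short sketch; the code below (Python) is only a schematic illustration on synthetic data, the window length, linewidth and noise level being arbitrary stand-ins for the actual acquisition parameters.

```python
import numpy as np

def sliding_peak_area(spectra, freqs, window):
    """Average `window` consecutive spectra (one row per acquisition) and
    integrate the averaged peak over frequency, as done for Fig. 10 (sketch only)."""
    areas = []
    for i in range(len(spectra) - window + 1):
        mean_spec = spectra[i:i + window].mean(axis=0)
        areas.append(np.trapz(mean_spec, freqs))
    return np.array(areas)

# toy example: 200 noisy Lorentzian acquisitions, arbitrary window
freqs = np.linspace(-50.0, 50.0, 1001)                       # Hz, offset from Omega_m
gamma = 5.0                                                   # Hz, linewidth (arbitrary)
lorentz = (gamma / (2 * np.pi)) / (freqs**2 + (gamma / 2)**2)  # unit-area peak
rng = np.random.default_rng(0)
spectra = lorentz + 0.02 * rng.standard_normal((200, freqs.size))
print(sliding_peak_area(spectra, freqs, window=50)[:5])
```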
Even if the resonance peak is Lorentzian, below typically 20$~$mK the measured area $\cal A$ does not correspond to the actual cryostat’s temperature $T_{cryo}$: it is always larger, but its actual value presents [*large fluctuations*]{} in magnitude. This is demonstrated in Fig. \[fig\_10\], with identical measurements performed at 23$~$mK and 7$~$mK. What is shown is how the measured area of the Stokes peak evolves over time, performing a sliding average over the whole set of acquired data. In the former case, we see that the fluctuations of the measured area are not more than about $+ 60~\%$; they are much smaller for higher temperatures, leading to proper estimates of $T_{mode}$. However, for the latter these are greater than $300~\%$. Besides, the fluctuations happen to have extremely slow dynamics: while spikes switch on/off faster than our acquisition time, their overall occurrence fluctuates over [*a day*]{} (Fig. \[fig\_10\]). By no means could this behavior be explained by a thermal decoupling of the device from the cryostat. As a consequence, even the calm zones in Fig. \[fig\_8\] are corrupted by the phenomenon shown in Fig. \[fig\_10\]. This is essentially why no reliable data could be acquired below 10$~$mK; but from the DC biasing and the continuous monitoring of the Stokes peak, thermal equilibrium has been demonstrated at a temperature about ten times lower than previously reported in microwave doubly-clamped NEMS experiments [@sillanpaamultimode; @schwabrocheleau; @TeufelBEAM]. Conclusion ========== We presented measurements of a microwave optomechanical system performed on a nuclear adiabatic demagnetization cryostat, able to reach temperatures well below the 10$~$mK limit of conventional dilution machines. Relying on fairly standard microwave wiring and the in-built parametric amplification provided by blue-detuned pumping, we devised a method providing accurate thermometry of both the mechanical mode and its on-chip environment (the Two-Level Systems to which it couples). The experiment was conducted on a beam Nano-Electro-Mechanical System embedded in an on-chip microwave cavity. The efficiency of the method is demonstrated with a very low opto-mechanical coupling. Thermalization is shown from 10$~$mK to 1$~$K with no sign of thermal decoupling. However, we report strong fluctuations in the signal amplitude which prevented reliable experiments from being conducted at the lowest temperatures. These features appear around 100$~$mK and have been observed in different laboratories, but had never been studied in detail so far. We demonstrate the characteristics of these fluctuations, and argue that they are due to an extra stochastic driving force of unknown origin. Microwave irradiation seems to trigger the phenomenon. Applying a DC voltage of a few volts on-chip cancels the large spiky events, but a small component of this extra random drive persists, with variations over a typical timescale of about a day. It is unclear if all the fluctuation characteristics present in the devices (amplitude, frequency, damping) are linked to the same underlying mechanism. One could even imagine that temperature-dependent non-linear effects could impact the phonon-photon coupling, beyond the lowest (geometrical) order $g_0$. 
However, it is likely that these effects are present in [*all*]{} experimental systems at different levels of expressions, since all NEMS/MEMS share the same overall characteristics (especially damping, frequency shifts and phase noise typical of Two-Level Systems physics). It is thus tempting to relate this stochastic force to a mechanism mediated by some kind of microscopic TLSs, driven by microwaves but blocked under DC voltage biasing. This stochastic driving force can mimic to some extent a thermal decoupling, and could explain why some drumhead devices in the literature refuse to cool down below typically 20-30$~$mK. While being a limitation for experimentalists, this phenomenon definitely deserves theoretical investigations. The present work also calls for further experiments at lower temperatures, using other types of devices (e.g. drums). This shall be performed in the framework of the European Microkelvin Platform (EMP) [@EnssPickettEMP]. [*Note added in proof:*]{} following the present work, a collaboration between [*Aalto University*]{} and [*Institut Néel*]{} has started with the aim of cooling down a drumhead Al device as low as possible on our adiabatic nuclear demagnetization platform. We have evidence that the same features as for beams are present, at a different level of expression. This shall be published elsewhere. () Corresponding Author: [email protected] We acknowledge the use of the Néel facility [*Nanofab*]{} for the device fabrication, and the Néel [*Cryogenics*]{} facility with especially Anne Gerardin for realization of mechanical elements of the demagnetization cryostat. X.Z. and E.C. would like to thank Olivier Arcizet and Benjamin Pigeau for help in room-temperature characterization of the NEMS devices using optics; E.C. would like also to thank O. Arcizet, M. Sillanpää, K. Lehnert, J. Teufel, S. Barzanjeh, K. Schwab, J. Parpia and M. Dykman for very useful discussions. We acknowledge support from the ERC CoG grant ULT-NEMS No. 647917, StG grant UNIGLASS No. 714692 and the STaRS-MOC project from [*Région Hauts-de-France*]{}. The research leading to these results has received funding from the European Union’s Horizon 2020 Research and Innovation Programme, under grant agreement No. 824109, the European Microkelvin Platform (EMP). ![ Simplified common wiring of the experimental platforms; the different levels within the cryostats (BF and demag.) are shown with their respective temperature. SS stands for Stainless-Steel, NbTi for Niobium-Titanium and Cu for Copper coaxial cables (50$~$Ohms impedance). The boxed elements have been added/removed depending on the experimental run (see text for details).[]{data-label="fig_wiring"}](circuitII){width="9cm"} setup {#setups} ===== Two similar microwave setups have been used in these experiments. Their common features are described in Fig. \[fig\_wiring\]. This wiring is basic and can also be found within the literature [@lehnert2008]. Essentially, they are built around a cryogenic HEMT (High Electron Mobility Transistor) placed at about 4$~$K and two circulators mounted on the mixing chamber of the dilution units. On the BlueFors$^{\textregistered}\,$ (BF) machine, the HEMT is a Low Noise Factory$^{\textregistered}\,$ 4-8$~$GHz bandwidth, with a measured noise of about 3$~$K (10 photons) at 6$~$GHz. On the nuclear adiabatic demagnetization cryostat, it is a Caltech 1-12$~$GHz bandwidth with a measured noise of about 15$~$K (50 photons at 6$~$GHz). The (dashed green) boxed component below the HEMT in Fig. 
\[fig\_wiring\] represents a power combiner used to realize an opposition line. On the BF setup, this is mandatory to avoid saturation of the cryogenic HEMT from the strong blue-detuned pump tone. On the demag. cryostat, the cryogenic HEMT is linear enough that this protection is not necessary. This choice has been made because of space constraints: feeding an extra microwave opposition line in the nuclear adiabatic demagnetization cryostat would be very demanding. The filtering of the injection lines (DC and microwave) is also described in Fig. \[fig\_wiring\]. Gains and noise levels of the full chain have been carefully checked with respect to the HEMT working point. Besides, each component has been tested at 4$~$K prior to mounting. The whole setup has then been calibrated, using an Agilent$^{\textregistered}\,$ microwave generator EXG N5173B and an Agilent$^{\textregistered}\,$ spectrum analyser MXA N9020A. The measurements presented in the core of the paper have been realized using a Zurich Instruments$^{\textregistered}\,$ UHFLI lock-in detector operating in spectrum mode. The signal is mixed down with a Local Oscillator (LO) and detected at frequency $\pm \Omega_m+2~$MHz (the shift avoiding overlap of Stokes/Anti-Stokes signals). The generators used were from Agilent$^{\textregistered}\,$ or Keysight$^{\textregistered}\,$ brands, leading to equivalent data quality. The [*absolute*]{} error in the calibrations is estimated at about $\pm 2~$dB over the whole set of realized runs; within these error bars, the two cryogenic platforms gave the same quantitative results. Particular care has been taken with thermalization issues. The experimental cell is thus made of annealed high-purity copper. It is mounted on a cold finger either bolted onto the mixing chamber plate of the BF machine, or connected through silver wires to the bottom of the nuclear stage of the demag. cryostat. This stage is of laminar type, about 1$~$kg of high-quality copper [@paperYuriSatge]. On the top it is connected through a Lancaster-made Al heat switch [@ULANCHSpaper] to the mixing chamber of the home-made dilution unit. Both dilution cryostats reach base temperatures of order 7-10$~$mK depending on cooling power settings. The PCB mounted inside the cell is hollow in its center; the chip is pressed there [*directly*]{} onto the copper block, by means of a copper/indium spring. At very large microwave powers, we do see heating at the intermediate stages where attenuators are anchored. However, the nuclear stage thermometers [*do not*]{} show any heating, even at the highest powers used. ![ Calibration measurements performed on the $^3$He thermometer, from the lowest operated temperature (here 400$~\mu$K; the raw fork resonance curve is shown in the inset with a Lorentzian fit) to about 100$~$mK. The superfluid transition $T_c$ is clearly visible around 1$~$mK and can be used as a temperature fixed point; the line is the expected behavior from $^3$He viscosity, see text.[]{data-label="fig_3He"}](widthfork){width="11.5cm"} Thermometry below typically 30$~$mK is no easy task; below 10$~$mK it requires the expertise of ultra-low temperature laboratories (see e.g. EMP partners [@EnssPickettEMP]). In the graphs of Fig. \[fig\_7\], the $x$ axis is as important as the $y$ one; this point is not always emphasized in the literature. We thus took particular care to provide an almost primary temperature scale for our two microwave platforms. 
The use of the word [*almost*]{} here should be understood as follows: we also used resistive thermometers (RuO$_2$, carbon Speer-type) calibrated against primary devices, and our primary thermometers (which do follow known temperature dependencies) are calibrated at a single point in temperature for practical reasons. Our laboratory has a long history of working on the construction of the ultra-low temperature scale [@RefHenriULTS]. In our case, the BF cryostat has been equipped with a Magnicon$^{\textregistered}\,$ MFFT noise thermometer and a CMN paramagnetic salt (Cerium Magnesium Nitrate) thermometer. The MFFT was bolted at mixing chamber level while the CMN was directly mounted on the cold finger. On the demag. cryostat, an MFFT is mounted at the top of the nuclear stage while a $^3$He thermometer is connected at the bottom; the experimental cell is actually [*between*]{} the nuclear stage and this thermometer. When their working ranges overlap, the thermometers agree within typically $2-5~\%$; the error bars on $x$ axes of the temperature plots also take into account thermal gradients and slow drifts. The nuclear adiabatic demagnetization process requires large magnetic fields to be used (here, 8$~$T). One usually starts at $T_{ini} \approx 10~$mK with the superconducting coil surrounding the stage fully magnetized at $B_{ini}$ (heat switch open), and then demagnetizes to about $B_{fin} \approx 100~$mT to reach the lowest possible temperatures (in the present runs, we worked down to 400$~\mu$K, Fig. \[fig\_3He\]). The cryostat can then stay cold over a week (heat leak $< 100~$pW); another cycle then needs to be initiated with a pre-cooling of the system. The process being adiabatic, the nuclear spins of copper are cooled according to $B_{ini}/B_{fin}=T_{ini}/T_{fin}$ [@PobellBook]. Our magnet is compensated on both sides, but nonetheless small stray fields are present when it is magnetized (represented in Fig. \[fig\_wiring\] by the small boxed coil). In order to make sure that these fields do not introduce any bias in the data, we performed measurements [*above*]{} 10$~$mK but within a demagnetization cycle: we started at 20$~$mK instead of 10$~$mK, and reduced the field by 1/2. This can then be compared to experiments performed without any field, at the base temperature of the dilution unit. The microwave cavity does shift in frequency with applied field (see Appendix \[optomechs\]), but no other difference could be found. All data taken below 10$~$mK have been obtained exclusively with the nuclear adiabatic demagnetization technique. The lowest part of our temperature scale is obtained from a $^3$He thermometer (see Fig. \[fig\_3He\]). It is based on the viscosity measured in the fluid with an immersed probe, a mechanical resonator. In the past, vibrating wires were the best choice for this task [@GuenaultViW; @ClemensViW]. Today, people use quartz tuning forks, which are more practical [@Rob3Hefork; @ULANCFork; @ULANCFork2]. Our thermometer is thus a nested cell containing two tuning forks (one in the outer cell, the other in the inner one), filled with silver sinters connected to the nuclear stage by silver wires. The outer cell serves as a thermal shield for the inner one, and does not cool down below typically 1$~$mK. The inner cell is directly connected to the cold finger that hosts the microwave cell. $^3$He is a Fermi liquid above about 1$~$mK, and becomes superfluid below. 
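For illustration, the ideal final temperature given by the adiabatic relation $B_{ini}/B_{fin}=T_{ini}/T_{fin}$ can be computed for the field values quoted above; the temperature practically achieved (about 400$~\mu$K here) remains somewhat above this ideal figure, as expected in the presence of a residual heat leak. A minimal sketch:

```python
def ideal_demag_temperature(T_ini, B_ini, B_fin):
    """Ideal adiabatic nuclear demagnetization: B_ini/B_fin = T_ini/T_fin."""
    return T_ini * B_fin / B_ini

# values quoted in the text
T_ini = 10e-3   # K
B_ini = 8.0     # T
B_fin = 100e-3  # T
print(f"ideal T_fin ~ {1e6 * ideal_demag_temperature(T_ini, B_ini, B_fin):.0f} microK")
```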
At $0~$bar pressure and $0~$T field, this state is called $^3$He-B, and the properties of this amazing fluid have been extensively studied over the years [@Book3HeVollhardt]. For instance, the viscosity of $^3$He is well known for both normal and superfluid states, see e.g. [@HookHall1; @HookHall2]. In principle, from the knowledge of the geometry of the immersed object and its surface roughness, one can calculate the friction and thus the broadening of the resonance. In practice, it is much more efficient to calibrate the device by scaling its properties against known measurements [@ULANCFork; @ULANCFork2]. In order to perform this scaling, we measured the damping of the quartz tuning fork thermometer (inner cell) as a function of the final field of the demagnetization process $B_{fin}$. The protocol was to demagnetize by steps, and then remagnetize to $B_{ini}$ in order to verify that the process was indeed adiabatic (by recovering the initial temperature $T_{ini}$). It is then straightforward to calculate the temperature of the nuclear spins of copper for each step, which should be in equilibrium with the electrons (and finally $^3$He) if one waits long enough. The result is shown in Fig. \[fig\_3He\]. The line is the expected behavior from published results. Note that the superfluid transition $T_c$ also acts as a [*fixed point*]{} in the temperature scale. Microwave optomechanics {#optomechs} ======================= The microwave cavity is the central element of the technique. It happens to be very sensitive to external conditions: cell temperature, pump settings (power, detuning), DC voltage bias and magnetic field. In order to avoid any systematic error, the protocol is thus to measure the cavity [*for each setting*]{}. On the BF cryostat, in order to mimic stray fields from the demagnetization protocol and to study the impact of (small) magnetic fields on the unstable features reported in Section \[unstable\], a small coil has been installed on top of the cell (boxed element in Fig. \[fig\_wiring\]). It can create perpendicular fields up to about 10$~\mu$T. The DC voltage bias is applied [*only*]{} to the cell, which is placed between a Bias Tee and a DC Block (see boxed elements in Fig. \[fig\_wiring\]). Above about 4$~$V, the chip starts to heat (certainly because of currents flowing within the silicon). All reported measurements obtained with a DC bias have thus been acquired at 3$~$V DC, slowly ramping the voltage periodically (approximately every half hour) to almost 4$~$V and back again. At zero bias, after some time the “spikes” eventually reappear. They did not seem to react particularly to the small magnetic fields used. ![ Photon flux due to occupation of the cavity mode in excess of that expected from the cryostat temperature. The $x$ axis is $\sqrt{n_{cav}}$ (blue-detuned pump scheme for blue symbols, red-detuned pump for red). Inset: example of resonance peak obtained from the spectrum analyzer. The lines are guides to the eye. The numbers in magenta correspond to the calculated intra-cavity population for the last point of each temperature series (from the fit value of $\kappa_{ext}$, see text). []{data-label="fig_cavityheat"}](cavityheat){width="11.5cm"} The cavity parameters are extracted from a transmission response measurement (amplitude of the $S_{21}$ component of the $\left[S\right]$ matrix). This is measured by applying a very weak probe tone (see boxed element in Fig. \[fig\_wiring\]) while keeping the strong pump on. 
We verified that the probe power was small enough not to alter the measurement. The data (see Fig. \[fig\_1\]) are fit to the expression [@RefS12fitAlessandro]: $$\left( S_{21}\left[\omega\right] \right)_{dB} = 20 \log \left| 1- \frac{Q_{tot} \left( Q_{ext}+ I Q_{im} \right)^{-1}}{1+2 I Q_{tot} \frac{\omega - \omega_c}{\omega_c}} \right| , \label{eq_S21}$$ with $I$ the imaginary unit, $Q_{tot}=\omega_c/\kappa_{tot}$ and $Q_{ext}=\omega_c/\kappa_{ext}$. $Q_{im}$ corresponds to an imaginary component of the external impedance, usually attributed to the inductive bonding wires (leading to the asymmetry of the fit in Fig. \[fig\_1\] center). ![ Measured signal amplitude for the 3 schemes used at 210$~$mK: blue-detuned pump, red-detuned pump and “in-cavity” (green, the square and circle symbols stand for Stokes and Anti-Stokes peaks respectively). For blue/red pumping, the fits correspond to Eqs. (\[eq2\]-\[eqgain\]), defining the coefficient $\cal M$. The dashed line corresponds to the heating measured in Fig. \[fig\_6\]. []{data-label="fig_fit"}](marquardt){width="11.5cm"} The fits are always of good quality. $\kappa_{ext}$ is very stable and reproducible, and does not vary by more than about $\pm 5~\%$. On the other hand, $\kappa_{tot}=\kappa_{ext}+\kappa_{in}$ varies between typically $120~$kHz and $190~$kHz from one cool-down to the other. $\kappa_{in}$ represents the internal decay processes, taking into account all microscopic mechanisms. At low pump powers, $\kappa_{tot}$ happens to be worse (i.e. larger) than at high powers. This is true up to some maximum power above which we start to substantially degrade the cavity’s properties; in terms of $n_{cav}$ this limit lies in the range of $10^8 - 10^9$ photons. In the range studied, the temperature dependence of $\kappa_{tot}$ is rather small. It is also immune to the magnetic fields and voltages applied within the parameter range studied. However, the [*resonance position*]{} of the cavity is very sensitive to all parameters: the main task of the fit is therefore to determine $\omega_c$ as accurately as possible. The properties of the cavity are too complex to allow an analysis of the type of Fig. \[fig\_6\] (performed for the NEMS), which could tell us by how much the microwave mode (and/or the chip) heats at a given pump power. We therefore measured directly the thermal population of the cavity with respect to $P_{in}$. We present these data in Fig. \[fig\_cavityheat\] as a function of $\sqrt{n_{cav}}$: on purely phenomenological grounds, the dependence then seems to be linear (see guides to the eye in Fig. \[fig\_cavityheat\]). We could not perform this measurement below typically 400$~$mK because the signal was too weak. The power-dependence is rather different from Fig. \[fig\_6\] and does not seem to be a true heating of the chip itself; it must therefore be noise fed into the cavity by the pump generator. Indeed, similar features but with different levels of cavity populations have been measured using different brands of generators. The relevant outcome of this graph is the [*extra (out-of-equilibrium) cavity population*]{} induced by the strong pump; this is the number quoted in magenta in Fig. \[fig\_cavityheat\] for each temperature, at the largest powers displayed. This number is obtained by dividing the photon flux by $\kappa_{ext}/2$ (bi-directional coupling). Injecting it into the theoretical expressions [@AKMreview], we find that this effect always remains negligible with respect to the other effects that produce the finite error bars of Fig. \[fig\_7\]. 
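As an aside for readers who wish to reproduce this kind of resonance characterization, the snippet below is a minimal Python sketch of how the transmission model of Eq. (\[eq\_S21\]) can be fit to a measured trace with standard least-squares tools. It is not the analysis code used for this work; the variable names, starting values and the commented example call are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

def s21_db(f, f_c, Q_tot, Q_ext, Q_im):
    """Transmission amplitude in dB, Eq. (eq_S21), with frequencies in Hz."""
    coupling = Q_tot / (Q_ext + 1j * Q_im)         # complex external coupling term
    detuning = 1.0 + 2j * Q_tot * (f - f_c) / f_c  # normalized detuning term
    return 20.0 * np.log10(np.abs(1.0 - coupling / detuning))

# f_data, s21_data = ...   # measured frequency sweep (Hz) and |S21| (dB)
# Illustrative starting values only (a GHz-range cavity with ~100 kHz linewidths):
# p0 = [f_data[np.argmin(s21_data)], 4e4, 6e4, 1e6]
# popt, _ = curve_fit(s21_db, f_data, s21_data, p0=p0)
# f_c, Q_tot, Q_ext, Q_im = popt
# kappa_tot, kappa_ext = f_c / Q_tot, f_c / Q_ext   # linewidths in Hz
```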
From the careful measurement of the cavity at each point and the knowledge of the heating effects in Figs. \[fig\_6\] and \[fig\_cavityheat\], we can guarantee that the experiment is performed in the best possible conditions. To be quantitative, we finally need to be able to convert the photon population in the Stokes/Anti-Stokes peaks into phonon populations. To do so, we need to know the parameter $\cal M$ introduced in Eq. (\[eq2\]). In principle, one could calculate it from theory, knowing all experimental details of the setup [@AKMreview]. However, since all of these parameters are known within some experimental error, the final value obtained for $\cal M$ would be of poor precision. It is thus much more efficient to [*calibrate it*]{} at a given temperature: this is performed in Fig. \[fig\_fit\], at a sufficiently high $T_{cryo}$ to guarantee a good thermal coupling and enough signal resolution (210$~$mK). In this plot we show the signal (area) obtained for the 3 schemes presented in Fig. \[fig\_2\], as a function of $P_{in}$ (they thus all overlap at low powers). The lines are fits to the theory of Eqs. (\[eq1\],\[eqgain\]) [@AKMreview], with the dashed one taking into account the heating produced by the large $n_{cav}$ reached with the “in-cavity” scheme. We thus obtain ${\cal M }\approx 1.8 \times 10^{12}~$photons/s/W/K for our device. Note that at the highest powers used, in the red-detuned pumping scheme the mode cooled from $210~$mK down to approximately $20~$mK. ![ Main: Power threshold $P_{thr}$ to the self-oscillation regime (blue-detuned pump scheme). A large hysteresis is seen, certainly due to nonlinearities in the system. The black line is the calculated value $\propto \gamma_m$ from Eq. (\[eq1\]), and the dashed line is a guide to the eye $\propto \gamma_m^2$. Inset: peak height measured sweeping the power up (full black), and down (empty blue symbols) at 210$~$mK. The dashed verticals are the threshold positions. []{data-label="fig_oscill"}](oscill){width="11cm"} Our aim was to demonstrate the capabilities of the method for temperatures as low as possible and a coupling $g_0$ which is [*not*]{} especially large. We therefore relied on the out-of-plane flexure of our NEMS beam; in-plane and out-of-plane modes were both characterized first at room temperature using optics [@ArcizetThanks]. The out-of-plane flexural mode was found at 3.660$~$MHz, shifted about 280$~$kHz from the in-plane one. Both had a quality factor of about $6\,500$. At low temperatures, the method then relies on the blue-detuned pump instability: below the threshold it enables [*amplification*]{} of the weak signal, and the position of the threshold itself gives access to the parameter $\gamma_m$ needed for quantitative fits. It is thus in principle [*not necessary*]{} to be able to resolve the signal at low powers; in particular, in these conditions sideband-asymmetry thermometry would be impractical. The definition of $\gamma_m$ from the threshold position is illustrated in Fig. \[fig\_oscill\]. Ramping up the power, from Eq. (\[eq1\]) we can recalculate the damping rate from the position of the threshold $P_{thr}$ (full symbols and full line). Furthermore, it happens that this threshold towards self-sustained oscillations is hysteretic (Fig. \[fig\_oscill\], empty symbols). This effect is not yet modeled, but it clearly arises from non-linearities in the system. For microwave optomechanics, these are genuine [*geometrical*]{} nonlinearities, as opposed to the thermal nonlinearities seen in optics [@faveroOscill]. 
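To make the two conversion steps above concrete, the following is a schematic numerical sketch in Python. Eqs. (\[eq1\]) and (\[eq2\]) are not reproduced in this appendix, so the sketch relies only on the scalings quoted in the text: a self-oscillation threshold proportional to $\gamma_m$, and a detected sideband photon flux proportional to $P_{in}$ and to the mode temperature through $\cal M$ (as the units of $\cal M$ suggest). It is not the authors' analysis code, and all numbers except $\cal M$ are placeholders.

```python
# Minimal sketch based on the scalings quoted in the text; not the actual analysis.
M_CAL = 1.8e12  # photons/s/W/K, calibration coefficient quoted for this device

def gamma_from_threshold(P_thr, P_thr_ref, gamma_ref):
    """Mechanical damping rate, assuming the stated P_thr proportional to gamma_m."""
    return gamma_ref * P_thr / P_thr_ref

def mode_temperature(sideband_flux, P_in):
    """Mode temperature (K), assuming flux = M * P_in * T_mode (implied by M's units)."""
    return sideband_flux / (M_CAL * P_in)

# Placeholder usage (the reference point and measured values are hypothetical):
# gamma_m = gamma_from_threshold(P_thr=2.0e-12, P_thr_ref=1.0e-12, gamma_ref=550.0)
# T_mode = mode_temperature(sideband_flux=5.0e4, P_in=1.0e-10)
```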
The temperature-dependence of the down-sweep threshold (large amplitude motion) is even stronger than for the up-sweep one (small amplitude). Again, no thermal decoupling can be seen down to 10$~$mK; the heating effect (Fig. \[fig\_6\]) has been taken into account, and has only a small impact on the data. G. Pickett, C. Enss, The European Microkelvin Platform, Nature Review Materials 3, 18012 (2018). A. N. Cleland and M. L. Roukes, Fabrication of high frequency nanometer scale mechanical resonators from bulk Si crystals, Appl. Phys. Lett. 69, 2653 (1996). A. D. O’Connell, M. Hofheinz, M. Ansmann, Radoslaw C. Bialczak, M. Lenander, Erik Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, John M. Martinis and A. N. Cleland, Quantum ground state and single-phonon control of a mechanical resonator, Nature 464, 697-703 (2010). T. A. Palomaki, J. W. Harlow, J. D. Teufel, R. W. Simmonds, K. W. Lehnert, Coherent state transfer between itinerant microwave fields and a mechanical oscillator, Nature 495, 210 (2013). J.-M. Pirkkalainen, S. U. Cho, Jian Li, G. S. Paraoanu, P. J. Hakonen and M. A. Sillanpää, Hybrid circuit cavity quantum electrodynamics with a micromechanical resonator, Nature 494, 211 (2013). C. A. Regal, J. D. Teufel And K. W. Lehnert, Measuring nanomechanical motion with a microwave cavity interferometer, Nat. Phys. 4, 555 (2008). Alfredo Rueda, Florian Sedlmeir, Michele C. Collodo, Ulrich Vogl, Birgit Stiller, Gerhard Schunk, Dmitry V. Strekalov, Christoph Marquardt, Johannes M. Fink, Oskar Painter, Gerd Leuchs, and Harald G. L. Schwefel, Efficient microwave to optical photon conversion: an electro-optical realization, Optica 3, 597 (2016). A. P. Higginbotham, P. S. Burns, M. D. Urmey, R. W. Peterson, N. S. Kampel, B. M. Brubaker, G. Smith, K. W. Lehnert and C. A. Regal , Harnessing electro-optic correlations in an efficient mechanical converter, Nature Physics 14, 1038 (2018). A. Metelmann and A. A. Clerk, Nonreciprocal Photon Transmission and Amplification via Reservoir Engineering, Phys. Rev. X 5, 021025 (2015). Gabriel A. Peterson, Florent Q. Lecocq, Katarina Cicak, Raymond W. Simmonds, Jose A. Aumentado, John D. Teufel, Demonstration of efficient nonreciprocity in a microwave optomechanical circuit, Phys. Rev. X 7, 031001 (2017). N. R. Bernier, L. D. Tóth, A. Koottandavida, M. A. Ioannou, D. Malz, A. Nunnenkamp, A. K. Feofanov and T. J. Kippenberg, Nonreciprocal reconfigurable microwave optomechanical circuit, Nature Comm. 8, 604 (2017). S. Barzanjeh, M. Wulf, M. Peruzzo, M. Kalaee, P.B. Dieterle, O. Painter and J.M. Fink, Mechanical on-chip microwave circulator, Nature Comm. 8, 953 (2017). E. E. Wollman, C. U. Lei, A. J. Weinstein, J. Suh, A. Kronwald, F. Marquardt, A. A. Clerk, K. C. Schwab, Quantum squeezing of motion in a mechanical resonator, Science 349, 952 (2015). C. F. Ockeloen-Korppi, E. Damskägg, J.-M. Pirkkalainen, M. Asjad, A. A. Clerk, F. Massel, M. J. Woolley and M. A. Sillanpää, Stabilized entanglement of massive mechanical oscillators, Nature 556, 478 (2018). Francesco Massel, Sung Un Cho, Juha-Matti Pirkkalainen, Pertti J. Hakonen, Tero T. Heikkilä and Mika A. Sillanpää, Multimode circuit optomechanics near the quantum limit, Nat. Comm. 3, 987 (2012). A. J. Weinstein, C. U. Lei, E. E. Wollman, J. Suh, A. Metelmann, A. A. Clerk, and K. C. Schwab, Observation and Interpretation of Motional Sideband Asymmetry in a Quantum Electromechanical Device, Phys. Rev. X 4, 041003 (2014). A. D. Armour and M. P. 
Blencowe, Probing the quantum coherence of a nanomechanical resonator using a superconducting qubit: I. Echo scheme, New J. of Phys. 10, 095004 (2008). Dustin Kleckner, Igor Pikovski, Evan Jeffrey, Luuk Ament, Eric Eliel, Jeroen van den Brink, and Dirk Bouwmeester, Creating and verifying a quantum superposition in a micro-optomechanical system, New J. of Phys. 10, 095020 (2008). Dustin Paul Kleckner, [*Micro-Optomechanical Systems for Quantum Optics*]{}, PhD Thesis of University of California, Santa Barbara (04/2010). Markus Aspelmeyer, Tobias J. Kippenberg, Florian Marquardt, Cavity optomechanics, Rev. of Mod. Phys. 86, 1391 (2014). A. Shibahara, O. Hahtela, J.Engert, H. van der Vliet, L. V. Levitin, A.Casey, C.P.Lusher, J. Saunders, D. Drung, and Th. Schurig, Primary current-sensing noise thermometry in the millikelvin regime, Trans. R. Soc. A 374, 20150054 (2016). R. Blaauwgeers, M. Blazkova, M. Clovecko, V.B. Eltsov, R. de Graaf, J. Hosio, M. Krusius, D. Schmoranzer, W. Schoepe, L. Skrbek, P. Skyba, R.E. Solntsev, D.E. Zmeev, Quartz Tuning Fork: Thermometer, Pressure- and Viscometer for Helium Liquids, Journal of Low Temperature Physics 146, 537 (2007). A. K. Naik, M. S. Hanay, W. K. Hiebert, X. L. Feng and M. L. Roukes, Towards single-molecule nanomechanical mass spectrometry, Nat. Nanotech. 4, 445 (2009). M. A. Sillanpää, private communication. Tristan Rocheleau, [*Quantum-Limited Mechanical Resonator Measurement and Back-Action Cooling to near the Quantum Ground State*]{}, PhD thesis of Cornell University (2011). K. Lehnert and J. Teufel, private communication. S. Barzanjeh, private communication. J. D. Teufel, T. Donner, M. A. Castellanos-Beltran, J. W. Harlow, and K.W.Lehnert, Nanomechanical motion measured with an imprecision below that at the standard quantum limit, Nature Nanotech. 4, 820 (2009). T. Rocheleau, T. Ndukum, C. Macklin, J. B. Hertzberg, A. A. Clerk and K. C. Schwab, Preparation and detection of a mechanical resonator near the ground state of motion, Nature 463, 72 (2010). B. D. Hauer, P. H. Kim, C. Doolin, F. Souris, and J. P. Davis, Two-level system damping in a quasi-one-dimensional optomechanical resonator, Phys. Rev. B 98, 214303 (2018). M. Defoort, K.J. Lulla, C. Blanc, H. Ftouni, O. Bourgeois, and E. Collin, Stressed Silicon Nitride Nanomechanical Resonators at Helium Temperatures, J. of Low Temp. Phys. 171, 731 (2013). C. Bäuerle, Y. Bunkov, S.N. Fisher, Chr. Gianese and H. Godfrin, The new Grenoble 100 microKelvin refrigerator, Czekoslovak J. of Phys. [**46**]{}, suppl S5, 2791-2792, (1996). Jasper Chan, T. P. Mayer Alegre, Amir H. Safavi-Naeini, Jeff T. Hill, Alex Krause, Simon Gröblacher, Markus Aspelmeyer and Oskar Painter, Laser cooling of a nanomechanical oscillator into its quantum ground state, Nature 478, 89 (2011). J. D. Teufel, Tobias Donner, Dale Li, J. W. Harlow, M. S. Allman, Katarina Cicak, A. J. Sirois, Jed D. Whittaker, K. W. Lehnert, Raymond W. Simmonds, Sideband cooling of micromechanical motion to the quantum ground state, Nature 475, 359 (2011). Florian Marquardt, J. G. E. Harris, and S. M. Girvin, Dynamical Multistability Induced by Radiation Pressure in High-Finesse Micromechanical Optical Cavities, Phys. Rev. Lett. 96, 103901 (2006). T. J. Kippenberg, H. Rokhsari, T. Carmon, A. Scherer, and K. J. Vahala, Analysis of Radiation-Pressure Induced Mechanical Oscillation of an Optical Microcavity, Phys. Rev. Lett. 95, 033901 (2005). V. Singh, S. J. Bosman, B. H. Schneider, Y. M. Blanter, A. Castellanos-Gomez and G. A. 
Steele, Optomechanical coupling between a multilayer graphene mechanical resonator and a superconducting microwave cavity, Nat. Nanotech. 9, 820 (2014). P. Mohanty, D. A. Harrington, K. L. Ekinci, Y. T. Yang, M. J. Murphy, and M. L. Roukes, Intrinsic dissipation in high-frequency micromechanical resonators, Phys. Rev. B 66, 085416 (2002). F. Hoehne, Yu. A. Pashkin, O. Astafiev, L. Faoro, L. B. Ioffe, Y. Nakamura, and J. S. Tsai, Damping in high-frequency metallic nanomechanical resonators, Phys. Rev. B 81, 184112 (2010). A. Venkatesan, K. J. Lulla, M. J. Patton, A. D. Armour, C. J. Mellor, and J. R. Owers-Bradley, Dissipation due to tunneling two-level systems in gold nanomechanical resonators, Phys. Rev. B 81, 073410 (2010). K. J. Lulla, M. Defoort, C. Blanc, O. Bourgeois, and E. Collin, Evidence for the role of normal-state electrons in nanoelectromechanical damping mechanisms at very low temperatures, Phys. Rev. Lett. 110, 177206 (2013). M. Hamoumi, P. E. Allain, W. Hease, E. Gil-Santos, L. Morgenroth, B. Gérard, A. Lemaître, G. Leo, and I. Favero, Microscopic Nanomechanical Dissipation in Gallium Arsenide Resonators, Phys. Rev. Lett. 120, 223601 (2018). Gregory S. MacCabe, Hengjiang Ren, Jie Luo, Justin D. Cohen, Hengyun Zhou, Alp Sipahigil, Mohammad Mirhosseini, and Oskar Painter, Phononic bandgap nano-acoustic cavity with ultralong phonon lifetime, arXiv:1901.04129v1 (2019). M. C. Cross and Ron Lifshitz, Elastic wave transmission at an abrupt junction in a thin plate with application to heat transport and vibrations in mesoscopic systems, Phys. Rev. B 64, 085324 (2001). J. Suh, A. J. Weinstein, C. U. Lei, E. E. Wollman, S. K. Steinke, P. Meystre, A. A. Clerk, K. C. Schwab, Mechanically Detecting and Avoiding the Quantum Fluctuations of a Microwave Field, Science 344, 1262 (2014). C. Song, T.W. Heitmann, M.P. DeFeo, K. Yu, R. McDermott, M. Neeley, John M. Martinis, and B.L.T. Plourde, Microwave response of vortices in superconducting thin films of Re and Al, Phys. Rev. B 79, 174512 (2009). A. B. Zorin, F.-J. Ahlers, J. Niemeyer, T. Weimann, H. Wolf, V. A. Krupenin, and S. V. Lotkhov, Background charge noise in metallic single-electron tunneling devices, Phys. Rev. B 53, 13682 (1996). S.E. de Graaf, L. Faoro, J. Burnett, A.A. Adamyan, A.Ya. Tzalenchuk, S.E. Kubatkin, T. Lindström and A.V. Danilov, Suppression of low-frequency charge noise in superconducting resonators by surface spin desorption, Nature Comm. 9, 1143 (2018). Jiansong Gao, Miguel Daal, John M. Martinis, Anastasios Vayonakis, Jonas Zmuidzinas, Bernard Sadoulet, Benjamin A. Mazin, Peter K. Day, and Henry G. Leduc, A semiempirical model for two-level system noise in superconducting microresonators, Appl. Phys. Lett. 92, 212504 (2008). King Y. Fong, Wolfram H. P. Pernice, and Hong X. Tang, Frequency and phase noise of ultrahigh Q silicon nitride nanomechanical resonators, Phys. Rev. B 85, 161410(R) (2012). Olivier Maillet, [*Stochastic and non-linear processes in nano-electro-mechanical systems*]{}, PhD Thesis of Université Grenoble Alpes (25/05/2016). Olivier Maillet, Xin Zhou, Rasul R. Gazizulin, Bojan R. Ilic, Jeevak M. Parpia, Olivier Bourgeois, Andrew D. Fefferman, and Eddy Collin, Measuring frequency fluctuations in nonlinear nanomechanical resonators, ACS Nano 12, 5753 (2018). Marc Sansa, Eric Sage, Elizabeth C. Bullard, Marc Gély, Thomas Alava, Eric Colinet, Akshay K. Naik, Luis Guillermo Villanueva, Laurent Duraffourg, Michael L. 
Roukes, Guillaume Jourdan, Sébastien Hentz, Frequency fluctuations in silicon nanoresonators, Nature Nanotech. 11, 552-558 (2016) Alexander Bilmes, Sebastian Zanker, Andreas Heimes, Michael Marthaler, Gerd Schön, Georg Weiss, Alexey V. Ustinov, and Jürgen Lisenfeld, Electronic Decoherence of Two-Level Systems in a Josephson Junction, Physical review B 96, 064504 (2017). O. Maillet, F. Vavrek, A.D. Fefferman, O. Bourgeois and E. Collin, Classical decoherence in a nanomechanical resonator, New J. Phys. 18, 073022 (2016). M. Defoort, [*Non-linear dynamics in nano-electromechanical systems at low temperatures*]{}, PhD Thesis of Université Grenoble Alpes (16/12/2014). E. Nazaretski, R. D. Merithew,V.O. Kostroun, A.T. Zehnder, R.O. Pohl, and J.M. Parpia, Effect of Low-Level Radiation on the Low Temperature Acoustic Behavior of [*a*]{}-SiO$_2$, Phys. Rev. Lett. 92, 245502-1 (2004). N. S. Lawson, A simple heat switch for use at millikelvin temperatures, Cryogenics 22, 667 (1982). R. Rusby, D. J. Cousins, D. Head, P. Mohandas, Yu. M. Bunkov, C. Bäuerle, R. Harakaly, E. Collin, S. Triqueneaux, C. Lusher, J. Li, J. Saunders, B. Cowan, J. Nyéki, M. Digby, J. Pekola, K. Gloos, P. Hernandez, M. de Groot, A. Peruzzi, R. Jochemsen, A. Chinchure, W. Bosch, F. Mathu, J. Flokstra, D. Veldhuis, Y. Hermier, L. Pitre, A. Vergé, F. Benhalima, B. Fellmuth, J. Engert, [*Dissemination of the Ultra-Low Temperature Scale PLTS-2000*]{}, Proceedings of TEMPMEKO, Vols. 1 & 2, B. Fellmuth, J. Seidel, G. Scholz (ed.), Berlin, VDE Verlag GmbH (2002) F. Pobell, [*Matter and methods at low temperatures*]{}, Springer-Verlag Berlin Heidelberg, Second Edition (1996). A.M. Guénault, V. Keith, C.J. Kennedy, S.G. Mussett, and G.R. Pickett. The mechanical behavior of a vibrating wire in superfluid $^3$He-B in the ballistic limit, J. of Low Temp. Phys. 62, 511 (1986). C. B. Winkelmann, E. Collin, Yu. M. Bunkov, H. Godfrin, Vibrating wire thermometry in superfluid $^3$He, J. of Low Temp. Phys.135, 3 (2004). D.I. Bradley, M. Clovecko, S.N. Fisher, D. Garg, A. Guénault, E. Guise, R.P. Haley, G.R. Pickett, M. Poole, V. Tsepelin, Thermometry in Normal Liquid He-3 Using a Quartz Tuning Fork Viscometer, J. of Low Temp. Phys. 171, 750 (2013). D.I. Bradley, P. Crookston, S.N. Fisher, A. Ganshin, A. Guénault, R.P. Haley, M.J. Jackson, G.R. Pickett, R. Schanen, V. Tsepelin, The damping of a quartz tuning fork in superfluid He-3-B at low temperatures, J. of Low Temp. Phys. 157, 476 (2009). Dieter Vollhardt and Peter Wolfle, [*The Superfluid Phases of Helium 3*]{}, Dover Books on Physics, NY (2013). D.C. Carless, H.E. Hall, and J.R. Hook, Vibrating wire measurements in liquid $^3$He. I. The normal state, J. of Low Temp. Phys. 50, 583 (1983). D.C. Carless, H.E. Hall, and J.R. Hook, Vibrating wire measurements in liquid $^3$He. II. The superfluid B phase, J. of Low Temp. Phys. 50, 605 (1983). M. S. Khalil, M. J. A. Stoutimore, F. C. Wellstood, and K. D. Osborn, An analysis method for asymmetric resonator transmission applied to superconducting devices, J. Appl. Phys. 111, 054510 (2012). A room-temperature vacuum chamber equipped with a laser readout from O. Arcizet and B. Pigeau has been used. Christophe Baker, Sebastian Stapfner, David Parrain, Sara Ducci, Giuseppe Leo, Eva M. Weig and Ivan Favero, Optical Instability and Self-Pulsing in Silicon Nitride Whispering Gallery Resonators, Optics Express 20, 29076 (2012).
--- abstract: 'We investigate the triggering mechanism and the structural properties of obscured luminous active galactic nuclei from a detailed study of the rest-frame $B$ and $I$ [*Hubble Space Telescope*]{} images of 29 nearby ($z\approx 0.04-0.4$) optically selected type 2 quasars. Morphological classification reveals that only a minority ($34\%$) of the hosts are mergers or interacting galaxies. More than half ($55\%$) of the hosts contain regular disks, and a substantial fraction ($38\%$), in fact, are disk-dominated ($B/T\lesssim 0.2$) late-type galaxies with low Sérsic indices ($n < 2$), which is characteristic of pseudo bulges. The prevalence of bars in the spiral host galaxies may be sufficient to supply the modest fuel requirements needed to power the nuclear activity in these systems. Nuclear star formation seems to be ubiquitous in the central regions, leading to positive color gradients within the bulges and enhancements in the central surface brightness of most systems.' author: - 'Dongyao Zhao, Luis C. Ho, Yulin Zhao, Jinyi Shangguan, and Minjin Kim' bibliography: - 'QSO2s\_paper1.bib' date: 'Accepted ??. Received ??' title: The Role of Major Mergers and Nuclear Star Formation in Nearby Obscured Quasars --- \[firstpage\] Introduction {#sec:introduction} ============ The ubiquitous presence of supermassive black holes in the centers of galaxies and the tight correlations between black hole mass and bulge stellar mass (@Magorrian98 [@KormendyHo13]) and velocity dispersion (@FerrareseMerritt00; @Gebhardt00) have often been attributed to a close connection between the growth of the central black hole (through accretion) and the growth of the host galaxy (through star formation). The exact nature of this connection, however, is still under debate. In this regard, active galactic nuclei (AGNs) are of great importance for understanding the physical link between black holes and their host galaxies, as AGNs are powered by intense accretion of material onto the central black hole. Strong outflows from AGNs can quench star formation efficiently, and may be responsible for establishing the empirical correlations between black hole mass and host galaxy properties (e.g., @DiMatteo05; @Springel05). One of the most crucial yet unknown factors in clarifying the role of nuclear activity in the coevolution of black holes and their hosts is how AGNs are triggered. For AGNs with low to moderate luminosities (e.g., Seyfert galaxies with bolometric luminosities $L_{\rm bol}\lesssim 10^{45}$ erg s$^{-1}$), observational and theoretical studies suggest that various internal processes can trigger mass accretion onto the central black hole (e.g., @HopkinsHernquist09; @Hopkins14). However, these mechanisms are insufficient to explain the ignition of more powerful AGNs with $L_{\rm bol}> 10^{45}$ erg s$^{-1}$. It seems unlikely that the gas reservoir on kpc scales can lose sufficient angular momentum to feed luminous quasars (@Jogee06). In numerical simulations, gas-rich major mergers are suggested as a promising mechanism to trigger luminous AGNs (e.g., @Hopkins08 [@AlexanderHickox12]). The morphological signatures of major mergers, such as close pairs, double nuclei, disturbed morphologies, tidal tails, and shells and bridges, are expected to be visible up to $\sim 0.5-1.5$ Gyr after the merger. 
Given that the AGN lifetime is thought to be $\lesssim 100$ Myr (@MartiniWeinberg01 [@YuTremaine02; @Martini04]), the features of morphological disturbance should be observable in their host galaxies if luminous AGNs are triggered by gas-rich major mergers. A number of observational studies have examined the morphologies of the host galaxies of luminous AGNs to test the major-merger scenario. A high frequency of distortions in the morphologies of AGN host galaxies has been reported in a number of studies using quasars selected by various methods (e.g., radio quasars: @RamosAlmeida11 [@RamosAlmeida12]; optically unobscured quasars: @Veilleux09), seemingly consistent with the conventional hypothesis. On the other hand, there is no shortage of studies that reach the opposite conclusion, that major mergers play only an insignificant role in triggering luminous AGNs (e.g., radio quasars: @Dunlop03 [@Floyd04]; X-ray quasars: @Cisternas11 [@Villforth14]; optically unobscured quasars: @Mechtley16). According to the gas-rich major-merger scenario, luminous AGNs should be highly obscured during the early stages of the merger because of the enhanced concentration of gas and dust from the progenitor galaxies. In the aftermath of the merger event, these highly obscured, luminous AGNs, classified as “type 2” quasars, should be morphologically highly disturbed. When the obscuration clears and “type 1” quasars emerge toward the late stages of the evolution, it is unclear to what extent the morphological signatures of the merger process remain visible. Thus, type 2 quasars are the more promising targets to test the role of gas-rich major mergers in triggering AGNs, and the overall major merger-driven framework of black hole-galaxy coevolution. In terms of morphological studies of AGN host galaxies, obscured sources enjoy another strong advantage compared with their unobscured counterparts because of the absence of a bright nucleus. The host galaxies of type 1 AGNs can be extraordinarily difficult to study because of the dominating influence of their strong central point source (e.g., @Kim08a [@Kim08b; @Kim17]; @Mechtley16). Even basic morphological classifications, not to mention more quantitative structural parameters, can be challenging to obtain. By contrast, the nuclear obscuration of type 2 AGNs serves as a natural coronagraph to block the blinding nucleus, thereby affording a cleaner view of the detailed internal structures of the host galaxy. Although various theoretical studies have long predicted the existence of obscured luminous AGNs, only a limited number of type 2 quasars were known until large samples were discovered in the last decade (e.g., @Zakamska03 [@MartinezSansigre05; @Reyes08; @Alexandroff16]). At high redshifts ($z\approx 2$), [@Donley18] demonstrate that major mergers play a dominant role in triggering and fuelling infrared-selected, luminous obscured AGNs. At intermediate redshifts ($z \approx 0.5$), morphological hints of interactions have also been found to be prevalent in the host galaxies of optically selected type 2 quasars (e.g., @VillarMartin11 [@Wylezalek16]), further supporting the major-merger scenario. However, the situation is less clear at lower redshifts ($z \lesssim 0.3$). @Bessiere12 reported a significant fraction (75%) of type 2 quasars showing evident features of morphological disturbance. Nevertheless, elliptical host galaxies were found to be dominant ($\sim 70$%) in type 2 samples by other studies, such as those of @Zakamska06 and @VillarMartin12. 
This work reports deep, high-resolution, rest-frame optical images obtained with the *Hubble Space Telescope* (*HST*) of a sample of 29 nearby ($z\approx 0.04-0.4$) type 2 quasars. Although the sample is modest, our observations represent the most extensive, detailed study to date of the host galaxies of obscured quasars in the local Universe. The *HST* images, taken in rest-frame $B$ and $I$, enable us not only to investigate the morphological properties of the host galaxies but also to derive crude constraints on their stellar populations. We analyze the morphologies, photometric structures, colors, and stellar masses of the host galaxies, placing special emphasis on their bulges. We examine whether major mergers are causally connected to AGN activity. The paper is organized as follows. Section \[sec:sample\_obs\] introduces the sample and describes the *HST* observations and data reduction. Section \[sec:Data\_analyse\] presents the image analysis of the host galaxies, including morphological classification, structure decomposition, color map construction, and stellar mass estimation. Results and discussion are presented in Section \[sec:results\]. We summarize our main conclusions in Section \[sec:conculde\]. This work adopts the following cosmological parameters: $\Omega_m= 0.286$, $\Omega_{\Lambda}= 0.714$, and $H_0 = 69.6$ km s$^{-1}$ Mpc$^{-1}$ (@Bennett14). Data {#sec:sample_obs} ==== Sample Selection ---------------- Our type 2 quasars were originally selected to complement a matching study of low-redshift type 1 quasars selected from the Palomar-Green survey of @SchmidtGreen83 to examine the evolutionary connection between these two populations, which will be reported in an upcoming paper. The comparison sample of Palomar-Green quasars consists of 87 objects with $z < 0.5$ (@Boroson92). To this end, we randomly selected 87 type 2 quasars, matching the Palomar-Green sample in terms of redshift and [O III] $\lambda$5007 luminosity, from the catalog of 887 type 2 quasars published by @Reyes08[^1]. Type 2 quasars in this catalog were identified from the Sloan Digital Sky Survey (SDSS; @York00) Data Release 6 spectroscopic database (@Adelman08). Our selection assumes that type 2 quasars have the same intrinsic AGN luminosity as type 1 quasars for a given $L_{\rm [O~III]}$, and that $L_{\rm [O~III]}$ is related to the bolometric luminosity of the AGN (@Heckman05 [@LaMassa09; @Dicken14]). As the observations were conducted in the “snapshot” mode of *HST*, only $\sim 1/3$ of the original sample of type 2 quasars was observed successfully, yielding a sample of 29 objects (Table \[tab:obs\_info\]). The final sample spans a redshift range of 0.04 to 0.4, with a median value of $z = 0.12$, and extinction-corrected [O III] luminosities from $\log (L_{\rm [O~III]}/L_\odot) = 8.42$ to 9.88, with a median value of 9.11 (Fig. \[fig:Loiiiz12\]). The 29 objects have a similar distribution of redshifts, [O III] luminosities, and optical magnitudes as the original sample of 87 objects. They also match the distribution of these quantities in the parent catalog of @Reyes08 at $z<0.5$. *HST* WFC3 Observations {#sec:WFC3_obs} ------------------- The observations were conducted using the WFC3 camera between November 2012 and July 2014 (proposal ID 12903; PI: Luis C. Ho). Each object was observed with a blue filter and a red filter using the UVIS or IR channel. 
The bandpasses were carefully chosen from the large suite of available WFC3 filters, with two considerations: to match approximately rest-frame $B$ and $I$, and to avoid strong emission lines. Hereinafter, we denote the bluer filter (F438W, F475W, F555W) as $B_{\rm WFC3}$ and the redder filter (F814W, F105W, F110W, F125W) as $I_{\rm WFC3}$. For the UVIS channel, each quasar was observed with three long exposures, using the three-point dithering pattern `WFC3-UVIS-DITHER-LINE-3PT`. For the IR channel, four long exposures were taken using the four-point dithering pattern `WFC3-IR-DITHER-BOX-MIN`. We used subarrays to minimize the readout time and buffer size, resulting in a field-of-view (FoV) of $67 \times 67$ arcsec$^2$ and $40 \times 40$ arcsec$^2$ for the IR and UVIS channels, respectively. These FoVs, which correspond to $\sim 145$ and $87$ kpc at the median redshift of the sample, are sufficiently wide to cover the outskirts of the host galaxies for detecting extended features and to achieve accurate sky measurement. Total exposure times, which varied between 147 and 780 s, were set to reach a surface brightness limit of $\mu \approx 25$ mag arcsec$^{-2}$, a depth that previous studies had demonstrated can yield robust detections of faint outer structures (e.g., @Kim08a [@Greene08; @Jiang11]). Table \[tab:obs\_info\] gives a summary of the observations. [ccccccccc]{} SDSS J011935.63$-$102613.1 & 0.125 & 586 & 0.0371 & 8.43 & 8.81 & F105W/F475W & 147/780 & 2013-06-19\ SDSS J074751.56+320052.1 & 0.280 & 1438 & 0.0700 & 8.95 & 9.88 & F110W/F555W & 147/780 & 2014-04-27\ SDSS J075329.93+230930.7 & 0.336 & 1775 & 0.0633 & 8.59 & 8.92 & F110W/F555W & 147/780 & 2014-02-13\ SDSS J075940.95+505024.0 & 0.054 & 243 & 0.0415 & 8.83 & 9.32 & F814W/F438W & 470/300 & 2013-09-09\ SDSS J080252.92+255255.5 & 0.081 & 368 & 0.0329 & 8.86 & 9.23 & F105W/F438W & 147/705 & 2013-09-13\ SDSS J080337.32+392633.1 & 0.065 & 295 & 0.0456 & 8.12 & 9.25 & F105W/F438W & 147/705 & 2014-01-12\ SDSS J080523.29+281815.8 & 0.128 & 604 & 0.0476 & 8.62 & 9.14 & F105W/F475W & 147/570 & 2013-01-03\ SDSS J081100.20+444216.3 & 0.183 & 888 & 0.0418 & 8.27 & 9.85 & F105W/F475W & 147/540 & 2013-01-21\ SDSS J084107.06+033441.3 & 0.274 & 1404 & 0.0334 & 8.77 & 9.36 & F110W/F555W & 147/780 & 2014-01-21\ SDSS J084344.99+354941.9 & 0.054 & 241 & 0.0360 & 8.14 & 8.55 & F814W/F438W & 260/320 & 2013-02-23\ SDSS J090754.07+521127.5 & 0.085 & 386 & 0.0159 & 8.23 & 8.72 & F105W/F438W & 147/720 & 2013-03-12\ SDSS J091819.66+235736.4 & 0.419 & 2303 & 0.0443 & 9.58 & 9.87 & F125W/F555W & 147/780 & 2013-04-14\ SDSS J093625.36+592452.7 & 0.095 & 439 & 0.0200 & 8.35 & 8.63 & F105W/F438W & 147/780 & 2013-04-28\ SDSS J103408.59+600152.2 & 0.050 & 225 & 0.0093 & 8.85 & 9.14 & F814W/F438W & 156/130 & 2013-10-09\ SDSS J105208.19+060915.1 & 0.052 & 232 & 0.0310 & 8.20 & 8.60 & F814W/F438W & 325/390 & 2013-02-25\ SDSS J110213.01+645924.8 & 0.077 & 352 & 0.0319 & 8.45 & 9.54 & F105W/F438W & 147/660 & 2013-08-03\ SDSS J111015.25+584845.9 & 0.143 & 678 & 0.0096 & 8.88 & 9.08 & F105W/F475W & 147/780 & 2014-03-21\ SDSS J113710.77+573158.7 & 0.395 & 2152 & 0.0097 & 9.61 & 9.87 & F125W/F555W & 147/780 & 2014-03-21\ SDSS J115326.42+580644.5 & 0.064 & 290 & 0.0249 & 8.48 & 8.97 & F105W/F438W & 147/630 & 2014-03-19\ SDSS J123804.81+670320.7 & 0.179 & 871 & 0.0190 & 8.26 & 8.69 & F105W/F475W & 147/780 & 2012-12-31\ SDSS J125850.77+523913.0 & 0.055 & 246 & 0.0141 & 8.26 & 8.42 & F814W/F438W & 325/390 & 2013-01-11\ SDSS J130038.09+545436.8 & 0.088 & 403 & 0.0180 & 
8.94 & 9.11 & F105W/F438W & 147/660 & 2013-08-06\ SDSS J133542.49+631641.5 & 0.169 & 816 & 0.0187 & 8.47 & 9.49 & F105W/F475W & 147/780 & 2013-08-10\ SDSS J140541.21+402632.5 & 0.080 & 366 & 0.0135 & 8.78 & 9.17 & F105W/F438W & 147/660 & 2014-08-07\ SDSS J140712.94+585120.4 & 0.170 & 823 & 0.0107 & 8.27 & 8.80 & F105W/F475W & 147/780 & 2013-05-29\ SDSS J144038.09+533015.8 & 0.038 & 166 & 0.0115 & 8.94 & 9.30 & F814W/F438W & 188/132 & 2013-11-24\ SDSS J145019.18$-$010647.4 & 0.120 & 559 & 0.0459 & 8.42 & 8.61 & F105W/F475W & 147/660 & 2013-06-15\ SDSS J155829.36+351328.6 & 0.119 & 558 & 0.0246 & 8.77 & 8.98 & F105W/F475W & 147/540 & 2012-10-27\ SDSS J162436.40+334406.7 & 0.122 & 573 & 0.0227 & 8.56 & 8.82 & F105W/F475W & 147/660 & 2013-09-07\ [1.2]{} **Notes.** Column (1): Object name. Column (2): Redshift. Column (3): Luminosity distance. Column (4): Galactic extinction. Column (5): Observed \[O III\] luminosity [@Reyes08]. Column (6): Extinction-corrected \[O III\] luminosity [@KongHo18]. Column (7): WFC3 filter. Column (8): Exposure time for $I_{\rm WFC3}$ and $B_{\rm WFC3}$. Column (9): Date of observations. Data Reduction {#sec:data_reduction} -------------- We use `AstroDrizzle` to combine the dithered images to generate cosmic ray-removed science images. The pixel scale is set to $0\farcs06$ for the IR channel and $0\farcs03$ for the UVIS channel so that it Nyquist samples the point-spread function (PSF), which has a full width at half maximum (FWHM) of $\sim 0\farcs13$ and $0\farcs07$ for the IR and UVIS channel, respectively. Fortunately, none of the central pixels near the nucleus was saturated. Although `AstroDrizzle` performs sky subtraction, further adjustments of the sky level were made during the two-dimensional (2-D) image fitting process using [GALFIT]{} (@Peng02 [@Peng10]; see Section \[sec:Galfit\]). A robust model of the PSF is crucial for accurate image decomposition, even for the hosts of type 2 AGNs. Ideally, the PSF can be constructed from bright stars observed simultaneously in the science images, but in general this is not possible in our program because of the relatively small FoV of our subarray images. While synthetic [TinyTim]{} (@Krist11) PSFs are commonly used as a substitute (e.g., @Kim17), our experience (Huang et al. 2019) indicates that empirical PSFs generated from stacked WFC3 images of multiple bright, unsaturated, isolated stars observed with the same filter, in the same dither pattern, but at different times perform significantly better than synthetic PSFs. Hence, our analysis uses a library of empirical PSFs of high signal-to-noise (S/N) created from stacking individual stars. The total number of stars used to generate the stacked PSF differs from filter to filter, with the average being a few tens. Analysis {#sec:Data_analyse} ======== We first inspect the images to visually classify the morphologies of the host galaxies. We then quantify their structural parameters through detailed 2-D image decomposition. We generate color maps and color profiles, which enable us to derive stellar masses and explore the stellar population of the host galaxies. Morphology Classification ------------------------- The high resolution and sensitivity of the WFC3 images, coupled with the low redshifts of our sample, allow us to perform quite reliable visual classifications of the galaxy morphologies. We distinguish five broad types: merging/disturbed, unbarred spirals, barred spirals, lenticulars, and ellipticals. 
In this work, we regard spirals and lenticulars as disk galaxies, and we consider spirals as late-type. ![image](mergers_10gal-eps-converted-to.pdf) ![image](bar_spiral_8gal-eps-converted-to.pdf) ![image](spiral_3gal-eps-converted-to.pdf) ![image](lenticular_5gal-eps-converted-to.pdf) ![image](compact_3gal-eps-converted-to.pdf) We summarize the classifications of the 29 objects as follows (Figure \[fig:morph\_class\]): 10 can be considered merging/disturbed because they exhibit obvious signs of interactions, distorted features, or otherwise reside in host galaxies in close pairs; three are found in unbarred and eight in barred spirals; five are hosted by lenticulars; and the remaining three are in ellipticals. The morphological type of each quasar can be found in Table \[tab:galfit\_bestfit\]. Both the merging/disturbed system SDSS J162436.40+334406.7 and the lenticular galaxy SDSS J093625.36+592452.7 have an apparent close companion. Based on available redshift measurements, the companion of SDSS J162436.40+334406.7 is genuinely associated with it, thus indeed constituting a merging system. However, the apparent companion of SDSS J093625.36+592452.7 is a foreground galaxy at $z=0.04$. While the WFC3 images of SDSS J144038.09+533015.8 suggest that the host galaxy is an isolated spiral, a larger FoV SDSS image (see inset panel in Figure \[fig:morph\_class\]) reveals a long stellar bridge connecting the quasar to a disturbed companion, which prompted us to classify the host as merging/disturbed. We inspected large-FoV SDSS images for all the other objects in the sample and found no other examples of potential companions. Surprisingly, the majority of the sample (55%) exhibit unambiguous large-scale disks, with a significant fraction (38%) hosting clear late-type morphologies in the form of spiral arms and bars. Only approximately one-third of the host galaxies reside in close pairs or show obvious signatures of ongoing or recent interactions. If the ellipticals can be considered merger products, then in total $\sim 45\%$ of the sample are or have been associated with mergers of one type or another. Section \[sec:discuss\_merging\] discusses the implications of these findings in relation to quasar triggering mechanisms. Figure \[fig:z\_distrib\] shows the redshift distribution of the host galaxies with different morphological types. The galaxies with disks and morphologically disturbed features generally concentrate toward lower redshifts ($z \approx 0.1$) than those classified as ellipticals (median $z\approx 0.4$). Have extended features of low surface brightness been missed in these more distant objects? To test whether surface brightness dimming is the main cause of the classification of the elliptical galaxies, we use the [FERENGI]{} code (@Barden08) to generate mock, redshifted images of galaxies using the actual observed images of lower redshift objects. To match the luminosity of quasars with elliptical hosts ($\log L_{\rm [O~III]}/L_\odot= 8.88$, 9.58, 9.61), we choose nearby counterparts with similar  luminosities covering the range of extended morphologies (merging/disturbed, barred and unbarred spirals, lenticulars). The code takes into account the cosmological corrections for size, surface brightness, bandpass shifting, and $k$-correction. [FERENGI]{} treats the cosmological evolution of the stellar population, crudely parameterizing the luminosity evolution as $dM/dz=-1$ (@Ilbert05). 
We assume that the mock high-$z$ quasars are located at $z=0.143$ and observed with the filter pair F475W/F105W, or at $z=0.4$ and observed with the filter pair F555W/F125W. In the set of simulated IR images (upper panel in Figure \[fig:shift\_to\_highz\]), many of the original morphological details are lost, and most of the galaxies would be incorrectly classified as ellipticals or lenticulars when viewed in F105W at $z=0.143$ or in F125W at $z=0.4$. However, the simulated UVIS F475W and F555W images do a much better job in retaining the original structural information (bottom panel in Figure \[fig:shift\_to\_highz\]). Therefore, so long as images in both filters are available, the morphological classification is unlikely to be biased in our study. We conclude that the morphological classifications of the three elliptical hosts in our sample should be secure. Structural Decomposition ------------------------- We analyze the images using [GALFIT]{} V3.0 (@Peng10), a non-linear least-squares fitting code that uses Levenberg-Marquardt minimization to decompose the major structural components of galaxies. We allow [GALFIT]{} to generate its own $\sigma$ (weight) image. The sky value is fixed to a constant determined from the average background value of five source-free $150\times 150$ pixel$^2$ regions, and the uncertainty of the sky is the standard deviation of the five measurements. The sky level is measured separately for each image of each filter. To minimize contamination from nearby sources, we masked all objects beyond 1.5 times the Kron radius[^2] from the target quasar. Additionally, objects that are more than 2.5 mag fainter than the target quasar are masked out regardless of their position because they will hardly affect the fit for the primary target. Unmasked close companions are simultaneously fit with the target galaxy. Prominent dust lanes and regions need to be masked in some objects (e.g., SDSS J080337.32+392633.1, J145019.18$-$010647.4, and J084344.99+354941.9). ### Best-fit Models {#sec:Galfit} We fit bulges with the @Sersic68 profile. We set an upper limit of $n=8$ for the Sérsic index, which is close to the largest values seen in the most luminous ellipticals (e.g., @Kormendy09). Moreover, values larger than $n=8$ are often associated with poor model fits (@Barden12). We adopt an exponential profile ($n=1$) for the disk component. Spiral arms, when clearly visible, are modeled by coordinate rotation and bending modes provided by [GALFIT]{} V3.0 (@Peng10; additional examples can be found in @GaoHo17 and @Gao19). Three lenticular galaxies (SDSS J075940.95+505024.0, J093625.36+592452.7, and J130038.09+545436.8) exhibit an inner lens. @GaoHo17 [see also @Gao18] demonstrate that neglecting inner lenses will bias the derived bulge parameters significantly. Therefore, we model the inner lens as an independent component with either a Sérsic or an exponential profile, depending on which gives the better fit. The bar, when present, needs to be properly included to avoid incurring large errors on the derived properties of the bulge (@Laurikainen04 [@Laurikainen05; @Gadotti08; @GaoHo17]). Following common practice (e.g., @Freeman66 [@deJong96]), we adopt a fixed Sérsic $n=0.5$ profile for the bar component. Figure \[fig:Bar\_effect\] demonstrates the effect of the bar component for one of the objects in the sample. Without a bar component (middle panel), the size, brightness, and ellipticity of the bulge appear to be substantially overestimated due to the influence of the bar. 
The poor match of the ellipticity profile as well as the large residuals at $r\approx 2-4$ further attest to the inadequacy of the model without a bar. For host galaxies classified as merging/disturbed, especially for those with substantial disturbance, the bulge component cannot always be distinguished clearly from the other irregular components. We take as the bulge the prominent central component, which is usually well fit with a single Sérsic profile (with $n \leqslant 8$). We use Fourier modes (Peng et al. 2010) to model asymmetric structures such as lopsided features and tidal tails. Although the active nucleus should be deeply obscured in type 2 quasars, a fraction of its light can still scatter out (@AntonucciMiller85) and thereby modify the innermost light profile of the galaxy. A significant fraction of our sample requires an additional compact nuclear component, modeled as an unresolved PSF, to achieve a satisfactory fit for the bulge. Absent the nuclear component, the Sérsic index of the bulge can reach unrealistically high values that are inconsistent with the values expected for the morphological types of the host galaxies. An example is illustrated in the right panel of Figure \[fig:Bar\_effect\]. The three-component (bulge, bar, and disk) model yields a bulge Sérsic index of $n=4.63$, which would be unprecedented for such an obviously late-type galaxy (e.g., @Balcells03 [@Vika15]). The origin of the large Sérsic index is clear: the very central region contains a sharp spike, which has the effect of mimicking a large $n$. After including an additional nucleus component (left panel of Figure \[fig:Bar\_effect\]), the bulge Sérsic index drops to a much more reasonable value of $n=2.08$. This suggests that accounting for a nuclear component, probably due to scattered light, is essential to deriving accurate photometric properties of the bulge. A nuclear component seems to be required in 14 objects, of which six are spirals, two are ellipticals, and six are merging/disturbed hosts. The nucleus typically contributes $\lesssim 10$% of the total brightness of the galaxy (average of 8% in $I_{\rm WFC3}$ and 9% in $B_{\rm WFC3}$). Note that, with absolute magnitudes of $-18$ to $-21$, these central components are unlikely to be nuclear star clusters, which have typical absolute magnitudes of $M_I \approx -10$ to $-14$ (@Boker02). The $I_{\rm WFC3}$-band images are significantly deeper than the $B_{\rm WFC3}$-band images. The redder bandpass is also intrinsically more sensitive to the dominant, older stellar component of the host, and, of course, is less affected by dust extinction. We first determine the best-fit model using the $I_{\rm WFC3}$-band image. Then we fix all structural parameters of the sub-components (e.g., Sérsic index, effective radius, position angle, ellipticity, central position) to solve only for their brightnesses in the $B_{\rm WFC3}$ band. The mask and sky level of the $B_{\rm WFC3}$-band image are determined in the same manner as the $I_{\rm WFC3}$-band image. The best-fit models from the $I_{\rm WFC3}$-band images for the sample are given in Appendix A, and the final parameters are summarized in Table \[tab:galfit\_bestfit\]. The fits are generally good, with reduced $\chi^2 \approx 1$. Quasars that are classified as barred and unbarred spirals tend to have less centrally concentrated ($n \lesssim 2$) and smaller ($R_e \lesssim 0.6$) bulges, with $B/T$ generally less than $0.2$. 
In contrast, the ellipticals and bulges of lenticular and merging/disturbed hosts have much more concentrated ($n>2$), larger ($R_e > 0.6$), and more dominant ($B/T>0.2$) spheroids. We will discuss in detail the implication of the bulge properties with different morphologies in Section \[sec:results\_SF\]. ### Uncertainties of Best-fit Parameters Three main factors contribute to the uncertainties of the structural parameters: uncertainties in sky determination, variations of the PSF, and assumptions of the model construction. Uncertainties in sky determination do not affect much components of high surface brightness, such as the bulge, but they do impact the lower surface brightness, extended structures, such as the disk and tidal features. To study the impact of sky determination, we repeat the fits by perturbing the sky level one standard deviation above and below the mean value. The impact of PSF variations was assessed by stacking different combinations of stars to generate variants of the empirical PSFs, and then repeating the fits. By far the largest source of uncertainty comes from the assumptions that unavoidably need to be made when constructing simplified 2-D models to fit the intrinsically complex structure of galaxies. This problem was recently investigated by @GaoHo17, who studied the impact of including various morphological components (e.g., inner/outer lenses, bars, disk breaks, spiral arms) on the derived parameters of galaxy bulges. @GaoHo17 showed that inner lenses and bars present the dominant source of uncertainty for the bulge parameters, whereas outer spiral arms have a marginally effect. Therefore, for the disk galaxies in our sample, we explicitly treat bars and, if present, lenses and nuclei. For completeness, we also include spiral arms, even though they are not essential for the robust measurements of the bulge. The best-fit models reproduce properly the observed surface brightness profiles of most galaxies. Following @GaoHo17, we adopt an average uncertainty of 0.1 mag for the bulge luminosity, and 10% for other structural parameters of the bulge. Ellipticals are obviously less complex, but our single-component fits may be an oversimplification (@Huang13). We nominally assign their uncertainties half of the above uncertainties adopted for bulges. The model uncertainty for the hosts with merging/disturbed morphologies is difficult to ascertain. For concreteness, we simply assume that their uncertainties are twice those for bulges. The final uncertainties for the structural parameters are the quadrature sum of the uncertainties from the sky, PSF, and model decomposition. [ccccccccccccccccl]{} & & &\ (lr)[2-10]{}(lr)[11-16]{} Object &Filter &$n_{\rm bulge}$ &$R_{e,\rm bulge}$ &Bulge &Disk &Bar &Nucleus &Total &$B/T$ &Filter &Bulge &Disk &Bar &Nucleus &Total &Morphology\ & & &() &(mag) &(mag) &(mag) &(mag) &(mag) & & &(mag) &(mag) &(mag) &(mag) &(mag) &\ (1) &(2) &(3) &(4) &(5) &(6) &(7) &(8) &(9) &(10) &(11) &(12) &(13) &(14) &(15) &(16) &(17)\ J011935 & F105W & 1.85$^{+0.19}_{-0.20}$ & 0.78$^{+0.08}_{-0.08}$ & 17.03$^{+0.09}_{-0.09}$ & 16.58$^{+0.01}_{-0.01}$ & .... & 19.52$^{+0.09}_{-0.03}$ & 15.97 & 0.27 & F475W & 19.94$^{+0.12}_{-0.11}$ & 18.85$^{+0.02}_{-0.02}$ & .... & 21.42$^{+0.13}_{-0.09}$ & 18.51 & spiral (unbarred)\ J074751 & F110W & 0.53$^{+0.11}_{-0.12}$ & 0.09$^{+0.04}_{-0.05}$ & 18.83$^{+0.10}_{-0.10}$ & 16.98$^{+0.03}_{-0.01}$ & 18.70$^{+0.03}_{-0.02}$ & .... 
& 16.62 & 0.13 & F555W & 20.92$^{+0.11}_{-0.11}$ & 19.18$^{+0.02}_{-0.01}$ & 21.65$^{+0.03}_{-0.02}$ & .... & 18.89 & spiral (barred)\ J075329 & F110W & 2.25$^{+0.46}_{-0.47}$ & 0.80$^{+0.16}_{-0.16}$ & 18.20$^{+0.19}_{-0.19}$ & 19.66$^{+0.01}_{-0.03}$ & .... & 20.05$^{+0.02}_{-0.02}$ & 17.95 & 0.79 & F555W & 21.05$^{+0.22}_{-0.21}$ & 22.31$^{+0.03}_{-0.02}$ & .... & 23.98$^{+0.14}_{-0.01}$ & 20.76 & merging\ J075940 & F814W & 1.04$^{+0.12}_{-0.20}$ & 0.12$^{+0.01}_{-0.02}$ & 16.60$^{+0.09}_{-0.09}$ & 15.48$^{+0.03}_{-0.03}$ & 16.38$^{+0.01}_{-0.01}$ & .... & 14.85 & 0.20 & F438W & 18.17$^{+0.12}_{-0.09}$ & 17.62$^{+0.06}_{-0.07}$ & 18.84$^{+0.05}_{-0.04}$ & .... & 16.91 & lenticular\ J080252 & F105W & 4.53$^{+0.91}_{-0.92}$ & 1.65$^{+0.33}_{-0.33}$ & 14.16$^{+0.16}_{-0.14}$ & 14.76$^{+0.02}_{-0.00}$ & .... & .... & 13.67 & 0.63 & F438W & 17.25$^{+0.19}_{-0.17}$ & 17.69$^{+0.01}_{-0.00}$ & .... & .... & 16.70 & merging\ J080337 & F105W & 4.57$^{+0.46}_{-0.46}$ & 4.82$^{+0.52}_{-0.52}$ & 13.68$^{+0.07}_{-0.07}$ & 15.16$^{+0.04}_{-0.04}$ & .... & .... & 13.43 & 0.80 & F438W & 18.68$^{+0.15}_{-0.12}$ & 17.19$^{+0.03}_{-0.04}$ & .... & .... & 16.94 & lenticular\ J080523 & F105W & 2.22$^{+0.45}_{-0.49}$ & 0.87$^{+0.18}_{-0.18}$ & 16.87$^{+0.18}_{-0.17}$ & 15.40$^{+0.01}_{-0.00}$ & .... & 18.05$^{+0.05}_{-0.07}$ & 15.15 & 0.21 & F475W & 20.23$^{+0.22}_{-0.21}$ & 17.91$^{+0.04}_{-0.02}$ & .... & 19.94$^{+0.01}_{-0.09}$ & 17.79 & merging\ J081100 & F105W & 4.03$^{+0.48}_{-0.56}$ & 0.41$^{+0.05}_{-0.06}$ & 16.97$^{+0.12}_{-0.10}$ & 16.07$^{+0.02}_{-0.01}$ & .... & .... & 15.61 & 0.18 & F475W & 20.05$^{+0.11}_{-0.10}$ & 18.30$^{+0.02}_{-0.03}$ & .... & .... & 18.10 & spiral (unbarred)\ J084107 & F110W & 4.08$^{+0.85}_{-0.87}$ & 1.46$^{+0.30}_{-0.30}$ & 17.07$^{+0.17}_{-0.17}$ & .... & .... & 19.65$^{+0.19}_{-0.06}$ & 17.07 & 1.00 & F555W & 19.39$^{+0.19}_{-0.19}$ & .... & .... & 23.30$^{+0.02}_{-0.08}$ & 19.39 & merging\ J084344 & F814W & 3.98$^{+0.89}_{-1.23}$ & 4.18$^{+1.57}_{-1.99}$ & 14.67$^{+0.25}_{-0.42}$ & 15.12$^{+0.10}_{-0.24}$ & .... & .... & 14.12 & 0.60 & F438W & 16.97$^{+0.18}_{-0.17}$ & 17.66$^{+0.08}_{-0.08}$ & .... & .... & 16.51 & disturbed\ J090754 & F105W & 1.89$^{+0.21}_{-0.21}$ & 0.24$^{+0.02}_{-0.02}$ & 16.42$^{+0.09}_{-0.08}$ & 15.49$^{+0.01}_{-0.01}$ & 16.22$^{+0.01}_{-0.01}$ & 19.46$^{+0.48}_{-0.20}$ & 14.77 & 0.22 & F438W & 19.39$^{+0.10}_{-0.11}$ & 18.44$^{+0.10}_{-0.11}$ & 19.56$^{+0.06}_{-0.05}$ & 23.24$^{+0.12}_{-0.05}$ & 17.82 & spiral (barred)\ J091819 & F125W & 1.79$^{+0.14}_{-0.25}$ & 1.10$^{+0.08}_{-0.07}$ & 18.75$^{+0.08}_{-0.06}$ & .... & .... & 19.26$^{+0.14}_{-0.06}$ & 18.75 & 1.00 & F555W & 20.95$^{+0.06}_{-0.06}$ & .... & .... & 21.01$^{+0.15}_{-0.01}$ & 20.95 & elliptical\ J093625 & F105W & 2.92$^{+0.29}_{-0.31}$ & 0.17$^{+0.02}_{-0.02}$ & 16.81$^{+0.09}_{-0.09}$ & 16.01$^{+0.01}_{-0.01}$ & 17.56$^{+0.02}_{-0.04}$ & .... & 15.42 & 0.28 & F438W & 19.77$^{+0.10}_{-0.10}$ & 19.39$^{+0.10}_{-0.11}$ & 17.90$^{+0.01}_{-0.02}$ & .... & 17.51 & lenticular\ J103408 & F814W & 3.42$^{+0.79}_{-1.55}$ & 3.99$^{+1.10}_{-1.56}$ & 14.45$^{+0.35}_{-1.52}$ & 14.82$^{+0.18}_{-0.64}$ & .... & .... & 13.87 & 0.59 & F438W & 16.31$^{+0.17}_{-0.16}$ & 17.35$^{+0.10}_{-0.11}$ & .... & .... 
& 15.96 & disturbed\ J105208 & F814W & 2.08$^{+0.22}_{-0.22}$ & 0.49$^{+0.05}_{-0.05}$ & 16.95$^{+0.09}_{-0.09}$ & 15.39$^{+0.03}_{-0.03}$ & 16.37$^{+0.03}_{-0.05}$ & 19.59$^{+0.05}_{-0.07}$ & 14.85 & 0.14 & F438W & 19.48$^{+0.10}_{-0.10}$ & 17.38$^{+0.06}_{-0.07}$ & 18.86$^{+0.07}_{-0.04}$ & 20.49$^{+0.07}_{-0.05}$ & 17.01 & spiral (barred)\ J110213 & F105W & 2.06$^{+0.41}_{-0.47}$ & 1.58$^{+0.32}_{-0.32}$ & 15.11$^{+0.16}_{-0.16}$ & 15.01$^{+0.01}_{-0.00}$ & .... & 16.57$^{+0.16}_{-0.03}$ & 14.30 & 0.48 & F438W & 19.08$^{+0.19}_{-0.19}$ & 17.76$^{+0.00}_{-0.01}$ & .... & 20.44$^{+0.08}_{-0.09}$ & 17.48 & merging\ J111015 & F105W & 4.56$^{+0.24}_{-0.45}$ & 0.13$^{+0.01}_{-0.01}$ & 17.94$^{+0.05}_{-0.04}$ & .... & .... & .... & 17.94 & 1.00 & F475W & 19.59$^{+0.09}_{-0.07}$ & .... & .... & .... & 19.59 & elliptical\ J113710 & F125W & 5.97$^{+0.38}_{-0.40}$ & 0.97$^{+0.08}_{-0.08}$ & 17.16$^{+0.05}_{-0.05}$ & .... & .... & .... & 17.16 & 1.00 & F555W & 19.81$^{+0.06}_{-0.06}$ & .... & .... & .... & 19.81 & elliptical\ J115326 & F105W & 1.75$^{+0.19}_{-0.27}$ & 0.19$^{+0.02}_{-0.02}$ & 16.66$^{+0.11}_{-0.08}$ & 15.34$^{+0.01}_{-0.01}$ & 16.54$^{+0.01}_{-0.01}$ & 18.04$^{+0.26}_{-0.02}$ & 14.81 & 0.18 & F438W & 19.31$^{+0.10}_{-0.10}$ & 17.95$^{+0.07}_{-0.07}$ & 19.81$^{+0.09}_{-0.07}$ & 20.22$^{+0.01}_{-0.04}$ & 17.54 & spiral (barred)\ J123804 & F105W & 1.31$^{+0.13}_{-0.13}$ & 0.20$^{+0.02}_{-0.02}$ & 19.12$^{+0.10}_{-0.10}$ & 16.93$^{+0.00}_{-0.00}$ & 16.91$^{+0.00}_{-0.00}$ & .... & 16.10 & 0.06 & F475W & 22.17$^{+0.12}_{-0.12}$ & 20.13$^{+0.02}_{-0.00}$ & 18.83$^{+0.02}_{-0.04}$ & .... & 18.51 & spiral (barred)\ J125850 & F814W & 1.62$^{+0.16}_{-0.33}$ & 0.61$^{+0.06}_{-0.06}$ & 16.57$^{+0.11}_{-0.09}$ & 14.88$^{+0.05}_{-0.05}$ & 16.43$^{+0.03}_{-0.02}$ & .... & 14.48 & 0.15 & F438W & 18.85$^{+0.12}_{-0.10}$ & 17.24$^{+0.13}_{-0.15}$ & 19.60$^{+0.16}_{-0.14}$ & .... & 16.92 & spiral (barred)\ J130038 & F105W & 3.82$^{+0.42}_{-0.41}$ & 0.71$^{+0.09}_{-0.08}$ & 15.90$^{+0.09}_{-0.10}$ & 15.79$^{+0.01}_{-0.02}$ & 16.86$^{+0.09}_{-0.09}$ & .... & 14.90 & 0.40 & F438W & 18.43$^{+0.10}_{-0.10}$ & 19.85$^{+0.65}_{-1.85}$ & 19.16$^{+0.09}_{-0.09}$ & .... & 17.80 & lenticular\ J133542 & F105W & 3.01$^{+0.61}_{-0.61}$ & 1.69$^{+0.34}_{-0.34}$ & 16.06$^{+0.16}_{-0.16}$ & .... & .... & 18.58$^{+0.05}_{-0.01}$ & 16.06 & 1.00 & F475W & 18.33$^{+0.19}_{-0.18}$ & .... & .... & 21.93$^{+0.17}_{-0.03}$ & 18.33 & disturbed\ J140541 & F105W & 1.69$^{+0.17}_{-0.20}$ & 0.50$^{+0.05}_{-0.05}$ & 15.83$^{+0.09}_{-0.08}$ & 16.16$^{+0.02}_{-0.01}$ & .... & 17.60$^{+0.07}_{-0.01}$ & 15.19 & 0.45 & F438W & 18.16$^{+0.10}_{-0.10}$ & 18.65$^{+0.09}_{-0.10}$ & .... & 19.70$^{+0.11}_{-0.03}$ & 17.63 & spiral (unbarred)\ J140712 & F105W & 1.86$^{+0.19}_{-0.19}$ & 0.19$^{+0.02}_{-0.02}$ & 17.28$^{+0.09}_{-0.09}$ & 16.80$^{+0.01}_{-0.01}$ & 17.44$^{+0.03}_{-0.03}$ & .... & 15.94 & 0.29 & F475W & 19.66$^{+0.13}_{-0.10}$ & 19.00$^{+0.04}_{-0.04}$ & 20.39$^{+0.04}_{-0.04}$ & .... & 18.35 & spiral (barred)\ J144038 & F814W & 2.04$^{+0.42}_{-0.45}$ & 0.19$^{+0.04}_{-0.05}$ & 16.35$^{+0.17}_{-0.16}$ & 14.34$^{+0.03}_{-0.03}$ & .... & 17.96$^{+0.16}_{-0.04}$ & 14.18 & 0.14 & F438W & 17.54$^{+0.19}_{-0.18}$ & 16.09$^{+0.04}_{-0.04}$ & .... & 18.64$^{+0.03}_{-0.08}$ & 15.84 & merging\ J145019 & F105W & 4.22$^{+0.54}_{-0.50}$ & 1.81$^{+0.39}_{-0.28}$ & 15.38$^{+0.14}_{-0.12}$ & 16.88$^{+0.36}_{-0.20}$ & .... & .... & 15.14 & 0.80 & F475W & 18.51$^{+0.12}_{-0.10}$ & 18.89$^{+0.05}_{-0.06}$ & .... & .... 
& 17.93 & lenticular\ J155829 & F105W & 0.80$^{+0.08}_{-0.09}$ & 0.31$^{+0.03}_{-0.03}$ & 18.10$^{+0.09}_{-0.09}$ & 16.25$^{+0.01}_{-0.01}$ & 17.11$^{+0.03}_{-0.03}$ & 19.48$^{+0.07}_{-0.00}$ & 15.72 & 0.11 & F475W & 19.99$^{+0.10}_{-0.10}$ & 18.30$^{+0.05}_{-0.05}$ & 19.82$^{+0.05}_{-0.05}$ & 21.67$^{+0.13}_{-0.03}$ & 17.89 & spiral (barred)\ J162436 & F105W & 1.93$^{+0.39}_{-0.41}$ & 0.79$^{+0.16}_{-0.16}$ & 16.84$^{+0.18}_{-0.18}$ & 15.88$^{+0.01}_{-0.00}$ & .... & 19.14$^{+0.08}_{-0.02}$ & 15.51 & 0.29 & F475W & 19.27$^{+0.20}_{-0.19}$ & 18.59$^{+0.04}_{-0.04}$ & .... & 21.31$^{+0.12}_{-0.05}$ & 18.13 & merging\

[1.2]{} **Notes.** Column (1): Galaxy name. Column (2): Filter used, corresponding to $I_{\rm WFC3}$. Column (3)$-$(8): Best-fit [GALFIT]{} parameters. Column (9): Total magnitude of the host, excluding the nucleus, if present. Column (10): $B/T$ ratio. Column (11): Filter used, corresponding to $B_{\rm WFC3}$. Column (12)$-$(15): Best-fit [GALFIT]{} parameters. Column (16): Total magnitude of the host, excluding the nucleus, if present. Column (17): Morphological classification.

Colors {#sec:cm_create}
------

The [*HST*]{} observations were designed specifically to provide at least rudimentary color information with the filter combination of rest-frame $B$ and $I$. In view of the possibility that the central region of the galaxy might be mildly contaminated by scattered light from the AGN (Section \[sec:Galfit\]), we perform the color analysis on the images after subtracting the best-fit nucleus component, if present. We can generate color images in a straightforward manner for the six targets that were observed with the same (UVIS) detector. The majority (23/29), however, were observed with two detectors with different pixel scales and spatial resolutions. We rebin the UVIS images to match the pixel scale of the IR images, and we convolve the images taken in one filter with the corresponding PSF of the other filter. The color maps are generated by $(B-I)_{\rm WFC3}= -2.5\log (f_{B_{\rm WFC3}}/f_{I_{\rm WFC3}}) + {\rm ZP}_{B_{\rm WFC3}} - {\rm ZP}_{I_{\rm WFC3}}$, where $f_{B_{\rm WFC3}}$ and $f_{I_{\rm WFC3}}$ are the counts in $B_{\rm WFC3}$ and $I_{\rm WFC3}$, respectively, and ${\rm ZP}_{B_{\rm WFC3}}$ and ${\rm ZP}_{I_{\rm WFC3}}$ are the corresponding zero points. We apply the [IRAF]{} task [ellipse]{} to the color images to derive radial color profiles. Figure \[fig:color\_map\] gives the color maps and color profiles. For each object, the upper panel shows the $B_{\rm WFC3}$ image, the $I_{\rm WFC3}$ image, and the $(B-I)_{\rm WFC3}$ color map. The radial color profile is shown in the bottom panel. We will discuss the results in Section \[sec:results\_SF\].

![image](colormap_tot_1-eps-converted-to.pdf) ![image](colormap_tot_2-eps-converted-to.pdf)

Bulge Stellar Masses {#sec:mass_bulge}
--------------------

We convert the $I_{\rm WFC3}$ magnitude of the bulge ($M_{I,{\rm bul}}$) to the bulge stellar mass ($M_{\rm bul}$) using a mass-to-light ratio ($M/L$) inferred from the rest-frame $(B-I)$ color. Adopting the color-based conversion from @Bell01, $$\label{eq:M/Lratio} \log \left( \frac{ M_{\rm bul}}{M_\odot} \right) = -0.4(M_{I,{\rm bul}} - M_{I,\odot}) + 0.439(B-I) - 0.594,$$ where $M_{I,\odot}= 4.08$ is the $I$-band absolute magnitude of the Sun (@BinneyMerrifield98). This relation assumes a @Kroupa01 stellar initial mass function. The stellar masses have typical errors of $\sim 0.1$ dex due to uncertainties in the stellar population (@Bell03; @Conroy09).
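To make the conversion concrete, the short Python sketch below evaluates Equation (\[eq:M/Lratio\]) for a bulge of given absolute $I_{\rm WFC3}$ magnitude and rest-frame $(B-I)$ color; the input numbers are purely illustrative and are not entries of the sample tables.

```python
def log_bulge_mass(M_I_bulge, B_minus_I, M_I_sun=4.08):
    """log10 of the bulge stellar mass in solar units from the color-based
    M/L conversion quoted in the text (Bell & de Jong 2001 type relation)."""
    return -0.4 * (M_I_bulge - M_I_sun) + 0.439 * B_minus_I - 0.594

# purely illustrative input values: M_I = -22.0 mag, (B-I) = 2.0 mag
print(f"log10 M_bul/M_sun = {log_bulge_mass(-22.0, 2.0):.2f}")
```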
We use the SDSS Data Release 7 (DR7) spectrum of each object to calculate the $k$-correction and the color conversion from the $B_{\rm WFC3}$ and $I_{\rm WFC3}$ filters to the conventional $B$ and $I$ filters. Strictly speaking, the SDSS spectrum, taken through a 3 arcsec fiber, covers the central $\sim 5.6$ kpc of the host galaxy (for the median $z \approx 0.1$), much larger than the size scale of the bulge. Nevertheless, we will use it and confirm later that the stellar mass derived from the spectrum represents the bulge mass well. Note that the spectral range of the SDSS spectrum is not wide enough to cover the $I_{\rm WFC3}$ bandpass. We therefore derive the stellar spectrum by fitting the SDSS spectrum (see Figure \[fig:SDSS\_spec\_fit\] for an example) with a @BC03 [BC03] stellar population synthesis model consisting of four stellar components with ages in the ranges \[0.04, 0.3\], \[0.3, 1.0\], \[0.0, 3.0\], and 8.0 Gyr, assuming solar metallicity. Strong emission lines are excluded from the fit. Galactic extinction is removed adopting $R_V = 3.1$ and the Milky Way extinction law of @Cardelli89. As previously noted, contamination from nuclear scattered light is not entirely negligible for 14 of the quasars. For consistency, a power-law continuum representing the AGN component is included in the fits for these objects. The SDSS spectra of 24 targets have sufficiently high S/N ($\gtrsim15$) that their best-fit BC03 model yields a reduced $\chi^2 \approx 1$. For the remaining five objects with spectra of lower quality (S/N $<7$), we simply adopt a model consisting of an old (10 Gyr) and a young (2.1 Gyr) population (@CanalizoStockton13).

In view of the fact that the SDSS spectrum generally covers a larger region of the host galaxy than the scale of the bulge, we consider the SDSS spectrum to be representative of the whole host galaxy and derive an alternative estimate of the bulge stellar mass. We first obtain the total stellar mass of the host galaxy ($M_*$) from its integrated $I$-band magnitude and $(B-I)$ color using Equation \[eq:M/Lratio\], and then we attribute a fraction $B/T$ thereof to the bulge ($M_{\rm bul,B/T}$). These alternative estimates of the bulge masses derived from $B/T$ agree well with those derived from the bulge magnitudes: $\langle \log M_{\rm bul}- \log M_{\rm bul,B/T} \rangle= 0.03\pm 0.16$ dex. Another check on our stellar masses can be made using the subset of 19 objects that overlap with the MPA-JHU catalog[^3] of spectral measurements from SDSS DR7. The stellar masses in this catalog were derived from spectral energy distribution fits to the DR7 photometry.[^4] The median total stellar masses from DR7 ($M_{*, \rm DR7}$) agree well with our estimates: $\langle \log M_*-\log M_{*, \rm DR7}\rangle = -0.01\pm 0.13$ dex. Table \[tab:smass\] lists the stellar masses derived in this work. For simplicity, in the following discussion, we adopt the bulge masses ($M_{\rm bul}$) estimated from Equation (\[eq:M/Lratio\]). We note that none of the conclusions of this paper depends on which of the two bulge masses we choose.
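As a simple illustration of the cross-check described above, the sketch below (with made-up values, not the entries of Table \[tab:smass\]) compares the two bulge-mass estimates, $M_{\rm bul}$ from the bulge photometry and $M_{\rm bul,B/T} = (B/T)\,M_*$ from the integrated host properties.

```python
import numpy as np

# hypothetical per-galaxy estimates (log10 solar masses) and bulge-to-total ratios
log_M_bul  = np.array([10.3, 10.9, 9.8])   # from the bulge magnitude and color
log_M_star = np.array([10.8, 11.1, 10.6])  # total host stellar mass
B_over_T   = np.array([0.27, 0.79, 0.14])

# alternative bulge mass: attribute a fraction B/T of the total mass to the bulge
log_M_bul_BT = log_M_star + np.log10(B_over_T)

offsets = log_M_bul - log_M_bul_BT
print(f"mean offset = {offsets.mean():+.2f} dex, scatter = {offsets.std(ddof=1):.2f} dex")
```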
[ccccDDl]{} J011935.63 &10.71 &10.29 &10.45 & 0.64 &$-$0.72 & spiral (unbarred)\ J074751.56 &11.03 &10.14 &10.06 & 1.24 & 1.65 & spiral (barred)\ J075329.93 &11.00 &10.90 &10.92 & 0.83 &$-$2.99 & merging\ J075940.95 &10.67 & 9.97 & 9.75 & 0.73 & 0.79 & lenticular\ J080252.92 &11.27 &11.07 &11.10 & 0.63 &$-$0.66 & merging\ J080337.32 &11.30 &11.20 &11.16 &$-$1.27 &$-$3.91 & lenticular\ J080523.29 &11.12 &10.43 &10.75 & 1.17 &$-$0.86 & merging\ J081100.20 &11.18 &10.64 &10.89 & 0.60 &$-$1.65 & spiral (unbarred)\ J084107.06 &11.03 &11.03 &11.03 & 0.64 &$-$2.43 & merging\ J084344.99 &11.05 &10.83 &10.79 & 0.07 &$-$2.44 & disturbed\ J090754.07 &10.92 &10.26 &10.23 & 0.90 & 1.29 & spiral (barred)\ J091819.66 &10.26 &10.26 &10.26 & 1.70 &$-$1.76 & elliptical\ J093625.36 &10.28 & 9.73 &10.11 & 1.15 & 1.17 & lenticular\ J103408.59 &10.99 &10.76 &10.66 &$-$0.10 &$-$2.20 & disturbed\ J105208.19 &10.65 & 9.81 & 9.97 & 0.20 &$-$0.60 & spiral (barred)\ J110213.01 &10.95 &10.62 &10.90 & 0.35 &$-$2.57 & merging\ J111015.25 & 9.70 & 9.70 & 9.70 & 1.34 & 1.23 & elliptical\ J113710.77 &10.91 &10.91 &10.91 & 0.39 &$-$1.57 & elliptical\ J115326.42 &10.56 & 9.82 & 9.79 & 1.02 & 1.48 & spiral (barred)\ J123804.81 &10.93 & 9.73 &10.01 & 1.52 & 0.13 & spiral (barred)\ J125850.77 &10.96 &10.12 &10.05 & 0.75 & 0.82 & spiral (barred)\ J130038.09 &10.86 &10.46 &10.30 & 0.48 & 0.63 & lenticular\ J133542.49 &10.88 &10.88 &10.88 & 0.36 &$-$1.47 & disturbed\ J140541.21 &10.58 &10.32 &10.27 & 0.41 &$-$0.05 & spiral (unbarred)\ J140712.94 &10.97 &10.44 &10.43 & 1.69 & 1.54 & spiral (barred)\ J144038.09 &11.01 &10.14 & 9.94 &$-$0.29 & 1.45 & merging\ J145019.18 &11.14 &11.04 &11.10 & 0.03 &$-$1.90 & lenticular\ J155829.36 &10.71 & 9.76 & 9.64 & 0.59 & 1.06 & spiral (barred)\ J162436.40 &10.99 &10.46 &10.38 & 1.21 &$-$0.13 & merging\ [1.2]{} **Notes.** Column (1): Galaxy name. Column (2): Total stellar mass of host galaxy. Column (3): Bulge mass calculated from $M_*$ and $B/T$. Column (4): Bulge mass calculated from $M/L$ ratio of Equation (\[eq:M/Lratio\]). Column (5): Color gradients of the galaxy inner regions. The units are $\Delta$mag per dex in radius (arcsec). Column (6): Color gradients of the galaxy outer regions. Column (7): Morphological classification. Results and Implications {#sec:results} ======================== Evidence of Nuclear Star Formation {#sec:results_SF} ---------------------------------- Black hole accretion and star formation are often suggested to go hand in hand (@Springel05 [@Hopkins06]), but the verdict from observations is somewhat mixed. While some studies of the stellar population of AGN host galaxies find nuclear and starburst activity to be broadly synchronized (e.g., @Tadhunter11 [@Bessiere14; @Bessiere17]), others maintain that AGN activity significantly lags behind star formation (e.g., @Wild10 [@CanalizoStockton13]). The color information from this study provides some fresh insights into this issue, from the point of view of luminous, obscured quasars that should be experiencing both intense star formation and black hole growth. The color maps shown in Figure \[fig:color\_map\] illustrate that in most of the hosts the central regions of the bulge are generally bluer than their outer regions. This holds for most of the merging/disturbed hosts and most of the disk galaxies. The few that do not follow this trend may be affected by dust reddening (i.e. some of the disturbed systems and two of the lenticulars with prominent inner dust lanes). 
Consistent with the expected behavior of galaxies, almost all the global color profiles initially exhibit a negative color gradient in their outermost regions (i.e., they become redder toward smaller radii). However, upon reaching the central regions, roughly on the scale of the effective radius of the bulge, most of the color profiles flatten or even turn over at smaller radii. To quantify this effect, we derive the color gradients of the inner and outer regions of the host galaxies (see Table \[tab:smass\]), and we directly compare them with a control sample of inactive galaxies measured in the same manner from the Carnegie-Irvine Galaxy Survey (CGS; @Ho11). The inner region is defined as the range from half of the PSF FWHM (to avoid possible AGN contamination, even after subtracting the nucleus) to the effective radius of the bulge ($R_e$); the outer region extends between $R_e$ and 2.5$R_e$. Following @Li2011, the color gradient, corrected for Galactic extinction, is calculated as the slope of the color profile, representing the change in color per dex in radius: $$\label{eq:color_grad} \nabla(\rm color)_{\rm in/out}= \dfrac{({\rm color})_{\textit{r}_1}-({\rm color})_{\textit{r}_0}}{\log {\textit{r}_1} - \log {\textit{r}_0}},$$ where $r_1$ and $r_0$ correspond to $R_e$ and 0.5 FWHM of the PSF for the inner region, and to 2.5$R_e$ and $R_e$ for the outer region. As in @Li2011, color gradients larger than 0.1 are considered positive, those between $-0.1$ and 0.1 are flat, and those smaller than $-0.1$ are negative. A positive slope indicates that the galaxy is getting bluer toward the center.

![Color gradients of the inner regions (radius $< R_e$) of type 2 quasars (hatched purple histograms), divided by morphology. The inner gradients of normal (inactive) CGS galaxies with similar morphologies (@Li2011) are shown for comparison as the filled red histograms; no CGS comparison is given for the merging/disturbed category. The vertical dashed lines in each panel mark the adopted boundaries for negative ($\nabla(B - I) < -0.1$), flat ($-0.1 \leqslant \nabla(B - I) \leqslant 0.1$), and positive ($\nabla(B - I) >0.1$) color gradients. A positive color gradient means that the color becomes bluer toward the center. The central regions of most quasars have positive color gradients, opposite to the trend in normal galaxies.[]{data-label="fig:colorgrad_comp"}](compare_inner_gradient_QSO2_CGS-eps-converted-to.pdf)

Regardless of the sign of the outer color gradients, most of our objects exhibit *positive* inner $B-I$ color gradients. Figure \[fig:colorgrad\_comp\] compares the inner color gradients of our sample, split by morphological type, with a comparison sample drawn from CGS (@Li2011). The two samples are clearly different. Apart from two S0 hosts with central dust lanes, all the early-type hosts of type 2 quasars (ellipticals and S0s) have positive inner color gradients. By comparison, 85% of the ellipticals and 64% of the S0s in CGS have flat inner color profiles. The contrast is even more pronounced for spiral galaxies, for which [*all*]{} of the spiral AGN hosts have clearly positive inner color gradients, whereas negative gradients characterize the majority ($65-88$%, depending on the exact morphological type) of the bulges of inactive spirals. Even merging/disturbed systems hosting type 2 quasars, whose environments are expected to be dusty, mostly exhibit positive inner color gradients.
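For concreteness, the following sketch measures the inner and outer gradients of Equation (\[eq:color\_grad\]) by a linear fit of color against $\log r$ and applies the classification boundaries quoted above; the radial profile, PSF FWHM and $R_e$ are invented toy values, not measurements from this paper.

```python
import numpy as np

def color_gradient(radius, color, r0, r1):
    """Slope of the color profile in mag per dex of radius between r0 and r1."""
    sel = (radius >= r0) & (radius <= r1)
    slope, _ = np.polyfit(np.log10(radius[sel]), color[sel], 1)
    return slope

def classify(grad):
    # boundaries adopted in the text: |gradient| <= 0.1 counts as flat
    if grad > 0.1:
        return "positive (bluer toward the center)"
    if grad < -0.1:
        return "negative (redder toward the center)"
    return "flat"

# invented toy profile: blue center, red bulge, bluer disk outskirts
r = np.logspace(-0.7, 0.6, 40)                          # radius in arcsec
BI = 2.0 - 0.4 * np.log10(r) - 0.8 * np.exp(-r / 0.4)   # toy (B-I) color profile

fwhm, R_e = 0.2, 1.0                                    # hypothetical PSF FWHM and bulge R_e
grad_in = color_gradient(r, BI, 0.5 * fwhm, R_e)
grad_out = color_gradient(r, BI, R_e, 2.5 * R_e)
print(f"inner gradient {grad_in:+.2f} mag/dex -> {classify(grad_in)}")
print(f"outer gradient {grad_out:+.2f} mag/dex -> {classify(grad_out)}")
```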
The host galaxies of type 2 quasars have a preponderance of blue central regions compared to normal galaxies of similar Hubble type, consistent with enhanced ongoing or recent star formation. Additionally, less direct but nevertheless compelling evidence for enhanced central star formation in type 2 quasars comes from inspection of the detailed structural parameters of the host galaxies. Figure \[fig:structure\_bulgemass\] illustrates the distributions of Sérsic index ($n$) and bulge-to-total ratio ($B/T$) as a function of bulge stellar mass ($M_{\rm bul}$) for the sample. Not unexpectedly, the merging/disturbed hosts (grey symbols) tend to have more massive ($M_{\rm bul} \gtrsim 10^{10.5}\,M_\odot$), more prominent ($B/T \gtrsim 0.2$) bulges with relatively large Sérsic indices (median $n=2.63$), akin to classical bulges and consistent with the expectation that major mergers build classical bulges (@KormendyKennicutt04 [@FisherDrory08]). The undisturbed early types in the sample (lenticulars and ellipticals; red and orange symbols, respectively) span a wide range in mass, but their Sérsic indices are large (median $n=4.02$), again consistent with classical bulges. The late-type hosts are strikingly different. Most of the barred and unbarred spirals (blue and green symbols) have bulges of lower mass ($M_{\rm bul} \lesssim 10^{10.5}\,M_\odot$), lower prominence ($B/T \lesssim 0.2$), and characteristically lower Sérsic indices ($n \lesssim 2$). They conform to the typical properties of pseudo bulges (@KormendyKennicutt04 [@FisherDrory08]).

The Kormendy relation (@Kormendy77), an inverse correlation between the effective radius ($R_e$) of spheroids and their surface brightness ($\mu_e$) within $R_e$, provides a useful empirical tool to distinguish bulge types. At a given $R_e$, pseudo bulges have lower $\mu_e$ than classical bulges or elliptical galaxies (@KormendyKennicutt04 [@Gadotti09; @FisherDrory10]). Figure \[fig:Kormendy\_rel\] (top) examines the Kormendy relation of our sample using best-fit parameters from the $I_{\rm WFC3}$ band. We fit the relation only for the classical bulges (i.e. ellipticals and the bulges of merging/disturbed and lenticular galaxies); the fit is denoted by the solid line. It is surprising that the late-type galaxies, which, as argued above, contain pseudo bulges, do [*not*]{} depart from the relation of classical bulges. This grossly deviates from the behavior of inactive galaxies, for which pseudo bulges scatter systematically below the locus of classical bulges and ellipticals. Why do pseudo bulges not appear in the Kormendy relation of type 2 quasar host galaxies? The simplest and most plausible explanation is that the bulge effective surface brightnesses have been enhanced because of excess light from recent or ongoing star formation. Together with the color gradients previously discussed, we conclude that the central regions of these obscured quasars have a characteristically young stellar population, presumably associated with a recent episode of star formation. A similar trend is also reported in the hosts of type 1 AGNs (@KimHo19).

Are Mergers Important for Triggering AGNs? {#sec:discuss_merging}
------------------------------------------

The role of mergers in governing AGN activity remains a vexingly controversial topic. From a theoretical point of view, major external triggers are thought to be necessary to supply the high mass accretion rates needed to sustain the luminosities of the most powerful quasars (@Shlosman90).
The less stringent fueling requirements of weaker AGNs can be met through internal secular processes, such as angular momentum transported by bars (e.g., @Ho97 [@Somerville08; @HopkinsHernquist09]). A substantial body of observational evidence broadly supports this thesis. From their analysis of a large sample of AGNs spanning a wide range of bolometric luminosities and redshifts, @Treister12 report a strong correlation between the fraction of host galaxies experiencing major mergers and AGN luminosity. A high incidence of merger features is frequently found for luminous AGNs, be they unobscured (e.g., @Letawe10 [@Liu12; @Hong15]), dust-reddened (@Urrutia08 [@Kocevski15; @Fan16]), or highly obscured (@VillarMartin11 [@Bessiere12; @Wylezalek16; @Donley18; @UrbanoMayorgas18]). But not everyone agrees. A significant number of studies of luminous AGNs at moderate ($0.5 < z < 0.8$; @Villforth14) and high ($z\lesssim 2$; @Schawinski11 [@Schawinski12; @Mechtley16; @Villforth17]) redshifts dismiss the importance of major mergers in driving nuclear activity.

One of the most surprising results of this study is the sheer diversity of the morphologies of the host galaxies of obscured quasars. As summarized in Section \[sec:introduction\], the traditional gas-rich major merger scenario for quasar evolution predicts that type 2 quasars should be hosted by morphologically highly disturbed galaxies. This basic expectation is not supported by our observations, at least not for a sizable fraction of our sample. Among the 29 objects studied, only 10 (34%) show clear morphological signatures of interactions or mergers, rising at most to 13 (45%) if we regard the three ellipticals as post-merger products. The majority (55%; 11 spirals and 5 lenticulars) possess normal disks. Even if we accept that lenticulars may be remnants of major mergers (@ElicheMoral18), which is by no means a universal view (@Kormendy09), a substantial fraction (38%) are incontrovertibly ordinary unbarred or barred late-type spiral galaxies. Although it has been argued that disks can survive gas-rich major mergers under some circumstances (e.g., @BarnesHernquist96 [@SpringelHernquist05; @Hopkins09]), the remnants cannot significantly fuel central black holes (@HopkinsHernquist09), and the regrowth of the disk (@Hopkins09 [@Bundy10]) takes much longer than the typical quasar lifetime (e.g., @Porciani04 [@Hopkins05; @Shen07]). Therefore, the late-type host galaxies of type 2 quasars, which most likely possess pseudo bulges (Section \[sec:results\_SF\]), probably never experienced major mergers during their lifetimes. Secular processes presumably were responsible for their black hole growth. Interestingly, most of the spirals (8 out of 11) contain a bar (Figure \[fig:morph\_class\]). The subset of our type 2 quasars hosted by spirals has a typical \[O III\] $\lambda$5007 luminosity of $10^9\,L_\odot$, not dissimilar from that of the quasars hosted by earlier type galaxies or mergers. For an \[O III\] bolometric correction of 600 (@KauffmannHeckman09), this corresponds to a bolometric luminosity of $L_{\rm bol} = 2.3\times10^{45}\,{\rm erg}\,{\rm s}^{-1}$, or a mass accretion rate of $\dot{M} = (\epsilon c^2)^{-1} L_{\rm bol} \approx 0.4\, {M_\odot}\,{\rm yr}^{-1}$, where $c$ is the speed of light and $\epsilon=0.1$ is the radiative efficiency. This level of fueling is modest, reflecting the fact that our low-redshift obscured AGNs, although technically considered “quasars”, are in fact quite modest in power.
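The quoted numbers follow from straightforward unit conversions; the short sketch below (plain Python, cgs constants) reproduces the bolometric luminosity and the implied accretion rate for the assumed bolometric correction of 600 and radiative efficiency $\epsilon = 0.1$.

```python
# order-of-magnitude check of the fueling rate quoted above (cgs units)
L_sun = 3.828e33      # erg/s
M_sun = 1.989e33      # g
yr    = 3.156e7       # s
c     = 2.998e10      # cm/s

L_OIII = 1e9 * L_sun  # typical [O III] 5007 luminosity of the spiral hosts
L_bol  = 600 * L_OIII # bolometric correction adopted in the text
eps    = 0.1          # radiative efficiency

Mdot = L_bol / (eps * c**2)  # accretion rate in g/s
print(f"L_bol = {L_bol:.1e} erg/s")                     # ~2.3e45 erg/s
print(f"dM/dt = {Mdot * yr / M_sun:.2f} M_sun per yr")  # ~0.4 M_sun/yr
```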
Bar-driven gravitational torques may suffice to transport cold gas at this rate to the central regions of spiral galaxies (@Haan09).

Conclusions {#sec:conculde}
===========

We observed 29 local ($z\approx 0.04 - 0.4$), optically selected type 2 quasars in rest-frame $B$ and $I$ using WFC3 on [*HST*]{}, to study the stellar properties of their host galaxies and explore their triggering mechanism. We classify the morphologies, perform detailed two-dimensional decompositions to study the structural properties of the hosts, analyze their optical colors and color gradients, and derive bulge stellar masses. Our principal findings can be summarized as follows:

- Only a minority (34%) of the host galaxies exhibit clear merging or disturbed features. A significant fraction ($38\%$) of the hosts are late-type, mostly disk-dominated spiral galaxies. Major mergers do not seem to play a dominant role in triggering nuclear activity in nearby obscured quasars, but the luminosities of these sources, and hence their mass accretion rates, are not sufficiently high to severely challenge the major merger model for quasar evolution. Indeed, we argue that secular processes alone may suffice to supply their modest fueling rates.

- The central regions of most of the host galaxies are bluer than their outer parts, indicating nearly ubiquitous recent or ongoing star formation.

- While merging/disturbed systems and early-type (lenticular and elliptical) hosts tend to have Sérsic indices ($n\geqslant 2$) and bulge-to-total ratios ($B/T \gtrsim 0.2$) expected of classical bulges, the late-type hosts possess pseudo bulges with $n < 2$ and $B/T\lesssim 0.2$. However, unlike inactive galaxies, the pseudo bulges hosting type 2 quasars have systematically higher central surface brightnesses because of the excess light from young stars.

We thank an anonymous referee for many helpful comments. We are grateful to MinZhi Kong and Hua Gao for fruitful discussions and suggestions. This work was supported by the National Key R&D Program of China (2016YFA0400702) and the National Science Foundation of China (11473002, 11721303). Minjin Kim was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2017R1C1B2002879).

Best-fit Decompositions for the Sample
======================================

Here we present the best-fit GALFIT structural decompositions for the host galaxies of our 29 type 2 quasars. For each quasar, the 2-D original $I_{\rm WFC3}$ image, model, and residual images are illustrated in the upper panel, and the 1-D profiles of ellipticity, position angle, surface brightness, and intensity residuals as a function of radius are shown in the bottom panel.

\[lastpage\]

[^1]: Reyes et al. (2008) chose a luminosity cut of $L_{\rm [O~III]} > 10^{8.3}\, L_\odot$ to define type 2 quasars, but a fraction ($\sim 17\%$) of the sources in the catalog have luminosities below this limit because of recent recalibration of the SDSS spectrophotometry. In this study, we adopt the extinction-corrected \[O III\] luminosities of @KongHo18.

[^2]: We use the following definition of the Kron radius: $R_{\rm Kron}=2.5\,r_1$, where $r_1$ is the first moment of the light distribution [@Kron80; @BA96]. For an elliptical light distribution, this is, strictly speaking, the semi-major axis.

[^3]: [www.mpa-garching.mpg.de/SDSS/DR7/]{}

[^4]: See details in [www.mpa-garching.mpg.de/SDSS/DR7/mass\_comp]{}.
---
abstract: 'The subject of this paper is a quantification of the information content of cosmological probes of the large-scale structure, specifically of temperature and polarisation anisotropies in the cosmic microwave background, CMB-lensing, weak cosmic shear and galaxy clustering, in terms of information-theoretic measures such as information entropies. We aim to establish relationships, for Gaussian likelihoods, between conventional measures of statistical uncertainty and information entropies. Furthermore, we extend these studies to the computation of (Bayesian) evidences and the power of measurements to distinguish between competing models. We investigate in detail how cosmological data decreases information entropy by reducing statistical errors and by breaking degeneracies. In addition, we work out how tensions between data sets increase information entropy and quantify this effect in three examples: the discrepancy in $\Omega_m$ and $\sigma_8$ between the CMB and weak lensing, the role of intrinsic alignments in weak lensing data when constraining the dark energy equation of state parameters, and the famous $H_0$-tension between Cepheids in the Hubble Key Project and the cosmic microwave background as observed by Planck.'
author:
- |
  Ana Marta Pinho$^1$, Robert Reischke$^{2,3,4}$, Marie Teich$^4$, Bj[ö]{}rn Malte Sch[ä]{}fer$^4$[^1]\
  $^1$Institut f[ü]{}r theoretische Physik, Universit[ä]{}t Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany\
  $^2$ Department of Physics, Technion, Haifa 32000, Israel\
  $^3$ Department of Natural Sciences, The Open University of Israel, 1 University Road, P.O. Box 808, Ra’anana 4353701, Israel\
  $^4$Zentrum f[ü]{}r Astronomie der Universit[ä]{}t Heidelberg, Astronomisches Rechen-Institut, Philosophenweg 12, 69120 Heidelberg, Germany
bibliography:
- 'references.bib'
title: Information entropy in cosmological inference problems
---

\[firstpage\]

gravitational lensing: weak – dark energy – large-scale structure of Universe.

introduction
============

At the moment, we are observing a natural progression in cosmological data analysis: Firstly, the homogeneous and isotropic background expansion of the Universe was probed with Cepheid variable stars and with supernovae of type Ia. Secondly, the linear perturbations in the metric were observed with the temperature and polarisation anisotropies in the cosmic microwave background. Now, thirdly, the nonlinearly evolved cosmic large-scale structure is dissected by galaxy clustering and weak lensing surveys, where a number of complications arise in data analysis, related to systematic astrophysical effects on one side and to non-Gaussian statistics on the other. With these observations it is possible to investigate at high precision the expansion dynamics of the Universe, the relevant laws of gravity and the properties of cosmological fluids, as well as the initial conditions of structure formation and the processes that lead to the cosmic structures that we see today. In the spirit of narrowing down the allowed parameter range, the combination of cosmological probes is of particular importance, because they are sensitive to different signatures of the cosmological model. While the statistical error is expected to decrease if the probes are consistent, any tension between the best-fit values can hint, if significant, at the presence of systematic errors due to badly understood astrophysical processes, or better, at new physics beyond $\Lambda$CDM or $w$CDM.
The knowledge about cosmological models and their corresponding parameter choices is encapsulated in the likelihood function, which assumes an approximately Gaussian shape if the model has a low complexity and if the data constrains the parameter space well [@fisher_logic_1935; @trotta_bayesian_2017 the latter for an application in cosmology]. In this case, nonlinearities in the model can be approximated well enough by linear relationships, which renders the likelihood ideally Gaussian and makes it accessible to the Fisher-matrix formalism [with an application to cosmology, @wolz_validity_2012; @crittenden_fables_2012; @elsner_fast_2012; @khedekar_cosmology_2013], which is ubiquitous in modern cosmology [@tegmark_karhunen-loeve_1997; @coe_fisher_2009; @bassett_fisher4cast_2009; @schafer_describing_2016]. Exactly under the assumption of a Gaussian likelihood, it is possible to compute the expected parameter covariance from the second derivatives of the logarithmic likelihood, which becomes equal to the expectation value of the product of first derivatives if averaged over the expected data. The Fisher-matrix formalism is foremost a tool for determining statistical errors for a Gaussian likelihood, where in forecasting applications the true model is already known, which in this context is referred to as the fiducial cosmology. Because inference from cosmological data is often not limited by statistics but rather by systematics, extensions to the Fisher-formalism have been introduced that allow the forecasting of systematic errors, i.e. the shift of the best-fit point of a Gaussian likelihood if an unknown systematic is not removed or properly modelled [@loverde_magnification-temperature_2007; @taburet_biases_2009; @amara_systematic_2008; @schaefer_implications_2009; @schafer_parameter_2010; @kirk_optimising_2011].

There is no unambiguous way to quantify the total statistical error budget of a cosmological probe or the significance of tensions between likelihoods obtained with different cosmological probes, even in the case of Gaussian likelihoods. Such a measure of total error would be convenient in quantifying the information content of a particular cosmological probe, or its parameter degeneracy-breaking power, or in applications of experimental design, where one optimises a survey to yield the smallest possible errors [@jenkins_power_2011; @kerscher_model_2019]. In a larger context, Bayesian evidences [@trotta_applications_2007; @trotta_bayes_2008; @santos_bayesian_2016] used in model selection are measures of the consistency between likelihood and prior [@liddle_model_2006]; they are only one possible choice among many others, for instance the Akaike or the Bayes information criteria, for preferring a particular model by incorporating a tradeoff between the goodness-of-fit and model complexity [@liddle_present_2006; @mukherjee_model_2006; @heavens_model_2007; @knuth_bayesian_2015]. The likelihood $\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)$ of a cosmological model $M$, subject to the parameters $\boldsymbol{x}$, in the light of the data set, which we compress into a data vector $\boldsymbol{D}$, is embedded into Bayes’ theorem [for a summary of applications of Bayes-statistics in cosmology, see @loredo_bayesian_2012].
This expresses the state of knowledge after carrying out an experiment, the posterior $p(\boldsymbol{x}|M,\boldsymbol{D})$, as proportional to the likelihood provided by the experiment times the prior distribution $\pi(\boldsymbol{x}|M)$, $$p(\boldsymbol{x}|M,\boldsymbol{D}) = \frac{\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)\pi(\boldsymbol{x}|M)}{p(\boldsymbol{D}|M)},$$ where the constant of proportionality is the inverse of the evidence $p(\boldsymbol{D}|M)$ for the model $M$, $$p(\boldsymbol{D}|M) = \int{\mathrm{d}}^n x\:\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M) \pi(\boldsymbol{x}|M).$$ Note that we compress the data and the parameters into vectors $\boldsymbol{D}$ and $\boldsymbol{x}$ of dimension $m$ and $n$, respectively. We write vector components as $x^\mu$ and dual vector components as $x_\mu$. In cosmology, one often works under the assumption of Gaussian distributions, where every result can be expressed in terms of the covariance matrix, if the Gauss-Markov-theorem is fulfilled. Models can be well constrained by data if their likelihoods are peaked and if their parameter covariances assume small numbers: In this case, the nonlinearities of the model, which give rise to non-Gaussian likelihoods, can be linearised.

In many studies, the focus is on the statistical errors of cosmological parameters as a way to understand whether future data can help to investigate new models for gravity, dark energy or inflationary structure formation. When differentiating between models, for instance between dark energy and a cosmological constant, one introduces an interpolating parameterisation and is content if that parameter has a sufficiently small error for distinguishing between the models. Additionally, evidence measures take the complexity of the model into account and prefer simpler models unless a more complex model explains the data significantly better [@handley_quantifying_2019]. As such, Bayesian evidence was employed for selecting the most likely model, irrespective of the specific parameter choice. Motivated by the Neyman-Pearson-lemma (which itself is related to a relative entropy), one compares competing models by constructing their logarithmic evidence ratio [@trotta_forecasting_2007] as one would do when comparing likelihoods, $$\Delta B = \ln\frac{p(\boldsymbol{D}|M_1)}{p(\boldsymbol{D}|M_2)}$$ such that positive values of $\Delta B$ prefer the model $M_1$ over $M_2$ and vice versa. Quantitatively, one uses the Jeffreys scale for preferring one model over another [@nesseris_is_2013]. Bayesian evidence, with the Akaike and Bayes information criteria as precursors, has been used for quantifying how well measurements can differentiate between competing models and for optimising experimental design [@mukherjee_model_2006].

Information entropies, on the other hand, quantify the amount of randomness in a distribution. The Shannon-entropy $S$ [@Shannon_original] is defined as $$S = -\int{\mathrm{d}}^n x\:p(\boldsymbol{x})\ln p(\boldsymbol{x}), \label{eqn_shannon}$$ and assumes small values for a peaked likelihood, for instance $S = \ln[2\pi\sigma^2\exp(1)]/2$ for a Gaussian distribution with variance $\sigma^2$. More general measures of entropy are R[é]{}nyi-entropies $S_\alpha$ [@Renyi_original; @RenyiEntGaussProc] that are parameterised by $\alpha>0$ and $\alpha\neq 1$, $$S_\alpha = -\frac{1}{\alpha-1}\ln\int{\mathrm{d}}^n x\:p(\boldsymbol{x})p^{\alpha-1}(\boldsymbol{x}),$$ where one recovers the Shannon-entropy in the limit $\alpha\rightarrow 1$ by application of de l’H[ô]{}pital’s rule.
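As a quick numerical illustration (a minimal sketch, not part of the analysis pipeline), the snippet below evaluates the Shannon- and Rényi-entropy of a univariate Gaussian directly from the definitions and compares them with the closed-form expressions quoted above.

```python
import numpy as np

sigma, alpha = 1.5, 0.5      # example width and Renyi order
x = np.linspace(-12 * sigma, 12 * sigma, 200001)
dx = x[1] - x[0]
p = np.exp(-0.5 * (x / sigma)**2) / np.sqrt(2.0 * np.pi * sigma**2)

# entropies from the definitions, by simple Riemann sums
S_num = -np.sum(p * np.log(p)) * dx
S_alpha_num = -np.log(np.sum(p**alpha) * dx) / (alpha - 1.0)

# closed-form results for a Gaussian
S_ana = 0.5 * np.log(2.0 * np.pi * sigma**2 * np.e)
S_alpha_ana = 0.5 * np.log(2.0 * np.pi * sigma**2 * alpha**(1.0 / (alpha - 1.0)))

print(f"Shannon entropy: numerical {S_num:.4f}, analytic {S_ana:.4f}")
print(f"Renyi entropy (alpha=1/2): numerical {S_alpha_num:.4f}, analytic {S_alpha_ana:.4f}")
```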
R[é]{}nyi-entropies likewise increase with the variance for positive values of $\alpha$, $S_\alpha = \ln(2\pi\sigma^2\alpha^\frac{1}{\alpha-1})/2$. This implies that entropies provide a way of quantifying how constraining data is, as they quantify the size of the allowed parameter space [@mehrabi_information_2019]. The subject of our paper is the application of information entropy to cosmology: Similar to @carron_probe_2011, we investigate the information content of cosmological probes by computing the entropy of their likelihood, and compute relative entropies if cosmological probes are combined [@grandis_information_2016]: In this way, we intend to provide an interpretation of cosmological likelihoods in terms of a quantity that is defined axiomatically, as done by Shannon, without any ambiguity. We will show that even more complex quantities such as biases between likelihoods or Bayesian evidences can be expressed as relative entropies, giving them again an axiomatically defined meaning and a natural scale for their magnitude. In fact, even the dark energy figure of merit, designed for quantifying the performance of cosmological probes in measuring deviations from the cosmological constant $\Lambda$, is an information entropy. While focusing on Gaussian distributions, where all calculations we show have analytical solutions, the concept of information entropy is perfectly applicable to asymmetric or even multimodal distributions, and provides natural generalisations to quantities that are intuitive for Gaussian distributions. This applies in particular to Bayesian evidences or evidence ratios, which can in fact be related to information entropy differences, which are directly interpretable without resorting to the rather arbitrarily defined Jeffreys-scale and are likewise quantified in units of nats.

Additionally, the Shannon-entropy singles out the Gaussian distribution as being extremal: Among all distributions with a fixed variance, the Gaussian distribution maximises the Shannon-entropy $S$, which is usually shown by functional variation with a boundary condition. We would like to illustrate this statement using a Gram-Charlier-parameterised distribution $p(x){\mathrm{d}}x$ with weak non-Gaussianities [@wallace_asymptotic_1958] described by the cumulants $\kappa_3$ and $\kappa_4$, both of which are much smaller than one, $$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right) \Bigg[1+\frac{\kappa_3}{3!\sigma^3}H_3\left(\frac{x}{\sigma}\right)+\frac{\kappa_4}{4!\sigma^4}H_4\left(\frac{x}{\sigma}\right) \Bigg],$$ with the Hermite-polynomials $H_n(x)$ of order $n$. Substituting this series into the definition (\[eqn\_shannon\]) and approximating $\ln(1+\epsilon)\simeq\epsilon$ for $|\epsilon|\ll 1$, one obtains at second order the result $$S = \frac{1}{2}\ln\left[2\pi\sigma^2\exp(1)\right] - \frac{1}{3!}\frac{\kappa_3^2}{\sigma^6} - \frac{1}{4!}\frac{\kappa_4^2}{\sigma^8}, \label{eqn_entropy_gram_charlier}$$ by using the orthogonality relation of the Hermite-polynomials, $$\int{\mathrm{d}}x\:\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right)\:H_m\left(\frac{x}{\sigma}\right)\:H_n\left(\frac{x}{\sigma}\right) = n!\delta_{mn}. \label{eqn_hermite}$$ Eqn. (\[eqn\_entropy\_gram\_charlier\]) shows that the entropy of the Gaussian distribution is always diminished by non-Gaussianities, because $\kappa_3^2$ and $\kappa_4^2$, being squares, are necessarily positive.
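A small numerical experiment (a sketch with arbitrarily chosen cumulants, evaluated on a finite grid over which the Gram-Charlier density stays positive) illustrates the statement: switching on a weak skewness $\kappa_3$ lowers the Shannon-entropy relative to the Gaussian value, with a decrease that is second order in the cumulant.

```python
import numpy as np

def shannon_entropy_gram_charlier(kappa3, sigma=1.0, xmax=4.0, npts=400001):
    """Shannon entropy of a Gram-Charlier density with weak skewness kappa3,
    integrated over a finite range on which the density stays positive."""
    x = np.linspace(-xmax * sigma, xmax * sigma, npts)
    dx = x[1] - x[0]
    gauss = np.exp(-0.5 * (x / sigma)**2) / np.sqrt(2.0 * np.pi * sigma**2)
    H3 = (x / sigma)**3 - 3.0 * (x / sigma)          # probabilists' Hermite polynomial
    p = gauss * (1.0 + kappa3 / (6.0 * sigma**3) * H3)
    return -np.sum(p * np.log(p)) * dx

S_gauss = shannon_entropy_gram_charlier(0.0)         # Gaussian reference on the same grid
d1 = S_gauss - shannon_entropy_gram_charlier(0.10)   # entropy decrease for kappa3 = 0.10
d2 = S_gauss - shannon_entropy_gram_charlier(0.05)   # entropy decrease for kappa3 = 0.05
print(f"decrease for kappa3=0.10: {d1:.2e} nat")
print(f"decrease for kappa3=0.05: {d2:.2e} nat")
print(f"ratio: {d1 / d2:.2f} (close to 4, i.e. quadratic in the cumulant)")
```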
Because of this result, we would like to point out that the information entropies that we compute are upper bounds, and that realistic non-Gaussian likelihoods would have lower values for their information entropies than their Gaussian counterparts. It is remarkable that the orthogonality relation (\[eqn\_hermite\]) cancels the influence of non-Gaussianities on $S$ to first order, which can be shown by substituting $1 = H_0(x)$ and $x^2 = H_2(x) + H_0(x)$. Sadly, there is no analogous result to eqn. (\[eqn\_hermite\]) for the R[é]{}nyi-entropy, but eqn. (\[eqn\_hermite\]) can be generalised in principle to hold for non-Gaussianities of arbitrary order $\kappa_n$. While this could serve as an illustration, it is by no means a stringent proof, as the Gram-Charlier-expansion is not necessarily positive for all choices of $\kappa_n$. A multivariate generalisation of eqn. (\[eqn\_entropy\_gram\_charlier\]) can serve as a way to estimate the Shannon-entropy from MCMC-samples of the likelihood without the need for a density estimate to compute $\ln p(\boldsymbol{x})$. Instead, one would estimate multivariate cumulants from the samples directly and correct the Gaussian result for the information entropy. It can be expected that a similar relationship exists for DALI-approximated likelihoods [@Sellentin_2014; @Sellentin_2015].

In our investigation, we will juxtapose two popular cosmologies, $\Lambda$CDM and $w$CDM, with a prior on spatial flatness, parameterised by $\Omega_m$, $\sigma_8$, $h$, $n_s$, $\Omega_b$ and, in the case of $w$CDM, a dark energy equation of state parameter $w$. The fiducial parameter choices are the [@Planck2018] values ($TT$, $TE$, $EE$, lowE + lensing): $\Omega_m = 0.3153$, $\sigma_8 = 0.8111$, $h = 0.6736$, $n_s = 0.9649$ and $\Omega_b = 0.0493$. The reionization redshift is chosen as $z_{re}= 11.357$ and the galaxy bias as $b=0.68$ [@ferraro_wise_2015]. The dark energy fluid is described by an equation of state parameter constant in time [@ChevallierPolarski2001; @Linder2003], where $w = -1$ recovers the case of a cosmological constant $\Lambda$, which is our fiducial for both cosmological models.

After summarising the key concepts of Bayesian statistics, the Fisher-matrix formalism and information entropy in Sect. \[sect\_theory\], we demonstrate in Sect. \[sect\_decrease\] the decrease in entropy through combining the cosmological probes outlined in Sect. \[sect\_probes\], as well as the correspondence to more conventional measures of error in Sect. \[sect\_statistics\]. We consider three topical cases of tensions in Sect. \[sect\_systematics\] and their corresponding loss in evidence and increase in information entropy in Sect. \[sect\_evidence\], before summarising our results in Sect. \[sect\_summary\]. In our investigation we assume the characteristics of Euclid for the large-scale structure probes and of Planck for the CMB and CMB-lensing.

statistics in cosmology {#sect_theory}
=======================

Approximating likelihoods with a multivariate Gaussian distribution is the basis of the Fisher-matrix formalism, because the covariance matrix can be computed from the averaged gradients of the logarithmic likelihood.
Fixing the fiducial model, one obtains for the Fisher-matrix components $F^\mu_{\ \nu}$ [@tegmark_karhunen-loeve_1997], $$F^\mu_{\ \nu} = -\left{\langle}\frac{\partial^2\ln\mathcal{L}}{\partial x_\mu\partial x^\nu}\right{\rangle},$$ yielding for a measurement of multipole moments $A_{\ell m}$ and $B_{\ell m}$ of Gaussian random fields $A(\theta,\varphi)$ and $B(\theta,\varphi)$ that are described by angular spectra $C_{AA}(\ell)$, $C_{BB}(\ell)$ and $C_{AB}(\ell)$ the expression $$\label{eq:fisher} F^\mu_{\ \nu} = \sum_\ell \frac{2\ell+1}{2}\mathrm{tr}\left(\frac{\partial}{\partial x_\mu}\ln \boldsymbol{C}_D\:\frac{\partial}{\partial x^\nu}\ln \boldsymbol{C}_D\right),$$ where the spectra are combined into a common data covariance $\boldsymbol{C}_D = \boldsymbol{C}_D(\ell)$. For a Gaussian distribution, quoting the logarithmic curvature $F^\mu_{\ \nu}$ of the likelihood surface in parameter space, the tensor of second moments $\boldsymbol{C} = {\langle}\boldsymbol{x}\otimes\boldsymbol{x}{\rangle}$, or confidence intervals is equivalent. As the Fisher-matrix corresponds to the inverse parameter covariance $\boldsymbol{C} = \boldsymbol{F}^{-1}$, one can write down a multivariate Gaussian distribution as $$p(\boldsymbol{x}) = \sqrt{\frac{\mathrm{det}(\boldsymbol{F})}{(2\pi)^n}}\exp\left(-\frac{1}{2}x^\mu F_{\mu\nu} x^\nu\right),$$ if one uses coordinates relative to the best-fit point. Note that $\boldsymbol{C}$ is a tensor constructed on the tangent bundle of the parameter space, unlike in Eq. (\[eq:fisher\]) where $\boldsymbol{C}_D$ is the covariance of the data. Furthermore, we will employ the sum convention so that repeated indices are summed over.

Measures of total uncertainty can be derived from the Fisher-matrix in a straightforward way as, for instance, the invariant trace $\mathrm{tr}(\boldsymbol{F})$, the Frobenius-norm $\mathrm{tr}(\boldsymbol{F}^2)$ or the determinant $\mathrm{det}(\boldsymbol{F})$ of which we will take the logarithm $\ln\mathrm{det}(\boldsymbol{F})$ to make the connection to information entropies clearer. Generalisations of the trace and the Frobenius-norm of the type $\mathrm{tr}(\boldsymbol{F}^p)$ with $p>2$ would be restricted in the values that they can assume by the H[ö]{}lder-inequality, $$\frac{1}{n} \mathrm{tr}(\boldsymbol{F}) \leq \left(\frac{1}{n}\mathrm{tr}(\boldsymbol{F}^p)\right)^{\frac{1}{p}} \label{eqn_hoelder}$$ for arbitrary powers $p$ of Fisher-matrices in $n$ dimensions, where the traces can be generalised to arbitrary real-valued powers $p$ by using $\mathrm{tr}(\boldsymbol{F}^p) = \mathrm{tr}\exp(p\ln(\boldsymbol{F}))$. On the other hand, the generalised inequality of the arithmetic and geometric mean implies $$\frac{1}{n} \mathrm{tr}(\boldsymbol{F}) \geq \mathrm{det}(\boldsymbol{F})^{\frac{1}{n}}, \label{eqn_arith_geo}$$ such that the information entropies are bounded by traces of the Fisher-matrix, as shown in the next paragraph. Specifically, while $\mathrm{tr}(\boldsymbol{F}) = \sum_\mu \sigma_\mu^{-2}$ is a measure of the total uncertainty of the likelihood, it does not differentiate between correlated and uncorrelated distributions, which is taken care of by $\mathrm{tr}(\boldsymbol{F}^2)$, as this expression contains information from the off-diagonal elements in addition to performing a different weighting of the errors.
While all trace-relations for arbitrary $p$ are measures of total error, only the determinant provides a geometric interpretation as the volume of parameter space: The dark energy figure of merit, for instance, is defined in terms of the volume of the $w_0$-$w_a$ subspace of parameter space bounded by the $1\sigma$-contour. Of all these measures, however, only $\mathrm{tr}(\boldsymbol{F})$ is additive for statistically independent measurements. Given the inequalities \[eqn\_hoelder\] and \[eqn\_arith\_geo\], we state all results in a scaled way, i.e. $(\mathrm{tr}(\boldsymbol{F}^p)/n)^{1/p}$ and $\mathrm{det}(\boldsymbol{F})^{1/n}$. The usage of these scaled traces is motivated by the fact that for a diagonal Fisher-matrix with identical entries $1/\sigma^2$ they all return the same value of $1/\sigma^2$ irrespective of $n$ or $p$. Lastly, the Fisher matrix can also be understood as a metric tensor in the context of information geometry by describing the parameter space as a Riemannian manifold, as studied in @amari_information_2016 and applied to a cosmological setting in @giesel2020information, yielding insights about the geometrical structure of parameter spaces.

Analytical expressions for the entropies $S$ and $S_\alpha$ can be derived for a multivariate Gaussian in terms of the determinant of the covariance matrix $\boldsymbol{C}$ as the inverse Fisher-matrix. Specifically, integration by substitution yields directly $$S = \frac{1}{2}\ln\left[(2\pi)^n \, \mathrm{det}(\boldsymbol{C}) \, \exp(n)\right],$$ for the Shannon-entropy and $$S_\alpha = \frac{1}{2}\ln\left[(2\pi)^n \, \mathrm{det}(\boldsymbol{C}) \, \alpha^\frac{n}{\alpha-1}\right],$$ for the R[é]{}nyi-entropy, such that the univariate case is recovered for $n=1$ and $\mathrm{det}(\boldsymbol{C}) = \sigma^2$. The two definitions are consistent as in the limit $\alpha\rightarrow 1$, the expression $\alpha^\frac{n}{\alpha-1}$ converges to $\exp(n)$. The difference between Shannon- and R[é]{}nyi-entropies for Gaussian distributions with identical covariances is given by an additive term, $$\Delta_n(\alpha) = S_\alpha - S = \frac{1}{2}\left(\ln\left(\alpha^\frac{n}{\alpha-1}\right) - n\right)$$ which is depicted in Fig. \[fig\_entropy\_scaling\], showing that in particular Bhattacharyya-entropies [@Bhattacharyya_original], where $\alpha=1/2 < 1$, will always be larger than Shannon-entropies. This trend becomes stronger with increasing number of random variables $n$. Typical numbers in cosmology with a $\Lambda$CDM- or $w$CDM-model with 7 or 8 parameters would then be $\Delta_7(1/2) \simeq 1.35$ and $\Delta_8(1/2) \simeq 1.54$.

![Difference $\Delta_n(\alpha)$ between R[é]{}nyi- and Shannon-entropies as a function of $\alpha$ for $n$-variate Gaussian distributions with identical covariances. The dashed vertical line corresponds to the Shannon-case, the solid vertical line to the Bhattacharyya-case with $\alpha=1/2$.[]{data-label="fig_entropy_scaling"}](./entropy_scaling.pdf)

Remarkably enough, both entropies are measures of the logarithmic volume of the parameter space bounded by the $1\sigma$-contour, implying that the dark energy figure of merit is in fact an inverse information entropy. Interestingly, the entropies are well defined, as $\mathrm{det}(\boldsymbol{C})$ is always strictly positive for the covariance matrix $C = {\langle}\boldsymbol{x}\otimes\boldsymbol{x}{\rangle}$ as a consequence of Gram’s inequality, while $\mathrm{det}(\boldsymbol{x}\otimes \boldsymbol{x})$ without averaging would be exactly zero.
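The closed-form expressions above are easy to check numerically. The sketch below (with an arbitrary, positive-definite example Fisher-matrix, purely for illustration and not a forecast for any survey) computes $S$, $S_\alpha$ and the offset $\Delta_n(\alpha)$, reproduces the values $\Delta_7(1/2)\simeq 1.35$ and $\Delta_8(1/2)\simeq 1.54$ quoted in the text, and verifies the trace/determinant inequality that bounds the entropies.

```python
import numpy as np

def gaussian_entropies(F, alpha=0.5):
    """Shannon- and Renyi-entropy of a multivariate Gaussian with Fisher-matrix F."""
    n = F.shape[0]
    C = np.linalg.inv(F)                      # parameter covariance
    _, logdetC = np.linalg.slogdet(C)
    S = 0.5 * (n * np.log(2.0 * np.pi) + logdetC + n)
    S_alpha = 0.5 * (n * np.log(2.0 * np.pi) + logdetC + n / (alpha - 1.0) * np.log(alpha))
    return S, S_alpha

# arbitrary positive-definite example Fisher-matrix in 7 dimensions
rng = np.random.default_rng(1)
A = rng.normal(size=(7, 7))
F = A @ A.T + 7 * np.eye(7)

S, S_half = gaussian_entropies(F, alpha=0.5)
print(f"S = {S:.3f} nat, S_1/2 = {S_half:.3f} nat, difference = {S_half - S:.3f} nat")

for n in (7, 8):
    delta = 0.5 * (n / (0.5 - 1.0) * np.log(0.5) - n)   # Delta_n(1/2)
    print(f"Delta_{n}(1/2) = {delta:.2f}")

# scaled trace- and determinant-based measures obey tr(F)/n >= det(F)^(1/n)
print("tr(F)/7 =", np.trace(F) / 7, ">= det(F)^(1/7) =", np.linalg.det(F) ** (1 / 7))
```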
By the choice of the natural logarithm, the unit of entropy is the nat. Clearly, the information entropies $S$ and $S_\alpha$ decrease with increasing $\ln\det(\boldsymbol{F})$, and one should expect similar relations with the measures $\mathrm{tr}(\boldsymbol{F}) = F^\mu_{\ \ \mu}$ and $\mathrm{tr}(\boldsymbol{F}^2) = F_{\mu\nu}F^{\mu\nu}$ too. As explained before, the H[ö]{}lder-inequality and the inequality of the geometric and arithmetic mean provide bounds on the information entropy $S$ for both the Shannon- and R[é]{}nyi-definition in terms of trace invariants of the Fisher-matrix.

Additivity in the case of statistical independence is a defining property of information entropies that makes them useful for describing the information content. They share this property with Fisher-matrices for the case of statistically independent probes, i.e. $\boldsymbol{F} = \boldsymbol{F}^{(1)} + \boldsymbol{F}^{(2)}$ implies $S_\alpha = S^{(1)}_\alpha + S^{(2)}_\alpha$ as a consequence of $p(\boldsymbol{x}) = p^{(1)}(\boldsymbol{x}) \: p^{(2)}(\boldsymbol{x})$. For cases with statistical non-independence, additivity of the entropies does not hold and therefore one defines relative entropies between two distributions, also referred to as divergences. For the Shannon-entropy, there is the Kullback-Leibler-divergence $\Delta S$ [@kullback1951], $$\Delta S = D_\mathrm{KL} = \int{\mathrm{d}}^n x\:p(\boldsymbol{x})\ln\frac{p(\boldsymbol{x})}{q(\boldsymbol{x})},$$ where @baez_bayesian_2014 provide a link to Bayesian statistics, and a more general class of $\alpha$-divergences $\Delta S_\alpha$ for R[é]{}nyi-entropies [@van_erven_renyi_2014], $$\Delta S_\alpha = \frac{1}{\alpha-1}\ln\int{\mathrm{d}}^n x\:p(\boldsymbol{x})\left(\frac{p(\boldsymbol{x})}{q(\boldsymbol{x})}\right)^{\alpha-1}$$ between two multivariate distributions $p(\boldsymbol{x})$ and $q(\boldsymbol{x})$. Likewise, relative entropies would be invariant under transformation of the random variables, whereas absolute entropies would not. In fact, they do depend on the choice of parameterisation and even on the choice of units for the parameters, which in particular is less relevant in cosmology as almost all parameters are defined in a dimensionless way, with $H_0$ or $\chi_H = c/H_0$ being notable exceptions. Indeed, under an invertible reparameterisation with a nonzero Jacobian determinant $\det(\partial y^\nu/\partial x_\mu)$, both the Shannon-entropy $S$ and, surprisingly, the R[é]{}nyi-entropy $S_\alpha$ too acquire the identical additive term $\ln\mathrm{det}(\partial y^\nu/\partial x_\mu)$, if the transformation is affine, $y_\nu = A_{\hphantom{\mu}\nu}^\mu x_\mu + b_\nu$ with a constant $A_{\hphantom{\mu}\nu}^\mu$ and $b_\nu$ corresponding to a change in units and a shift of the mean.

In our application, we would like to compute the entropy difference between the posterior $p(\boldsymbol{x}) = p(\boldsymbol{x}|M,\boldsymbol{D})\propto \mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)\pi(\boldsymbol{x}|M)$, which includes the information provided by the measurement, and the prior $q(\boldsymbol{x}) = \pi(\boldsymbol{x}|M)$, which reflects the state of knowledge before the data has been taken: These distributions could either be of specific shape if a previous measurement has already constrained the parameters in question, originate from theory, or be chosen in a non-committal way [@handley_maximum_2018].
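For Gaussian prior and posterior these divergences reduce to simple closed forms; the sketch below (with arbitrary one-dimensional example numbers, not the parameter constraints used later in this paper) evaluates the Kullback-Leibler divergence between a narrow posterior and a broad, non-committal prior, comparing the standard Gaussian closed form against a direct quadrature of the definition.

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Closed-form Kullback-Leibler divergence D(p||q) for univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q)**2) / var_q - 1.0)

# toy example: narrow posterior p relative to a broad, non-committal prior q
mu_p, sig_p = 0.31, 0.01
mu_q, sig_q = 0.30, 0.10

x = np.linspace(mu_p - 15 * sig_p, mu_p + 15 * sig_p, 300001)  # p is negligible outside
dx = x[1] - x[0]
p = np.exp(-0.5 * ((x - mu_p) / sig_p)**2) / np.sqrt(2 * np.pi * sig_p**2)
q = np.exp(-0.5 * ((x - mu_q) / sig_q)**2) / np.sqrt(2 * np.pi * sig_q**2)

D_numeric = np.sum(p * np.log(p / q)) * dx
D_closed = kl_gaussian(mu_p, sig_p**2, mu_q, sig_q**2)
print(f"KL(posterior||prior): quadrature {D_numeric:.4f} nat, closed form {D_closed:.4f} nat")
```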
We would like to point out that the entropy divergences $\Delta S$ and $\Delta S_\alpha$ are not symmetric in interchanging prior and posterior, and that the definition of relative entropy does not admit transitivity when combining multiple independent data sets $\boldsymbol{D}_1,\ldots,\boldsymbol{D}_n$, i.e. in cases where $p(\boldsymbol{x}|M,\boldsymbol{D}) \propto \mathcal{L}(\boldsymbol{D}_1|\boldsymbol{x},M)\cdots\mathcal{L}(\boldsymbol{D}_n|\boldsymbol{x},M)\pi(\boldsymbol{x}|M)$. In fact, if $\Delta S(1)$ is the entropy divergence between the posterior $\mathcal{L}(\boldsymbol{D}_1|\boldsymbol{x},M)\pi(\boldsymbol{x}|M) = \mathcal{L}_1\pi$ and the prior $\pi(\boldsymbol{x}|M)$, $$\Delta S(1) = \int{\mathrm{d}}^n x\: \mathcal{L}_1\pi\ln\frac{\mathcal{L}_1\pi}{\pi} = \int{\mathrm{d}}^n x\:\mathcal{L}_1\pi\ln\mathcal{L}_1,$$ where we suppress the dependence on the parameters, $\boldsymbol{x}$, for notational compactness, and $\Delta S(2)$ is the corresponding difference between $\mathcal{L}_1\mathcal{L}_2\pi$ and $\pi$, $$\Delta S(2) = \int{\mathrm{d}}^n x\:\mathcal{L}_1\mathcal{L}_2\pi\ln\frac{\mathcal{L}_1\mathcal{L}_2\pi}{\pi} = \int{\mathrm{d}}^n x\:\mathcal{L}_1\mathcal{L}_2\pi\ln\mathcal{L}_1\mathcal{L}_2$$ then one can define the entropy decrease $\Delta S(12)$ gained by including the data set $\boldsymbol{D}_2$ and adding the likelihood $\mathcal{L}_2$ to the state of knowledge $\mathcal{L}_1\pi$ obtained from the data set $\boldsymbol{D}_1$, $$\Delta S(12) = \int{\mathrm{d}}^n x\:\mathcal{L}_1\mathcal{L}_2\pi\ln\frac{\mathcal{L}_1\mathcal{L}_2\pi}{\mathcal{L}_1\pi} = \int{\mathrm{d}}^n x\:\mathcal{L}_1\mathcal{L}_2\pi\ln\mathcal{L}_2.$$ With these definitions, one sees that $\Delta S(2) \neq \Delta S(1) + \Delta S(12)$. Because of this and due to statistical non-independence of cosmological probes, we compute all entropies from the effective Fisher-matrix combining all probes into a single Gaussian likelihood. The same issue appears in the case of the R[é]{}nyi-divergences $\Delta S_\alpha$.

Using the inverse identification, i.e. setting $p(\boldsymbol{x}) = \pi(\boldsymbol{x}|M)$ and $q(\boldsymbol{x}) \propto \mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)\pi(\boldsymbol{x}|M)$, and thus computing the entropy divergence of the prior relative to the posterior, yields an interesting result. Now, the entropy divergence quantifies by how much the entropy will decrease by acquiring new data, i.e. by how much the entropy of the posterior will differ from that of the prior.
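The lack of additivity is easy to see in a toy example; the sketch below (one-dimensional Gaussians with arbitrary widths, all centred on the same value so that there is no tension, and with the posteriors normalised explicitly) evaluates $\Delta S(1)$, $\Delta S(2)$ and $\Delta S(12)$ with the closed-form Gaussian Kullback-Leibler divergence and shows that $\Delta S(1) + \Delta S(12) \neq \Delta S(2)$.

```python
import numpy as np

def kl_gauss(var_p, var_q):
    """KL divergence between two zero-mean univariate Gaussians with variances var_p, var_q."""
    return 0.5 * (var_p / var_q - 1.0 + np.log(var_q / var_p))

# prior and two statistically independent likelihoods, all centred on the same value
var_prior = 10.0**2
var_L1, var_L2 = 1.0**2, 1.0**2

# posterior variances from adding precisions (inverse variances)
var_post1  = 1.0 / (1.0 / var_prior + 1.0 / var_L1)
var_post12 = 1.0 / (1.0 / var_prior + 1.0 / var_L1 + 1.0 / var_L2)

dS1  = kl_gauss(var_post1,  var_prior)   # posterior from D1 relative to the prior
dS2  = kl_gauss(var_post12, var_prior)   # posterior from D1 and D2 relative to the prior
dS12 = kl_gauss(var_post12, var_post1)   # gain from adding D2 to the D1-posterior

print(f"dS(1) + dS(12) = {dS1 + dS12:.3f} nat")
print(f"dS(2)          = {dS2:.3f} nat   (not equal)")
```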
At first sight, one might think that entropies are then additive for statistically independent likelihoods, $\mathcal{L} = \prod_i\mathcal{L}_i$, $$\Delta S = \int{\mathrm{d}}^n x\:\pi\ln\frac{\pi}{\mathcal{L}\pi} = -\int{\mathrm{d}}^n x\:\pi\ln\mathcal{L} = -\sum_i\int{\mathrm{d}}^n x\:\pi\ln\mathcal{L}_i,$$ but the evidence-term $\int{\mathrm{d}}^n x\:\mathcal{L}(\boldsymbol{x})\pi(\boldsymbol{x})$ needed for a correctly normalised posterior in fact breaks additivity: with the renormalised posterior $$q(\boldsymbol{x}) = \frac{\mathcal{L}(\boldsymbol{x})\pi(\boldsymbol{x})}{\int{\mathrm{d}}^nx\:\mathcal{L}(\boldsymbol{x})\pi(\boldsymbol{x})}$$ one obtains $$\Delta S(1) = -\int{\mathrm{d}}^nx\:\pi\ln\mathcal{L}_1 + \ln\int{\mathrm{d}}^nx\:\mathcal{L}_1\pi,$$ and this evidence term is not contained in the corresponding expression for two data sets, $$\Delta S(2) = -\int{\mathrm{d}}^nx\:\pi\ln\mathcal{L}_1 - \int{\mathrm{d}}^nx\:\pi\ln\mathcal{L}_2 + \ln\int{\mathrm{d}}^nx\:\mathcal{L}_1\mathcal{L}_2\pi.$$ If there are no tensions between the likelihoods and if they are of Gaussian shape, one can find analytic relations for the relative Shannon-entropy $\Delta S$, $$\Delta S = \frac{1}{2}\left[\ln\frac{\mathrm{det}(\boldsymbol{F})}{\mathrm{det}(\boldsymbol{G})} - n + F^{-1}_{\mu\nu}G^{\mu\nu}\right] \label{eqn_kl_divergence_nobias}$$ now expressed in terms of the Fisher-matrices $F_{\mu\nu}$ and $G_{\mu\nu}$ of the posterior and the prior, respectively, as well as for the relative Rényi-entropy $\Delta S_\alpha$, $$\Delta S_\alpha = \frac{1}{2}\frac{1}{\alpha-1}\ln\left[\frac{\mathrm{det}^{\alpha}(\boldsymbol{F})}{\mathrm{det}^{\alpha-1}(\boldsymbol{G})\:\mathrm{det}(\boldsymbol{A})}\right].$$ Both relationships yield $\Delta S = \Delta S_\alpha = 0$ if $\boldsymbol{F}=\boldsymbol{G}$. It is quite illustrative to substitute $\ln\mathrm{det}(\boldsymbol{F}) = \mathrm{tr}\ln(\boldsymbol{F})$, yielding $$\Delta S = \frac{1}{2} \, \Bigg[\left(\ln(F)^\mu_\mu - F^{-1}_{\mu\nu}F^{\mu\nu}\right) - \left(\ln(G)^\mu_\mu - F^{-1}_{\mu\nu}G^{\mu\nu}\right) \Bigg]$$ such that $\Delta S$ becomes ${\langle}\Delta\chi^2{\rangle}/2$ for Gaussian likelihoods as $\mathcal{L}\propto\exp(-\chi^2/2)$, where we have substituted $F_{\mu\nu}^{-1}F^{\mu\nu} = n$ in order to bring the expression into a symmetric form. The analogous relation for the relative Rényi-entropy $\Delta S_\alpha$ is $$\Delta S_\alpha = \frac{1}{2}\frac{1}{\alpha-1}\left[\alpha\ln(F)^\mu_\mu + (1-\alpha)\ln(G)^\mu_\mu - \ln(A)^\mu_\mu\right] \label{eqn_relative_renyi}$$ with $$A_{\mu\nu} = \alpha F_{\mu\nu} +(1-\alpha) G_{\mu\nu},$$ where one recovers the convexity condition for the matrix-valued logarithm. Again, application of de l'Hôpital's rule for evaluating the limit $\alpha\rightarrow 1$ recovers $\Delta S$ from $\Delta S_\alpha$. It is not straightforward to find general interpretations of eqn. (\[eqn\_relative\_renyi\]) for arbitrary $\alpha$. In the Shannon case, one finds for $\Delta S$ the logarithm of the ratio between the volumes allowed by the two distributions, and the asymmetry of the relative entropy is ensured by the fact that $F^{-1}_{\mu\nu}G^{\mu\nu} \neq F^{\mu\nu}G^{-1}_{\mu\nu}$. One finds a symmetric expression for the Rényi-entropy $\Delta S_\alpha$ if $\alpha=1/2$, the Bhattacharyya-entropy, in accordance with its definition in this particular case, $$\Delta S_\alpha = -2\ln\int{\mathrm{d}}^n x\:\sqrt{p(\boldsymbol{x})q(\boldsymbol{x})},$$ which is manifestly symmetric, with equal prefactors for $F_{\mu\nu}$ and $G_{\mu\nu}$ in $A_{\mu\nu}=(F_{\mu\nu}+G_{\mu\nu})/2$. We will come back to this in Sect. \[sect\_evidence\], when discussing the relationship between Bayes-evidence and information entropy.
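For Gaussian distributions, the relations above reduce the computation of $\Delta S$ and $\Delta S_\alpha$ to determinant and trace operations on the Fisher-matrices; the following is a minimal sketch with assumed example matrices, which also illustrates the de l'Hôpital limit $\alpha\rightarrow 1$.

```python
# Gaussian closed forms for the relative entropies, taking the Fisher-matrices
# of posterior (F) and prior (G) as inputs; the example matrices are assumed.
import numpy as np

def delta_s(F, G):
    n = F.shape[0]
    _, logdetF = np.linalg.slogdet(F)
    _, logdetG = np.linalg.slogdet(G)
    return 0.5 * (logdetF - logdetG - n + np.trace(np.linalg.solve(F, G)))

def delta_s_alpha(F, G, alpha):
    A = alpha * F + (1.0 - alpha) * G
    _, logdetF = np.linalg.slogdet(F)
    _, logdetG = np.linalg.slogdet(G)
    _, logdetA = np.linalg.slogdet(A)
    return 0.5 / (alpha - 1.0) * (alpha * logdetF
                                  - (alpha - 1.0) * logdetG - logdetA)

G = np.diag([1.0, 2.0, 4.0])                   # prior Fisher-matrix (assumed)
F = G + np.array([[3.0, 0.5, 0.0],
                  [0.5, 2.0, 0.3],
                  [0.0, 0.3, 1.0]])            # posterior Fisher-matrix (assumed)

print(delta_s(F, G))
print(delta_s_alpha(F, G, 0.5))                # Bhattacharyya case
print(delta_s_alpha(F, G, 1.0 - 1e-6))         # alpha -> 1 approaches delta_s
```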
Large scale structure probes {#sect_probes} ============================ As discussed in the previous section, we will assume the data to be given as a collection of spherical harmonic modes. Under the assumption of Gaussian fields, their power spectra entirely determine the statistical properties. In this section, we will briefly describe the probes considered here and how the corresponding spectra are evaluated. For more details, we refer to @2019RobertCode, which demonstrates the construction of Fisher-matrices $F_{\mu\nu}$ from the cosmological probes including all non-vanishing cross-correlations that would arise [@kitching_3d_2014; @nicola_integrated_2016; @merkel_parameter_2017]. Here, we emphasise that cross-correlations have a dual influence on the inference process: on the one hand, they make the data statistically dependent, which decreases the constraining power; on the other hand, they introduce unique handles on investigating structure formation, for instance through the sensitivity of the integrated Sachs-Wolfe effect in the CMB-LSS-correlations to dark energy. We approximate the covariance through a Gaussian with additional power on small scales due to the modelling of nonlinear structure formation [@hilbert_cosmic_2011; @kayo_information_2013; @krause_cosmolike_2016]. Also, due to the assumption of a true fiducial model, we do not need to worry about covariance matrix variations [@tao_random_2012; @paz_improving_2015; @reischke_variations_2016]. By the Gaussian assumption, there are no complications arising in relation to covariance matrix estimation [@taylor_estimating_2014; @sellentin_parameter_2016; @sellentin_quantifying_2016; @sellentin_insufficiency_2017]. Cosmic Microwave Background --------------------------- The spectra of cleaned, full-sky CMB maps are given by $$\langle a^{P*}_{\ell m} a^{P'}_{\ell' m'}\rangle \equiv \hat{C}^{PP'}(\ell)= \left(C^{PP'}(\ell)+ N^P(\ell) \right)\delta_{\ell\ell'}\delta_{mm'}\;,$$ where $P=T,E,B$ stands for the temperature or the two polarisation modes, respectively, while $C^{TB}(\ell) = C^{EB}(\ell) = 0$. The noise covariance is given by [@knox_determination_1995] $$N^{P}(\ell)\equiv \langle n^{P*}_{\ell m}n^{P'}_{\ell m}\rangle = \theta^2_\mathrm{beam}\sigma^2_P\exp\left(\ell(\ell+1)\frac{\theta_\mathrm{beam}^2}{8\mathrm{ln}2}\right)\delta_{PP'}\,,$$ with the noise variance $\sigma^2_P$ and a Gaussian beam of width $\theta_\mathrm{beam}$. Stage IV CMB experiments [e.g. @thornton_atacama_2016] will have a very small instrumental noise allowing for measurements up to $\ell \sim 5000$, especially for the polarisation maps. The spectra of the different components are calculated using the `hi-CLASS` code [@hiCLASS]. Large scale structure --------------------- The modes of any large scale structure probe can be calculated, to first order, as a weighted line-of-sight integral of the modes of the density field $$A_{\ell m} = \int \mathrm{d}\chi\; W_A(\chi)\delta_{\ell m}(\chi)\;,$$ where $\chi$ is the comoving distance and $W_A(\chi)$ a suitable weighting function. The corresponding spectra involve integrations over Bessel functions due to the spherical basis.
However, in the flat sky and Limber approximation, the calculation is simplified greatly and any angular power spectrum is given by [@1954ApJ...119..655L] $$C_{AB}(\ell) = \int\frac{\mathrm{d}\chi}{\chi^2}W_A(\chi)W_B(\chi) P_{\delta\delta} \left(\frac{\ell + 0.5}{\chi},\chi\right)\;.$$ Note that the comoving wave vector of a mode $k$ is related to the multipole $\ell$ via $k = (\ell + 0.5)/\chi$ in the Limber projection. We will continue by listing the weight functions of all probes used: 1. Cosmic shear [for reviews, we refer to @bartelmann_weak_2001; @hoekstra_weak_2008]: $$W(\chi) = \frac{3\Omega_\mathrm{m}\chi_H^2}{2a\chi}\int _{\mathrm{min}(\chi,\chi_i)}^{\chi_{i+1}}\mathrm{d}\chi'p(\chi')\frac{\mathrm{d}z}{\mathrm{d}\chi'}\left(1-\frac{\chi}{\chi'}\right)\;,$$ with the Hubble radius $\chi_H=c/H$, $i$ the tomographic bin index and the Jacobi determinant $\mathrm{d}z/\mathrm{d}\chi' = H(\chi')/c$ due to the transformation of the redshift distribution $p(z)\mathrm{d}z$ of background galaxies in redshift $z$, which is given by [@EuclidStudyReport] $$p(z) \,\mathrm{d}z \propto z^2\exp\left[-\left(\frac{z}{z_0}\right)^\beta\right]\;.$$ Typical parameters for stage IV experiments are $z_0\approx 1$ and $\beta = 3/2$. 2. Galaxy clustering [e.g. @baumgart_fourier_1991; @feldman_power-spectrum_1994; @heavens_spherical_1995] $$W\left(k,\chi\right) = \frac{H(\chi)}{c}\,b(k,\chi)\,p(\chi) \ \mathrm{if} \ \chi\in [\chi_i,\chi_{i+1})\;,$$ where $b$ is the galaxy bias [as summarised in @desjacques_large-scale_2018] for which we assume [@ferraro_wise_2015]: $$b(\chi) = b_0\,[1+z(\chi)]\;,$$ with a free positive parameter $b_0$. 3. Lensing of the CMB [e.g. @hirata_reconstruction_2003; @lewis_weak_2006]: $$W(\chi) = \frac{\chi_* - \chi}{\chi_*\chi}\frac{H(\chi)}{ca}\;,$$ with the comoving distance to the last scattering surface $\chi_*$. 4. Integrated Sachs-Wolfe effect [[iSW, @sachs_perturbations_1967]]{}: $$W(k,a) = \frac{3}{2\chi_H^3} a^2 E(a) \, F'(k,a)\;,$$ where the prime denotes a derivative with respect to $a$ and $$F(k,a) = 2\frac{D_+(k,a)}{a}\;,$$ which is measured in cross-correlation with galaxy clustering and weak lensing. The noise covariance of cosmic shear and galaxy clustering is given by $$N_\mathrm{LSS} (\ell) = \sigma^2\frac{n_\mathrm{tomo}}{\bar{n}_\mathrm{gal}}\;\delta_{\ell\ell^\prime},$$ with $\sigma = 0.3$ and $\sigma = 1$ for lensing and galaxy clustering, respectively, describing the intrinsic ellipticity of galaxies and the Poissonian fluctuation of galaxy numbers in each bin, $\bar{n}_\mathrm{gal}/n_\mathrm{tomo}$. It should be noted that the tomographic bins are chosen such that the same amount of galaxies, i.e. $\bar{n}_\mathrm{gal}/n_\mathrm{tomo}$, lie in each bin. For CMB-lensing, we assume the noise to be given by the quadratic estimator described in @hu_mass_2002 [@okamoto_cosmic_2003] using all five non-vanishing estimators involving $T$, $E$ and $B$. In our analysis, we combine the currently most powerful cosmological probes into a joint likelihood function. Specifically, we start out with spectra of the temperature and polarisation anisotropies in the CMB (labeled as CMB primary), and successively add CMB-lensing, tomographic galaxy clustering (GC) and tomographic weak gravitational shear (WL), while taking account of all possible cross-correlations. As the reference cosmology, we use the @Planck2018-result. The Fisher-matrices used in this work were computed with the code of [@2019RobertIA] where all cross-correlations are taken into account. 
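For orientation, the Limber projection introduced at the beginning of this section can be sketched numerically with a toy weight function and a toy matter power spectrum; in an actual analysis $P_{\delta\delta}$ would be taken from a Boltzmann code such as `hi-CLASS` and the weight functions from the list above. All numbers in the sketch below are illustrative assumptions.

```python
# Minimal sketch of the Limber projection; the power spectrum and the weight
# functions are smooth toy models chosen only for illustration.
import numpy as np
from scipy.integrate import quad

def p_dd(k, chi):
    # toy stand-in for the (nonlinear) matter power spectrum (assumption)
    return k / (1.0 + (k / 0.02)**3)

def w_gauss(chi, chi_mean, width):
    # toy normalised weight function (assumption)
    return np.exp(-0.5 * ((chi - chi_mean) / width)**2) \
        / (np.sqrt(2.0 * np.pi) * width)

def c_limber(ell, w_a, w_b, chi_max=4000.0):
    integrand = lambda chi: w_a(chi) * w_b(chi) / chi**2 \
        * p_dd((ell + 0.5) / chi, chi)
    return quad(integrand, 1.0, chi_max, limit=200)[0]

w1 = lambda chi: w_gauss(chi, 1500.0, 300.0)
w2 = lambda chi: w_gauss(chi, 2000.0, 300.0)

for ell in (10, 100, 1000):
    print(ell, c_limber(ell, w1, w1), c_limber(ell, w1, w2))
```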
Applying this approach directly to currently available data would not be fully correct, since the cross-correlations between probes with overlapping survey areas, as is the case for some galaxy clustering and weak lensing surveys, are not available. We therefore use these specific Fisher-matrices as a proof of concept, that is, to assess the relations between Bayesian statistics and information-theoretical measures. entropy decrease in probe combination {#sect_decrease} ===================================== In this section we would like to see how the statistical uncertainty in a $\Lambda$CDM- or $w$CDM-cosmology is reduced by combining cosmological probes. Starting from constraints from the temperature and polarisation anisotropies of the cosmic microwave background, we successively add gravitational lensing of the CMB, galaxy clustering and weak gravitational lensing, i.e. the large-scale structure probes ordered by decreasing redshift. In doing so, we consider all nonzero cross-correlations in the data covariance, most notably the integrated Sachs-Wolfe effect between the CMB-temperature and any low-redshift tracer of the large-scale structure, and the nonzero cross-correlations between the galaxy density and weak lensing. At some point, the distributions will become very narrow, such that their entropies, irrespective of the Shannon- or Rényi-definition, will become negative. One can explain this straightforwardly by considering a one-dimensional Gaussian distribution with variance $\sigma^2$, where the relevant term in both entropy definitions is $\ln(\sigma^2)$, which tends towards $-\infty$ as $\sigma^2\rightarrow 0$. The $\delta_D$-distribution with perfect knowledge of the parameters has infinite negative entropy (if one considers $\delta_D(x)$ as a distribution in the probabilistic sense), and not zero, as a consequence of the continuum limit. First we quantify the absolute entropies $S$ and $S_\alpha$ for the four cosmological data sets separately and put them into relation with other measures of total error that can be directly derived from the Fisher-matrix, such as $\mathrm{tr}(\boldsymbol{F})$ and $\mathrm{tr}(\boldsymbol{F}^2)$, where the inequality $$\ln\mathrm{det}(\boldsymbol{F}) = \mathrm{tr}\ln \boldsymbol{F} \geq \ln\mathrm{tr}(\boldsymbol{F})$$ is obeyed by all (symmetric and positive definite) Fisher-matrices considered here. As such, the entropies are in fact not only scaling with Fisher-invariants but are bounded by them as well, keeping in mind that $2S = n(\ln(2\pi)+1) - \ln\mathrm{det}(\boldsymbol{F})$. Specifically, absolute Shannon-entropies $S$ and Rényi-entropies $S_\alpha$ for $\alpha=1/2$ are listed in Table \[table\_absolute\_lambdacdm\] for a $\Lambda$CDM cosmology and in Table \[table\_absolute\_wcdm\] for a $w$CDM-cosmology. Clearly, for both cosmological models, the cosmic microwave background is the primary source of information, followed by galaxy clustering and weak lensing, with CMB-lensing adding the smallest amount of information. Comparing the two cosmological models, the entropies in $\Lambda$CDM are smaller, reflecting the reduced parameter space in comparison to $w$CDM, leading to tighter constraints, smaller entries in the parameter covariance matrix and, in consequence, smaller information entropies. For Gaussian distributions, the Shannon- and Rényi-entropies differ only by a fixed additive offset, for which we have chosen to compute the case $\alpha = 1/2$.
  probe               Shannon-entropy $S$   Bhattacharyya-entropy $S_\alpha$
  ------------------- --------------------- ----------------------------------
  CMB                 -28.50                -27.53
  CMB-lensing         -8.58                 -7.57
  galaxy clustering   -18.90                -18.02
  weak lensing        -16.51                -15.55

  : Absolute Shannon- and Bhattacharyya-entropies $S$ and $S_\alpha$, $\alpha=1/2$, in units of nats, for the likelihood of a $\Lambda$CDM-model, computed from the Fisher-matrices.[]{data-label="table_absolute_lambdacdm"}

  probe               Shannon-entropy $S$   Bhattacharyya-entropy $S_\alpha$
  ------------------- --------------------- ----------------------------------
  CMB                 -31.04                -30.08
  CMB-lensing         -8.39                 -7.43
  galaxy clustering   -22.19                -21.22
  weak lensing        -18.51                -17.54

  : Absolute Shannon- and Bhattacharyya-entropies $S$ and $S_\alpha$, $\alpha=1/2$, in units of nats, characterising the likelihood of a $w$CDM-model where $w$ is a constant allowed to be different from -1.[]{data-label="table_absolute_wcdm"}

In contrast to absolute entropies, relative entropies $\Delta S$ and $\Delta S_\alpha$ are invariant under transformations of the random variable, so in particular the choice of units does not matter. In cosmology, however, there is the particular situation that most of the cosmological parameters are defined in a dimensionless way, such that it is sensible to compare absolute entropies directly. We give in Fig. \[fig\_absolute\_entropy\_lcdm\] the total entropy of all cosmological probes individually, and show their scaling with $\mathrm{tr}(\boldsymbol{F})/n$, $(\mathrm{tr}(\boldsymbol{F}^p)/n)^{1/p}$ for $p=2$, and $\mathrm{det}(\boldsymbol{F})^{1/n}$. As expected, the information entropies do indeed scale with trace-invariants of the Fisher-matrix. Clearly, the primary CMB has the highest information content for a $\Lambda$CDM-model, followed by galaxy clustering, weak lensing and CMB-lensing, in that particular order. In addition, the inequalities \[eqn\_hoelder\] and \[eqn\_arith\_geo\] are clearly fulfilled. Similarly, Fig. \[fig\_absolute\_entropy\_wcdm\] shows for the same cosmological probes their respective information content for a $w$CDM cosmology. Strong degeneracies between the parameters can give rise to small values for the determinant and therefore large values for $S$.
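For Gaussian distributions, the offset between the two entropy definitions amounts to $n(\ln 2 - 1/2)$, i.e. about $0.97$ nats for $n=5$, independently of the Fisher-matrix; the following minimal sketch, with an arbitrarily assumed Fisher-matrix, makes this explicit.

```python
# The Shannon- and Bhattacharyya-entropies of a Gaussian differ by the constant
# n*(ln 2 - 1/2); the example Fisher-matrix below is an arbitrary assumption.
import numpy as np

def shannon_entropy(F):
    n = F.shape[0]
    return 0.5 * (n * (np.log(2.0 * np.pi) + 1.0) - np.linalg.slogdet(F)[1])

def bhattacharyya_entropy(F):
    # Renyi entropy of a Gaussian for alpha = 1/2
    n = F.shape[0]
    return 0.5 * (n * np.log(2.0 * np.pi) - np.linalg.slogdet(F)[1]) \
        + n * np.log(2.0)

F = np.diag([2.0, 10.0, 50.0, 1.0, 7.0])     # assumed 5x5 Fisher-matrix
n = F.shape[0]
print(bhattacharyya_entropy(F) - shannon_entropy(F))   # equals n*(ln 2 - 1/2)
print(n * (np.log(2.0) - 0.5))                          # ~0.97 nats for n = 5
```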
![Absolute Shannon-entropy $S$ (large symbols) and Bhattacharyya-entropy $S_\alpha$, $\alpha=1/2$ (small symbols) in units of nats for the likelihood of a $\Lambda$CDM-cosmology, constrained through primary CMB-fluctuations, CMB-lensing, galaxy clustering and weak lensing individually, plotted against $\mathrm{tr}(\boldsymbol{F})/n$, $(\mathrm{tr}(\boldsymbol{F}^p)/n)^{1/p}$, $p=2$, and $\mathrm{det}(\boldsymbol{F})^{1/n}$, with $n=5$.[]{data-label="fig_absolute_entropy_lcdm"}](./absolute_entropy_lcdm.pdf) ![Absolute Shannon-entropy $S$ (large symbols) and Bhattacharyya-entropy $S_\alpha$, $\alpha=1/2$ (small symbols) in units of nats for the likelihood of a $w$CDM-cosmology, constrained through primary CMB-fluctuations, CMB-lensing, galaxy clustering and weak lensing individually, plotted against $\mathrm{tr}(\boldsymbol{F})/n$, $(\mathrm{tr}(\boldsymbol{F}^p)/n)^{1/p}$, $p=2$, and $\mathrm{det}(\boldsymbol{F})^{1/n}$, with $n=6$.[]{data-label="fig_absolute_entropy_wcdm"}](./absolute_entropy_wcdm.pdf) information entropies and measures of statistical error {#sect_statistics} ======================================================= Information entropies and invariants of the Fisher-matrix are both measures of the total statistical uncertainty, and as such one should expect a relation between $S$ (or generally $S_\alpha$) and $\mathrm{tr}(\boldsymbol{F})$, $\mathrm{tr}(\boldsymbol{F}^2)$ and $\mathrm{det}(\boldsymbol{F})$ at every stage of combining cosmological probes. While it is clear that in the case of an uncorrelated multivariate Gaussian distribution with covariance $C_{\mu\nu}\propto\delta_{\mu\nu}$ the entropies of the individual distributions add, $S = \sum_\mu S_\mu$, the same does not hold if correlations are present. In fact, the total entropy is then bounded by the sums of entropies derived from the conditional and the marginal variances, respectively. For both Shannon- and Rényi-entropies, the conditional error results from the inverse of the corresponding entry of the Fisher-matrix, $\sigma^2_{\mu,c} = (F_{\mu\mu})^{-1}$, such that with $S^{(c)}_\mu\propto\ln\sigma_{\mu,c}^2$ one obtains (ignoring non-relevant prefactors) $\exp(-S^{(c)}_\mu) = F_{\mu\mu}$. Using the Hadamard-inequality one then finds $\exp(-S) = \mathrm{det}(\boldsymbol{F})\leq\prod_\mu F_{\mu\mu} = \prod_\mu \sigma^{-2}_{\mu,c} = \prod_\mu\exp(-S^{(c)}_\mu) = \exp\big(-\sum_\mu S^{(c)}_\mu\big)$, such that $S\geq \sum_\mu S^{(c)}_\mu$ for conditional variances. Conversely, the marginalised variance is computed from the inverse Fisher-matrix, $\sigma^2_{\mu,m} = (\boldsymbol{F}^{-1})_{\mu\mu}$, and $\mathrm{det}(\boldsymbol{F}^{-1}) = 1/\mathrm{det}(\boldsymbol{F})$ implies $\exp(S) = \mathrm{det}(\boldsymbol{F}^{-1})$. Then, $\exp(S) = \mathrm{det}(\boldsymbol{F}^{-1}) \leq \prod_\mu (\boldsymbol{F}^{-1})_{\mu\mu} = \prod_\mu \sigma^2_{\mu,m} = \prod_\mu \exp(S^{(m)}_\mu) = \exp\big(\sum_\mu S^{(m)}_\mu\big)$, and from that $S \leq \sum_\mu S^{(m)}_\mu$, i.e. in summary $\sum_\mu S_\mu^{(c)}\leq S \leq \sum_\mu S^{(m)}_\mu$, where equality is given for the uncorrelated case.
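These bounds are easily verified numerically; the sketch below draws a random, correlated, positive definite Fisher-matrix (purely for illustration) and checks $\sum_\mu S^{(c)}_\mu \leq S \leq \sum_\mu S^{(m)}_\mu$.

```python
# Numerical check of the Hadamard bounds on the total Gaussian entropy using
# conditional (1/F_ii) and marginal ((F^-1)_ii) variances; F is a random,
# assumed Fisher-matrix constructed to be symmetric and positive definite.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
F = A @ A.T + 5.0 * np.eye(5)

def gauss_entropy_1d(fisher_element):
    # entropy of a 1D Gaussian with variance 1/fisher_element
    return 0.5 * (1.0 + np.log(2.0 * np.pi) - np.log(fisher_element))

S_total = 0.5 * (5 * (1.0 + np.log(2.0 * np.pi)) - np.linalg.slogdet(F)[1])
S_cond = sum(gauss_entropy_1d(F[i, i]) for i in range(5))
S_marg = sum(gauss_entropy_1d(1.0 / np.linalg.inv(F)[i, i]) for i in range(5))

print(S_cond <= S_total <= S_marg)     # True
print(S_cond, S_total, S_marg)
```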
Additionally, the Cramér-Rao-inequality asserts that the estimated variance of a distribution is bounded from below by the inverse Fisher-matrix, where equality between the variance $\sigma^2$ and $\boldsymbol{F}^{-1}$ is only given for a Gaussian distribution. If one were to estimate the covariance matrix from a non-Gaussian distribution, the resulting variance would be larger than that of a Gaussian distribution with the same Fisher-matrix, and assigning an entropy to that covariance through $S\propto\ln\mathrm{det}(\boldsymbol{C})$ would yield a larger result than $-\ln\mathrm{det}(\boldsymbol{F})$. This statement is not in contradiction with the property of the Gaussian distribution to maximise $S$ for a given covariance, because the actual value of $S$ depends on the shape of the distribution and has to be either computed from the functional form or estimated from data, through $S = -\int{\mathrm{d}}^nx\:p\ln p$ or $S_\alpha = \ln\left(\int{\mathrm{d}}^nx\:p^{\alpha}\right)/(1-\alpha)$. It is a standard derivation to show by functional extremisation, $\delta S=0$, of $S = -\int{\mathrm{d}}x\: p(x)\ln p(x)$, while incorporating the boundary conditions $\int{\mathrm{d}}x\:p(x) = 1$ and $\int{\mathrm{d}}x\: p(x) x^2 = \sigma^2$ with Lagrange-multipliers, that the Gaussian distribution is in fact the one with the largest possible entropy for fixed variance. We would like to point out that the Gaussian distribution is likewise the solution if one fixes the Fisher-information $F = {\langle}(\partial\ln p)^2{\rangle}= \int{\mathrm{d}}x\: p\,(\partial\ln p)^2$ instead of the variance. Extremising the entropy functional with this constraint, $$S = -\int{\mathrm{d}}x\:p\ln p + \lambda\left[\int{\mathrm{d}}x\:p - 1\right] + \mu\left[\int{\mathrm{d}}x\:p (\partial\ln p)^2 - F\right],$$ yields as a solution to $\delta S = 0$ the differential equation $$-\ln p(x) - 1 + \lambda + \mu\left[\left(\partial\ln p\right)^2 - 2\,\frac{\partial^2p}{p}\right] = 0,$$ which is solved by the Gaussian distribution $p(x) = \exp(-x^2/(2\sigma^2))/\sqrt{2\pi\sigma^2}$ while identifying $F$ with $\sigma^{-2}$. We compute information entropy differences for combinations of cosmological data sets and add successively, in order of decreasing redshift, CMB-lensing, tomographic galaxy clustering and tomographic weak gravitational lensing to the primary CMB-fluctuations. Specifically, we compute the resulting combined Fisher-matrix including all cross-correlations and use it to quantify the gain of information of the probe combination over the primary CMB. We quantify the Kullback-Leibler-divergence $\Delta S$ for the full likelihood (where any Rényi-entropy difference would differ by a numerical factor depending on the dimensionality of the likelihood). The results are shown in Figs. \[fig\_relS-LCDM\] and \[fig\_relS-w0CDM\] for the Kullback-Leibler divergence $\Delta S$. In addition, we repeat the analysis for the likelihoods of each individual parameter $\Omega_m$, $\sigma_8$, $h$, $n_s$ and $w$, where the inequalities discussed above between the total entropy and the sums of the conditionalised and marginalised entropies become relevant. In contrast to the absolute entropies discussed in the previous section, the relative entropies are invariant under reparameterisation of the model and do not rely on a preferred parameterisation. Clearly, there is a reduction in uncertainty achieved through the combination of cosmological probes, reflected by smaller absolute information entropies. One observes a more dramatic effect in the $w$CDM-model compared to the $\Lambda$CDM-model.
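Schematically, the successive combination amounts to adding Fisher-matrices and evaluating the Gaussian Kullback-Leibler divergence with respect to the CMB-only case; the sketch below uses assumed diagonal Fisher-matrices and, unlike the actual analysis, neglects all cross-correlations between the probes.

```python
# Successive probe combination in sketch form: Fisher-matrices are added one at
# a time and the divergence relative to the CMB-only case is evaluated; all
# matrices are illustrative assumptions and cross-correlations are ignored.
import numpy as np

def delta_s(F, G):
    n = F.shape[0]
    return 0.5 * (np.linalg.slogdet(F)[1] - np.linalg.slogdet(G)[1]
                  - n + np.trace(np.linalg.solve(F, G)))

G = np.diag([50.0, 80.0, 20.0, 120.0, 60.0])            # "CMB" Fisher (assumed)
probes = {
    "CMB-lensing":       np.diag([2.0, 1.0, 0.5, 1.0, 0.5]),
    "galaxy clustering": np.diag([30.0, 20.0, 5.0, 10.0, 8.0]),
    "weak lensing":      np.diag([25.0, 35.0, 4.0, 6.0, 10.0]),
}

F = G.copy()
for name, fisher in probes.items():      # cumulative combination
    F = F + fisher
    print(name, delta_s(F, G))
```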
The interpretation of the absolute numbers of $\Delta S$ is not straightforward, as they combine a measure of change of admissible volume of parameter space, $\ln\mathrm{det}F - \ln\mathrm{det}G$ with the measure $(F^{-1}_{\mu\nu}G_{\mu\nu})-n$ sensitive to the relative orientation of the eigensystems of $F_{\mu\nu}$ and $G_{\mu\nu}$, i.e. of the changing degeneracies in probe combination. ![Relative entropies $\Delta S$ for the full likelihood of a $\Lambda$CDM-cosmology, and for all $n=5$ parameters individually, both marginalised and conditionalised, for a successive and cumulative probe combination adding CMB-lensing, galaxy clustering and weak lensing to the primary CMB.[]{data-label="fig_relS-LCDM"}](./relative_entropy_lcdm.pdf) ![Relative entropies $\Delta S$ for the full likelihood of a $w$CDM-model, and for the individual liklihoods for all $n=6$ parameters, both marginalised and conditionalised, for a successive combination of cosmological probes adding CMB-lensing, galaxy clustering and weak lensing to the primary CMB.[]{data-label="fig_relS-w0CDM"}](./relative_entropy_wcdm.pdf) entropy increase through systematics {#sect_systematics} ==================================== Up to this point we always worked under the assumption of unbiased measurements, such that the averaged likelihoods in fact peaked at the fiducial cosmology because the data was on average equal to the theoretical prediction. We will relax this assumption by considering shifts in the likelihood functions of different cosmological probes due to systematical errors. In a previous paper we have defined the figure of bias $Q$ from the Fisher-matrix $F_{\mu\nu}$ and the shifts $\delta_\mu$ of the best-fit point through the quadratic form $Q^2 = \sum_{\mu\nu}F_{\mu\nu}\delta_\mu\delta_\nu$ and showed the relationship to the Kullback-Leibler-divergence $\Delta S = Q^2/2$ if the covariance is unaffected by the systematic. While the interpretation of $Q$ as the systematic error in units of the statistical error is straightforward, it is likewise obvious that there is an effect of systematic errors on relative entropies: In fact, the explicit relationship for the Kullback-Leibler-divergence $\Delta S$ between two Gaussian distributions with Fisher-matrices $F_{\mu\nu}$ and $G_{\mu\nu}$ has a term involving $\delta_\mu$, $$\Delta S = \frac{1}{2}\left(\delta_\mu G_{\mu\nu} \delta_\nu + \ln\frac{\mathrm{det}(\boldsymbol{F})}{\mathrm{det}(\boldsymbol{G})} - n + {F}^{-1}_{\mu\nu}{G}^{\mu\nu}\right), \label{eq.DKL}$$ which reverts to eqn. (\[eqn\_kl\_divergence\_nobias\]) in the case of vanishing tension, $\delta_\mu = 0$. The analogous relationship for the relative R[é]{}nyi-entropy $\Delta S_\alpha$ can be derived to be $$\begin{split} \Delta S_\alpha & = \frac{1}{2}\frac{1}{\alpha-1} \ln\left(\frac{\mathrm{det}^\alpha(\boldsymbol{F})}{\mathrm{det}^{\alpha-1}(\boldsymbol{G})\mathrm{det}(\boldsymbol{A})}\times\right.\\ &\quad\quad\quad\quad\quad\quad\quad \left.\exp\left[-\frac{\alpha}{2}\delta^\kappa\left(F_{\kappa\beta} -\alpha F^\mu_{\kappa}A^{-1}_{\mu\nu}F^\nu_\beta\right) \delta^\beta\right]\right), \end{split} \label{eqn_relative_renyi_entropy}$$ again with $$A_{\mu\nu} = \alpha F_{\mu\nu} + (1-\alpha) G_{\mu\nu},$$ where the previous relation for the R[é]{}nyi-entropy $\Delta S_\alpha$ is recovered for $\delta_\mu = 0$, as the exponential becomes equal to one. 
Clearly, in this way the entropy difference becomes a measure of consistency [@nicola_consistency_2018], as it is sensitive to differences between parameter values derived with different probes, but in addition there is a dependence on the difference between the errors. Furthermore, relative entropies provide a much better characterisation of distributions than $p$-values, which are typically restricted to univariate distributions. In the following we will consider three well-known examples of tensions between likelihoods, which we approximate by Gaussian distributions: the tension in the value of the Hubble-Lemaître parameter $H_0$ between the CMB and Cepheids, the tension in the $(\Omega_m,\sigma_8)$-plane between the CMB and weak lensing, and, as a theoretical forecast, intrinsic alignments as a contaminant in weak lensing data. Concerning the interpretation of $\Delta S$ and $\Delta S_\alpha$, it is straightforward to show that the Kullback-Leibler-divergence for two identical Gaussian distributions shifted by $\delta$ is given by $(\delta/(\sqrt{2}\sigma))^2$, such that the square root measures the number of standard deviations by which the Gaussian distributions are displaced relative to each other. This immediately suggests the interpretation in terms of the integrated probability to obtain values larger than the actual bias $\delta$, $$p = \frac{1}{2}\mathrm{erfc}\left(\frac{\delta}{\sqrt{2}\sigma}\right),$$ i.e. the $p$-value commonly used in descriptive statistics. Hubble-Lemaître parameter $H_0$ from Cepheids and the CMB ------------------------------------------------------------- The Hubble-Lemaître parameter $H_0$ quantifies the current rate of expansion of the Universe, but the value inferred from the CMB [@Planck2018] and the local measurement of $H_0$ [@2019Riess] are in significant mutual disagreement, irrespective of which measurement is taken as the reference. While from the CMB temperature fluctuations one can infer the cosmological parameter values, assuming a $\Lambda$CDM model, with very good statistical precision, improvements on the distance ladder using near-infrared Cepheid variables in host galaxies of recent type Ia supernovae have reduced the uncertainty on the local $H_0$ to 2$\%$. The latter measurement is essentially model-independent, as it follows directly from the Hubble-Lemaître law and does not depend on a specific cosmological model. The systematic, whose origin is yet unresolved, shows up as a nonzero Kullback-Leibler divergence $\Delta S$. With the CMB as a reference (using the best-fit $h_\mathrm{CMB} = 0.68\pm0.005$ for Planck) to be updated by Cepheids (with $h_\mathrm{Cepheid} = 0.7403\pm0.0142$), this gives a value of $\Delta S\simeq 44$ nats and a value of $Q\simeq 12$ for the figure of bias. The discrepancy between $\Delta S$ and $Q^2/2$ indicates differences between the variances reported by the two measurements. It might be interesting to compare the Kullback-Leibler-divergence $\Delta S$ of the two distributions to the $p$-value, which reflects the probability of obtaining values more extreme than the one from the current data set. As the tension in $h$ is predominantly present in Planck's data set, we vary the difference $\delta = h_\mathrm{CMB}-h_\mathrm{Cepheid}$ in the Hubble-Lemaître parameter and the error $\sigma^2_h$ of the CMB-measurement. The results for the Kullback-Leibler-divergence $\Delta S$ and the $p$-value are shown in Fig. \[fig\_pvalue\].
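The univariate quantities entering this comparison are straightforward to evaluate; the sketch below uses the central values and errors quoted above, computes the Kullback-Leibler divergence in both directions (illustrating its asymmetry), the figure of bias and the $p$-value. The divergence values obviously depend on which measurement is taken as the reference and on the adopted errors, so the output is meant as an illustration only.

```python
# Univariate tension diagnostics for two Gaussian measurements of h; the
# central values and errors are those quoted in the text.
import numpy as np
from scipy.special import erfc

def kl_gauss_1d(mu_p, sigma_p, mu_q, sigma_q):
    # D_KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) ) in nats
    return (np.log(sigma_q / sigma_p) - 0.5
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2))

h_cmb, sigma_cmb = 0.68, 0.005          # CMB measurement
h_ceph, sigma_ceph = 0.7403, 0.0142     # Cepheid measurement
delta = h_ceph - h_cmb

print(kl_gauss_1d(h_ceph, sigma_ceph, h_cmb, sigma_cmb))   # Cepheids vs. CMB
print(kl_gauss_1d(h_cmb, sigma_cmb, h_ceph, sigma_ceph))   # CMB vs. Cepheids
print(abs(delta) / sigma_cmb)                              # figure of bias Q ~ 12
print(0.5 * erfc(abs(delta) / (np.sqrt(2.0) * sigma_cmb))) # p-value
```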
Certainly, high values for $\Delta S$ point at large discrepancies between the two distributions, as would a low $p$-value, but apart from these superficial similarities it is difficult to make general statements. ![Kullback-Leibler divergence $\Delta S$ (gray surface) and $p$-value (green-yellow surface) as a function of the difference in the Hubble-Lemaître parameter $h$ between Cepheid- and CMB-measurements and of the error $\sigma^2_h$ of the CMB-measurement.[]{data-label="fig_pvalue"}](./pvalue.pdf) $(\Omega_m,\sigma_8)$-plane from the CMB and weak lensing --------------------------------------------------------- There is also a long-standing discrepancy between determinations of $\Omega_m$ and $\sigma_8$ from the CMB and from weak gravitational lensing, commonly with lensing preferring smaller values for both parameters relative to the CMB, but with the well-known degeneracy, as lensing essentially determines the product of the two parameters. Since the best-fit value for $(\Omega_m, \sigma_8)$ does not have a Gaussian uncertainty, we opt to use the parameter $S_8 = \sigma_8 \sqrt{\Omega_m /0.3}$, which encapsulates the information of both parameters and whose distribution is better approximated by a Gaussian. One obtains an entropy difference $\Delta S\simeq 2.1$ nats for the Kullback-Leibler divergence between Planck's CMB observation and KiDS's weak lensing data set [@KiDS450-2017], with the CMB as the reference value, for a fit of a $w$CDM-cosmology to the data with a prior on spatial flatness, indicating a much less severe tension than in the case of $H_0$. $w$CDM and lensing with intrinsic alignments -------------------------------------------- Lastly, we quantify the effect of intrinsic alignments in weak lensing data on parameter estimation and the bias that they cause: from a physical point of view, intrinsic alignments are mechanisms related to the tidal interaction of galaxies with the large-scale structure, which causes them to have correlated intrinsic shapes, therefore changing the fundamental assumption that lensing is the only mechanism generating shape correlations. Deriving intrinsic ellipticity spectra with tidal shearing for elliptical and tidal torquing for spiral galaxies, and including the cross-correlation that exists between gravitational lensing and the intrinsic shapes of elliptical galaxies, one can derive estimation biases for the $w$CDM parameter set including the dark energy equation of state parameters $w_0$ and $w_a$ [@tugendhat_angular_2018] in the parameterisation $w(a) = w_0 + (1-a)w_a$. Expressed in terms of the ratio $\delta/\sigma$, these biases are in fact significant for a weak lensing survey like Euclid. In this particular case, we work with the approximation that the intrinsic alignments only give rise to a nonzero bias $\delta_\mu$ while keeping the covariance, or equivalently the Fisher-matrix $F_{\mu\nu}$, fixed, such that the entropy difference reduces to the figure of bias, $\Delta S = Q^2/2$ with $Q^2 = F^{\mu\nu}\delta_\mu\delta_\nu$, as all other terms in eqn. \[eq.DKL\] vanish: $(F^{-1})_{\mu\nu}G^{\mu\nu}$ becomes $n$, and the logarithm of the ratio of the determinants becomes zero. The numerical value of the Kullback-Leibler-divergence is computed to be $\Delta S\simeq 3\times10^3$ nats if $w$ is constant and $\Delta S\simeq 10^4$ nats if $w$ can evolve linearly with time, for an analysis of Euclid's weak lensing data with 5-bin tomography in the framework of a $w$CDM-cosmology.
With the simplified interpretation of $\Delta S$ in this case of constant covariances, $Q$ assumes a value of roughly $30$ in the first and $100$ in the second case, indicating highly significant biases, with the interpretation that $Q$ measures the magnitude of the systematic error in units of the statistical error while keeping track of the degeneracy orientation. In all of these cases, it mattered which likelihood is chosen as the reference for computing the expectation value of $\ln(p(\boldsymbol{x})/q(\boldsymbol{x}))$: one could construct a symmetrised Kullback-Leibler divergence by interchanging the distributions and averaging, but we would like to propose to compute the Wasserstein-metric $\Delta W_2$ [@olkin_distance_1982; @dowson_frechet_1982], $$\Delta W_2 = \left|\delta_\mu\right|^2 + \mathrm{tr}\left(\boldsymbol{A}^{-1}+\boldsymbol{B}^{-1}-2(\boldsymbol{BA})^{-1/2}\right),$$ for which an analytic expression for Gaussian distributions in terms of the covariances and their Cholesky-decompositions $F_{\mu\nu} = A^\alpha_{\mu}A_{\nu\alpha}$ and $G_{\mu\nu} = B^\alpha_{\mu}B_{\nu\alpha}$ exists. The Wasserstein-metric $\Delta W_2$ is symmetric in both distributions and can serve as an information measure for the dissimilarity between two distributions, as $\Delta W_2 = 0$ for $\delta_\mu = 0$ and $F_{\mu\nu} = G_{\mu\nu}$, and it is remarkable that the tension $\delta_\mu$ enters through its Euclidean norm. There is no obvious relationship between the Wasserstein-metric and the Kullback-Leibler- or $\alpha$-divergences, nor with the Bhattacharyya-entropy, which among the Rényi-divergences is the only symmetric one. Bayesian evidence and information entropy {#sect_evidence} ========================================= Bayesian evidence as a criterion for model selection provides a trade-off between the size of the statistical errors and the model complexity. It is straightforward to show that for a Gaussian likelihood $\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)$ with a Fisher-matrix $F_{\mu\nu}$ and a Gaussian prior $\pi(\boldsymbol{x}|M)$ with the inverse covariance $P_{\mu\nu}$ the evidence $p(\boldsymbol{D}|M)$ is given by $$p(\boldsymbol{D}|M) = \sqrt{\frac{\mathrm{det}(\boldsymbol{F})\mathrm{det}(\boldsymbol{P})}{(2\pi)^n\mathrm{det}(\boldsymbol{F}+\boldsymbol{P})}},$$ which implies a scaling $\propto (2\pi)^{-n/2}$ disfavouring models with high complexity. The expression for the evidence $p(\boldsymbol{D}|M)$ changes to $$\begin{split} p(\boldsymbol{D}|M) &= \sqrt{\frac{\mathrm{det}(\boldsymbol{F})\mathrm{det}(\boldsymbol{P})}{(2\pi)^n\mathrm{det}(\boldsymbol{F}+\boldsymbol{P})}}\times\\ &\quad\quad\quad\quad \exp\left\{-\frac{1}{2}\delta^\mu\left[P_{\mu\nu} - P_{\mu\alpha}(F+P)^{-1}_{\alpha\beta}P^\beta_{\nu}\right]\delta^\nu\right\}, \end{split}$$ if likelihood and prior are displaced by $\delta_\mu$ relative to each other. Comparing this expression with eqn. (\[eqn\_relative\_renyi\_entropy\]) shows that the two expressions are related to each other if $\alpha = 1-\alpha$, i.e. if $\alpha = 1/2$, which is known as the Bhattacharyya-entropy.
For this particular case, the R[é]{}nyi-entropy would weigh both likelihood and prior equally, $$\Delta S_\alpha = 2\ln\int{\mathrm{d}}^n x\:\sqrt{\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M) \pi(\boldsymbol{x}|M)},$$ with the natural bound $\sqrt{\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)\pi(\boldsymbol{x}|M)}\leq [\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)+\pi(\boldsymbol{x}|M)]/2$ given by the inequality of the geometric and arithmetic mean, such that $\Delta S_\alpha\geq 0$ as both $\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M)$ and $\pi(\boldsymbol{x}|M)$ are normalised. Because the logarithm is a concave function, one can use Jensen’s inequality to write $\ln\int{\mathrm{d}}^n x\:\sqrt{\mathcal{L}(\boldsymbol{D}|\boldsymbol{x},M) \pi(\boldsymbol{x}|M)} \geq \int{\mathrm{d}}^n x\:\ln\sqrt{\mathcal{L}\pi} = \int{\mathrm{d}}^n x\:(\ln\mathcal{L} + \ln\pi)/2$, such that $\Delta S_\alpha \leq \int{\mathrm{d}}^n x\:(\ln\mathcal{L} + \ln\pi)$, bounding the evidence from above, although in most cases this particular bound is diverging. Finally, expressing the evidence $p(\boldsymbol{D}|M)$ for Gaussian distributions in terms of the R[é]{}nyi-entropy yields $$p(\boldsymbol{D}|M) = \exp(-\Delta S_\alpha)\sqrt{\mathrm{det}(\boldsymbol{F}+\boldsymbol{P})}\:\left(\frac{2}{\pi}\right)^{n/2},$$ if $\delta_\mu = 0$. This relation shows that the evidence is made up from three contributions. On the one hand, it decreases $\propto (2/\pi)^{n/2}$ if the dimensionality of the parameter space, i.e. the model complexity is increased. Also the determinant of the Fisher-matrix generates a scaling $\propto \prod_\mu 1/\sigma_\mu$, such that models with large errors are assigned low evidences as well as being a measure of the dissimilarity of likelihood and prior. Taking the logarithm shows that $$\ln p(\boldsymbol{D}|M) = -\Delta S_\alpha + \frac{1}{2}\mathrm{tr}\ln(\boldsymbol{F}+\boldsymbol{P}) + n\ln 2 - \frac{n}{2}\ln\left(\frac{2}{\pi}\right),$$ meaning that the evidence reflects the inverse volume of the permissible parameter space, allowed by combining likelihood and prior. We emphasise at this point that this compact relationship between evidence $p(D|M)$ and entropy difference $\Delta S$ only exists in the case of tension-free likelihoods, $\delta_\mu = 0$, otherwise an additional factor $1/2$ appears in the exponential term of the Kullback-Leibler divergence which is not present in the evidence. Comparing Bayes-evidences and information entropy differences shows a clear mathematical relationship between the two, implying perhaps that one could in principle state the relative entropy instead of the Bayesian evidence ratio. The advantage in doing that would be to avoid the empirical Jeffreys-scale. Instead, relative entropies would be stated in units of nats. In addition, the usage of the evidence ratio appears to be motivated by the Neyman-Pearson lemma known from hypothesis testing, where it ensures that two hypotheses are tested against each other with the most efficient test statistic. 
While it seems to be unclear whether the Neyman-Pearson lemma applies to evidences as well, in the sense that evidence ratios would constitute the most efficient statistical test to decide between two models, these difficulties are avoided by relative entropies: they are axiomatically defined and unambiguous, and the entropy difference, in units of nats, can be computed for every likelihood, with the only complication originating from having to estimate $\ln p(\boldsymbol{x})$ from samples in the case of non-Gaussian likelihoods or priors. Again, our results only assume simple forms for Gaussian distributions, and one needs in general to use advanced sampling methods for evaluating evidences from Monte-Carlo Markov-chains [@mukherjee_nested_2006; @kilbinger_bayesian_2010; @hou_affine-invariant_2012; @lewis_efficient_2013]. summary {#sect_summary} ======= In this work we have applied information entropy measures to the analysis of cosmological data. Specifically, we computed likelihoods in the Gaussian approximation for spectra of the cosmic microwave background temperature and polarisation anisotropies, CMB-lensing, galaxy clustering and weak gravitational lensing by the cosmic large-scale structure. In almost all applications, we work with Gaussian likelihoods and priors and reduce the expressions to be evaluated to operations on Fisher-matrices (or, equivalently, on inverse parameter covariances), for conventional $\Lambda$CDM- and $w$CDM-cosmologies. The motivation of our investigation was to quantify the information content of cosmological probes through probability entropies and to show that they can be used for a wide range of applications, not only for making statements about the total error budget but also for quantifying the magnitude of systematic errors and tensions between data sets, as well as Bayesian evidences: the advantage of information entropies is their axiomatic definition and that they quantify, in all these applications, the extent and location of likelihoods corresponding to different probes with nats as the natural unit, even in the case of model evidences, providing a viable alternative to the Jeffreys-scale. 1. [We have computed information entropies and demonstrated their decrease through the combination of cosmological probes, and showed how measures of total error derived from the Fisher-matrix, for instance $\mathrm{tr}(\boldsymbol{F})$, $\mathrm{tr}(\boldsymbol{F}^2)$ or $\ln\mathrm{det}(\boldsymbol{F})$, scale with Shannon- and Rényi-entropies. Likewise, there are inequalities which bound the entropy of the full likelihood through sums of entropies for individual parameters, both marginalised and conditionalised. Values for information entropies, both absolute and relative, are not easy to interpret. For the preferred dimensionless parameterisation used in cosmology, consisting of $\Omega_m$, $\sigma_8$, $h$, $n_s$, $\Omega_b$ and possibly $w$, the CMB has the highest information content as evidenced by the most negative numbers for $S$, followed by (tomographic) galaxy clustering, (tomographic) weak lensing and finally CMB-lensing. Differences in entropy are not automatically entropy differences, which makes it difficult to compare numbers for $\Delta S$ with those for $S$: for that reason, we consider successive probe combinations and compute relative entropies relative to the information content of the CMB primaries alone.
Kullback-Leibler-divergences $\Delta S$ combine a measure of changed volume of admissible parameter space with the change in the degeneracy directions, and are in that way a measure by how much the degree of knowledge on a parameter set increases. Typical numbers that we have obtained for $\Delta S$ are slightly over one order of magnitude for the full likelihood, although the information entropy of in particular conditionalised likelihoods of individual parameters can change dramatically.]{} 2. [We looked into three tensions in $\Lambda$CDM- and $w$CDM-cosmologies, the discrepancy in $H_0$ from Cepheid variable stars and the CMB, the tension in $\Omega_m$ and $\sigma_8$ packaged into the parameter $S_8$ between weak lensing by KiDS and the CMB from Planck, and the bias caused by intrinsic alignments in weak gravitational lensing as forecasted for Euclid. In these examples we showed that a quantification of the dissimilarity between the distributions combines the difference in the best fitting values and the width of the distributions, and the difference in their degeneracies in the multivariate cases. Linking relative entropies to traditional quantities in descriptive statistics like $p$-values is possible in the univariate case, but does not show a clear relationship. ]{} 3. [Lastly, we demonstrated the correspondence between information entropy and Bayesian evidence in comparing cosmological models. There is a relation between the entropy difference $\Delta S_\alpha$ and the evidence $p(D|M)$ if $\alpha$ is chosen to be $1/2$, i.e. for the Bhattacharyya-case: This would suggest the possibility of drawing a connection between the Jeffrey-scale and entropy differences measured in nats. Both quantities are clearly related to each other and offer a tradeoff between the goodness-of-fit, as expressed by the ratio of the admissible parameter spaces of two competing models, and a penalty term disfavouring high model complexity $n$.]{} One thought that we are currently pursuing is to use the Wasserstein-metric as a generalised symmetric relative entropy for quantifying biases between likelihoods, again with the motivation that relative entropy is axiomatically defined and measured in a natural unit. The Wasserstein-metric overcomes the asymmetry in the definition of relative entropy, making it perhaps a more useful measure of statistical tensions between likelihoods, in particular of Gaussian distributions, where an analytic formula exists. Acknowledgements {#acknowledgements .unnumbered} ================ AMP gratefully acknowledges the support by the Landesgraduiertenförderung (LGF) grant of the Graduiertenakademie Universität Heidelberg. RR acknowledges funding through the HeiKA-initiative and support by the Israel Science Foundation (grant no. 255/18 and 395/16). AMP and BMS would like to thank the Universidad del Valle in Cali, Colombia, for their hospitality. \[lastpage\] [^1]: e-mail: [email protected]
--- abstract: | Crystal dislocations govern the plastic mechanical properties of materials but also affect the electrical and optical properties. However, a fundamental and quantitative quantum-mechanical theory of dislocation remains undiscovered for decades. Here we present an exactly solvable quantum field theory of dislocation, for both edge and screw dislocations in an isotropic medium by introducing a new quasiparticle “dislon”. With this approach, the electron-dislocation relaxation time is studied from electron self-energy which can be reduced to classical results. Moreover, a fundamentally new type of electron energy Friedel oscillation near dislocation core is predicted, which can occur even with single electron at present. For the first time, the effect of dislocations on materials’ non-mechanical properties can be studied at a full quantum field theoretical level. PACS numbers : 03.70.+k, 61.72.Lk, 72.10.Fk. author: - Mingda Li - Wenping Cui - 'M. S. Dresselhaus' - Gang Chen bibliography: - 'dislon.bib' title: 'Canonical Quantization of Crystal Dislocation and Electron-Dislocation Scattering in an Isotropic Medium ' --- Crystal dislocations are a basic type of one-dimensional topological defect in crystals [@1nabarro1967theory]. Since Volterra’s ingenious prototype in 1907 [@2volterra1907equilibre] and Taylor, Orowan and Polyani’s simultaneous formal introduction of such defects in 1934 [@3taylor1934mechanism; @4Orowan; @55polanyi1934lattice], dislocations have been shown to have strong influences on materials, including the governing role in the plastic mechanical process, and widespread impact on materials’ thermal, electrical and optical properties etc [@1nabarro1967theory; @6hirth1982theory]. However, despite being one of the major driving forces in the development of modern metallurgy and material sciences, a fundamental quantum-mechanical theory of dislocations is long overdue, partly due to the multivalued or discontinuous displacement field leading to a difficulty in going from a particular lattice model to an effective theory. The lack of a fundamental quantum-field theory of dislocations has impeded the study of the role of dislocations on non-mechanical material properties, limiting the usage of the terms “impurity” and “disordered systems” to point-defect related properties in many circumstances [@Ziman]. In general, the theoretical attempts to study the role of a dislocation in material’s non-mechanical properties can be divided into at least 2 mainstream categories. One is classical scattering theory, where a dislocation is treated as a classical charged line [@7ng1998role; @8weimann1998scattering; @9jena2000dislocation; @10look1999dislocation]. This allows one to study electron-dislocation scattering, but with obvious limitations when presented in a classical framework without taking into account of the displacement field a dislocation induces. The other is a geometrical approach, where a dislocated crystal is treated as a lattice in curved space as in general relativity [@11yavari2012riemann; @12schmeltzer2012geometrical; @13sansour2012generalized; @14ruggiero2003einstein; @Dislocation_GR]. This approach includes the displacement-field scattering but very problem-specific and mathematically cumbersome. Another promising approach is a gauge theory of a dislocation [@15kleinert1989gauge; @16edelen2012gauge; @Lazar], which resembles quantized gauge theory in form but is limited only to classical elasticity field. 
In this Letter, we provide an exact and mathematically manageable quantum field theory of a dislocation line, based on classical dislocation theory and a canonical quantization procedure. We find that in an isotropic medium, the exact Hamiltonian for both the edge and screw dislocations can be written as a new type of harmonic-oscillator-like Bosonic excitation along dislocation line, hence the name “dislon”. A dislon is similar to a phonon as lattice displacement with both kinetic and potential energy, but satisfying dislocation’s topological constraint $\oint_L d{\mathbf u}= -\mathbf b$, where ${\mathbf b}$ is the Burgers vector, $L$ is a closed contour enclosing the dislocation line, and $\mathbf u$ is the elastic displacement vector. For the first time, the scattering between an electron and the 3D displacement field induced by dislocation can easily be solved through many-body approach, with the topological constraint $\oint_L d{\mathbf u}= -\mathbf b$ maintained all along. ![(color online). (a) A long dislocation line along the $z$-direction vibrating within slip plane ($x$z) with $Q(z)$ the transverse displacement. Such vibration share similarities with phonons as it is also quantized lattice vibration, but constrained by $\oint_L d{\mathbf u}= -\mathbf b$ where $L$ is an arbitrary loop circling dislocation. An electron located at position $r$ will be scattered by dislocations. (b) Quantized vibrational excitation along dislocation line (“dislon”) for both edge (hot-colors) and screw (cool-colors) dislocations at various Poisson ratios $\nu$. The classical shear wave is shown as a linear-dispersive black-dotted line, and the arrows indicate a decreasing trend of Poisson ratio for edge (red arrow) and screw (blue arrow) dislons, respectively.[]{data-label="Fig1"}](Figure1.png){width="0.85\columnwidth"} To begin with, defining ${\mathbf u}({\mathbf R})$ as the displacement at 3D position vector ${\mathbf R}$, its $i^{\rm th}$ component can be written generically as [@17kosevich1986course; @18deWit1960249] $$\label{eq1} u_i({\mathbf R})=-c_{jklm}\int u_{lm}({\mathbf R}')\frac{\partial G_{ij}({\mathbf R}-{\mathbf R}')}{\partial R_k}d^3{\mathbf R}'$$ where $i, j, k, l, m=1, 2, 3$ are Cartesian components, $c_{ijkl}=\lambda\delta_{ij}\delta_{kl}+\mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$ is the elastic constant tensor in an isotropic medium, $u_{lm}$ is the plastic distortion tensor, $G_{ij}$ is the Green’s function of the force equilibrium equation whose Fourier transform can be written as [@18deWit1960249] $$\begin{aligned} \label{eq2} G_{ij}({\mathbf k})&=&\frac{1}{\mu}{\left[}\frac{\delta_{ij}}{k^2}-\frac{\lambda+\mu}{\lambda+2\mu}\frac{k_ik_j}{k^4} {\right]}\nonumber\\ &=&\frac{1}{\mu}{\left[}\frac{\delta_{ij}}{k^2}-\frac{1}{2(1-\nu)}\frac{k_ik_j}{k^4} {\right]}\end{aligned}$$ where $\lambda$ and $\mu$ are 1$^{\rm st}$ and 2$^{\rm nd}$ Lamé constants, respectively, and $\nu=\frac{\lambda}{2(\lambda+\mu)}$ is the Poisson ratio. For a long, straight dislocation line extending along the $z$-direction and sitting at $(x_0,y_0)=(0,0)$, this dislocation line would vibrate within the slip plane ($xz$ plane) as an atomic motion, similar to phonons. Such vibration can be regarded as a precursor of the dislocation glide motion, and a direct generalization of long, straight static dislocation configuration. Defining $Q(z)$ to be the transverse displacement along the $x$ direction at position $z$, Eq. 
(\[eq1\]) can be rewritten as [@18deWit1960249; @19ninomiya1968dislocation] (See in Supplemental Material A) $$\label{eq3} \resizebox{0.43 \textwidth}{!}{$ u_i({\mathbf R})=\!u_i({\mathbf r},z)\!=\!-b_mc_{jklm}\int_{-\infty}^{+\infty}\! n_l \frac{\partial G_{ij}({\mathbf r},\!z\!-\!z')}{\partial R_k}Q(z')dz'$}$$ where ${\mathbf r}=(x, y)$ and ${\mathbf R}=({\mathbf r}, z)$ are the 2D and 3D position vectors, $b_m$ is the $m^{\rm th}$ component of the Burgers vector and $\mathbf n$ is the direction perpendicular to the slip plane with the $l^{\rm th}$ component $n_l$. Mode-expanding the dislocation displacement as $$\label{eq4} Q(z)=\sum_\kappa Q_{\kappa}e^{i\kappa z}$$ where $\kappa$ is the wavenumber along the $z$-direction. Then the displacement $u({\mathbf R})$ can be expressed as $$\label{eq5} \resizebox{0.43 \textwidth}{!}{$ \!\!u_i({\mathbf R})\!=\!\!\sum_\kappa f_i({\mathbf r};\kappa)e^{i\kappa z}Q_\kappa \!=\!\frac{1}{L^2}\sum_{{\mathbf s},\kappa}\!B_i(\mathbf{s};\kappa)e^{i\mathbf{s}\cdot{\mathbf r}+i\kappa z}Q_\kappa$}$$ where $L$ is the system length, ${\mathbf s}= ( k_x , k_y )$ is the 2D part of the 3D wavevector ${\mathbf k}= ({\mathbf s},\kappa )$ in Eq. (\[eq2\]), and $B_i(\mathbf{s};\kappa)=\int f_i({\mathbf r};\kappa) e^{-i{\mathbf s}\cdot{\mathbf r}}d^2{\mathbf r}$ is the 2D Fourier transform of $f_i({\mathbf r};\kappa)$ which can be obtained as $$\label{eq6} B_i(\mathbf{s};\kappa)=+i\mu b_m n_l{\left[}k_mG_{il}({\mathbf k})+k_lG_{im}({\mathbf k}) {\right]}$$ by comparing Eq. (\[eq3\]) to Eq. (\[eq5\]). The classical 3D kinetic and potential energy of such a vibrating dislocation can be written as a 1D effective Hamiltonian [@19ninomiya1968dislocation] $$\label{eq7} \resizebox{0.43 \textwidth}{!}{$ H=T+U=\frac{L}{2}\sum_\kappa m(\kappa)\dot{Q}_\kappa\dot{Q}^*_\kappa+\frac{L}{2}\sum_\kappa \kappa^2 K(\kappa)Q_\kappa Q^*_\kappa$}$$ where $m(\kappa)$ and $\kappa^2K(\kappa)$ are identified as the classical linear mass density and tension, respectively, and can be written as [@19ninomiya1968dislocation] (See also in Supplemental Material B) $$\resizebox{0.48 \textwidth}{!}{$m(\kappa)\!=\!\frac{\rho}{4\pi^2}\int \! d^2\mathbf{s}\frac{1}{k^4}{\left[}{\left(}\mathbf{b}\cdot {\mathbf k}{\right)}^2\!+\!b^2{\left(}\mathbf{n}\cdot {\mathbf k}{\right)}^2 \!+\!\frac{4\nu-3}{{\left(}1-\nu{\right)}^2}\frac{{\left(}\mathbf{n}\cdot {\mathbf k}{\right)}^2{\left(}\mathbf{b}\cdot {\mathbf k}{\right)}^2}{k^4} {\right]}$}$$ $$\resizebox{0.48 \textwidth}{!}{$\kappa^2K(\kappa)\!=\!U_0\!-\!\frac{\mu}{4\pi^2}\int \! d^2\mathbf{s} {\left[}\frac{{\left(}\mathbf{b}\cdot {\mathbf k}{\right)}^2\!+\!b^2{\left(}\mathbf{n}\cdot {\mathbf k}{\right)}^2}{k^2}\!-\!\frac{2}{1-\nu}\frac{{\left(}\mathbf{n}\cdot {\mathbf k}{\right)}^2{\left(}\mathbf{b}\cdot {\mathbf k}{\right)}^2}{k^4} {\right]}$}$$ where $U_{0}$ is a $\kappa$-independent zero-point-energy term depending on the dislocation type. Now imposing a canonical quantization condition, $$\label{eq8} Q_\kappa =Z_\kappa {\left[}a_\kappa+a^+_{-\kappa} {\right]},\quad P_\kappa =\frac{i\hbar}{2Z_\kappa} {\left[}a^+_\kappa-a_{-\kappa} {\right]}$$ where $P_\kappa=\frac{\partial\mathcal{L}}{\partial \dot{Q_\kappa}}=Lm(\kappa)\dot{Q}^*_\kappa$, $Z_\kappa=\sqrt{\frac{\hbar}{Lm(\kappa)\omega(\kappa)}}$, then the classical dislocation Hamiltonian Eq. 
(\[eq7\]) is quantized as (See Supplemental Material C) $$\label{eq9} H_D=\sum_{\kappa}\hbar \omega(\kappa){\left[}a^+_\kappa a_\kappa +\frac{1}{2}{\right]}$$ with eigenfrequency $\omega(\kappa)=\kappa\sqrt{K(\kappa)/m(\kappa)}$ , which has the form as a collection of non-interacting Bosonic excitations. Despite the observation that such an excitation shares the similarity with the phonon excitation as a type of quantized lattice vibration, the topological constraint here $\oint_L d{\mathbf u}= -\mathbf b$ leads to a different excitation quantum along the dislocation line, which may suitably be called a “dislon”, to distinguish the dislon from a non-interacting phonon. In particular, by imposing the in-plane Debye cutoff $k_D$, the dislon dispersion relation for an edge dislocation $({\mathbf b}\bot z, b=b_x )$ and a screw dislocation $({\mathbf b}\| z, b=b_z )$, can separately be written in a closed-form as \[eq10\] $$\resizebox{0.41 \textwidth}{!} {$w_E(\kappa)= v_s\kappa\!\sqrt{\frac{C_1\mathrm{log}{\left(}\frac{k_D^2}{\kappa^2}+1 {\right)}+1-C_2\frac{\kappa^2}{k_D^2+\kappa^2}}{(1+C_3)\mathrm{log}{\left(}1+\frac{k_D^2}{\kappa^2}{\right)}-\frac{k_D^2}{k_D^2+\kappa^2}-C_3\frac{k_D^2(3k_D^2+2\kappa^2)}{2(k_D^2+\kappa^2)}}}\tag{10a}\label{eq10a}$}$$ $$\label{eq10b} \resizebox{0.41 \textwidth}{!}{ $w_S(\kappa)= v_s\kappa\!\sqrt{\frac{C_4\mathrm{log}{\left(}\frac{k_D^2}{\kappa^2}+1 {\right)}-C_4+4C_2\frac{\kappa^2}{k_D^2+\kappa^2}}{\frac{k_D^2}{2(\kappa^2+k_D^2)}+\frac{1}{2}\mathrm{log}{\left(}1+\frac{k_D^2}{\kappa^2} {\right)}+2C_3\frac{k_D^4}{{\left(}k_D^2+\kappa^2{\right)}^2}}}\tag{10b}$}$$ where $v_s=\sqrt{\mu/\rho}$ is the shear velocity, and the four coefficients are $C_1=\frac{1-2\nu}{2(1-\nu)}$, $C_2=\frac{1}{4(1-\nu)}$, $C_3=\frac{4\nu-3}{8(1-\nu)^2}$ and $C_4=\frac{1+\nu}{2(1-\nu)}$. The dislon dispersion at various $\nu$ values and keeping constant $v_s = 1$ are plotted in Fig. \[Fig1\](b), where the classical shear wave, or equivalently the transverse acoustic phonon mode $\omega(\kappa)=v_s\kappa$ (black-dotted line) serves as a pre-factor in the quantum-mechanical version of the dislocation excitation in Eq. (\[eq10\]). The higher excitation energy of the edge dislon than the screw dislon is reasonable, since even in a pure classical picture, the edge dislocation energy density is higher than the screw dislocation by a factor of $1/(1-\nu)$ [@20hull2001introduction]; due to the different zero-point-energy, they have opposite trends at various $\nu$ values (red and blue arrows in Fig. \[Fig1\]b). ![(color online). The self-energy of an electron away from a dislocation line with a dislocation core $(x_0,y_0)=(0,0)$. The ratio between the self-energy $|\Sigma|$ and electron energy $\varepsilon_p$ on a logarithmic scale, as a function of the 2D coordinate ${\mathbf r}=( x,y)$ for edge (a, c) and screw (b, d) dislocations. The self-energy decays fast away from the dislocation core. Compared with the asymptotic exponential decay behavior (a, b), the full coupling constants (c, d) indeed reveal an exotic Friedel oscillation, which is anisotropic and can occur with only single electron at present. This can be seen more clearly on linear scale waterfall plots (e, f). 
The much more drastic oscillation for edge dislocation (e) than screw dislocation (f) is caused by dilatation effect and resulting electrostatic potential.[]{data-label="Fig2"}](Figure2.png){width="1.00\columnwidth"} To study the electron-dislocation interaction from a full $2^{\rm nd}$-quantization level, we start from a generic electron-ion interaction Hamiltonian [@21bruus2004many; @22mahan2013many] $$\label{eq11} H_{e-ion}=\int d^3{\mathbf R}\rho_e({\mathbf R})\sum_{j=1}^N\nabla_{\mathbf R}V_{ei}({\mathbf R}-{\mathbf R}^0_j)\cdot {\mathbf u}_j\tag{11}$$ where $\rho_e$ is the electron charge density, ${\mathbf R}^0_j$ is the equilibrium position of the ion with label $j$, and sum on j is over all $N$ ions in the crystal. By expanding the electron-ion interaction potential $V_{ei}$ as $$\label{eqex} V_{ei}({\mathbf R}-{\mathbf R}^0_j)=\frac{1}{V}\sum_{q\in 1\mathrm{BZ} }\sum_{\mathbf{G}}V_{{\mathbf q}+\mathbf{G}}e^{i(\mathbf{G}+{\mathbf q})\cdot({\mathbf R}-{\mathbf R}^0_j)}\tag{12}$$ and using Eqs. (\[eq5\])-(\[eq8\]), and moreover by assuming a non-Umklapp normal process $(\mathbf{G}=0)$, the electron-dislocation interaction Hamiltonian Eq. (\[eq11\]) can be re-written as (see Supplemental Material D) $$\begin{aligned}\label{eq12} H_{el-dis}=&-\frac{N}{V}\frac{1-2\nu}{1-\nu}\sum_{\mathbf q}\rho({\mathbf q})eV_{\mathbf q}\nonumber\frac{({\mathbf b}\cdot{\mathbf q})(\mathbf{n}\cdot{\mathbf q})}{L^2q^2}\times\\ &\sqrt{\frac{\hbar}{2Lm(\kappa)\omega(\kappa)}}(a_\kappa+a^+_{-\kappa}) \end{aligned}\tag{13}$$ where the screened Coulomb potential $V_{\mathbf q}=\frac{4\pi Z e}{q^2+k^2_{TF}}$ with $k_{\rm TF}$ defined as the Thomas-Fermi screening wavenumber, $\rho({\mathbf q})$ is the Fourier-transformed electron number density. At $\nu = 1/2$ both Eqs. (\[eq9\]) & (\[eq12\]) vanish due to the important pre-factor $1-2\nu$. This is reasonable since at $\nu = 1/2$ the system demonstrates only elasticity without plasticity (such as rubber), resulting in both vanishing dislon excitation and quantized electron-dislocation interaction. For a single electron located at ${\mathbf R}=({\mathbf r},z)=(r\cos\phi,r\sin\phi,z)$, we have $\rho({\mathbf q})=e^{i\mathbf{s}\cdot{\mathbf r}+i\kappa z}$, and Eq. (\[eq12\]) can be further simplified as $$\label{eq13} H_{e-dis}=\frac{1}{\sqrt{L}}\sum_\kappa e^{i\kappa z}{M}_{{\mathbf b},{\mathbf r}}(\kappa)(a_\kappa+a_{-\kappa}^+)\tag{14}$$ with the single-electron position-dependent coupling constant ${M}_{{\mathbf b},{\mathbf r}}(\kappa)$ written as $$\begin{aligned}\label{eq14} &{M}_{{\mathbf b},{\mathbf r}}(\kappa)=\frac{Ne^2}{V}\frac{1-2\nu}{1-\nu}\sqrt{\frac{\hbar}{2m(\kappa)\omega(\kappa)}}\nonumber\\ &\times\int_0^{k_D}ds s^2\frac{b_s s J_2(rs)sin(2\phi)-2ib_z\kappa J_1(rs)\sin\phi}{(s^2+\kappa^2+k_{TF}^2)(s^2+\kappa^2)} \end{aligned}\tag{15}$$ where $J_n(rs)$ is the $n^{\rm th}$ order Bessel function of the $1^{\rm st}$ kind. One promising feature here is that the 3D electron-displacement interaction in Eq. (\[eq11\]) can be rewritten in an effective 1D electron-dislon interaction as Eq. (\[eq13\]), greatly simplifying the calculation of dislocation’s influence to electron motion. 
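The structure of the coupling constant ${M}_{{\mathbf b},{\mathbf r}}(\kappa)$ in Eq. (\[eq14\]) is easy to explore numerically. The following minimal sketch (an illustration only, not the original derivation) evaluates just the $s$-integral of Eq. (\[eq14\]), with the overall prefactor $\frac{Ne^2}{V}\frac{1-2\nu}{1-\nu}\sqrt{\hbar/2m(\kappa)\omega(\kappa)}$ set to unity and with assumed values for $\kappa$, $k_D$, $k_{\rm TF}$ and the Burgers vector components; it simply shows how the Bessel-function integrand produces a coupling that falls off away from the core.

```python
# A minimal numerical sketch of the s-integral entering Eq. (15).
# All prefactors (N e^2 / V, the elastic factor, and the dislon
# normalisation sqrt(hbar / 2 m omega)) are set to 1 here purely for
# illustration; kappa, k_D, k_TF, b_s, b_z are assumed values (nm^-1, nm).

import numpy as np
from scipy.integrate import quad
from scipy.special import jv  # Bessel function of the first kind J_n

def coupling_integral(r, phi, kappa, b_s=0.4, b_z=0.0, k_D=10.0, k_TF=1.0):
    """s-integral of Eq. (15): edge part ~ J_2, screw part ~ J_1."""
    def integrand_re(s):   # real part (edge contribution)
        return s**2 * b_s * s * jv(2, r * s) * np.sin(2 * phi) \
               / ((s**2 + kappa**2 + k_TF**2) * (s**2 + kappa**2))
    def integrand_im(s):   # imaginary part (screw contribution)
        return -2.0 * b_z * kappa * s**2 * jv(1, r * s) * np.sin(phi) \
               / ((s**2 + kappa**2 + k_TF**2) * (s**2 + kappa**2))
    re, _ = quad(integrand_re, 0.0, k_D)
    im, _ = quad(integrand_im, 0.0, k_D)
    return re + 1j * im

# Decay of the (unnormalised) edge coupling away from the core at fixed kappa:
for r in [0.5, 1.0, 2.0, 5.0, 10.0]:
    M = coupling_integral(r, phi=np.pi / 4, kappa=1.0)
    print(f"r = {r:5.1f} nm   |integral| = {abs(M):.3e}")
```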
In particular, in the non-screening, stiff-solid limit $k_{\rm TF}\rightarrow 0$ and $k_D \rightarrow \infty$ , and in $r \rightarrow \infty$ limit where an electron is far apart from a dislocation core, the asymptotic coupling constants have a closed-form: $$\begin{aligned} &|M_{\substack{edge\\r \rightarrow \infty}}(\kappa)|^2\tag{16a}\label{eq15a} \\=&{\left(}\frac{Ne^2}{8V}{\right)}^2{\left(}\frac{1-2\nu}{1-\nu}{\right)}^2\frac{\hbar \pi b^2(2\kappa r-3)^2}{m(\kappa)\omega(\kappa)\kappa r}\sin^2(2\phi)e^{-2\kappa r}\nonumber\\\nonumber\\ &|M_{\substack{screw\\r \rightarrow \infty}}(\kappa)|^2\tag{16b}\label{eq15b} \\=&{\left(}\frac{Ne^2}{8V}{\right)}^2{\left(}\frac{1-2\nu}{1-\nu}{\right)}^2\frac{4\hbar \pi b^2(2\kappa r-1)^2}{m(\kappa)\omega(\kappa)\kappa r}\sin^2(\phi)e^{-2\kappa r}\nonumber\end{aligned}$$ which shows an exponential decay of the electron-dislocation coupling strength at long distance. In addition, we could write down the Feynman rule for electron-dislocation scattering accordingly from the coupling constants (See Supplemental Material E): Internal dislon line $|{M}_{{\mathbf b},{\mathbf r}}(\kappa)|^2D^{(0)}(\kappa,i\omega_m)$, with $D^{(0)}(\kappa,i\omega_m)=-\frac{2\omega_\kappa}{\omega_m^2+\omega_\kappa^2}$ and $\omega_m=2m\pi/\beta$($\beta=1/(k_{B}T)$), and other Feynman rules unchanged. Therefore, to one-loop correction, in which case an electron emits and re-absorbs a virtual dislon, we could compute the position-dependent electron self-energy as follows: $$\begin{aligned} \label{eq16} &\Sigma^{(1)}({\mathbf r},\mathbf{p},E)=\int \frac{d\kappa}{2\pi}|{M}_{{\mathbf b},{\mathbf r}}(\kappa)|^2\tag{17}\\ &\times{\left[}\frac{n_B(\omega_\kappa)+n_F(\varepsilon_{\mathbf{p}+\kappa})}{E-\varepsilon_{\mathbf{p}+\kappa}+\omega_\kappa+i\delta}\!+ \frac{n_B(\omega_\kappa)+1-n_F(\varepsilon_{\mathbf{p}+\kappa})}{E-\varepsilon_{\mathbf{p}+\kappa}-\omega_\kappa+i\delta} {\right]}\nonumber\end{aligned}$$ We take germanium as a prototype example since it has isotropic bands, but we do not intend to compute real material. At zero temperature and using reasonable values of elastic parameters of germanium [@23claeys2008extended]( $b =\unit[0.4]{nm} $, $\rho= \unit[5.3]{g/cm^3}$, $\mu = \unit[67]{GPa}$, $\nu = 0.28$, cutoff $\kappa_{min} =\unit[0.05]{nm^{-1}}$ and $\kappa_{max} =\unit[10]{nm^{-1}}$), the electron on-shell self-energy is plotted in Fig. \[Fig2\], for edge (Fig. \[Fig2\] a, c, e) and screw (Fig. \[Fig2\] b, d, f) dislocations, with full coupling constants Eq. (\[eq14\]) (Fig. \[Fig2\] a, b) and asymptotic form Eqs. (\[eq15a\], b) (Fig. \[Fig2\] c, d). Note $\Sigma< 0$ in all cases, indicating a reduction of the electron energy, hence an increase of the electron effective mass when being scattered by a dislocation, similar to electron-phonon scattering. The 4-fold self-energy symmetry for the edge dislocation and 2-fold symmetry for the screw dislocation is also reasonable- the classical displacement field distributions ${\mathbf u}({\mathbf R})$ have 2-fold and 1-fold symmetry from sinusoidal terms, respectively, while the energy $\sim |{\mathbf u}({\mathbf R})|^2$ doubles the symmetry. The most prominent feature is that the electron self-energy shows an anisotropic Friedel oscillation behavior (Fig. \[Fig2\] e, f). Since a single dislocation could be treated as a classical 1D charged line, such a quantum-mechanical oscillation may not appear surprising as a direct generalization to a 0D point-defect Friedel oscillation. 
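The asymptotic forms of Eqs. (\[eq15a\]) and (\[eq15b\]) can be tabulated directly. The short sketch below keeps only their $r$- and $\phi$-dependence (all constant prefactors, including $\hbar\pi b^2/m(\kappa)\omega(\kappa)$, are set to one, and the parameter values are assumptions), which already displays the exponential decay at long distance and the $\sin^2(2\phi)$ versus $\sin^2\phi$ angular patterns behind the 4-fold and 2-fold symmetries discussed above.

```python
# A small sketch of the asymptotic coupling strengths of Eqs. (16a)/(16b),
# keeping only the r- and phi-dependence; the overall constant prefactor is
# set to 1, so the numbers are illustrative shapes only.

import numpy as np

def edge_asym(r, phi, kappa):
    """|M_edge|^2 up to the constant prefactor, Eq. (16a)."""
    return (2 * kappa * r - 3) ** 2 / (kappa * r) \
        * np.sin(2 * phi) ** 2 * np.exp(-2 * kappa * r)

def screw_asym(r, phi, kappa):
    """|M_screw|^2 up to the constant prefactor, Eq. (16b)."""
    return 4 * (2 * kappa * r - 1) ** 2 / (kappa * r) \
        * np.sin(phi) ** 2 * np.exp(-2 * kappa * r)

kappa = 1.0                                   # nm^-1, assumed value
# 4-fold angular pattern for the edge, 2-fold for the screw dislocation:
print("phi/pi    edge        screw")
for phi in np.linspace(0, 2 * np.pi, 9)[:-1]:
    print(f"{phi/np.pi:5.2f}  {edge_asym(5.0, phi, kappa):10.3e}"
          f"  {screw_asym(5.0, phi, kappa):10.3e}")
# Exponential decay away from the core:
for r in [2.0, 5.0, 10.0, 20.0]:
    print(f"r = {r:5.1f} nm   edge ~ {edge_asym(r, np.pi/4, kappa):.2e}")
```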
However, what is striking is that such an oscillation can occur in the presence of only one electron, since it is the oscillation of the single-electron self-energy, rather than a charge density oscillation, which gives rise to the traditional Friedel oscillation. Another feature is that the oscillation caused by an edge dislocation (Fig. \[Fig2\] e) is much more drastic than that caused by a screw dislocation (Fig. \[Fig2\] f). This can be understood from the distinct electrostatic effects contributing to the Friedel oscillation. For an edge dislocation there is a finite inhomogeneous dilatation $\Delta=-\frac{b}{2\pi}\frac{1-2\nu}{1-\nu}\frac{\sin\theta}{r}$, leading to a compensating electrostatic potential that restores a uniformly distributed Fermi energy at equilibrium, while for a screw dislocation, linear elasticity gives no dilatation and hence no electrostatic effect emerges [@1nabarro1967theory]. The observation of the predicted single-electron Friedel oscillation of the self-energy may provide strong evidence of the existence of the dislon and the quantum nature of dislocations. ![(color online). Comparison of the normalized logarithmic relaxation rate using the classical result [@24dexter1952effects] (dashed lines) and Eq. (\[eq16\]) (solid lines).[]{data-label="Fig3"}](Figure3.png){width="1.00\columnwidth"} To test the power of this theoretical framework, we compare the electron-dislocation scattering relaxation rate $\frac{1}{\tau_{dis}({\mathbf r})} \propto \mathrm{Im}\Sigma({\mathbf r},\mathbf{p},\varepsilon_p)$ from Eq. (\[eq16\]) to the classical results with empirical deformation-potential parameters [@24dexter1952effects]. Despite the different methods, our one-loop result Eq. (\[eq16\]) shares an identical pre-factor ${\left(}\frac{1-2\nu}{1-\nu} {\right)}^2$ and the same temperature dependence as the classical relation $\frac{1}{\tau_{dis}({\mathbf r})} \propto \frac{N_{dis}}{T}{\left(}\frac{1-2\nu}{1-\nu} {\right)}^2$, but has a stronger capability to compute the position- and energy-dependent relaxation time. Assuming the average dislocation-electron distance $\bar{r} = \sqrt{1/ N_{dis}}$, the comparison (normalized at $\unit[10^{10}]{cm^{-2}}$) of the relaxation rate is plotted in Fig. \[Fig3\] and shows very similar trends. To summarize, we have demonstrated a quantized Bosonic excitation along a crystal dislocation, the “dislon”, whose excitation spectra are obtained in closed form in an isotropic medium. Such a framework allows the study of classical electron static-dislocation scattering at a full dynamical many-body quantum-mechanical level, and is expected to greatly facilitate the study of the effects of isolated dislocations on the electrical properties of materials because of the ease with which the effects of an isolated dislon can be incorporated into existing theories without loss of rigor. In fact, with a fully quantized dislocation, it can be shown that a decades-long debate on the nature of the dislocation-phonon interaction (whether it is static strain scattering or dynamic fluttering-dislocation scattering) shares the same origin as phonon renormalization [@quasi_phonon]. What’s more, since the dislon is a type of Bosonic excitation, the dislon may also couple two electrons to form a Cooper pair, becoming an extra contributor to superconductivity besides the phonon.
This may seem counterintuitive, since dislocations as defects would only shorten the electron mean free path and weaken superconducting phenomena. Yet early experimental evidence did show a superconducting transition temperature $T_c$ that depends on the sample annealing temperature [@25Hauser1961Annealing], whereby different samples have identical stoichiometry but different dislocation densities, as well as a slight increase of $T_c$ under plastic deformation in another experiment [@26Tc], indicating a more profound role that dislocations may play in superconductivity; this phenomenon is now under more detailed investigation. M.L. thanks Prof. Hong Liu for helpful discussions, and Jiawei Zhou, Shengxi Huang, and Maria Luckyanova for suggestions. M.L., M.S.D. and G.C. gratefully acknowledge support by S$^{3}$TEC, an EFRC funded by DOE BES under Award No. DE-SC0001299/DE-FG02-09ER46577.
--- abstract: 'The quantum ellipsoid of a two qubit state is the set of Bloch vectors that Bob can collapse Alice’s qubit to when he implements all possible measurements on his qubit. We provide an elementary construction of the ellipsoid for arbitrary states, and explain how this geometric representation can be made faithful. The representation provides a range of new results, and uncovers new features, such as the existence of ‘incomplete steering’ in separable states. We show that entanglement can be analysed in terms of three geometric features of the ellipsoid, and prove that a state is separable if and only if it obeys a ‘nested tetrahedron’ condition. We provide a volume formula for the ellipsoid in terms of the state density matrix, which identifies exactly when steering is fully three dimensional, and identify this as a new feature of correlations, intermediate between discord and entanglement.' author: - Sania Jevtic - Matthew Pusey - David Jennings - Terry Rudolph bibliography: - 'SteerEll\_bib\_SJ\_041012.bib' title: The Quantum Steering Ellipsoid --- The Bloch sphere provides a remarkably simple representation for the state space of the most primitive quantum unit - the single qubit system - and results in geometric intuitions that are invaluable in countless fundamental information-processing scenarios. Unfortunately, such a direct three-dimensional depiction of the state space is impossible for any quantum system of larger dimension. While the qubit is the primitive unit of quantum information, the two qubit system constitutes the primitive unit for the theory of bipartite quantum correlations. However, the two qubit state space is already 15-dimensional and possesses a surprising amount of structure and complexity. As such, it is a highly non-trivial challenge to both faithfully represent its states and to acquire natural intuitions for their properties [@Vis2Qubits; @Tstates; @Geometry]. The phenomenon of steering in quantum states was first uncovered by Schrödinger [@Schrodinger] (and subsequently rediscovered by others [@Gisin; @HJW; @Rob-Terry]), who realised that measurements on B on the pure entangled state $|\psi\>_{AB}$ could be used to “steer” the state at $A$ into all different convex decompositions of the reduced state $\rho_A$; in particular for rank-one POVM measurements at $B$ we have that $A$ is steered into ensembles of pure states. More significantly, given any ensemble decomposition of $\rho_A$ into either pure or mixed states there always exists a POVM measurement at $B$ that generates that ensemble. In this regard we say that the steering at $A$ for the pure state $|\psi\>_{AB}$ is *complete* within the Bloch sphere of $A$. More generally, for a 2-qubit mixed state $\rho_{AB}$ it is known [@FrankPhD] that the convex set of states that $A$ can be steered to is instead an ellipsoid $\E_A$ that contains $\rho_A$, as in figure \[canonical\]. In this Letter we show that all correlation features of a two qubit state are encoded in its steering ellipsoid and local Bloch vectors, and argue that this ellipsoid representation for two qubits provides the natural generalization of the Bloch sphere picture, in that it gives a simple geometric representation of an arbitrary two qubit state $\rho$ in three dimensions, which makes the key properties of the state manifest in simple geometric terms. By adopting the ellipsoid representation we are lead to a range of novel results and fresh insights into quantum correlations for both entangled and separable states. 
Firstly, we provide an alternative construction of the steering ellipsoid $\E_A$ to the one in [@FrankFilter], which applies even when $\E_A$ is degenerate. We then provide a reconstruction of a state $\rho_{AB}$ from its geometric data to explore the faithfulness of the representation. This analysis reveals a new phenomenon of incomplete steering for certain separable quantum states, in which some decompositions of $\rho_A$ within the steering ellipsoid $\E_A$ are inaccessible. ![**Ellipsoid representation of a two-qubit state.** For any two-qubit state $\rho_{AB}$, the set of states to which Bob can steer Alice forms an ellipsoid $\E_A$ in Alice’s Bloch sphere, containing her Bloch vector $\mathbf{a}$. The inclusion of Bob’s Bloch vector $\mathbf{b}$ determines $\rho_{AB}$ up to a choice of basis for Bob, which can be fixed by indicating the orientation of $\E_A$. []{data-label="canonical"}](canonical.png){width="5cm"} The representation then allows us to decompose entanglement into simple geometric features, and show how it depends only on (a) the spatial orientation of the ellipsoid, (b) its distance from the origin and (c) its size. The representation also leads us to establish the surprising *nested tetrahedron condition*: a state $\rho_{AB}$ is separable if and only if its ellipsoid fits inside a tetrahedron that itself fits inside the Bloch sphere. We then study the minimal number of product states in the ensemble of a separable state, which we show is determined solely by the dimension of $\E_A$. We then develop a useful volume formula that identifies exactly when steering is fully three-dimensional. Finally, the work suggests a new feature of quantum correlations, called *obesity*, which is neither discord nor separability nor entanglement. We observe that quantum discord arises from a combination of both the obesity of the state and the orientation of its ellipsoid. Beyond these new results, we also feel that the method of compactly depicting any two qubit state $\rho_{AB}$ in three dimensions, via a single ellipsoid and two local Bloch vectors, should be of interest to a range of researchers in both the theoretical and experimental quantum sciences. *The Pauli basis.* Let $\sigma_0$ denote the $2\times2$ identity matrix and $\sigma_i$, $i=1,2,3$ be the usual Pauli matrices. Any single-qubit Hermitian operator $E$ can be written $E = \frac12\sum_{\mu=0}^3 X_\mu \sigma_\mu$, where the $X_\mu = \operatorname{tr}(E\sigma_\mu)$ are real. $E$ is positive iff $X_0 \geq 0$ and $X_0^2 \geq \sum_{i=1}^3 X_i^2$. With $X$ viewed as a 4-vector, this identifies the set of positive operators with the usual forward light cone in Minkowski space. In the case of a qubit state, $X_0 = 1$ and the remaining components form the Bloch vector. Similarly, we can write a two-qubit state as $\rho=\frac{1}{4}\sum_{\mu ,\nu =0}^{3}\Theta _{\mu \nu }\sigma _{\mu }\otimes \sigma _{\nu }$. The components of the 4 $\times$ 4 real matrix $\Theta$ are $\Theta_{\mu\nu} = \operatorname{tr}(\rho \sigma_\mu \otimes \sigma_\nu)$. As a block matrix we have $\Theta= \begin{pmatrix} 1 & \boldsymbol{b}^T \\ \boldsymbol{a} & T \end{pmatrix}$, where $\boldsymbol{a},\boldsymbol{b}$ are the Bloch vectors of the reduced states $\rho_A$ and $\rho_B$ of $\rho$, and $T$ is a 3 $\times$ 3 matrix encoding correlations [@Tstates].
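A minimal sketch of this Pauli-basis bookkeeping (illustrative only): it builds $\Theta$ from a density matrix via $\Theta_{\mu\nu} = \operatorname{tr}(\rho\,\sigma_\mu\otimes\sigma_\nu)$ and reads off $\boldsymbol{a}$, $\boldsymbol{b}$ and $T$ from the block form; the Werner-like test state is an assumed example.

```python
# Build the 4x4 real matrix Theta with Theta_{mu,nu} = tr(rho sigma_mu (x) sigma_nu)
# and read off Alice's Bloch vector a, Bob's b and the correlation matrix T.

import numpy as np

PAULI = [np.eye(2),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def theta_matrix(rho):
    """Return Theta and its blocks (a, b, T) for a two-qubit density matrix."""
    Theta = np.array([[np.real(np.trace(rho @ np.kron(PAULI[m], PAULI[n])))
                       for n in range(4)] for m in range(4)])
    a, b, T = Theta[1:, 0], Theta[0, 1:], Theta[1:, 1:]
    return Theta, a, b, T

# Example (assumed): a Werner-like mixture of a Bell state with white noise.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = 0.7 * np.outer(psi, psi) + 0.3 * np.eye(4) / 4
Theta, a, b, T = theta_matrix(rho)
print("a =", np.round(a, 3), " b =", np.round(b, 3))
print("T =\n", np.round(T, 3))
```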
Now if Bob does a measurement and obtains a POVM outcome $E$, then Bob steers Alice to the state proportional to $\operatorname{tr}_B(\rho (\sigma_0 \otimes E))$, with the latter being given by $\frac12 \Theta X$ in the Pauli basis. The matrix $\Theta$ transforms, up to a normalization, under SLOCC operations $\rho' = S_A\otimes S_B \rho (S_A \otimes S_B)^\dag$, where $S_A, S_B$ are invertible complex matrices, as $\Theta'=\Lambda_A \Theta \Lambda^T_B$ [@FrankFilter] where $\Lambda_A, \Lambda_B$ are proper orthochronous Lorentz transformations (see the appendices). In the case of a SLOCC operation affecting only Bob ($\Theta' = \Theta\Lambda_B$), the set of states Bob can steer Alice to is unaffected: $X$ is in the forward light cone if and only if $X' = \Lambda_B X$ is, and $\Theta' X = \Theta X'$. *Construction of the quantum steering ellipsoid for a given state $\rho$*. Steering ellipsoids have been studied for the representation of SLOCC transformations [@FrankPhD], and more recently in relation to quantum discord [@Shi1]. The steering ellipsoid is easiest to understand for states with $\boldsymbol{b}=\boldsymbol{0}$. Suppose that Bob projects onto some pure state $X = \begin{pmatrix}1 \\ \boldsymbol{v}\end{pmatrix}$ with $v = 1$. Then $$Y = \frac12 \Theta X = \frac12 \begin{pmatrix}1 & \boldsymbol{0}^T \\ \boldsymbol{a}&T\end{pmatrix}\begin{pmatrix}1 \\ \boldsymbol{v}\end{pmatrix} = \frac12 \begin{pmatrix} 1 \\ \boldsymbol{a} + T\boldsymbol{v}\end{pmatrix},$$ showing that Bob will obtain that outcome with probability $1/2$ and Alice’s Bloch vector will then be $\boldsymbol{a} + T\boldsymbol{v}$. The set of all states Alice can end up with is simply the unit sphere of possible $\boldsymbol{v}$, shrunk and rotated by $T$ and translated by $\boldsymbol{a}$, i.e. an ellipsoid centred at $\boldsymbol{a}$ with orientation and semiaxes given by the eigenvectors and eigenvalues of $T T^T$. The dimension of the ellipsoid is $\operatorname{rank}(T) = \operatorname{rank}(\Theta) - 1$. Points inside the ellipsoid can be reached via convex combinations of projective measurements, and conversely any POVM element for Bob can be decomposed into a mixture of projections, thus giving a point within the ellipsoid. Now consider a general state with $\boldsymbol{b}\neq\boldsymbol{0}$. If $b=1$ then $\rho$ is a product state, and so the steering ellipsoid is a single point. Hence assume that $b < 1$. Then the SLOCC operator $\I\otimes\rho_B^{-\frac{1}{2}}$ that corresponds to a Lorentz boost $\Lambda_B$ by a ‘velocity’ $\boldsymbol{b}$, transforms $\rho_B$ to the maximally mixed state. We refer to this filtered state $\widetilde{\rho}$ as the *canonical state* on the SLOCC orbit $\mathcal{S}(\rho)$ (see appendix). As already noted, SLOCC operations on Bob do not affect Alice’s steering ellipsoid. Therefore we can find the parameters of an arbitrary state’s steering ellipsoid by boosting $\Theta$ by $\Lambda_B$ and then reading off the ellipsoid parameters. This gives (see appendix) a steering ellipsoid centred at $\boldsymbol{c}_A = \frac{\boldsymbol{a}-T\boldsymbol{b}}{1-b^{2}}$, and orientation and semiaxes lengths $s_i = \sqrt{q_i}$ given by the eigenvectors and eigenvalues $q_i$ of the ellipsoid matrix $$\label{QA} Q_A =\frac{1}{1-b^2}\left( T-\boldsymbol{a}\boldsymbol{b}^{T}\right) \left( \I+\frac{\boldsymbol{b}\boldsymbol{b}^{T}}{ 1-b^{2}}\right) \left( T^{T}-\boldsymbol{b}\boldsymbol{a}^{T}\right).$$ This together with $\boldsymbol{c}_A$, specify the ellipsoid $\E_A$. 
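A minimal sketch of this construction in code (illustrative only): given $(\boldsymbol{a}, \boldsymbol{b}, T)$ it returns the centre $\boldsymbol{c}_A$, the semiaxes $s_i=\sqrt{q_i}$ and their directions from the eigendecomposition of $Q_A$ in Eq. (\[QA\]); the example values reuse the Werner-like state of the previous sketch.

```python
# Centre and semiaxes of Alice's steering ellipsoid from (a, b, T), Eq. (QA).

import numpy as np

def steering_ellipsoid(a, b, T):
    """Return centre c_A, semiaxes s_i and axis directions of E_A."""
    b2 = float(b @ b)
    if b2 >= 1.0:                       # product state: the ellipsoid is a point
        return a.copy(), np.zeros(3), np.eye(3)
    c_A = (a - T @ b) / (1 - b2)
    Q_A = (T - np.outer(a, b)) @ (np.eye(3) + np.outer(b, b) / (1 - b2)) \
          @ (T.T - np.outer(b, a)) / (1 - b2)
    evals, evecs = np.linalg.eigh(Q_A)  # Q_A is symmetric positive semidefinite
    semiaxes = np.sqrt(np.clip(evals, 0, None))
    return c_A, semiaxes, evecs

# Werner-like example: a = b = 0, so c_A = 0 and E_A is a sphere of radius 0.7.
a = np.zeros(3); b = np.zeros(3); T = np.diag([0.7, -0.7, 0.7])
c_A, s, axes = steering_ellipsoid(a, b, T)
print("centre:", c_A, " semiaxes:", np.round(s, 3))
```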
To obtain the ellipsoid at B, $\mathcal{E}_B$, the roles of A and B are reversed, which amounts to a transposition of the matrix $\Theta$. In terms of its components, this reads as $\boldsymbol{b}\rightarrow\boldsymbol{a}, \boldsymbol{a}\rightarrow\boldsymbol{b}, T \rightarrow T^T$, and thus $\E_A$ and $\E_B$ always have the same dimensionality, $\operatorname{rank}(\Theta)-1$. This completes the construction of the geometric data $(\E_A, \boldsymbol{a}, \boldsymbol{b})$ for a given state $\rho$. We next describe the reverse direction, obtaining $\rho_{AB}$ from an ellipsoid $\E_A$ and the vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ [@FrankPhD]. *Faithfulness of the representation: the reconstruction of $\rho$ from geometric data.* Since we are given $\boldsymbol{a}$ and $\boldsymbol{b}$, all that remains is to obtain the correlation matrix $T$ as a function of $(Q_A, \boldsymbol{c}_A, \boldsymbol{a}, \boldsymbol{b})$. We find that the components of the matrix are given by $$\begin{aligned} T_{ij} = (c_A)_i b_j + \sum_{k=1}^3 (\sqrt{Q_A}M)_{ik} \operatorname{tr}(\sqrt{\rho_B} \sigma_k \sqrt{\rho_B} \sigma_j)\end{aligned}$$ where $M$ is an orthogonal matrix, obtained from $M \boldsymbol{b} = (\sqrt{Q_A})^{-1}(\boldsymbol{a} - \boldsymbol{c}_A)$. This specifies $M$ up to an orthogonal matrix $O_B \in \mathrm{O}(3)$ satisfying $O_B \boldsymbol{b} = \boldsymbol{b}$, which corresponds to the choice of basis at $B$. This choice of basis can be simply encoded, for example, by providing two contour lines on $\E_A$. The derivation is provided in the appendices. *A theorem on “complete” and “incomplete” steering.* The steering ellipsoid specifies for which states there is at least one measurement outcome for Bob that steers Alice to it. A more subtle question is for which decompositions of Alice’s reduced state $\rho_A$ there is a measurement for Bob that steers to the decomposition. Clearly a necessary condition is that all of the states in the decomposition must be in $\mathcal{E}_A$; surprisingly, however, it turns out that this is not sufficient. The situation is captured by the following result. Consider some non-product two-qubit state $\Theta$ with ellipsoids $\mathcal{E}_A$ and $\mathcal{E}_B$. The following are equivalent: 1. (*Complete steering*) For any convex decomposition of $\boldsymbol{a}$ into states in $\mathcal{E}_A$ or on its surface, there exists a POVM for Bob that steers Alice to it. 2. Alice’s Bloch vector $\boldsymbol{a}$ lies on the surface of $\mathcal{E}_A$ scaled down by $b$. 3. The affine span of $\mathcal{E}_B$ contains the maximally mixed state. The proof is in the appendices. In particular, these conditions hold for all non-degenerate ellipsoids (which includes all entangled states) as well as all states where $\boldsymbol{b} = \boldsymbol{0}$. An example of a state where the above conditions fail is the state $\rho=\frac12\left( {|00\rangle}{\langle 00|} + {|1+\rangle}{\langle 1+|} \right)$. *The three geometric contributions to entanglement.* The Peres-Horodecki criterion [@PeresPT; @HorodeckiPT] asserts that a two-qubit state $\rho_e$ is entangled if and only if $\rho_e^{T_B}$ has a negative eigenvalue. It was shown in [@FrankFilter] that at most one eigenvalue of $\rho_e^{T_B}$ can be negative; furthermore, in [@Sanpera] it is demonstrated that $\rho_e^{T_B}$ is full rank for all entangled states. It follows that $\det \rho_e^{T_B} < 0$ is a necessary and sufficient condition for entanglement. [|M|M|M|M|]{} Dim.
& Type & Alice’s ellipsoid & Bob’s ellipsoid\ 3 (Obese) & Entangled & ![image](entangled-A.png){width="1.2in"} & ![image](entangled-B.png){width="1.2in"}\ 3 (Obese) & Separable & ![image](separable-A.png){width="1.2in"} & ![image](separable-B.png){width="1.2in"}\ 2 (Pancake) & Incomplete & ![image](pancake-A.png){width="1.2in"} & ![image](pancake-B.png){width="1.2in"}\ 2 (Pancake) & Complete & ![image](nc-pancake-A.png){width="1.2in"} & ![image](nc-pancake-B.png){width="1.2in"}\ 1 (Needle) & Incomplete & ![image](needle-A.png){width="1.2in"} & ![image](needle-B.png){width="1.2in"} Suppose $\rho$ is entangled; then any state in its SLOCC orbit $\mathcal{S}(\rho)$ is also entangled [@FrankFilter] and, in particular, so is the canonical state $\tilde{\rho} \in \mathcal{S}(\rho)$. Therefore, we can use $\det(\rho^{T_B})<0 \Leftrightarrow \det(\tilde{\rho}^{T_B})<0$ to show that a quantum steering ellipsoid corresponds to an entangled state if and only if $$\label{DetEntCond} c^4+c^2(1-\operatorname{tr}(Q) + \hat{\boldsymbol{n}}^T Q \hat{\boldsymbol{n}}) + h(Q) <0$$ where $\boldsymbol{c} = c \hat{\boldsymbol{n}}$ and $ h(Q) = 1- 8 \det \sqrt{Q} +2 \operatorname{tr}(Q^2) -(\operatorname{tr}(Q))^2 - 2\operatorname{tr}( Q )$. We have dropped the $A,B$ labels for the centre $\boldsymbol{c}$ and $Q$ because entanglement is a “symmetric” relation. This equation is manifestly invariant under global rotations of the ellipsoid, corresponding to local unitaries on the quantum state. It elucidates that entanglement correlations between the qubits manifest themselves in three geometric properties: (1) the distance of $\E_A$ from the origin, (2) the dimensions of $\E_A$, given by the singular values of $Q$, and (3) the “skew” of $\E_A$, captured by the term $\hat{\boldsymbol{n}}^T Q \hat{\boldsymbol{n}}$, which depends on the alignment of the ellipsoid relative to the radial direction. *The nested tetrahedron condition*. The condition for entanglement given by equation (\[DetEntCond\]) provides a compact algebraic condition for non-separability and uncovers contributions from different geometric aspects. However, the steering setting allows us to go further, and capture the distinction between separable and non-separable states in a remarkably elegant and intuitive way. The result is a single, geometric criterion for separability: > *A two qubit state $\rho$ is separable if and only if its steering ellipsoid $\E_A$ fits inside a tetrahedron that fits inside the Bloch sphere.* To prove necessity, suppose Alice and Bob share a separable state $\rho = \sum_{i=1}^n p_i \alpha_i \otimes \beta_i$. Since we can always take $n \leq 4$ [@PhysRevLett.80.2245], the Bloch vectors of the $\alpha_i$ define a (possibly degenerate) tetrahedron $\mathcal{T}$ within Alice’s Bloch sphere. If Bob measures an outcome $E$ then Alice is collapsed to $\sum_{i=1}^n \frac{p_i \operatorname{tr}(E \alpha_i)}{\operatorname{tr}(E \rho_B)} \alpha_i$. Hence her new Bloch vector will be a convex combination of the Bloch vectors for the $\alpha_i$; in other words, her steering ellipsoid is contained in $\mathcal{T}$. We prove in the appendix that the non-trivial converse holds: any ellipsoid that fits inside a tetrahedron that itself fits inside the Bloch sphere must arise from a separable state, and thus the nested tetrahedron condition is both necessary and sufficient for separability of the state. Building further on this geometric understanding, we can prove the following result.
Let $\rho$ be separable and $n$ minimal such that $\rho = \sum_{i=1}^n p_i \alpha_i \otimes \beta_i$. Then $n = \operatorname{rank}(\Theta)$. Proof: since the $\Theta$ matrix for a product state has rank 1, it is clear that $n \geq \operatorname{rank}(\Theta)$. Since any separable state can be written using 4 (pure) product states [@PhysRevLett.80.2245] we have $n \leq 4$. If $\operatorname{rank}(\Theta) = 1$ then $\rho$ is a product state and so $n=1$. If $\operatorname{rank}(\Theta) = 2$ then $\E_A$ is a line segment and we can form a decomposition of $\rho$ using the endpoints of its steering ellipsoid, giving $n=2$. The case $\operatorname{rank}(\Theta) = 3$ (where $\E_A$ is an ellipse) is non-trivial, but is solved in the appendix by using the fact that any ellipse inside a tetrahedron inside the unit sphere also fits inside a triangle inside the unit sphere [@Mihai]. *Volume of the ellipsoid.* We can now arrive at a compact and useful expression for the volume of the steering ellipsoids, which allows a simple inference as to the type of steering a given state $\rho$ provides. The volume of any ellipsoid is proportional to the product of its semiaxes, $V=\frac{4\pi}{3}s_1s_2s_3$. Therefore $\mathcal{E}_A$ has volume $V_A = \frac{4\pi}{3}|\sqrt{\det Q_A}|$, which may be rewritten as $V_A=\frac{4\pi}{3}\frac{\left\vert\mathrm{det}\Theta\right\vert}{(1-b^2)^2}$. However, it turns out (see appendix) that $|\mathrm{det} \Theta|=16 |\mathrm{det} \rho - \mathrm{det} \rho^{T_B}|$, and so we obtain that $$\label{VolumeInRho} V_A = \frac{64\pi }{3}\frac{\left\vert \det \rho -\det \rho^{T_B} \right\vert }{\left( 1-b^{2}\right) ^{2}}.$$ The $\mathcal{E}_B$ volume can be calculated from $V_A$ via the simple relation $V_B = \frac{(1-b^2)^2}{(1-a^2)^2}V_A$, and thus if one side has non-zero volume then so too must the other. This volume is a witness of entanglement, in the sense that any state with $V_A >V_{\star}= 4\pi /81$ must be entangled. This can be seen either from the nested tetrahedron theorem in the earlier section or from equation (\[DetEntCond\]). With the former one simply constructs the largest tetrahedron possible in the Bloch sphere, while for the latter one observes that $c$ must vanish, and then maximizes subject to $h(Q)=0$. In both cases one obtains $V_\star$, namely the ellipsoid volume of the Werner state on the separable-entangled boundary. *Quantum discord and the ellipsoid.* Quantum discord has received much attention as a measure of the quantumness of correlations (see [@Discord] for details), in which zero discord for one party roughly corresponds to that party possessing a non-disturbing projective measurement. In the appendix, we show that $\rho$ has zero discord for $A$ if and only if $\E_A$ is a segment of a diameter, while $\rho$ has zero discord for $B$ if and only if $\E_A$ is one-dimensional and $b = \frac{2|\boldsymbol{c}_A - \boldsymbol{a}|}{l_A}$, where $l_A$ is the length of $\E_A$. *A new perspective on quantum correlations: obesity.* Consider a game in which Bob must convince Alice that the qubits they each hold possess more-than-classical correlations: he succeeds in his task if he can steer Alice to 3 states with linearly independent Bloch vectors. A state that is a resource for this game is called “obese”: it has an ellipsoid with non-zero volume. Obesity is neither the same as entanglement nor separability nor discord. All entangled states have $\det \rho^{T_B} < 0$ and so it follows from (\[VolumeInRho\]) that all entangled states have non-zero volume and hence are obese.
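A hedged numerical sketch of Eq. (\[VolumeInRho\]) and of this witness, using assumed Werner-type test states: it reproduces $V_A = 4\pi/3$ for the pure Bell state (the whole Bloch sphere) and $V_A = V_\star = 4\pi/81$ exactly at the separable-entangled boundary $p=1/3$.

```python
# Compute V_A from det(rho), det(rho^{T_B}) and Bob's Bloch vector, and
# compare with the witness value V_* = 4*pi/81.  Test states are illustrative.

import numpy as np

def partial_transpose_B(rho):
    """Partial transpose on the second qubit of a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices (A, B, A', B')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def ellipsoid_volume_A(rho):
    """V_A of Alice's steering ellipsoid, Eq. (VolumeInRho)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    b = np.array([np.real(np.trace(rho @ np.kron(np.eye(2), s)))
                  for s in (sx, sy, sz)])
    num = abs(np.linalg.det(rho) - np.linalg.det(partial_transpose_B(rho)))
    return (64 * np.pi / 3) * num / (1 - b @ b) ** 2

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)          # Bell state
for p in (1.0, 1/3, 0.1):                          # Werner-type mixtures
    rho = p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4
    V = ellipsoid_volume_A(rho)
    neg = np.linalg.eigvalsh(partial_transpose_B(rho)).min()
    print(f"p = {p:4.2f}  V_A = {V:.4f}  V_* = {4*np.pi/81:.4f}"
          f"  min eig(rho^T_B) = {neg:+.3f}")
```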
Equation (\[VolumeInRho\]) also implies that the set of separable states divides into obese and non-obese (skinny) states. All zero discord states are skinny, however one can have $\E_A$ being one-dimensional (steering needles) but the state having positive discord due to $\E_A$ not being radially aligned. In this sense, quantum discord can be viewed as arising from a combination of an obesity component and a skew contribution. It turns out that $V_A \propto |\mathrm{det}(T -\boldsymbol{a}\boldsymbol{b}^T)|$, and so an obese state must have $T-\boldsymbol{a}\boldsymbol{b}^T$ being full rank. Since $\boldsymbol{a}\boldsymbol{b}^T$ is rank-one we must have $\mathrm{rank}(T) \geq 2$, however we cannot simply reduce the condition of non-zero obesity to full rank $T$, since there exist states with full rank $T$ that have degenerate steering ellipsoids (ellipses) and so have zero volume, for example $\boldsymbol{a} = \boldsymbol{b} = \left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)^T$ and $T = \operatorname{diag}\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$. However, if we replace this $T$ with $T = \operatorname{diag}\left(\frac{1}{3},\frac{1}{3},0\right)$ we obtain an obese state with $\mathrm{rank}(T)=2$. To contrast entanglement, discord and obesity, we can consider a family of canonical states $\rho(\theta) = \frac{1}{4}\left(\I + \frac{1}{2}\sigma_z\otimes \I + \sum_{ij} T_{ij}(\theta) \sigma_i \otimes \sigma_j\right)$ in which the skew of the ellipsoid smoothly varies via the elements of $T(\theta)=R_y(\theta) T R^T_y(\theta)$, with $R_y(\theta)$ being a rotation about the $y$ axis by $\theta \in [0,\pi)$ and $T = \operatorname{diag}(-\frac{9}{20},-\frac{3}{10},-\frac{3}{10})$. The ellipsoids have equal volumes and are centred at $\boldsymbol{c}_A = (0,0,\frac{1}{2})^T$. This family of states illustrate an opposing behavior of the discord and concurrence as a function of skew, see Fig. \[DiscConcVol\]. The entanglement favors an orientation in which the longest ellipsoid semiaxis is aligned radially, while discord is maximised when this axis is orthogonal to the radial direction [^1]. *Conclusion.* The quantum steering ellipsoid provides a faithful representation of any two qubit state and a natural geometric classification of states (as in Table 1). It provides clear and intuitive understanding into the usual key aspects of two qubit states, uncovers surprising new features (such as the nested tetrahedron condition, skew & obesity, and incomplete steering) while prompting novel questions, such as: can we use (\[DetEntCond\]) to define a class of “least-classical” separable states for fixed $(\boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c})$? Can we use the nested tetrahedron condition to provide a simple construction for the best separable approximation [@Lewenstein] for a state $\rho$? What is the characterisation of steering ellipsoids for real states? This would be particularly useful experimentally, acting as a new type of tomography for two qubits. ![Discord (solid) and concurrence (dotted) of the states $\rho(\theta)$ as a function of the orientation of the ellipsoid. Entanglement is maximized when the major axis is radial.[]{data-label="DiscConcVol"}](Discord_rotT3){width="3.5in"} We wish to acknowledge Zuzana Gavorova and Peter Lewis for their early contributions to this topic. This work was supported by the EPSRC. DJ is funded by the Royal Commission for the Exhibition of 1851. 
The following appendices contain the details for some of the statements and theorems that are presented in the main article. The steering ellipsoid for a general two qubit state ==================================================== Previous approaches to representing two qubit states have included partitioning the set of all two qubit states into SLOCC (Stochastic Local Operations and Classical Communication) equivalence classes [@Vis2Qubits], which results in a three dimensional representation of a state $\rho$ through its SLOCC orbit, defined as $$\mathcal{S}(\rho):=\{ \frac{ S_A \otimes S_B \rho( S_A \otimes S_B)^{\dag} }{ \operatorname{tr}( S_A \otimes S_B \rho ( S_A \otimes S_B)^{\dag}) } : \, S_A,S_B \in \mathrm{GL}(2,\mathbb{C}) \}\nonumber$$ with $\mathrm{GL}(2,\mathbb{C})$ the group of invertible, complex 2 $\times$ 2 matrices. However, replacing $\rho$ with its SLOCC orbit is far from faithful, and amounts to a highly coarse-grained representation of the state that erases much of its detail. Another approach is to start with a Pauli basis expansion of $\rho$, which can be converted, via a state-dependent choice of local unitaries on both qubits, into a representation involving three spatial vectors [@HorodeckiTstate]. Again, this is still not a faithful representation, and more importantly it is extremely difficult to develop any intuition for what the vector representing correlations actually means. In this section we provide the details for constructing the steering ellipsoid representation of an arbitrary two qubit quantum state. Consider a two-qubit state $$\Theta = \begin{pmatrix} 1 & {\mathbf}b^T\\ {\mathbf}a & T \end{pmatrix}.$$ The matrix $\Theta$ transforms, up to a normalization, under SLOCC operations $\rho' = S_A\otimes S_B \rho (S_A \otimes S_B)^\dag$ as $\Theta'=\Lambda_A \Theta \Lambda^T_B$ [@FrankFilter] where $\Lambda_A, \Lambda_B$ are proper orthochronous Lorentz transformations given by $\Lambda_{W} = \Upsilon S_W \otimes S_W^* \Upsilon^\dag/|\det S_W|,$ $W \in \{A,B\}$ and $$\label{Seagull} \Upsilon=\frac{1}{\sqrt{2}} \begin{pmatrix}1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & i & -i & 0 \\ 1 & 0 & 0 & -1 \end{pmatrix}.$$ In particular, local unitaries rotate the Pauli basis: $\boldsymbol{a}\rightarrow O_A\boldsymbol{a}$, $\boldsymbol{b}\rightarrow O_B\boldsymbol{b}$, $T\rightarrow O_AT O_B^T$, where $O_A,O_B \in \mathrm{SO}(3)$. If $b = 1$ then $\Theta$ must correspond to a product state and no steering can occur, so assume otherwise.
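The quoted map from SLOCC operators to Lorentz transformations can be checked numerically; the sketch below (illustrative only) draws a random invertible $S_W$, forms $\Lambda_W = \Upsilon S_W \otimes S_W^* \Upsilon^\dag/|\det S_W|$ and verifies that it is real, proper and orthochronous with respect to $\eta = \mathrm{diag}(1,-1,-1,-1)$.

```python
# Numerical check of Lambda_W = Upsilon (S (x) S*) Upsilon^dag / |det S|.

import numpy as np

UPS = np.array([[1, 0, 0, 1],
                [0, 1, 1, 0],
                [0, 1j, -1j, 0],
                [1, 0, 0, -1]]) / np.sqrt(2)

def lorentz_from_slocc(S):
    L = UPS @ np.kron(S, S.conj()) @ UPS.conj().T / abs(np.linalg.det(S))
    assert np.allclose(L.imag, 0), "Lambda should be a real matrix"
    return L.real

rng = np.random.default_rng(0)
S = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # generic SLOCC operator
Lam = lorentz_from_slocc(S)
eta = np.diag([1.0, -1, -1, -1])
print("Lorentz condition:", np.allclose(Lam.T @ eta @ Lam, eta))
print("orthochronous:", Lam[0, 0] >= 1, " proper:", np.isclose(np.linalg.det(Lam), 1))
```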
Define $\gamma = 1/\sqrt{1-b^2}$, and a new state $$\Theta' = \gamma \Theta L_{{\mathbf}b}, \label{thetapdef}$$ where $L_{{\mathbf}b}$ is a Lorentz boost by ${\mathbf}b$ [@lorentz]: $$L_{{\mathbf}b} = \begin{pmatrix} \gamma & -\gamma {\mathbf}b^T \\ -\gamma {\mathbf}b & \I + \frac{\gamma - 1}{b^2}{\mathbf}b {\mathbf}b^T \end{pmatrix}.$$ The $\gamma$ in ensures that $\Theta'$ is normalized: the top-left element is $$\gamma \begin{pmatrix} 1 & {\mathbf}b^T\end{pmatrix} \begin{pmatrix}\gamma \\-\gamma{\mathbf}b\end{pmatrix} = \gamma^2(1-b^2) = 1.$$ The boost means Bob’s reduced state is maximally mixed: the top-right block of $\Theta'$ is $$\begin{gathered} \gamma \begin{pmatrix} 1 & {\mathbf}b^T\end{pmatrix} \begin{pmatrix}-\gamma {\mathbf}b^T \\ \I + \frac{\gamma - 1}{b^2}{\mathbf}b {\mathbf}b^T \end{pmatrix} = \\\gamma(-\gamma {\mathbf}b^T + {\mathbf}b^T + (\gamma - 1){\mathbf}b^T) = {\mathbf}0^T.\end{gathered}$$ By the above two observations we can write $$\Theta' = \begin{pmatrix}1 & {\mathbf}0^T \\ {\mathbf}a' & T'\end{pmatrix}.\label{thetapform}$$ The set of states Bob can steer Alice to will be exactly the same for $\Theta$ and $\Theta '$: $Y = \frac12 \Theta X \iff Y = \frac12 \Theta' X'$ where $X' = L_{- {\mathbf}b} X/\gamma$ and $X$ corresponds to a positive operator iff $X'$ does because $L_{- {\mathbf}b}/\gamma$ preserves the forward light cone. What is that set of states? Writing $X = \begin{pmatrix}t \\ {\mathbf}x\end{pmatrix}$ we see that without loss of generality we can take $t=1$ since the effect of multiplying $X$ by a positive number is undone when normalizing $Y$. Hence the positivity condition becomes $x \leq 1$. If $x<1$ then we could write $X$ as a convex combination of ones with $x=1$, so we restrict attention to the latter case. So any state ${\mathbf}y$ that Alice can steer Bob to is a convex combination of states of the form ${\mathbf}y = {\mathbf}a' + T'{\mathbf}x$ with $x = 1$. But this defines a linear image of the unit sphere displaced by ${\mathbf}a'$, or in other words an ellipsoid centred at ${\mathbf}a'$. Consider the singular value decomposition $T' = O_1 D O_2$. $O_2$ simply rotates the unit sphere and so can be ignored. $D$ stretches the sphere and thus gives the lengths of the semi-axes of the resulting ellipsoid. $O_1$ rotates the semi-axes. We have $T' T'^{T} = O_1 D^2 O_1^T$ and so the lengths of the semi-axes can also be found by square rooting the eigenvalues of $T' T'^T$ whilst their directions can be found from its eigenvectors. 
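The filtering step can likewise be checked numerically. The sketch below (illustrative only) boosts an arbitrary assumed $\Theta$ by $L_{\mathbf b}$, confirms that Bob’s Bloch vector is removed, and confirms that $T'T'^T$ coincides with the matrix $Q_A$ quoted in the main text; the identities checked are pure linear algebra, so the chosen $(\boldsymbol{a}, \boldsymbol{b}, T)$ need not correspond to a physical state for the check itself.

```python
# Boost Theta by L_b and verify b' = 0 and T' T'^T = Q_A.

import numpy as np

def boost(bvec):
    """Lorentz boost L_b in the Pauli 4-vector basis, plus gamma."""
    b2 = bvec @ bvec
    g = 1.0 / np.sqrt(1 - b2)
    L = np.eye(4)
    L[0, 0] = g
    L[0, 1:] = -g * bvec
    L[1:, 0] = -g * bvec
    L[1:, 1:] = np.eye(3) + (g - 1) / b2 * np.outer(bvec, bvec)
    return L, g

a = np.array([0.1, 0.0, 0.2])          # assumed illustrative values
b = np.array([0.0, 0.3, 0.1])
T = np.diag([0.4, -0.3, 0.2])
Theta = np.block([[np.array([[1.0]]), b[None, :]], [a[:, None], T]])

L_b, gamma = boost(b)
Theta_p = gamma * Theta @ L_b
a_p, b_p, T_p = Theta_p[1:, 0], Theta_p[0, 1:], Theta_p[1:, 1:]
Q_A = (T - np.outer(a, b)) @ (np.eye(3) + np.outer(b, b) / (1 - b @ b)) \
      @ (T.T - np.outer(b, a)) / (1 - b @ b)
print("b' = 0 after filtering:", np.allclose(b_p, 0))
print("T' T'^T == Q_A:", np.allclose(T_p @ T_p.T, Q_A))
```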
Combining (\[thetapdef\]) and (\[thetapform\]) we find $${\mathbf}a' = \gamma\begin{pmatrix}{\mathbf}a & T\end{pmatrix}\begin{pmatrix}\gamma \\ -\gamma {\mathbf}b\end{pmatrix} = \gamma^2({\mathbf}a - T{\mathbf}b),$$ $$\begin{aligned} T' &=& \gamma\begin{pmatrix}{\mathbf}a & T\end{pmatrix}\begin{pmatrix}-\gamma {\mathbf}b^T \\ \I + \frac{\gamma - 1}{b^2}{\mathbf}b {\mathbf}b^T \end{pmatrix}\\ &=& \gamma\left(-\gamma {\mathbf}a {\mathbf}b^T + T + \frac{\gamma - 1}{b^2}T{\mathbf}b {\mathbf}b^T\right).\end{aligned}$$ And so, after some algebra, $$\begin{aligned} T' T'^T = \gamma^2(T T^T - {\mathbf}a {\mathbf}a^T) + {\mathbf}a' {\mathbf}a'^T.\end{aligned}$$ Using $-{\mathbf}a {\mathbf}b^T \left(\I + \frac{\gamma-1}{b^2}{\mathbf}b {\mathbf}b^T\right) = -\gamma {\mathbf}a {\mathbf}b^T$ we can also write $$T' = \gamma\left(T - {\mathbf}a {\mathbf}b^T\right)\left(\I + \frac{\gamma-1}{b^2}{\mathbf}b {\mathbf}b^T\right),\label{alternativeT}$$ leading to the form in the main text: $$T' T'^T = \gamma^2(T-{\mathbf}a {\mathbf}b^T)(\I + \gamma^2 {\mathbf}b {\mathbf}b^T)(T^T - {\mathbf}b {\mathbf}a^T) =: Q_A$$ Since $\gamma L_{{\mathbf}b}$ is invertible we have $\operatorname{rank}(\Theta') = \operatorname{rank}(\Theta)$. By counting linearly independent columns in (\[thetapform\]) we have $\operatorname{rank}(\Theta') = \operatorname{rank}(T') + 1$, thus proving that the dimension of the ellipsoid is $\operatorname{rank}(\Theta) - 1$. Proof of the complete steering theorem ====================================== We present an extended formulation of the theorem: *Theorem* Consider some non-product two-qubit state $\Theta = \begin{pmatrix}1 & {\mathbf}b^T\\{\mathbf}a & T\end{pmatrix}$ with ellipsoids ${\mathcal{E}}_A$ and ${\mathcal{E}}_B$. The following are equivalent: 1. Complete steering: for all convex decompositions of ${\mathbf}a$ into states in ${\mathcal{E}}_A$, there exists a POVM for Bob that steers Alice to it.\[fullsteer\] 2. Any surface steering: there exists a convex decomposition of ${\mathbf}a$ into states on the surface of ${\mathcal{E}}_A$ with a POVM for Bob that steers Alice to it.\[surfsteer\] 3. Alice’s Bloch vector lies on the surface of her ellipsoid scaled down by $b$.\[blochcond\] 4. The affine span of ${\mathcal{E}}_B$ contains the maximally mixed state.\[spancond\] 5. $\begin{pmatrix}1 \\ {\mathbf}0\end{pmatrix} \in \mathrm{range}(\Theta^T)$.\[rangecond\] 6. $\begin{pmatrix}1 \\ {\mathbf}0\end{pmatrix} \in \ker(\Theta)^\perp$.\[kercond\] Let $\gamma = 1/\sqrt{1 - b^2}$ and $\Theta' = \gamma \Theta L_{{\mathbf}b}$. Then $\Theta = \Theta' \frac{L_{-{\mathbf}b}}{\gamma}$ and so \[kercond\] is equivalent to $\begin{pmatrix}1 \\ {\mathbf}b\end{pmatrix} \in \ker(\Theta')^\perp$. If we write $\Theta' = \begin{pmatrix}1 & {\mathbf}0^T \\ {\mathbf}a' & T'\end{pmatrix}$ then we see that any vector in $\ker(\Theta')$ is of the form $\begin{pmatrix}0 \\ {\mathbf}x\end{pmatrix}$ with $T'{\mathbf}x = {\mathbf}0$ and so \[kercond\] is equivalent to ${\mathbf}b \in \ker(T')^\perp$. ${\mathcal{E}}_A$ is the set of points that can be written ${\mathbf}a' + T'{\mathbf}x'$ where $x' \le 1$. Therefore the surface of ${\mathcal{E}}_A$ is the set of points that can be written ${\mathbf}a' + T'{\mathbf}x'$ where $x' = 1$ and ${\mathbf}x' \in \ker(T')^\perp$. Hence the scaled down surface is ${\mathbf}a' + T'{\mathbf}x'$ where $x' = b$ and ${\mathbf}x' \in \ker(T')^\perp$. From $\Theta = \Theta' \frac{L_{-{\mathbf}b}}{\gamma}$ we calculate that ${\mathbf}a = {\mathbf}a' + T'{\mathbf}b$. That \[fullsteer\] implies \[surfsteer\] is trivial.
Let ${\mathbf}y_i$ on the surface of ${\mathcal{E}}_A$ form a convex decomposition $\sum_i p_i {\mathbf}y_i = {\mathbf}a$. Since they are on the surface, we have ${\mathbf}y_i = {\mathbf}a' + T'{\mathbf}x_i'$ where $x_i'=1$ and ${\mathbf}x_i' \in \ker(T')^\perp$. Suppose we also have ${\mathbf}y_i = {\mathbf}a' + T' {\mathbf}x_i''$ with $x_i'' \leq 1$. Then ${\mathbf}x_i' - {\mathbf}x_i'' \in \ker(T')$, and the only way that the difference between two vectors can be perpendicular to the longer one is if they are equal. Therefore $2p_i\begin{pmatrix}1 \\ {\mathbf}x'_i\end{pmatrix}$ is the unique element of the forward light cone that $\frac12 \Theta'$ maps to $p_i\begin{pmatrix}1 \\ {\mathbf}y_i\end{pmatrix}$, and therefore $\gamma L_{{\mathbf}b}$ times these form the only possible POVM elements for Bob. But to be a valid POVM, they must sum to the identity $\begin{pmatrix}2 \\ {\mathbf}0\end{pmatrix}$, i.e. $\sum_i 2 p_i \begin{pmatrix}1 \\ {\mathbf}x'_i\end{pmatrix} = \frac{L_{-{\mathbf}b}}{\gamma}\begin{pmatrix}2 \\ {\mathbf}0\end{pmatrix} = 2\begin{pmatrix}1 \\ {\mathbf}b\end{pmatrix}$. Since the ${\mathbf}x_i' \in \ker(T')^\perp$, this implies ${\mathbf}b \in \ker(T')^\perp$ which is equivalent to \[kercond\]. Let ${\mathbf}y_i \in {\mathcal{E}}_A$ form a convex decomposition $\sum_i p_i {\mathbf}y_i = {\mathbf}a$. Since ${\mathbf}y_i \in {\mathcal{E}}_A$ we have ${\mathbf}y_i = {\mathbf}a' + T'{\mathbf}x_i'$ where $x_i' \leq 1$. Write ${\mathbf}x_i' = {\mathbf}k_i + {\mathbf}c_i$ where ${\mathbf}k_i \in \ker(T')$ and ${\mathbf}c_i \in \ker(T')^\perp$. This implies $c_i \leq x_i' \leq 1$ and ${\mathbf}y_i = {\mathbf}a' + T'{\mathbf}c_i$. So $2 p_i\begin{pmatrix}1 \\ {\mathbf}c_i\end{pmatrix}$ are in the forward light cone and map to $p_i\begin{pmatrix}1 \\ {\mathbf}y_i\end{pmatrix}$ under $\frac12 \Theta'$. Hence $\gamma L_{{\mathbf}b}$ times these are in the forward line cone and map to $p_i\begin{pmatrix}1 \\ {\mathbf}y_i\end{pmatrix}$ under $\frac12 \Theta$. Since $\sum_i p_i {\mathbf}y_i = {\mathbf}a = {\mathbf}a' + T'{\mathbf}b$ we have $T'\sum_i p_i {\mathbf}c_i = T'{\mathbf}b$. By construction ${\mathbf}c_i \in \ker(T')^\perp$ and by assumption ${\mathbf}b \in \ker(T')^\perp$, and so this implies $\sum_i p_i{\mathbf}c_i = {\mathbf}b$. Then $\sum_i \gamma L_{{\mathbf}b} 2 p_i \begin{pmatrix}1 \\ {\mathbf}c_i\end{pmatrix} = 2\gamma L_{{\mathbf}b} \begin{pmatrix}1 \\ {\mathbf}b\end{pmatrix} = \begin{pmatrix}2 \\ {\mathbf}0\end{pmatrix}$ so we have a valid POVM. Immediate from form of scaled down ${\mathcal{E}}_A$ and ${\mathbf}a$ in preliminaries. If ${\mathbf}a$ is on the scaled down surface then ${\mathbf}a' + T'{\mathbf}x' = {\mathbf}a' + T' {\mathbf}b$ where $x' = b$ and ${\mathbf}x' \in \ker(T')^\perp$. Hence ${\mathbf}x' - {\mathbf}b \in \ker(T')$. The only way the difference between two vectors of the same length can be perpendicular to one of them is if they are the same, and so \[kercond\] follows. Suppose $\sum q_i {\mathbf}x_i = 0$, $\sum q_i = 1$ with ${\mathbf}x_i \in {\mathcal{E}}_B$. Recalling that swapping parties sends $\Theta \to \Theta^T$ we see that there exists $Y_i$ in the forward light-cone with $\begin{pmatrix}1 \\ {\mathbf}x_i\end{pmatrix} = \frac12 \Theta^T Y_i$. But then $\Theta^T \frac12 \sum_i q_i Y_i = \sum_i q_i \frac12 \Theta^T Y_i = \sum_i q_i \begin{pmatrix}1 \\ {\mathbf}x_i\end{pmatrix} = \begin{pmatrix}1 \\ {\mathbf}0\end{pmatrix}$. Suppose there exists $Y$ with $\frac12 \Theta^T Y = \begin{pmatrix}1 \\ {\mathbf}0\end{pmatrix}$. 
If $Y = \begin{pmatrix}t \\ {\mathbf}y\end{pmatrix}$ is in the forward light cone (i.e. $t \geq y$) then ${\mathcal{E}}_B$ itself contains the maximally mixed state and we are done. Otherwise, notice that $Y_1 = \begin{pmatrix}y - t \\ 0\end{pmatrix}$ and $Y_2 = \begin{pmatrix}y \\ {\mathbf}y\end{pmatrix}$ are in the forward light cone. Writing $\frac12 \Theta^T Y_i$ as $q_i \begin{pmatrix} 1 \\ {\mathbf}x_i\end{pmatrix}$ we have ${\mathbf}x_i \in {\mathcal{E}}_B$. Noting that $Y = \sum_i Y_i$ we have $\sum_i \frac12 \Theta^T Y_i = \begin{pmatrix}1 \\ {\mathbf}0\end{pmatrix}$, in other words $\sum_i q_i = 1$ and $\sum_i q_i {\mathbf}x_i = {\mathbf}0$. $\range(A^T) = \ker(A)^\perp$ is a theorem of linear algebra, which follows straightforwardly from the singular value decomposition. Reconstructing the state from its steering ellipsoid and the Bloch vectors ========================================================================== Alice’s steering ellipsoid $\mathcal{E}_A$ is described by matrix $Q_A$ and centre ${\mathbf}{c}_A$. One quantum state that corresponds to it is the “canonical” state $\tilde{\rho}$ with Bloch vectors $\tilde{{\mathbf}{b}} = {\mathbf}{0}$, $\tilde{{\mathbf}{a}} = {\mathbf}{c}_A$ and T-matrix $\tilde{T} = \sqrt{Q_A}O$, with $O \in \mathrm{O}(3)$ (since $Q_A = \tilde{T}\tilde{T}^T$). Written explicitly it is $$\begin{aligned} \tilde{\rho}=\frac{1}{4}(\I + \boldsymbol{c}_A\cdot\sigma \otimes \I + \sum_{i,j=1}^{3} (\sqrt{Q_A}O)_{ij} \sigma_i \otimes \sigma_j)\end{aligned}$$ The orthogonal matrix $O$ is not fixed. If in addition to $Q_A, \boldsymbol{c}_A$ we are also given the Bloch vectors $\boldsymbol{a}, \boldsymbol{b}$, a corresponding state is $$\begin{aligned} \rho= \I \otimes \sqrt{2\rho_B} \, \tilde{\rho} \,\I \otimes\sqrt{2\rho_B}\end{aligned}$$ where $$\rho_B = \frac{1}{2}(\I + \boldsymbol{b}\cdot\boldsymbol{\sigma})$$ This is because $\sqrt{2\rho_B}$ is a SLOCC operator for $\mathbf{b} \ne \mathbf{0}$ and Alice’s ellipsoid is invariant under such local operations on Bob’s qubit. Note that $$\operatorname{tr}(\rho \I \otimes \boldsymbol{\sigma}) = \boldsymbol{b},$$ where $\boldsymbol{\sigma}$ is a vector of Paulis, as required. However $$\operatorname{tr}(\rho \boldsymbol{\sigma} \otimes \I) = \boldsymbol{c}_A + \sqrt{Q_A}O \boldsymbol{b}$$ and the right hand side of this should be $\boldsymbol{a}$. This constrains the orthogonal matrix $O$ as a solution to $$\label{findO} O \boldsymbol{b} = (\sqrt{Q_A})^{-1}(\boldsymbol{a} - \boldsymbol{c}_A)$$ We can split up the orthogonal matrix into two orthogonal rotations: $O = M B$ where $M\in \mathrm{SO}(3)$ (properly) rotates $\boldsymbol{b}$ to $(\sqrt{Q_A})^{-1}(\boldsymbol{b} - \boldsymbol{c}_A)$ about a unit vector perpendicular to the plane they lie in, and $B \in \mathrm{O}(3)$ rotates about $\boldsymbol{b}$, that is $B \boldsymbol{b} = \boldsymbol{b}$. The equation only determines $M$. Define the unitary matrix $U_B$ as the one corresponding to $B$, it has the property $[ U_B, \rho_B] = 0$. 
Then we may write $$\tilde{\rho} = \I \otimes U_B \chi \I \otimes U^\dag_B$$ where $$\begin{aligned} \chi &=& \I \otimes U^\dag_B \tilde{\rho} \I \otimes U_B\\ &=&\frac{1}{4}(\I + \boldsymbol{c}_A\cdot\sigma \otimes \I + \sum_{i,j=1}^{3} (\sqrt{Q_A}M)_{ij} \sigma_i \otimes \sigma_j)\end{aligned}$$ Thus $$\begin{aligned} \rho &=& \I \otimes \sqrt{2\rho_B}U_B \, \chi \,\I \otimes U^\dag_B \sqrt{2\rho_B}\\ &=& \I \otimes U_B\sqrt{2\rho_B} \, \chi \,\I \otimes \sqrt{2\rho_B}U^\dag_B\end{aligned}$$ since if $[ U_B, \rho_B] = 0$ then $[ U_B, \sqrt{\rho_B}] = 0$. Therefore the state $\rho$ compatible with $Q_A, \boldsymbol{c}_A, \boldsymbol{a}, \boldsymbol{b} $ is $$\begin{aligned} \rho = \frac{1}{4}\left( \I + \boldsymbol{a}.\boldsymbol{\sigma}\otimes \I + \I\otimes \boldsymbol{b}.\boldsymbol{\sigma} + \sum_{i,j=1}^3 T_{ij} \sigma_i \otimes \sigma_j \right)\end{aligned}$$ where $T = R B$ and $$\begin{aligned} R_{ij} &=& \operatorname{tr}(\I \otimes \sqrt{2\rho_B} \, \chi \,\I \otimes \sqrt{2\rho_B} \sigma_i \otimes \sigma_j)\\ &=& (c_A)_i b_j + \sum_{k=1}^3 (\sqrt{Q_A}M)_{ik} \operatorname{tr}(\sqrt{\rho_B} \sigma_k \sqrt{\rho_B} \sigma_j)\end{aligned}$$ and so we can reconstruct the state up to a local unitary $B$ on Bob’s qubit that leaves it invariant. Steering ellipsoids in a tetrahedron in the Bloch sphere correspond to separable states ======================================================================================= We prove that for any ${\mathcal{E}}$ inside a tetrahedron inside the Bloch sphere, there is a separable state with $\rho_B = \frac12\mathbb{I}$ and ${\mathcal{E}}$ as its steering ellipse. We present the proofs for each possible dimension of ${\mathcal{E}}$ separately, although each one is basically a slightly more involved version of the previous one. Note that in the 0 and 1 dimensional cases the requirement to fit inside a tetrahedron is trivially satisfied by any ${\mathcal{E}}$ inside the Bloch sphere. This result suffices to show that any state with an ellipse that fits inside a tetrahedron is separable by the following argument. Suppose $\rho_{AB}$ has an ellipse that fits inside the tetrahedron. If $b = 1$ then $\rho_{AB}$ is a product state and we are done. Otherwise, apply a SLOCC operator to Bob and obtain $\tilde\rho_{AB}$ with ${\mathbf}b = 0$, recalling that SLOCC operators cannot change a state from being entangled to separable. This will leave Alice’s ellipse unchanged whilst moving her reduced state to the centre of her ellipse. Since by the above statement there exists a separable state with the correct ellipse and reduced states, $\tilde\rho_{AB}$ must equal the separable state up to a choice of basis for Bob, and hence must itself be separable. In fact the separable states constructed below use a number of product states equal to the dimension of the ellipse plus one. Since the SLOCC operator and choice of basis for Bob do not affect the number of product states in a decomposition, we furthermore have that $\rho_{AB}$ can be built using that number of product states. 0-dimensional ------------- If the steering ellipsoid is a single point ${\mathbf}r$ then simply take $\rho$ with Bloch vector ${\mathbf}r$ and let $\rho_{AB} = \rho \otimes \frac12 \mathbb{I}$. 1-dimensional ------------- Suppose ${\mathcal{E}}$ is a line segment from ${\mathbf}r_0$ to ${\mathbf}r_1$. Take $\rho_i$ with Bloch vectors ${\mathbf}r_i$ and let $\rho_{AB} = \frac12 \sum_i \rho_i \otimes {|i\rangle}{\langle i|}$. 
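As a consistency check on the 0- and 1-dimensional constructions (illustrative only), the sketch below builds $\rho_{AB} = \frac12 \sum_i \rho_i \otimes {|i\rangle}{\langle i|}$ for two assumed endpoint Bloch vectors and confirms that the resulting steering data describe the segment between them: since ${\mathbf}b = {\mathbf}0$, the centre is the midpoint and the single non-zero semiaxis is half the segment length.

```python
# Build the 1-dimensional separable example and verify its steering data.

import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def qubit(r):
    """Qubit density matrix with Bloch vector r."""
    return 0.5 * (np.eye(2) + r[0] * SX + r[1] * SY + r[2] * SZ)

r0, r1 = np.array([0.6, 0.0, 0.2]), np.array([-0.2, 0.3, 0.5])   # assumed endpoints
kets = [np.array([1, 0]), np.array([0, 1])]
rho = 0.5 * sum(np.kron(qubit(r), np.outer(k, k)) for r, k in zip((r0, r1), kets))

# Pauli-basis data (b = 0 here, so Q_A = T T^T and the centre is a):
a = np.array([np.real(np.trace(rho @ np.kron(s, np.eye(2)))) for s in (SX, SY, SZ)])
T = np.array([[np.real(np.trace(rho @ np.kron(si, sj)))
               for sj in (SX, SY, SZ)] for si in (SX, SY, SZ)])
semiaxes = np.sqrt(np.clip(np.linalg.eigvalsh(T @ T.T), 0, None))
print("centre a =", np.round(a, 3), " (midpoint:", np.round((r0 + r1) / 2, 3), ")")
print("semiaxes =", np.round(semiaxes, 3),
      " half-length of segment =", round(np.linalg.norm(r0 - r1) / 2, 3))
```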
2-dimensional ------------- If an ellipse fits inside a tetrahedron in the Bloch sphere, it also fits inside a triangle in the Bloch sphere [@Mihai]. Therefore, suppose an ellipse ${\mathcal{E}}$ fits within a triangle in the Bloch sphere whose vertices are $\{ {\mathbf}r_0, {\mathbf}r_1, {\mathbf}r_2 \}$. Without loss of generality we can take the ellipse to be tangent to each edge of the triangle, at points $\{ {\mathbf}s_i \}$ where ${\mathbf}s_i$ is on the face opposite to ${\mathbf}r_i$. Denote the centre of the ellipse by ${\mathbf}c$. Clearly there exists unique $p_i \geq 0$ such that $\sum_i p_i {\mathbf}r_i = {\mathbf}c$ and $\sum_i p_i = 1$. By the definition of an ellipse, there is an invertible affine transformation ${\mathcal{A}}$ that maps ${\mathcal{E}}$ to the unit circle in the $(x,z)$-plane, centred at the origin. Let $\rho_i$ have Bloch vectors ${\mathbf}r_i$ and${|\psi_i\rangle}$ be such that the Bloch vector of ${|\psi_i\rangle}{\langle \psi_i|}$ is $-{\mathcal{A}}({\mathbf}s_i)$. We claim that the (manifestly separable) state $$\rho_{AB} = \sum_i p_i \rho_i \otimes {|\psi_i\rangle}{\langle \psi_i|}$$ has $\rho_B = \frac{1}{2}\I$ and that Alice’s steering ellipsoid for this state is ${\mathcal{E}}$. To prove the first part, notice that the Bloch vector of $\rho_B$ is $-\sum_i p_i {\mathcal{A}}({\mathbf}s_i)$. Since ${\mathcal{A}}$ is affine, the unit circle will be tangent to the triangle with vertices $\{{\mathcal{A}}({\mathbf}r_i)\}$ at the points $\{{\mathcal{A}}({\mathbf}s_i)\}$, and $\sum_i p_i {\mathcal{A}}({\mathbf}r_i) = {\mathcal{A}}({\mathbf}c) = {\mathbf}0$. Hence it suffices to prove Suppose the triangle with vertices $\{ {\mathbf}v_i \}$ contains the unit circle centered at the origin, and the circle is tangent to each edge of the triangle at the points $\{ {\mathbf}t_i \}$ (where ${\mathbf}t_i$ is on the edge opposite ${\mathbf}v_i$). Fix $p_i$ by the requirements that $\sum_i p_i {\mathbf}v_i = {\mathbf}0$ and $\sum_i p_i = 1$. Then $\sum_i p_i {\mathbf}t_i = {\mathbf}0$. \[lem1\] We use ${\mathbf}x$ to represent points on or within the tetrahedron using normalized barycentric co-ordinates $(x_0, x_1, x_2)$ where $\sum_i x_i = 1$ and $\vec x = \sum_i x_i {\mathbf}v_i$. Let $A_0$ be the volume of the triangle with vertices $\{{\mathbf}x, {\mathbf}v_1, {\mathbf}v_2\}$, $A_1$ be the area of the triangle with vertices $\{{\mathbf}v_0, {\mathbf}x, {\mathbf}v_2\}$ and similarly for $A_2$. Let $A$ be the area of the original triangle (notice $A = \sum_i A_i$). Then $x_i = A_i / A$. By definition the barycentric co-ordinates of the origin are $(p_0, p_1, p_2)$. Let $L_i$ be the length of the edge opposite ${\mathbf}v_i$, and let $L = \sum_i L_i$. By using that the area of a triangle = $\frac12$ (base) $\times$ (perpendicular height) and noting that by the tangency assumption the relevant triangles have a perpendicular height of 1, we obtain that $p_i = L_i / L$. ![The various quantities used in proving Lemma \[lem1\]. The dashed lines form the three triangles used to show $p_i = L_i/L$, the dotted lines indicate their perpendicular heights (which are equal to the radius of the circle: 1).[]{data-label="lemfig"}](lemfig.pdf) Let $M_0^{(1)} = {\left\lvert{\mathbf}v_0 - {\mathbf}t_1\right\rvert}$, $M_0^{(2)} = {\left\lvert{\mathbf}v_0 - {\mathbf}t_2\right\rvert}$. 
In fact $M_0^{(1)} = M_0^{(2)}$ because they are both the unique length defined by the requirement of being from a fixed point to a point on the circle such that the line between them is tangent to the circle, so we can write this length simply as $M_0$. Define the other two $M_i$ by a similar argument. All this is illustrated in Figure \[lemfig\]. Notice that $$\begin{aligned} L_0 &= M_{1} + M_{2},\\ L_1 &= M_{0} + M_{2},\\ L_2 &= M_{0} + M_{1}.\\\end{aligned}$$ The barycentric co-ordinates of ${\mathbf}t_0, {\mathbf}t_1$ and ${\mathbf}t_2$ can now be calculated as $$(0, M_2, M_1)/L_0,$$ $$(M_2, 0, M_0)/L_1,$$ and $$(M_1, M_0, 0)/L_2,$$ respectively. Using $p_i = L_i / L$ and the fact that barycentric coordinates respect convex combinations the required result is now immediate. Suppose Bob projects his qubit onto ${|\psi\rangle}$ and the orthogonal state. Since $\rho_B = \frac{1}2 \I$ he will obtain each outcome with probability $\frac12$. Therefore if he obtains the ${|\psi\rangle}$ outcome then Alice’s state will be $$\begin{aligned} \rho_{A}({|\psi\rangle}) &:=& \frac{\operatorname{tr}_B(\rho_{AB}(I \otimes {|\psi\rangle}{\langle \psi|}))}{\operatorname{tr}(\rho_B{|\psi\rangle}{\langle \psi|})} \\ &=& \frac{\sum_i p_i \rho_i {\left\lvert{\langle \psi_i | \psi \rangle}\right\rvert}^2}{\frac12} \\ &=& 2\sum_i p_i \rho_i {\left\lvert{\langle \psi_i | \psi \rangle}\right\rvert}^2\end{aligned}$$ Recalling that the Bloch vector of ${|\psi_i\rangle}{\langle \psi_i|}$ is $-{\mathcal{A}}({\mathbf}s_i)$, if ${|\psi\rangle}{\langle \psi|}$ has Bloch vector ${\mathbf}r$ then the Bloch vector of $\rho_A({|\psi\rangle})$ will be $$f({\mathbf}r) := 2 \sum_i p_i {\mathbf}r_i \frac{1 - {\mathbf}r \cdot {\mathcal{A}}({\mathbf}s_i)}2 = \sum_i p_i {\mathbf}r_i \left(1 - {\mathbf}r \cdot {\mathcal{A}}({\mathbf}s_i)\right).$$ Let us extend this expression to all ${\mathbf}r$ to define an affine function $f$. The statement that Alice’s steering ellipsoid is ${\mathcal{E}}$ is equivalent to the statement that ${\mathcal{E}}$ is the image of the unit sphere under $f$. Since all the ${\mathcal{A}}({\mathbf}s_i)$ are in the $(x,z)$-plane, we have $f\left( (0,1,0) \right) = f({\mathbf}0)$, i.e., we can think of $f$ as first projecting onto the $(x,z)$-plane and then applying some affine transformation. The image of the unit sphere under that projection is the unit disc, and so it suffices to check that $\E$ is the image of the unit circle under $f$. Define $g({\mathbf}r) = {\mathcal{A}}(f({\mathbf}r))$. Since ${\mathcal{A}}$ is invertible and maps ${\mathcal{E}}$ to the unit circle it suffices to prove that $g$ is the identity on the $(x,z)$-plane. Since $g$ is the composition of two affine functions it is also affine. By the definition of the $p_i$, $g({\mathbf}0) = {\mathcal{A}}({\mathbf}c) = {\mathbf}0$ so $g$ is in fact linear. Hence it suffices to check that $g({\mathbf}u_j) = {\mathbf}u_j$ for some spanning set of vectors $\{{\mathbf}u_j\}$. Since the triangle cannot be degenerate, its vertex set $\{ {\mathbf}r_j \}$ spans some plane. Since ${\mathcal{A}}$ is invertible, $\{ {\mathcal{A}}({\mathbf}r_j) \}$ must span the $(x,z)$ plane. For $i \neq j$, $\{{\mathbf}0, {\mathcal{A}}({\mathbf}s_i), {\mathcal{A}}({\mathbf}r_j)\}$ form a right-angled triangle, and ${\left\lvert{\mathcal{A}}({\mathbf}s_i)\right\rvert} = 1$. Therefore ${\mathcal{A}}({\mathbf}r_j) \cdot {\mathcal{A}}({\mathbf}s_i) = 1$ whenever $i \neq j$.
But $\sum_i p_i(1 - {\mathbf}r \cdot {\mathcal{A}}({\mathbf}s_i)) = \sum_i p_i - {\mathbf}r\cdot\left(\sum_i p_i {\mathcal{A}}({\mathbf}s_i)\right) = 1 - {\mathbf}r \cdot {\mathbf}0 = 1$ (the penultimate equality is from Lemma \[lem1\]). Hence $p_i(1 - {\mathcal{A}}({\mathbf}r_j)\cdot{\mathcal{A}}({\mathbf}s_i)) = \delta_{ij}$ and we are done. 3-dimensional ------------- Suppose an ellipsoid ${\mathcal{E}}$ fits within a tetrahedron in the Bloch sphere whose vertices are $\{ {\mathbf}r_0, {\mathbf}r_1, {\mathbf}r_2, {\mathbf}r_3 \}$. Without loss of generality we can take the ellipsoid to be tangent to each face of the tetrahedron, at points $\{ {\mathbf}s_i \}$ where ${\mathbf}s_i$ is on the face opposite to ${\mathbf}r_i$. Denote the centre of the ellipsoid by ${\mathbf}c$. Clearly there exist unique $p_i \geq 0$ such that $\sum_i p_i {\mathbf}r_i = {\mathbf}c$ and $\sum_i p_i = 1$. By the definition of an ellipsoid, there is an invertible affine transformation ${\mathcal{A}}$ that maps ${\mathcal{E}}$ to the unit sphere centred at the origin. Let $\rho_i$ have Bloch vectors ${\mathbf}r_i$ and ${|\psi_i\rangle}$ be such that the Bloch vector of ${|\psi_i\rangle}{\langle \psi_i|}$ is $-{\mathcal{A}}({\mathbf}s_i)$. We claim that the (manifestly separable) state $$\rho_{AB} = \sum_i p_i \rho_i \otimes {|\psi_i\rangle}{\langle \psi_i|}$$ has $\rho_B = \frac{1}{2}\I$ and that Alice’s steering ellipsoid for this state is ${\mathcal{E}}$. To prove the first part, notice that the Bloch vector of $\rho_B$ is $-\sum_i p_i {\mathcal{A}}({\mathbf}s_i)$. Since ${\mathcal{A}}$ is affine, the unit sphere will be tangent to the tetrahedron with vertices $\{{\mathcal{A}}({\mathbf}r_i)\}$ at the points $\{{\mathcal{A}}({\mathbf}s_i)\}$, and $\sum_i p_i {\mathcal{A}}({\mathbf}r_i) = {\mathcal{A}}({\mathbf}c) = {\mathbf}0$. Hence it suffices to prove the following 3-dimensional analogue to Lemma \[lem1\]: Suppose the tetrahedron with vertices $\{ {\mathbf}v_i \}$ contains the unit sphere centered at the origin, and the sphere is tangent to each face of the tetrahedron at the points $\{ {\mathbf}t_i \}$ (where ${\mathbf}t_i$ is on the face opposite ${\mathbf}v_i$). Fix $p_i$ by the requirements that $\sum_i p_i {\mathbf}v_i = {\mathbf}0$ and $\sum_i p_i = 1$. Then $\sum_i p_i {\mathbf}t_i = {\mathbf}0$. \[lem2\] We use ${\mathbf}x$ to represent points on or within the tetrahedron using normalized barycentric co-ordinates $(x_0, x_1, x_2, x_3)$ where $\sum_i x_i = 1$ and ${\mathbf}x = \sum_i x_i {\mathbf}v_i$. Let $V_0$ be the volume of the tetrahedron with vertices $\{{\mathbf}x, {\mathbf}v_1, {\mathbf}v_2, {\mathbf}v_3\}$, $V_1$ be the volume of the tetrahedron with vertices $\{{\mathbf}v_0, {\mathbf}x, {\mathbf}v_2, {\mathbf}v_3\}$ and so on. Let $V$ be the volume of the original tetrahedron (notice $V = \sum_i V_i$). Then $x_i = V_i / V$. By definition the barycentric co-ordinates of the origin are $(p_0, p_1, p_2, p_3)$. Let $A_i$ be the area of the face opposite ${\mathbf}v_i$, and let $A = \sum_i A_i$. By using that the volume of a tetrahedron = $\frac13$ (area of base) $\times$ (perpendicular height) and noting that by the tangency assumption the relevant tetrahedra have a perpendicular height of 1, we obtain that $p_i = A_i / A$. Let $A_{23}^{(0)}$ be the area of the triangle with vertices $\{{\mathbf}v_2, {\mathbf}v_3, {\mathbf}t_0\}$. Let $A_{23}^{(1)}$ be the area of the triangle with vertices $\{{\mathbf}v_2, {\mathbf}v_3, {\mathbf}t_1\}$.
Now we have that ${\left\lvert{\mathbf}v_2 - {\mathbf}t_0\right\rvert} = {\left\lvert{\mathbf}v_2 - {\mathbf}t_1\right\rvert}$ because they are both the unique length defined by the requirement of being from a fixed point to a point on the sphere such that the line between them is tangent to the sphere. Similarly ${\left\lvert{\mathbf}v_3 - {\mathbf}t_0\right\rvert} = {\left\lvert{\mathbf}v_3 - {\mathbf}t_1\right\rvert}$. Hence the two triangles are congruent and we can simply write their areas as $A_{23}$. Define the other five $A_{ij}$ by a similar argument. Notice that $$\begin{aligned} A_0 &= A_{12} + A_{13} + A_{23},\\ A_1 &= A_{02} + A_{03} + A_{23},\\ A_2 &= A_{01} + A_{03} + A_{13},\\ A_3 &= A_{01} + A_{02} + A_{12}.\end{aligned}$$ The barycentric co-ordinates of ${\mathbf}t_0, {\mathbf}t_1, {\mathbf}t_2$ and ${\mathbf}t_3$ can now be calculated as $$(0, A_{23}, A_{13}, A_{12})/A_0,$$ $$(A_{23}, 0, A_{03}, A_{02})/A_1,$$ $$(A_{13}, A_{03}, 0, A_{01})/A_2,$$ and $$(A_{12}, A_{02}, A_{01}, 0)/A_3$$ respectively. Using $p_i = A_i / A$ and the fact that barycentric coordinates respect convex combinations the required result is now immediate. As in the 2-dimensional case we find that if Bob projects onto the state with Bloch vector ${\mathbf}r$ then Alice’s Bloch vector is $$f({\mathbf}r) := \sum_i p_i {\mathbf}r_i \left(1 - {\mathbf}r \cdot {\mathcal{A}}({\mathbf}s_i)\right).$$ Let us extend this expression to all ${\mathbf}r$ to define an affine function $f$. The statement that Alice’s steering ellipsoid is ${\mathcal{E}}$ is equivalent to the statement that ${\mathcal{E}}$ is the image of the unit sphere under $f$. Define $g({\mathbf}r) = {\mathcal{A}}(f({\mathbf}r))$. Since ${\mathcal{A}}$ is invertible and maps ${\mathcal{E}}$ to the unit sphere it suffices to prove that $g$ is the identity. Since $g$ is the composition of two affine functions it is also affine. By the definition of the $p_i$, $g({\mathbf}0) = {\mathcal{A}}({\mathbf}c) = {\mathbf}0$ so $g$ is in fact linear. Hence it suffices to check that $g({\mathbf}u_j) = {\mathbf}u_j$ for some spanning set of vectors $\{{\mathbf}u_j\}$. Since the tetrahedron cannot be degenerate, its vertex set $\{ {\mathbf}r_j \}$ must be spanning. Since ${\mathcal{A}}$ is invertible, $\{ {\mathcal{A}}({\mathbf}r_j) \}$ is also spanning. As in the 2-dimensional case, for $i \neq j$, $\{{\mathbf}0, {\mathcal{A}}({\mathbf}s_i), {\mathcal{A}}({\mathbf}r_j)\}$ form a right-angled triangle, and ${\left\lvert{\mathcal{A}}({\mathbf}s_i)\right\rvert} = 1$. Therefore ${\mathcal{A}}({\mathbf}r_j) \cdot {\mathcal{A}}({\mathbf}s_i) = 1$ whenever $i \neq j$. But $\sum_i p_i(1 - {\mathbf}r \cdot {\mathcal{A}}({\mathbf}s_i)) = \sum_i p_i - {\mathbf}r\cdot\left(\sum_i p_i {\mathcal{A}}({\mathbf}s_i)\right) = 1 - {\mathbf}r \cdot {\mathbf}0 = 1$ (the penultimate equality is from Lemma \[lem2\]). Hence $p_i(1 - {\mathcal{A}}({\mathbf}r_j)\cdot{\mathcal{A}}({\mathbf}s_i)) = \delta_{ij}$ and we are done. Discord and steering ellipsoids =============================== Below we outline the conditions for zero discord for Alice in terms of either her or Bob’s ellipsoid.
> *A state has zero discord for Alice iff her ellipsoid is a segment of a diameter.* The “only if” part: A general zero discord state for Alice $\rho = p {|e\rangle}{\langle e|} \otimes \rho_0 + (1-p){|\bar{e}\rangle}{\langle \bar{e}|} \otimes \rho_1$ has $\langle e |\bar{e} \rangle = 0$ and $$\begin{aligned} {\mathbf}{a} &=& t {\mathbf}{e}\\ {\mathbf}{b} &=& {\mathbf}{x} \\ T &=& {\mathbf}{e y}^T\end{aligned}$$ where $t = 2p-1$, ${\mathbf}{e} = {\langle e|} \boldsymbol{\sigma} {|e\rangle}$ and ${\mathbf}{x} = \operatorname{tr}\left[ \left(p \rho_0 + (1-p)\rho_1\right) \boldsymbol{\sigma} \right]$, ${\mathbf}{y} = \operatorname{tr}\left[ \left(p \rho_0 - (1-p)\rho_1\right) \boldsymbol{\sigma} \right]$ [@VlatkoDiscord]. Alice’s steering ellipsoid $\mathcal{E}_A$ has centre ${\mathbf}{c}_A = \left(\frac{t- {\mathbf}{x}\cdot{\mathbf}{y}}{1-x^2}\right){\mathbf}{e}$ and matrix $Q_A = s_A^2 {\mathbf}{e} {\mathbf}{e}^T$ with $$s_A^2 = \frac{1}{1-x^2} \left[ \left( {\mathbf}{y}-t{\mathbf}{x} \right)^T \left( \I + \frac{{\mathbf}{x}{\mathbf}{x}^T}{1-x^2}\right) \left( {\mathbf}{y}-t{\mathbf}{x} \right)\right]$$ So $\mathcal{E}_A$ is a segment of a diameter. The “if” part: suppose we are given $\mathcal{E}_A$, a segment of a diameter. Denote the states at the endpoints of the ellipsoid by $\rho_0$ and $\rho_1$. Alice’s state can always be decomposed as $\rho_A = q \rho_0 + (1-q) \rho_1$. However, since the Bloch vectors of $\rho_A, \rho_0, \rho_1$ are collinear, they will eigendecompose into the same pair of orthogonal states, call them ${|\psi\rangle},{|\bar{\psi}\rangle}$. Writing $\rho_i = p_i {|\psi\rangle} {\langle \psi|} + (1-p_i){|\bar{\psi}\rangle} {\langle \bar{\psi}|}$ for $i=0,1$, we have $\rho_A = p {|\psi\rangle} {\langle \psi|} + (1-p) {|\bar{\psi}\rangle} {\langle \bar{\psi}|}$ with $p = qp_0 + (1-q)p_1$. Then the joint state $$\begin{aligned} \rho = p {|\psi\rangle} {\langle \psi|}\otimes \beta_0 + (1-p) {|\bar{\psi}\rangle} {\langle \bar{\psi}|}\otimes\beta_1\end{aligned}$$ is a zero discord state for Alice with the correct $\mathcal{E}_A$ and $\rho_A$ for any mixed states $\beta_0,\beta_1$ on Bob’s side. > *There is zero discord for Alice iff Bob’s ellipsoid is a line segment and the length of Alice’s Bloch vector is equal to the distance from the centre of Bob’s ellipsoid to his Bloch vector divided by the radius of his ellipsoid.* The “only if” part can easily be checked: it requires $a = \frac{|\boldsymbol{c}_B-\boldsymbol{b}|}{s_B}$. Since, after some algebra, $$\begin{aligned} {\mathbf}{c}_B &=& \frac{{\mathbf}{x} - t {\mathbf}{y}}{1-t^2} \\ Q_B &=& \frac{1}{(1-t^2)^2}({\mathbf}{y} - t{\mathbf}{x})({\mathbf}{y} - t{\mathbf}{x})^T\end{aligned}$$ we have $|\boldsymbol{c}_B-\boldsymbol{b}| = \frac{t |{\mathbf}{y} - t{\mathbf}{x}|}{1-t^2} = as_B$. For the “if” part, let $\rho_0$ and $\rho_1$ be the endpoints of Bob’s ellipsoid, and let the state corresponding to Alice’s Bloch vector have eigen-decomposition $\rho_A = p_0 {|\psi_0\rangle}{\langle \psi_0|} + p_1 {|\psi_1\rangle}{\langle \psi_1|}$. Then the joint state $$\rho = p_0 {|\psi_0\rangle}{\langle \psi_0|} \otimes \rho_0 + p_1{|\psi_1\rangle}{\langle \psi_1|} \otimes \rho_1$$ has zero discord for Alice, the correct Bloch vector for Alice and the correct ellipsoid for Bob. If necessary swapping $\rho_0$ and $\rho_1$, it also has the right Bloch vector for Bob.
Bob’s ellipsoid $\mathcal{E}_B$ is invariant under local unitaries on Alice’s qubit, so Alice and Bob’s actual state is equivalent to $\rho$ up to this transformation, which preserves discord. Volume formula for the steering ellipsoid ========================================= The volume of any ellipsoid is given by the product of its semiaxes, $V=\frac{4\pi}{3}s_1s_2s_3$. Therefore $\mathcal{E}_A$ has volume $V_A = \frac{4\pi}{3}|\sqrt{\det Q_A}|$, which may be rewritten as $$\label{VolumeRaw} V_A=\frac{4\pi }{3}\frac{\left\vert \det(T- \boldsymbol{a}\boldsymbol{b}^T) \right\vert }{\left( 1-b^{2}\right) ^{2}}=\frac{4\pi}{3}\frac{\left\vert\mathrm{det}\Theta\right\vert}{(1-b^2)^2}.$$ To express this in terms of the density matrix $\rho$, we use the equation $\Theta = 2 \Upsilon \rho^R \Upsilon^T$ [@FrankFilter], where the unitary matrix $\Upsilon$ is given in equation and $R$ denotes a reshuffling operation: if $\rho = \sum_{i,j,k,l=0}^1 \rho_{ij;kl}{|ij\rangle}{\langle kl|}$ then $\rho^R=\sum_{i,j,k,l=0}^1 \rho_{ik;jl}{|ij\rangle}{\langle kl|}$. We also require a curious relation that holds for any 4$\times$4 or 9$\times$9 matrix $M$. For any such $M$ we have that $$\det M = \det M^{T_B} - \det( M^{T_B})^R.$$ Applying this relation to (\[VolumeRaw\]), together with the reshuffled form of $\Theta$, we obtain $$\label{VolumeInRho} V_A = \frac{64\pi }{3}\frac{\left\vert \det \rho -\det \rho^{T_B} \right\vert }{\left( 1-b^{2}\right) ^{2}}.$$ A short numerical cross-check of these two expressions is sketched below. The steering ellipsoid zoo ========================== In this section we illustrate the main types of ellipsoid. Entangled states ---------------- For every pure entangled state the ellipsoid coincides with the Bloch sphere. When the state is mixed and entangled, the ellipsoid does not satisfy the tetrahedral condition because, loosely speaking, the ellipsoid is either *too big* (with volume $V \ge V_\star > 0$) or *too near* (large $c$) to the surface of the Bloch sphere, see figure \[entangled\]. Every entangled state is completely steerable. ![A generic entangled state $\rho_{AB}$: both ellipsoids $\E_A$ and $\E_B$ are always full rank and neither can be inscribed within a tetrahedron within the Bloch sphere.[]{data-label="entangled"}](entangled-A.png "fig:"){width="3cm"} ![A generic entangled state $\rho_{AB}$: both ellipsoids $\E_A$ and $\E_B$ are always full rank and neither can be inscribed within a tetrahedron within the Bloch sphere.[]{data-label="entangled"}](entangled-B.png "fig:"){width="3cm"} Separable states with full-dimensional ellipsoids ------------------------------------------------- Separable states admit a convex decomposition in terms of product states, and have “more classical” correlations. Steering is still possible; however, the steering ellipsoids necessarily obey the tetrahedral condition, as in figure \[separable\]. If the state has a three-dimensional $\E_A$ then it has non-zero obesity and non-zero discord, and furthermore, it can be written as a mixture of just four product states. Such states are also completely steerable.
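The two expressions for $V_A$ can be cross-checked numerically. The following minimal Python sketch (assuming only NumPy; the function and variable names are illustrative and do not correspond to any published code) draws a random two-qubit state, extracts $\boldsymbol{a}$, $\boldsymbol{b}$ and $T$ as Pauli expectation values, and evaluates Equations (\[VolumeRaw\]) and (\[VolumeInRho\]), which should return the same number.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def random_two_qubit_state(rng):
    # random full-rank mixed state from a Wishart-like construction
    G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def volume_from_bloch(rho):
    # Eq. [VolumeRaw]: V_A = (4 pi/3) |det(T - a b^T)| / (1 - b^2)^2
    a = np.array([np.trace(rho @ np.kron(s, I2)).real for s in sig])
    b = np.array([np.trace(rho @ np.kron(I2, s)).real for s in sig])
    T = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in sig] for si in sig])
    return (4 * np.pi / 3) * abs(np.linalg.det(T - np.outer(a, b))) / (1 - b @ b) ** 2

def volume_from_determinants(rho):
    # Eq. [VolumeInRho]: V_A = (64 pi/3) |det rho - det rho^{T_B}| / (1 - b^2)^2
    b = np.array([np.trace(rho @ np.kron(I2, s)).real for s in sig])
    rho_tb = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)  # partial transpose on B
    diff = np.linalg.det(rho) - np.linalg.det(rho_tb)
    return (64 * np.pi / 3) * abs(diff) / (1 - b @ b) ** 2

rho = random_two_qubit_state(np.random.default_rng(0))
print(volume_from_bloch(rho), volume_from_determinants(rho))  # the two values coincide
```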
![A generic separable state $\rho_{AB}$, where both ellipsoids $\E_A$ and $\E_B$ are full rank and fit inside a tetrahedron.[]{data-label="separable"}](separable-A.png "fig:"){width="3cm"} ![A generic separable state $\rho_{AB}$, where both ellipsoids $\E_A$ and $\E_B$ are full rank and fit inside a tetrahedron.[]{data-label="separable"}](separable-B.png "fig:"){width="3cm"} ![A Bell-diagonal state: the ellipsoid is centred at the origin and its semiaxes are given by the three singular values of $T$. The vector of these singular values $\boldsymbol{t} = (t_1,t_2,t_3)$ lives in a tetrahedron with vertices at (1,1,-1),(1,-1,1),(-1,1,1),(-1,-1,-1), and when it is inside an octahedron inside of this tetrahedron, then the state is necessarily separable. This defines the set of ellipsoids that fit inside the nested tetrahedron (these are not the same tetrahedra).[]{data-label="bell"}](bell-diagonal.png){width="4cm"} ![A Werner state. The steering ellipsoid $\E_A$ is a sphere centered at the origin. The ellipsoid fits inside a tetrahedron when its radius is less than $\frac{1}{3}$ and thus the state is separable.[]{data-label="werner"}](werner.png){width="3cm"} Steering pancakes ----------------- The set of states that Bob can steer Alice to may become degenerate, and form a *two-dimensional* set. This “steering pancake” will not only fit inside a tetrahedron, but will fit within a *triangle* that is inscribed within the Bloch sphere as shown in figure \[pancake\]. ![A generic separable state $\rho_{AB}$, where both ellipsoids $\E_A$ and $\E_B$ are steering pancakes.[]{data-label="pancake"}](pancake-A.png "fig:"){width="3cm"} ![A generic separable state $\rho_{AB}$, where both ellipsoids $\E_A$ and $\E_B$ are steering pancakes.[]{data-label="pancake"}](pancake-B.png "fig:"){width="3cm"} Recall that some steering pancakes (and steering needles) exhibit the novel feature of *incomplete* steering. For steering pancakes we have complete steering of qubit A if and only if the affine span of $\E_B$ contains the origin of the Bloch sphere. Steering needles ---------------- The steering can become even more degenerate, and the steering set can collapse to a one-dimensional line segment, or “steering needle”. These states include perfectly classical (doubly zero-discord) states, whose needles are radial (figure \[needle\]), but also non-zero discord states for which one or both of the steering needles $\E_A$ and $\E_B$ are not radial (figure \[needle2\]). Being radial is indicated by dashed lines on the figures, which depict diameters. ![A doubly zero discord state, where both $\E_A$ and $\E_B$ are radial line segments.[]{data-label="needle"}](doubly-zero-discord-A.png "fig:"){width="3cm"} ![A doubly zero discord state, where both $\E_A$ and $\E_B$ are radial line segments.[]{data-label="needle"}](doubly-zero-discord-B.png "fig:"){width="3cm"} ![A state with zero discord at $B$, but *non-zero* discord at $A$. We have $\E_B$ being a radial line segment, but $\E_A$ is not a radial line segment.[]{data-label="needle2"}](zero-Bdiscord-A.png "fig:"){width="3cm"} ![A state with zero discord at $B$, but *non-zero* discord at $A$. We have $\E_B$ being a radial line segment, but $\E_A$ is not a radial line segment.[]{data-label="needle2"}](zero-Bdiscord-B.png "fig:"){width="3cm"} [^1]: Discord can be calculated analytically when $\theta =0,\frac{\pi}{2}$, and at these points $\rho$ is an X-state [@ShiXstateDiscord] and this agrees with our findings.
--- abstract: | We re-examine the fraction of low redshift Sloan Digital Sky Survey satellites and centrals in which star formation has been quenched, using the environment quenching efficiency formalism that separates out the dependence on stellar mass. We show that the centrals of the groups containing the satellites are responding to the environment in the same way as their satellites (at least for stellar masses above $10^{10.3}\: M_\odot$), and that the well-known differences between satellites and the general set of centrals arise because the latter are overwhelmingly dominated by isolated galaxies. The widespread concept of “satellite quenching” as the cause of environmental effects in the galaxy population can therefore be generalized to “group quenching”. We then explore the dependence of the quenching efficiency of satellites on overdensity, group-centric distance, halo mass, the stellar mass of the satellite, and the stellar mass and specific star formation rate (sSFR) of its central, trying to isolate the effect of these often interdependent variables. We emphasize the importance of the central sSFR in the quenching efficiency of the associated satellites, and develop the meaning of this “galactic conformity” effect in a probabilistic description of the quenching of galaxies. We show that conformity is strong, and that it varies strongly across parameter space. Several arguments then suggest that environmental quenching and mass quenching may be different manifestations of the same underlying process. The marked difference in the apparent mass dependencies of environment quenching and mass quenching, which produces distinctive signatures in the mass functions of centrals and satellites, will arise naturally, since, for satellites at least, the distributions of the environmental variables that we investigate in this work are essentially independent of the stellar mass of the satellite.\ *Key words:* cosmology: observations – galaxies: evolution – galaxies: groups: general – galaxies: star formation – galaxies: statistics author: - 'Christian Knobel, Simon J. Lilly, Joanna Woo, and Katarina Kovač' bibliography: - 'apj-jour.bib' - 'bibliography.bib' nocite: - '[@mo2010]' - '[@tinker2013]' title: | Quenching of Star Formation in SDSS Groups:\ Centrals, Satellites, and Galactic Conformity --- Introduction ============ One of the key questions in the study of galaxies is how their evolution is influenced by their local environment, where environment can refer to any measurable parameter that somehow relates a galaxy to its cosmic neighborhood. Typically, environmental parameters characterize the neighborhood of a galaxy on scales similar to or smaller than the size of its dark matter (DM) halo, since DM halos, being gravitationally bound objects, can produce and maintain physical conditions within the halo which are very different from those outside the halo.[^1] It is useful to differentiate between environmental parameters according to whether they are the same for all galaxies within a halo or whether they vary within the DM halo. Examples of the first category would be the halo mass, or the mass and the specific star formation rate (sSFR) of the central galaxy, while examples of the second category would be the local space density of galaxies or the distance of the galaxy from the group center.
One of the most striking events in the life of a galaxy is the relatively rapid cessation of its star formation rate (SFR), which leads to the observed bimodality within the population of galaxies [e.g., @strateva2001][^2]: most galaxies are either blue and star forming, with an SFR that is closely linked to the existing stellar mass [e.g., @elbaz2007; @noeske2007; @salim2007], or red with an SFR that is lower by 1-2 orders of magnitude. We refer to this cessation of star formation as “quenching”. Some of the basic questions are how the mechanisms of quenching are related to the stellar mass and the environment of the galaxies [see, e.g., @balogh2004; @kauffmann2004; @baldry2006; @vandenbosch2008; @peng2010; @peng2012; @presotto2012; @wetzel2012; @carollo2013; @cibinel2013a; @cibinel2013b; @knobel2013; @woo2013; @carollo2014; @bluck2014; @omand2014; @wetzel2014; @woo2014 and references therein], and whether all galaxies are affected in the same way by these quantities, and particularly whether the central galaxies, which are typically the most massive galaxies lying at the center of a DM halo, respond to the environment in the same way as the surrounding satellite galaxies [e.g., @weinmann2006; @vandenbosch2008; @peng2012; @knobel2013; @kovac2014; @tal2014]. Analyzing the fraction of quenched galaxies for centrals and satellites as a function of stellar mass and environmental parameters has emerged as a fruitful way of gaining insights into the phenomenology and the physical processes of quenching. However, one of the difficulties in interpreting this fraction is that many of the environmental parameters are statistically correlated both with each other and with the stellar mass of the galaxy. For example, there is good evidence that halo mass correlates with the mass of the central galaxy [e.g., @yang2009], that the local overdensity correlates with the group-centric distance [e.g., @peng2012; @woo2013], and that the local overdensity also correlates with halo mass [e.g., @haas2012; @carollo2013]. These correlations among the parameters can introduce spurious dependencies, i.e., dependencies which have no direct causal relation to quenching, if the quenched fraction is regarded as a function of certain parameters in isolation. Therefore, in principle one has to study the quenched fraction in the full parameter space and to vary only one parameter at a time, while keeping the others fixed (see, e.g., the discussion in @mo2010; Section 15.5). However, this approach also does not necessarily guarantee that the measured dependence of the quenched fraction on a certain parameter is directly related to a physical quenching process, since the red fraction may also depend on the *history* of the galaxy, i.e., how the galaxy is moving through the parameter space. Therefore, interpreting trends of the quenched fraction even in the full parameter space has turned out to be quite difficult and has led to some confusing and apparently inconsistent statements in the literature. In this paper, we use the Sloan Digital Sky Survey (SDSS; @york2000) and the group catalog of [@yang2012] to take a new look at some of the issues that have been recently raised. Several recent papers have constructed “quenching efficiencies” which quantify the probability that a given galaxy is quenched relative to some comparison sample.
An important example is the satellite quenching efficiency $\epsilon_{\rm sat}$ [@vandenbosch2008; @peng2012; @knobel2013; @kovac2014; @phillips2014; @omand2014] which compares the quenched fraction of satellites at a given stellar mass to a comparison sample of central galaxies at the same stellar mass. In simple terms, $\epsilon_{\rm sat}$ therefore reflects the chance that a given central galaxy becomes quenched when it falls into another DM halo, thereby becoming a satellite of another galaxy. The use of stellar mass alone in computing $\epsilon_{\rm sat}$ has been justified on the basis that the properties of central galaxies in general appear to be largely independent of environmental measures [@peng2012] (beyond those that correlate closely with stellar mass). One of the goals of the present paper is to re-visit the environmental dependence of the properties of central galaxies, and particularly, of those central galaxies that are the centrals of the groups in which most of the satellites can be found rather than the general population of centrals. Of course $\epsilon_{\rm sat}$ is simply a straightforward “renormalization” of the observable quenched fraction of satellite galaxies, and some analyses in the literature work instead with the latter quantity. The use of $\epsilon_{\rm sat}$, rather than the quenched fraction of satellites and centrals, is advantageous in our view since it removes the strong dependence of galaxy quenching on stellar mass [@peng2010; @peng2012]. It also has a rather simple physical interpretation in terms of central galaxies becoming satellites (of the same stellar mass) when they enter another halo. This makes it easier to study the dependence of $\epsilon_{\rm sat}$ on other quantities, with stellar mass already taken into account. The view of field centrals as the set of representative progenitors of the satellites raises a few issues. First, there is the question of the epoch at which the centrals and satellites should be compared. [@wetzel2013] suggested that the satellites should be compared with centrals at the (significantly earlier) epoch at which the satellites first became satellites, and tried thereby to constrain the timescale for the quenching of satellites. This approach is correct if one wishes to study the overall quenching of the satellites (as @wetzel2013 did), but would not be correct if satellites undergo separate continuing “central-like” quenching processes and if one wished to isolate the pure satellite effects. In this case one should compare the satellites with the field centrals at the epoch of observation and not at the epoch of infall. A further complication with using “back-dated” centrals is that in principle one should compare a given satellite with centrals at a slightly lower stellar mass, corresponding to the mass that the satellite had when it became a satellite. This “mass offset” would depend on the time since infall (and thus possibly on other environmental parameters) but would be of order 0.15 dex for a star-forming SDSS satellite that became a satellite at $z = 0.3$. Comparing centrals and satellites at the same epoch (as in this paper) would remove this “mass offset” for star-forming satellites, since star-forming centrals and satellites are observed to have broadly similar sSFR [@peng2012]. 
However, and we thank the anonymous referee for pointing this out, the variation in star-formation histories that is observed between different satellites (i.e., between star-forming and quenched satellites) means that this “mass offset” will in any case vary from satellite to satellite and will be close to zero for any satellite that immediately quenched. This makes it impossible to identify any single “central-progenitor” comparison mass for the satellites of a given observed stellar mass. This shuffling of progenitor masses due to different star-formation histories would also be accounted for in our approach, but only if the mix of star-formation histories was exactly the same for centrals and satellites, which of course we know is not the case, since satellites are more likely to be quenched at a given mass. These are unlikely to be large effects, but together they caution against an over-interpretation of $\epsilon_{\rm sat}$. Quite apart from these methodological issues, the real problem with the [@wetzel2013] approach is practical. There is continuing observational uncertainty about the rate of evolution of the quenched fraction of the centrals. A useful parameterization is the rate of change of the actively star-forming fraction with redshift, $df_{\rm SF}/dz = - df_{\rm q}/dz$. [@wetzel2013] themselves used a strongly evolving fraction of quenched centrals with $df_{\rm SF}/dz \sim 0.6$ (and $\sim \! 0.65$ for all galaxies), which is much steeper than other estimates. Most published estimates have not differentiated between centrals and satellites and consider the overall population. However, at high masses around $M^\ast$, the quenching of galaxies is dominated by the quenching of centrals and, since environmentally driven quenching becomes more important with time (see @peng2010), the change in the quenched fraction of all galaxies should provide an upper bound to the change in the centrals alone. Estimates of this gradient over the redshift range of interest and for masses around $10^{10.5}\: M_\odot$ include 0.26 [@moustakas2013 their Figure 12], 0.22 [@george2013 their Figure 4], 0.17 [@muzzin2013 their Figure 8]. Tinker et al. (2013, their Figure 11) have 0.16 for COSMOS and 0.12 for PRIMUS, while four studies have zero or even negative values for this gradient [@ilbert2013; @knobel2013; @hartley2013; @kovac2014]. We note that the continuity analysis of [@peng2010] implies that if the Schechter $M^\ast$ is constant then the red fraction should also be more or less constant. At the least, we conclude that it is hard to reliably correct for an evolving red fraction of centrals, and, given also the methodological issues discussed in the previous paragraph, we do not take this approach in the current paper. To facilitate this sort of study, we introduce a new estimator for $\epsilon_{\rm sat}$ that can be applied to individual galaxies and thereby avoids the need for prior binning of samples in computing it. Such binning introduces issues of matching between samples, unless the bins are extremely small, and becomes impracticable when the number of dimensions being examined becomes large. The use of this new estimator enables us to flexibly examine the dependence of $\epsilon_{\rm sat}$ on a whole range of parameters of interest. The environmental parameters which are considered in this paper are the halo mass, the mass of the central galaxy, the local galaxy density, the group-centric distance, and the sSFR of the central galaxy.
The sSFR of the central galaxy emerges as a major factor in the quenching of satellites. This phenomenon was first noted — at least in a large galaxy survey — by [@weinmann2006] and was called by them “galactic conformity”. It has been studied in a number of subsequent papers [e.g., @ann2008; @kauffmann2010; @prescott2011; @wang2012; @kauffmann2013; @hearin2013; @phillips2014; @hartley2014; @hearin2014]. The question of galactic conformity can be confusing, since similar effects can arise rather trivially. In this paper we attempt to clarify the meaning of this effect and the conditions under which it can arise. In addition we quantify its importance with respect to that of other environmental parameters. Before concluding this introduction we briefly discuss the role of structure and morphology. Quenching has been observed to correlate strongly with morphology and galaxy structure in the local universe [@kauffmann2003b; @franx2008; @bell2008; @vandokkum2011; @robaina2012; @cibinel2013b; @mendel2013; @schawinski2014; @omand2014; @bluck2014; @woo2014] and at high-$z$ [@wuyts2011; @cheung2012; @bell2012; @szomoru2012; @wuyts2012; @barro2013; @lang2014]. On the other hand, quenched disk galaxies have also been observed in significant numbers ([@bundy2010; @stockton2004; @mcgrath2008; @vandokkum2008; @vandenbergh2009; @vanderwel2011; @salim2012; @bruce2012; @carollo2014]). Despite this clear correlation between galactic structure and quenching, we do not consider galactic structure further in this paper, for the following reasons. First, it is clear that morphology can be trivially altered during quenching, e.g., through the fading of a star-forming disk enhancing the relative importance of the bulge [see, e.g., @carollo2014] and so some apparent correlation would be expected even if there was no physical link. Second, even a physical connection could have many different causal origins: quenching could result in structural changes, e.g., if quenching was caused by a starburst (or active galactic nucleus (AGN) activity) that had been triggered by a major merger [see, e.g., @dimatteo2005], or structural changes could be responsible for quenching, e.g., if the presence of a massive bulge impedes star formation by stabilizing the disks [see, e.g., @martig2009; @genzel2014] or by producing a massive black hole resulting in AGN driven winds [see, e.g., @fabian1999]. The structure of a galaxy could also be linked to its star formation history quite indirectly, e.g., if the density of a stellar population reflects its formation epoch [see, e.g., @carollo2013b]. It is therefore not at all clear whether structure is the driver or the result of quenching. Finally, while the quenched fraction of satellites increases for denser environments, the early-type fraction of galaxies at constant mass [@bamford2009] and of quenched galaxies in particular [@carollo2014] is mostly constant with environment, indicating that mass and environment quenching either similarly affect galaxy morphology or are two reflections of the same physical phenomenon. Our paper is organized as follows: in Section \[sec:data\], we describe the data and the derived data products that we use for our analysis. In Section \[sec:method\], we describe the methods of our analysis and, particularly, we introduce our new estimator for the satellite quenching efficiency. 
In Section \[sec:centrals\_satellites\], we discuss whether centrals and satellites are differently influenced by the environment, specifically differentiating between centrals in general (which are mostly singletons, i.e., centrals with no satellites within the sample) and those in the groups in which the sample of satellites reside. In Section \[sec:eps\_sat\_analysis\], we analyze the satellite quenching efficiency as a function of our environmental parameters — with the exception of the sSFR of the central galaxy — taking into account the correlations among these parameters. Section \[sec:galaxy\_conformity\] is then fully devoted to a discussion of galactic conformity, i.e., the effect of the sSFR of the central. Finally, in Section \[sec:discussion\] we discuss our results and Section \[sec:conclusion\] concludes the paper. Throughout this work, a concordance cosmology with $H_0 = 70\ {\rm km\; s^{-1}\; Mpc^{-1}}$, $\Omega_{\rm m} = 0.3$, and $\Omega_\Lambda = 0.7$ is applied. All magnitudes are quoted in the AB system. We use the term “dex” to express the anti-logarithm, i.e., 0.1 dex corresponds to a factor $10^{0.1} \simeq 1.259$. We use $\log m$ as shorthand for $\log(m/M_\odot)$ and $\log$ sSFR as shorthand for $\log({\rm sSFR} \times {\rm Gyr})$. Throughout this paper all observational error bars indicate the 68% confidence interval, which is estimated by means of 50 bootstrapped galaxy samples. Data and Data Products {#sec:data} ====================== Basic Sample {#sec:basic_section} ------------ This work is based on the SDSS DR7 galaxy sample [@abazajian2009]. We use the Petrosian photometry and redshifts from the New York University Value-Added Galaxy Catalog [NYU-VAGC; @blanton2005; @padmanabhan2008], where $K$-corrections for the estimation of absolute magnitudes and $V_{\rm max}$ volumes were obtained using the utilities (v4\_2) of [@blanton2007]. The stellar masses $m_{\ast}$ and SFR estimates used in this work correspond to an updated version of those derived in [@brinchmann2004], where the uncertainty in the stellar mass is $\sim \! 0.1$ dex.[^3] We restrict our analysis to galaxies which lie in the redshift range $0.01 < z < 0.06$ and which have stellar masses $m_\ast > 10^9\:M_\odot$. Each galaxy is assigned a weight $W = 1/V_{\rm max} \times 1/$SSR, where SSR is the spectroscopic success rate of the galaxy (also obtained from the NYU-VAGC). The $V_{\rm max}$ was set to 1 if it exceeded the volume of our sample. The upper redshift limit was chosen so that our sample is essentially complete for all galaxies with $m_\ast > 10^{10}\:M_\odot$. After removing duplicated objects and objects with low quality redshifts, our sample contains $\sim\! 123,\! 000$ objects of which $\sim\! 53,\! 000$ have $m_\ast >10^{10}\:M_\odot$. In Figure \[fig:SSFR\_mass\_diagram\] we show the sSFR-mass diagram, where the black line is given by $$\label{eq:SSFR_division} \log {\rm sSFR} = -0.3 (\log m_\ast-10)-1.85\:.$$ This dividing line was drawn by eye in order to separate the star-forming sequence from the quenched galaxies. ![sSFR-$m_\ast$ diagram for the galaxies in our sample (including objects with $m_\ast < 10^9\:M_\odot$). The colors refer to the (unweighted) number density of galaxies in logarithmic scale. 
The black line, which is chosen to separate between star-forming sequence and quenched cloud, is given in Equation (\[eq:SSFR\_division\]).[]{data-label="fig:SSFR_mass_diagram"}](1.eps){width="48.00000%"} Galaxies that lie above the line are regarded as star forming and those below as quenched. The numbers of star-forming and quenched galaxies for our sample are summarized in Table \[tab:galaxy\_samples\]. [cccccccc]{} $\phantom{00}0$ & 73801 & $11.52\pm 0.48 $ & 52306 & 21495 && (17471) & (14684)\ $\phantom{00}1$ & 37464 & $11.90\pm 0.44$ & 18842 & 18622 && (16463) & (14615)\ $\phantom{00}2$ & $\phantom{0}$4526 & $12.62\pm 0.55$ & $\phantom{0}$1160 & $\phantom{0}$3366 && (12418) & (13797)\ $\phantom{00}3$ & $\phantom{0}$1893 & $13.02\pm 0.48$ & $\phantom{00}$267 & $\phantom{0}$1626 && $\phantom{0}$9645 & 12186\ $\phantom{00}5$ & $\phantom{00}$820 & $13.39\pm 0.37$ & $\phantom{000}$69 & $\phantom{00}$751 && $\phantom{0}$7314 & 10380\ $\phantom{0}10$ & $\phantom{00}$336 & $13.70\pm 0.28$ & $\phantom{000}$19 & $\phantom{00}$317 && $\phantom{0}$5214 & $\phantom{0}$8148\ $\phantom{0}20$ & $\phantom{00}$126 & $13.97\pm 0.24$ & $\phantom{0000}$4 & $\phantom{00}$122 && $\phantom{0}$3177 & $\phantom{0}$5762\ $\phantom{0}50$ & $\phantom{000}$30 & $14.29\pm 0.14$ & $\phantom{0000}$2 & $\phantom{000}$28 && $\phantom{0}$1398 &$\phantom{0}$2996\ $100$ & $\phantom{0000}$8 & $14.43\pm 0.07$ & $\phantom{0000}$1 & $\phantom{0000}$7 && $\phantom{00}$556 & $\phantom{0}$1306 \[tab:galaxy\_samples\] The Group Catalog {#sec:group_catalog} ----------------- The group catalog which we use in this work is taken from [@yang2012].[^4] It was produced by the application of a sophisticated, iterative group-finding method, which is described in [@yang2007] in detail. We use the halo masses in that catalog that were derived by means of an abundance matching method between stellar mass and halo mass, which is based on the mass function of [@warren2006] and the transfer function of [@eisenstein1998]. Using simulated galaxy mock catalogs, [@yang2007] estimated that the uncertainties of the halo masses were $\lesssim \! 0.3$ dex, more or less independent of the luminosity of the groups. A feature of the catalog of [@yang2007] is that essentially all galaxies are assigned to “groups” even if there is only one member. We refer to these as “singletons”. We define the richness $N$ of a given group to be the number of observed members within the parent galaxy sample. Since our sample is not mass complete below $10^{10}\:M_\odot$, we also introduce $N_{\rm abs}$ to be the number of members which are more massive than $10^{10}\: M_\odot$. To avoid biases in the selection of galaxies with redshift, we generally use $N_{\rm abs}$ to select groups (and singletons). However, it should be kept in mind that a group with $N_{\rm abs} =1$ or even $N_{\rm abs} =0$ may contain more than one galaxy below the mass completeness limit (see Table \[tab:galaxy\_samples\]). The center of a group is defined by the $\log m_\ast$-weighted mean position of all its members. To compute the centers, we also consider galaxies with $m_\ast < 10^9\:M_\odot$, since the estimates become more robust the more galaxies we use and there is no danger here for selection biases with redshift. Some works define the center of a groups to be the position of the “central” galaxy, which is often taken to be the most massive within the group. This definition, however, is dependent on the identification of the central galaxy. 
If the central is identified wrongly, the estimated group center may be significantly off. The mass-weighted center has the advantage of being independent of the choice of the central galaxy and proves to be quite robust [cf. Figure 11 of @knobel2012]. In order to compute the group-centric distance for each galaxy we first perform a principal component analysis of the projected positions of the group members on the sky to find the projected major and minor axes, $x$ and $y$, respectively, of the group. We then compute for each group member the coordinate difference $\Delta x_i = x_i - x_{\rm gr}$ and $\Delta y_i = y_i - y_{\rm gr}$ from the group center $(x_{\rm gr},y_{\rm gr})$ with respect to these axes, and subsequently we compute the rms $\Delta x_{\rm rms}$ and $\Delta y_{\rm rms}$ of $\Delta x_i$ and $\Delta y_i$, respectively, over all $N$ group members, which serves as an (asymmetric) measure of the extension of the group. Finally, the normalized “group-centric distance” $R$ for the galaxy $i$ is defined by $$\label{eq:R} R_i = \sqrt{\left( \frac{ \Delta x_i}{\Delta x_{\rm rms}}\right)^2 + \left( \frac{ \Delta y_i}{\Delta y_{\rm rms}}\right)^2 }\:.$$ The advantage of this definition is that it can be directly computed from the positional information of the galaxies relative to each other and that it takes into account that, in general, galaxy groups are not rotationally symmetric (a minimal numerical sketch of this computation is given below). It works well for groups with at least five identified members, which comprise 43% of the groups and 81% of the satellites used in this paper (as defined below). For the remaining groups (i.e., those with $N_{\rm abs} \geq 3$ and $N < 5$) we simply set $x$ and $y$ to be the right ascension $\alpha$ and declination $\delta$, respectively. We have checked, however, that none of our results change if we instead use the group-centric distance normalized by the virial radius (as, e.g., in @woo2013). The galaxy population of the groups is often divided into “centrals” and “satellites”, where the centrals are thought to be the most massive galaxies within the groups lying at the deepest point of the gravitational potential well of the halo and the satellites are just all remaining group members. This framework is sometimes called the “central galaxy paradigm” [@vandenbosch2005]. However, there are indications that reality may be more complex than this [cf. @skibba2011] and there are also circumstances when there may be groups for which no well-defined central should exist (e.g., in the case of merging or unrelaxed groups, see extensive discussion in @carollo2013). In practice, the identification of the central galaxy is further complicated by observational uncertainties in the stellar mass as well as imperfections in the group membership due to misidentification of groups (“fragmentation” or “over-merging” of groups, the presence of unrelated interlopers or the exclusion of actual members; cf., e.g., @knobel2009). For our analysis we simply define the centrals to be the most massive galaxies within the groups and the satellites to be all remaining group members. However, to avoid likely cases of misclassification and to enhance the fidelity of both centrals and satellites, we also require the central to have a group-centric distance $R < 2$; otherwise the whole group is discarded. This selection is intended to exclude groups which are either completely unrelaxed or which have a massive interloper at the outskirts [@carollo2013; @cibinel2013a].
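The group center and the normalized group-centric distance of Equation (\[eq:R\]) reduce to a few lines of linear algebra; the following minimal sketch (in Python, assuming NumPy; the flat-sky projected coordinates and all names are illustrative assumptions rather than a description of our actual pipeline) implements them for a single group with at least five members.

```python
import numpy as np

def group_centric_distances(x_proj, y_proj, log_mstar):
    """Normalized group-centric distances R_i of Eq. [eq:R] for one group.

    x_proj, y_proj : projected sky coordinates of the members (flat-sky approximation)
    log_mstar      : log stellar masses, used as weights for the group center
    """
    # log m_*-weighted group center
    w = log_mstar / np.sum(log_mstar)
    x_cen, y_cen = np.sum(w * x_proj), np.sum(w * y_proj)

    # principal component analysis of the member positions:
    # the eigenvectors of the covariance give the projected major/minor axes
    dpos = np.column_stack([x_proj - x_cen, y_proj - y_cen])
    eigval, eigvec = np.linalg.eigh(np.cov(dpos, rowvar=False))
    dx = dpos @ eigvec[:, 1]   # offsets along the major axis
    dy = dpos @ eigvec[:, 0]   # offsets along the minor axis

    # rms extension along each axis and the normalized distance of Eq. [eq:R]
    dx_rms = np.sqrt(np.mean(dx ** 2))
    dy_rms = np.sqrt(np.mean(dy ** 2))
    return np.sqrt((dx / dx_rms) ** 2 + (dy / dy_rms) ** 2)
```

For the poorer groups described above (those with $N_{\rm abs} \geq 3$ and $N < 5$) the same formula applies with $x$ and $y$ simply replaced by $\alpha$ and $\delta$.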
In addition, since the purity of satellites is naturally worse for small groups (i.e., particularly for pairs) and at the outskirts of groups [cf., e.g., @knobel2009; @knobel2012], we also require that our sample of satellites lie in groups with $N_{\rm abs}\geq 3$ and consider only group-centric distances $R < 3$. We do not attempt to quantify, or correct for, the resulting purity and completeness of the central-satellite classification [cf., e.g., @knobel2013], because the impurities of both centrals and satellites that are expected within the catalog from [@yang2012] are $\lesssim 10\%$ for most stellar masses and galaxy densities [@hirschmann2014]. The correct identification of the central is particularly important for studies of galactic conformity, which is based on whether the central is quenched or not. Misidentification of the central can cause all the associated satellites to be moved to the incorrect bin. Therefore, for our conformity analyses we use an additional “high purity sample” as a check to estimate the impact of such misidentifications. Since the uncertainty of the stellar mass is $\lesssim\! 0.1$ dex, we require for this high purity sample that all galaxies with a stellar mass difference of less than $0.1$ dex from the nominal central must be all star forming or all quenched. By discarding all groups which do not meet this condition, we remove 5% of all satellites that are in groups with $N_{\rm abs}\geq 3$ and 15% of all satellites that are in groups with $N_{\rm abs}\geq 20$. In order to not enforce a spurious conformity signal on our high purity sample, we also discard the second, third, etc. ranked objects in stellar mass, which are within this 0.1 dex difference from the first ranked object and which were required to show the same star formation behavior as the central. Within our sample there are two particular clusters that are rich and have a star forming central. These will be treated specially. The first of these clusters has $N_{\rm eff} = 104$ and a central with $\log m_\ast \simeq 11.8$ and log sSFR $\simeq -2.24$, while the second one has $N_{\rm eff} = 65$ and a central with $\log m_\ast \simeq 11.1$ and log sSFR $\simeq -0.68$. In the first case, the classification of the central as star forming is uncertain because it lies only 0.14 dex above the black line in Figure \[fig:SSFR\_mass\_diagram\] and therefore falls between the star forming sequence and the quenched cloud. In the second case, the central lies in the star forming sequence and is even included in our high purity sample, which was introduced in the previous paragraph. Since these clusters contain many satellites, they can dominate the population of satellites with a star forming central in certain bins of our environmental parameters. Therefore, since these clusters constitute rare objects which are not representative for the bulk of groups in our sample, we exclude them from our sample, whenever they affect the result substantially. The Density Field {#sec:density_field} ----------------- Following [@peng2010; @peng2012] and [@woo2013] we estimate the local galaxy overdensity $\delta = (n - \bar n)/\bar n$, where $n$ is the number density of galaxies which we use as tracers and $\bar n$ the corresponding mean number density, by means of the “fifth nearest neighbor” approach (for a general discussion on density fields see, e.g., @kovac2010). 
That is, around each galaxy $i$, we put a cylinder with radius $r$ perpendicular to the line of sight and length $\pm dz$ along the line of sight, where $dz = (1+z_i) dv/c$ with $dv = 1000$ km s$^{-1}$ and $c$ the speed of light. We then continuously shrink $r$ until the cylinder contains exactly five tracer galaxies (if the galaxy $i$ is a tracer galaxy, then it is counted as well). To guarantee that the density field is not subject to biases with redshift, we use only a mass-complete sample of tracers (i.e., galaxies with $m_\ast > 10^{10}\: M_\odot$). If $V_i$ is the volume of the cylinder around galaxy $i$, we regard $n_i = 1/V_i$ as the estimate of the local number density of galaxies. However, since these densities $n_i$ are centered around galaxies, i.e., special locations, they cannot be directly used to estimate the mean density $\bar n$. Instead, to estimate $\bar n$, we put the centers of the cylinders at randomly sampled positions, which have the same distribution with respect to the right ascension $\alpha$, the declination $\delta$, and the redshift $z$ as the actual galaxies. We construct realizations of such random positions by scrambling the redshifts among the galaxies, while keeping their $\alpha$ and $\delta$ fixed. Finally, we can compute the fifth nearest neighbor overdensity at the position of the $i$th galaxy by $\delta_i = (n_i-\bar n)/\bar n$ (a minimal sketch of this procedure is given below). The distributions of $\log(\delta + 1)$ for group galaxies with various $N_{\rm abs}$ are shown in Figure \[fig:density\_field\_histogram\]. As expected, the higher $N_{\rm abs}$, the higher the average $\log(\delta + 1)$ of the corresponding group galaxies. ![Distributions of galaxy overdensity $\delta$ (see Section \[sec:density\_field\]). The color refers to different richnesses $N_{\rm abs}$ as indicated in the legend. The red histogram consists mostly of centrals, while the green and the black histograms consist mostly of satellites.[]{data-label="fig:density_field_histogram"}](2.eps){width="48.00000%"} As was pointed out in the literature [e.g., @peng2012; @carollo2013; @woo2013; @kovac2014], a major disadvantage of the fifth nearest neighbor approach is that for small groups (i.e., groups with $N_{\rm abs} \lesssim 5$) the length scale on which $\delta$ is estimated can be much larger than the size of the corresponding DM halo. That is, for galaxies within such groups $\delta$ is not probing the local density of tracer galaxies within the halo, but rather the density on a scale beyond the group, which furthermore depends strongly on the richness of the group. For such low richness groups (and singletons), $\delta$ defines an environment that is expected to have little influence on the quenching of galaxies. For much richer groups, in contrast, $\delta$ is measuring the environment well within the group virial radius. Therefore, we should not be surprised if we do not find a dependence of the quenched fraction on $\delta$ for low richness groups. Methods {#sec:method} ======= In this section we describe the methods that we use for our analysis. In the following, an index $i$ refers to the $i$th galaxy in the sample. For example, $m_{\ast,i}$ is the stellar mass of the $i$th galaxy and for any generic parameters $p_1,p_2,\ldots$ the symbols $p_{1,i}, p_{2,i},\ldots$ denote the position of the $i$th galaxy in the corresponding multi-dimensional parameter space.
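As an illustration of the procedure of Section \[sec:density\_field\], the following minimal sketch (in Python, assuming NumPy; the flat-sky transverse coordinates, the fixed tracer arrays, and all variable names are illustrative assumptions rather than our actual implementation) computes the fifth nearest neighbor overdensity for a set of positions.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def fifth_nn_density(x, y, z_red, x_tr, y_tr, z_tr, dv=1000.0, k=5):
    """n_i = 1/V_i from a cylinder around each position that is shrunk until
    it contains exactly k tracers (Section [sec:density_field]).

    x, y, z_red      : transverse coordinates and redshifts of the positions
    x_tr, y_tr, z_tr : the same quantities for the mass-complete tracer sample
    """
    n = np.empty(len(x))
    for i in range(len(x)):
        dz = (1.0 + z_red[i]) * dv / C_KMS            # half-length of the cylinder
        in_slab = np.abs(z_tr - z_red[i]) < dz        # line-of-sight selection
        r_proj = np.hypot(x_tr[in_slab] - x[i], y_tr[in_slab] - y[i])
        r_k = np.sort(r_proj)[k - 1]                  # radius enclosing exactly k tracers
        n[i] = 1.0 / (np.pi * r_k ** 2 * 2.0 * dz)    # n_i = 1/V_i (normalization cancels in delta)
    return n

def fifth_nn_overdensity(x, y, z_red, x_tr, y_tr, z_tr, rng):
    n_gal = fifth_nn_density(x, y, z_red, x_tr, y_tr, z_tr)
    # mean density from random positions obtained by scrambling the redshifts
    # among the galaxies while keeping their sky positions fixed
    n_bar = np.mean(fifth_nn_density(x, y, rng.permutation(z_red), x_tr, y_tr, z_tr))
    return n_gal / n_bar - 1.0                        # delta_i = (n_i - nbar) / nbar
```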
Estimation of $f_{\rm q}$ ------------------------- The fraction $f_{\rm q | S}$ of quenched objects within a sample $S$ is estimated by $$\label{eq:f_q} f_{\rm q | S} = \frac{\sum_i W_i q_i}{\sum_i W_i}\:,$$ where $q_i$ is 1 if the galaxy $i$ is quenched and 0 otherwise. The sum runs over all galaxies within the sample $S$ and $W_i$ are the weights of the galaxies in the sample as described in Section \[sec:basic\_section\]. The sample $S$ usually refers to a (sub)sample of centrals (i.e., $S = \rm cen$) or satellites (i.e., $S = \rm sat$). By selecting galaxies within bins of some given parameters $p_1, p_2,\ldots$ (e.g., stellar mass $m_\ast$, overdensity $\delta$, halo mass $m_{\rm h}$, etc.), it is straightforward to compute $f_{\rm q | S}(p_1,p_2,\ldots)$ as a function of these parameters. Estimation of $\epsilon_{\rm sat}$ {#sec:eps_sat} ---------------------------------- The estimation of the satellite quenching efficiency $\epsilon_{\rm sat}$ is a bit more subtle. In the literature [@vandenbosch2008; @peng2012; @knobel2013; @kovac2014], $\epsilon_{\rm sat}$ was computed either as a function of $m_\ast$ only or as a function of $m_\ast$ and $\delta$ as follows: $$\label{eq:eps_sat_standard} \epsilon_{\rm sat}(m_\ast,\delta) = \frac{f_{\rm q | sat}(m_\ast,\delta) - f_{\rm q | cen}(m_\ast)}{1 - f_{\rm q | cen}(m_\ast)}\:.$$ It should be noted that in either case the quenched fraction of centrals, which plays the role of the background or control sample, is only considered as a function of $m_\ast$ and not as a function of $\delta$. Defined like this, $\epsilon_{\rm sat}(m_\ast,\delta)$ loosely has the simple interpretation as the probability that a central becomes quenched, as it falls into another DM halo becoming a satellite [for a discussion see @knobel2013; @kovac2014]. The special role of the stellar mass $m_\ast$ is that it is the one quantity that should stay the same (or more strictly vary continuously) when a central becomes a satellite, since $m_\ast$ is an intrinsic rather than an environmental parameter. As noted above, the quenching probability depends strongly on mass in a similar way for centrals and satellites, as also evidenced by their similar Schechter $M^*$ characteristic mass [see @peng2012]. By using the mass-dependent quenched fraction of centrals to “renormalize” the quenched fraction of the satellites, we effectively take out this strong empirical mass-dependence. Because centrals and satellites are distributed differently in the $(m_\ast,\delta)$ parameter space, estimating $\epsilon_{\rm sat}(m_\ast,\delta)$ requires the use of very small bins or a careful matching of the distributions of both samples to each other [@kovac2014]. In order to circumvent this computationally inconvenient situation and to allow us to examine the dependence of $\epsilon_{\rm sat}$ on any set of parameters $p_1, p_2, \ldots$, we introduce a new estimator that defines a quenching efficiency on a galaxy-by-galaxy basis such that the quenching efficiency for a set of galaxies is obtained by simply averaging over these individual quenching efficiencies. We must first compute the quenched fraction of centrals as a function of their stellar mass, using Equation (\[eq:f\_q\]), i.e., $$f_{\rm q | cen} = \frac{\sum_i W_{i} q_{i}}{\sum_i W_{i}}\:,$$ where, as above, $q_i$ is unity for quenched galaxies and zero otherwise. 
Since there are usually many more centrals than satellites, $f_{\rm q | cen}$ can be evaluated on a fine grid in stellar mass, and represented by some suitably smooth curve, which has to be computed only once. Then, by inserting $$f_{\rm q | sat} = \frac{\sum_i W_{i} q_{i}}{\sum_i W_{i}}$$ into (\[eq:eps\_sat\_standard\]), where the sum runs over all satellites within a given bin $(m_\ast,\delta)$, we obtain $$\begin{aligned} \epsilon_{\rm sat}(m_\ast,\delta) &= \frac{ \dfrac{\sum_i W_{i} q_{i}}{\sum_i W_{i}}-f_{\rm q | cen}(m_\ast)}{1-f_{\rm q | cen}(m_\ast)}\\ &= \frac{\sum_i W_{i}\left( \dfrac{ q_{i} - f_{\rm q | cen}(m_\ast)}{1-f_{\rm q | cen}(m_\ast)} \right) }{\sum_i W_{i}}\:,\end{aligned}$$ where for the second equality we have just rearranged the terms. If we now shrink the size of the bin until only one satellite is left, we can effectively interpret the corresponding value, i.e., $$\label{eq:eps_sat_i} \epsilon_{{\rm sat},i} = \frac{q_i - f_{{\rm q | cen}}(m_{\ast,i})}{1-f_{{\rm q | cen}}(m_{\ast,i})}\:,$$ as an estimate of the satellite quenching efficiency for the single galaxy $i$. Here, $f_{{\rm q | cen}}(m_{\ast,i})$ is our previously computed smooth curve for the quenched fraction of centrals, which is evaluated at $m_{\ast,i}$. Of course, the meaning of $\epsilon_{{\rm sat},i}$ for any individual galaxy is limited, because it is either quenched or not, i.e., $q_i$ is either unity or zero. However, we can obtain $\epsilon_{\rm sat}$ for any set of galaxies by simply averaging over these individual estimates, i.e., $$\label{eq:epsilon_sat_new} \epsilon_{\rm sat}(m_\ast,\delta) = \frac{\sum_i W_{i}\epsilon_{{\rm sat},i}}{\sum_i W_{i}}\:,$$ where the sum runs over all satellites within a particular sample of interest. Estimating $\epsilon_{\rm sat}$ using Equation (\[eq:epsilon\_sat\_new\]) not only avoids the need for sample matching, but also enables us to easily compute $\epsilon_{\rm sat}(p_1,p_2,\ldots)$ as a function of as many parameters $p_1,p_2,\ldots$ as we like. The individual estimates $\epsilon_{{\rm sat},i}$ need only be calculated once for each $i$th galaxy and then combined at will to compute $\epsilon_{\rm sat}$ for any set of galaxies and thus also to compute $\epsilon_{\rm sat}(p_1,p_2,\ldots)$. For some applications it could be useful to generalize the definition of the satellite quenching efficiency in Equation (\[eq:eps\_sat\_standard\]) to include additional parameter dependencies in the quenched fraction of centrals, i.e., to use $f_{\rm q | cen}(m_\ast,\tilde p_1,\tilde p_2,\ldots)$ instead of $f_{\rm q | cen}(m_\ast)$. This generalization changes the interpretation of the satellite quenching efficiency in so far as it now corresponds to the probability that a central with mass $m_\ast$, which originally resided in the environment $\tilde p_1,\tilde p_2,\ldots$, becomes quenched, as it becomes a satellite in the environment $p_1, p_2,\ldots$. We call the parameters $\tilde p_1,\tilde p_2,\ldots$ “secondary parameters” and denote them by a tilde to distinguish them from the primary parameters $p_1,p_2,\ldots$. Using our estimator of the previous paragraph, this generalized satellite quenching efficiency can be easily computed by simply exchanging $f_{\rm q | cen}(m_{\ast,i})$ in Equation (\[eq:eps\_sat\_i\]) by $f_{\rm q | cen}(m_{\ast,i},\tilde p_{1,i},\tilde p_{2,i},\ldots)$. 
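The estimator is straightforward to put into practice. The Python sketch below (illustrative function and variable names; `q` is unity for quenched galaxies and zero otherwise, `w` are the weights of Section \[sec:basic\_section\], and the smooth $f_{\rm q | cen}(m_\ast)$ curve is represented here by a simple running-bin estimate with interpolation, standing in for whatever smooth fit one prefers) computes the individual efficiencies of Equation (\[eq:eps\_sat\_i\]) and averages them as in Equation (\[eq:epsilon\_sat\_new\]). The generalization with secondary parameters simply replaces `central_fq_curve` by a function of $m_\ast$ and $\tilde p_1,\tilde p_2,\ldots$.

```python
import numpy as np

def quenched_fraction(q, w):
    """Weighted quenched fraction of Equation (eq:f_q): q is 0/1, w the galaxy weights."""
    return np.sum(w * q) / np.sum(w)

def central_fq_curve(logm_cen, q_cen, w_cen, dlogm=0.05, halfwidth=0.15):
    """f_q|cen as a smooth function of log stellar mass, here a running-bin estimate
    on a fine grid plus linear interpolation (a stand-in for any smooth fit)."""
    grid = np.arange(logm_cen.min(), logm_cen.max() + dlogm, dlogm)
    fq = np.array([quenched_fraction(q_cen[np.abs(logm_cen - m) < halfwidth],
                                     w_cen[np.abs(logm_cen - m) < halfwidth])
                   for m in grid])
    return lambda logm: np.interp(logm, grid, fq)

def eps_sat_individual(q_sat, logm_sat, fq_cen):
    """Per-satellite quenching efficiency of Equation (eq:eps_sat_i)."""
    fqc = fq_cen(logm_sat)
    return (q_sat - fqc) / (1.0 - fqc)

def eps_sat_mean(eps_i, w_sat, sel):
    """Weighted average of eps_sat_i over any subsample 'sel', Equation (eq:epsilon_sat_new)."""
    return np.sum(w_sat[sel] * eps_i[sel]) / np.sum(w_sat[sel])
```

Since `eps_sat_individual` needs to be evaluated only once per satellite, $\epsilon_{\rm sat}(p_1,p_2,\ldots)$ for any binning then follows by calling `eps_sat_mean` with the corresponding selection mask.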
If we consider the dependencies $f_{\rm q | cen}(m_\ast,\tilde p_1,\tilde p_2,\ldots)$ we denote the corresponding satellite quenching efficiency by $\epsilon_{{\rm sat},\tilde p_1,\tilde p_2,\ldots}(p_1,p_2,\ldots)$. The mass $m_\ast$ is not explicitly included in the notation, since we assume here that it is always considered in $f_{\rm q | cen}$. We will use this special satellite quenching efficiency only once below, in Section \[sec:eps\_sat\_analysis\]. The physical meaning of this more general $\epsilon_{{\rm sat},\tilde p_1,\tilde p_2,\ldots}(p_1,p_2,\ldots)$ is to more directly compare the properties of a given satellite with those that a central has in exactly the same location in the $(\tilde p_1,\tilde p_2,\ldots)$ space. Matching of Galaxy Samples -------------------------- To exclude spurious segregation effects when comparing the $\epsilon_{\rm sat}$ of two different samples of satellites, using the above individual estimator $\epsilon_{{\rm sat},i}$, we must still worry whether these two samples are well matched with respect to one or more parameters, i.e., whether the distribution of galaxies within the bin in question is the same in the two samples [@kovac2014]. The same is also true when we compute a quenched fraction for two samples. We perform the required matching as follows: if the samples are to be matched in the parameters $p_1,p_2,\ldots$, we select for each galaxy $i$ of the smaller sample the corresponding galaxy $j$ of the larger sample which has the smallest separation to the galaxy $i$ in the considered parameter space. If this separation is smaller than $0.1\sqrt{N_{\rm p}}$ in a log space, where $N_{\rm p}$ is the number of matching parameters, i.e., if it holds $$\sqrt{(p_{1,i}-p_{1,j})^2 + (p_{2,i}-p_{2,j})^2 +\cdots } < 0.1 \sqrt{N_{\rm p}}\:,$$ the galaxy pair $(i,j)$ constitutes a match and the galaxy $j$ is removed from the sample, so that each galaxy of the larger sample is considered only once. If the separation is larger than $0.1\sqrt{N_{\rm p}}$, then there is no match and the galaxy is removed from the smaller sample. The matching for parameters is always performed in logarithmic space, since we will find that the relevant parameter ranges in logarithmic space are fairly similar for different parameters (see Section \[sec:eps\_sat\_analysis\]). After applying this procedure for each galaxy $i$ of the smaller sample we end up with two new, “matched” subsamples which contain the same number of galaxies, are paired galaxy by galaxy and which should therefore have a very similar distribution across the bin. Environmental quenching of centrals {#sec:centrals_satellites} =================================== There is an ongoing discussion as to what extent the centrals of galaxy groups are ‘special’ compared with satellites [for recent discussions based on “brightest group galaxies” see, e.g., @vonderlinden2007; @shen2014]. In the context of galaxy quenching, [@peng2012] made the hypothesis that, while satellites encounter both mass quenching and environment quenching, centrals encounter only mass quenching, and are not influenced by their local environment. There were two pieces of evidence which led [@peng2012] to this conclusion: first, it was shown that the red fraction of the bulk of centrals at fixed mass is much more weakly dependent on $\delta$ than the corresponding red fraction of satellites. 
The weak deviations that were observed could be attributed to secondary effects (e.g., impurities of the group sample, especially for low mass centrals in high density regions). Second, it was pointed out that the mass function of red centrals, unlike that of red satellites, does not show any signs of a second Schechter function at the low mass end. In [@peng2010] it was shown that this second Schechter component, with a faint end slope that is essentially the same as that of the star-forming population, arises from the action of the (mass-independent) “environment quenching”. Both arguments are still true. It should however be appreciated that, as is clear from the figures and discussion in [@peng2012], the range of $\delta$ of the majority of centrals is different from that of the majority of satellites — simply because most centrals are not the centrals of the groups that contain most satellites and are in fact singletons. In this section, we will investigate how the quenching of centrals is influenced by the environment by looking at the same regions in the multi-parameter space that are occupied by both centrals and satellites, i.e., by isolating those centrals that are the centrals of the richer groups, $N_{\rm abs} \geq 3$.

In the panels (a) and (b) of Figure \[fig:fq\_cen\_sat\] we show the quenched fraction of all centrals, $f_{\rm q | cen}(m_{\ast},\delta)$, and of satellites, $f_{\rm q | sat}(m_{\ast},\delta)$, as a function of their stellar mass $m_\ast$ and galaxy overdensity $\delta$. Then in panel (c) we show the corresponding satellite quenching efficiency $\epsilon_{\rm sat}(m_{\ast},\delta)$ computed in the usual way using the mass-dependent $f_{\rm q | cen}(m_{\ast})$.

![image](3.eps){width="70.00000%"}

It is easy to recognize the main features that were earlier found by [@peng2012]. First, the $f_{\rm q | cen}$ at fixed mass of the general central population is indeed only weakly dependent on $\delta$, as illustrated by the near vertical contours over much of panel (a). This is less true at low central masses, but the density dependence of $f_{\rm q | cen}$ for these centrals could reflect the misidentification of low mass centrals in high density environments. There are not so many low mass centrals in high density environments and there could be some significant contamination from misidentified satellites (as argued by @peng2012). The different pattern of $f_{\rm q|sat}$ in panel (b) reflects the combination of mass quenching effects (at high masses) and mass-independent “environment quenching” dominating at lower masses. Once we take out the mass effects via $\epsilon_{\rm sat}$ we also reproduce the remarkably weak dependence of $\epsilon_{\rm sat}$ on $m_\ast$ for fixed $\delta$ in panel (c). Since at low masses, where environmental effects dominate [@peng2010], the number of satellites in high density environments far outweighs the number of centrals, it is clear that, as correctly stated by both [@peng2012] and [@kovac2014], satellites are responsible for most of the $\delta$ dependence within the overall population of galaxies. The dominance of singleton centrals also accounts for the differences in the shape, at low masses, of the mass function of passive central and satellite galaxies that was highlighted in [@peng2012], i.e., the prominence of the second Schechter component for satellites and its absence for centrals. However, two related points should be noted.
First, as can be seen from the white contours in Figure \[fig:fq\_cen\_sat\], and also from the histograms in Figure \[fig:density\_field\_histogram\] and the figures in [@peng2012], the bulk of the centrals and the bulk of the satellites occupy different regions in the $(m_\ast,\delta)$ plane, especially in $\delta$. The peak of the centrals is at $\log(\delta+1) \sim -0.5$, while the peak of the satellites is at $\log(\delta+1) \sim 1$. Second, 98% of our centrals inhabit groups with $N_{\rm abs}\leq 2$. That is, the sample of centrals is overwhelmingly dominated by galaxies for which the estimated $\delta$ is based on length scales that are typically much larger than the virial radii of the halos concerned. The weak dependence on $\delta$ for the centrals could therefore conceivably be a consequence of the fact that we are not measuring, for most centrals, the actual local environment, which may be most relevant for the quenching of galaxies.

To look at the centrals that are in the same groups as the satellites, we first recompute $f_{\rm q | cen}$ after constraining the sample of centrals to groups with $N_{\rm abs} \geq 3$ (see Figure \[fig:fq\_cen\_sat\], panel (d)), i.e., the centrals of the satellites we are considering. Careful inspection shows that the $f_{\rm q | cen}$ of these centrals is systematically higher than for the general set of centrals and is actually comparable to, or even larger than, the $f_{\rm q | sat}$ of the satellites. To see this more clearly, we show two further panels in Figure \[fig:fq\_cen\_sat\]. First, in panel (e) we show a new quenching efficiency that is computed for the centrals in the groups using exactly the same general $f_{\rm q | cen}$ for the background sample as was used to compute the $\epsilon_{\rm sat}$ in panel (c). We denote this new quantity in formal analogy to Equation (\[eq:eps\_sat\_standard\]) by $$\label{eq:eps_cen} \epsilon_{\rm cen}(m_\ast,\delta) = \frac{f_{\rm q | cen}(m_\ast,\delta)|_{N_{\rm abs} \geq 3} - f_{\rm q | cen}(m_\ast)}{1-f_{\rm q | cen}(m_\ast)}\:.$$ Here $f_{\rm q | cen}(m_\ast,\delta)|_{N_{\rm abs} \geq 3}$ denotes the quenched fraction of centrals restricted to groups with $N_{\rm abs} \geq 3$. This therefore represents an environmental quenching efficiency for centrals. It can be seen in panel (e) that, over the limited range of mass where this is defined, $\epsilon_{\rm cen}$ is very similar to the $\epsilon_{\rm sat}$ surface, emphasizing the similarity in the $(m_\ast,\delta)$ plane in the quenching of satellites and their associated centrals. Second, in panel (f), we show a modified satellite quenching efficiency $\epsilon_{\rm sat,\delta}(m_\ast,\delta)$ that is now computed with the density-dependent $f_{\rm q | cen}$ of the centrals (see Section \[sec:eps\_sat\]) with $N_{\rm abs} \geq 3$. This is, over the region that can be computed, close to zero or even negative. This again illustrates the similarity in the quenching of centrals and satellites in the same groups. A more direct comparison between $f_{\rm q | cen}$ and $f_{\rm q | sat}$ is given in Figure \[fig:central\_satellite\_matched\], which is based on carefully matched samples of satellites and centrals.

![Comparison of the quenched fraction $f_{\rm q}$ of centrals (red) and satellites (blue) for different samples. For panel (a) we used all centrals (i.e., $N_{\rm abs} \geq 0$), while for all other panels we used only the centrals of the groups in which our satellites reside (i.e., $N_{\rm abs} \geq 3$).
In the panels (a) and (b) we match the samples of centrals and satellites only with respect to stellar mass, while in panel (c) we match them in $m_\ast$ and $\delta$ and in panel (d) in $m_\ast$, $\delta$, and $R$. The number of galaxies within the matched samples is indicated at the bottom of each panel. If we consider only centrals in groups with $N_{\rm abs} \geq 3$, we do not detect any significant difference in the quenched fraction. This remains true as we additionally match in $\delta$ and $R$.[]{data-label="fig:central_satellite_matched"}](4.eps){width="48.00000%"}

Here we compare the mean values of $f_{\rm q | cen}$ and $f_{\rm q | sat}$ for different sets of mass-matched samples of centrals and satellites. In panel (a), we mass-match a set of general centrals (i.e., $N_{\rm abs} \geq 0$) and satellites, without regard to the richness of the groups or to the overdensities $\delta$. As expected, the mass-matched centrals have a systematically lower $f_{\rm q}$ than the equivalent satellites. This is a reflection of the average $\epsilon_{\rm sat} \sim 0.4$ derived by [@vandenbosch2008], [@peng2012], and at higher redshifts by [@knobel2013], and indeed we can compute from the numbers in this panel an average $\epsilon_{\rm sat}$ for this overall sample of $\epsilon_{\rm sat} \simeq 0.41$. In the other panels to the right, we now consider only those centrals in $N_{\rm abs} \geq 3$ groups and then additionally match the environment $\delta$ and distance $R$ for the centrals and satellites. It is obvious that the difference between the quenched fractions of centrals and satellites in the leftmost panel completely disappears as soon as we focus on only the $N_{\rm abs} \geq 3$ groups, and this remains the case as we additionally match in $\delta$ and $R$. However, as can be seen from the numbers in Figure \[fig:central\_satellite\_matched\], the regions in the parameter space where the centrals with $N_{\rm abs} \geq 3$ and satellites overlap are relatively small, so that these carefully matched samples now constitute only a very small fraction of the original samples.

In summary, we find that we cannot detect any difference in the quenched fractions of centrals and satellites if we focus only on the centrals and satellites in the same $N_{\rm abs} \geq 3$ groups, i.e., if we exclude the vast majority of centrals (i.e., the singletons and pairs), for which $\delta$ is anyway not probing the local density within the halo, and if the centrals and satellites are properly matched in stellar mass and $\delta$. This is an important point: in the groups and clusters where the satellites actually reside, the centrals are as quenched as the satellites at a given $(m_\ast,\delta)$, at least in the range of $(m_\ast,\delta)$ that is probed by both centrals and satellites. At least in this restricted part of parameter space, we conclude that centrals do indeed feel their environment in the same way as satellites. We stress that this conclusion does not invalidate the more general statements that were made in [@peng2012] but refines them for the particular subset (2%) of centrals that are in the same halos as the satellites.
It remains to be seen (and we cannot test) whether the environmental effects on massive centrals and satellites at high $\delta$ reflect the same physical effects that are responsible for the environmental effects on lower mass satellites which also dominate the environmental effects in the overall population (e.g., the striking separability in $f_{\rm q}$ in the overall population highlighted by @peng2010). However, the mass independence of the $\epsilon_{\rm sat}$ in panel (c) of Figure \[fig:fq\_cen\_sat\] suggests that there may indeed be a connection between high and low mass satellites and thus to the centrals in these same environments. We return to this point in Section \[sec:discussion\]. Satellite Quenching Efficiency as a Function of Environment {#sec:eps_sat_analysis} =========================================================== In this section, we will analyze the dependence of the satellite quenching efficiency $\epsilon_{\rm sat}$ on six parameters that describe the satellites and their environments, namely the stellar mass $m_{\rm sat}$ of the satellites, the host halo mass $m_{\rm h}$, the stellar mass of the central $m_{\rm cen}$, the overdensity of galaxies $\delta$, group-centric distance $R$, and the sSFR of the central (see Table \[tab:parameters\]). To treat the parameters more or less equally, we look at the logarithm of each quantity. Particularly, we measure the sSFR$_{\rm cen}$ offset from the black line in Figure \[fig:SSFR\_mass\_diagram\] (in dex) and denote this as $\Delta$sSFR$_{\rm cen}$. The logarithmic widths of the distributions in each parameter are broadly similar, between 1.3 and 3 dex. The average satellite quenching efficiency as a function of each of these six parameters individually is shown in Figure \[fig:e\_s\_1D\], where we plot the average $\epsilon_{\rm sat}$ of all satellites in five equally spaced bins of each of the six parameters in turn. First, the dependence of $\epsilon_{\rm sat}$ on both $m_{\rm sat}$ (and also $m_{\rm cen}$) is very weak. The lack of a significant dependence on the mass of the satellite is a confirmation of the results of [@vandenbosch2008] and [@peng2012] and we will not discuss it further. On the other hand, we find a strong dependence of the average $\epsilon_{\rm sat}$ on $m_{\rm h}$, $\delta$, and $R$. This is consistent with results previously reported by [@peng2012] and [@woo2013]. It is noticeable that $\epsilon_{\rm sat}(\delta)$ seems to change its slope around $\delta \sim 1$ and we will return to this point below. We would expect $\epsilon_{\rm sat}$ to have some dependence on $m_{\rm cen}$ because $m_{\rm cen}$ and $m_{\rm h}$ are coupled. The link between satellite quenching and the sSFR of the central galaxy has been noted by [@weinmann2006] in the context of “galactic conformity” and has been analyzed in the $\epsilon_{\rm sat}$ formalism by [@phillips2014] at least for galaxy pairs. We devote Section \[sec:galaxy\_conformity\] to an extensive discussion of this sSFR dependence. As an aside, we further illustrate the similarity of centrals and satellites in the groups discussed in Section \[sec:centrals\_satellites\] by plotting $\epsilon_{\rm cen}$ (see Equation (\[eq:eps\_cen\])) on the four relevant panels of Figure \[fig:e\_s\_1D\] as dotted curves. 
(We choose to represent the dependence of $\epsilon_{\rm cen}$ on the stellar mass of the central on the $m_{\rm sat}$ panel rather than the $m_{\rm cen}$ panel below it, simply because we are interested in the mass of the galaxy itself irrespective of whether it is a central or a satellite.) It can be seen that these curves are fairly similar, reinforcing our earlier conclusion that the centrals in the groups are experiencing the same environmental effects as the satellites.

![image](5.eps){width="70.00000%"}

[llrr]{} Parameter & Description & Min & Max\
$\log m_{\rm sat}$ & Stellar mass of the satellites & $9.0$ & $11.5$\
$\log m_{\rm h}$ & Halo mass of the group & $12.0$ & $14.7$\
$\log m_{\rm cen}$ & Stellar mass of the central of the group & $10.5$ & $12.0$\
$\log(\delta+1)$ & Overdensity based on the fifth nearest neighbor approach (see Section \[sec:density\_field\]) & $-0.75$ & $2.25$\
$\log R$ & Group-centric distance as defined in Equation (\[eq:R\]) & $-1.0$ & $0.5$\
$\Delta {\rm sSFR}_{\rm cen}$ & Offset (in dex) of log sSFR of the central from the black line in Figure \[fig:SSFR\_mass\_diagram\] & $-1.5$ & $1.8$

\[tab:parameters\]

The difficulty in interpreting the one-dimensional plots in Figure \[fig:e\_s\_1D\] arises because there are often strong correlations between these six parameters. Table \[tab:correlations\] gives the correlation coefficients between the five parameters (excluding the sSFR of the central). If two parameters $p_1$ and $p_2$ are correlated and $p_1$ is driving the slope of the satellite quenching efficiency, the correlation between $p_1$ and $p_2$ could induce a correlation between $p_2$ and $\epsilon_{\rm sat}$ which is of no physical relevance. Looking at Table \[tab:correlations\], we observe that, on the one hand, there is a strong correlation between $\log m_{\rm h}$ and $\log m_{\rm cen}$ for the satellites and, on the other hand, there is a strong correlation (or anti-correlation) between $\log m_{\rm h}$, $\log(\delta+1)$, and $\log R$. To unravel these correlations and to try to gain more information on the physical origins of $\epsilon_{\rm sat}$, it is useful to move from the one-dimensional plots of Figure \[fig:e\_s\_1D\] to two-dimensional plots for pairs of the parameters displaying the strongest one-dimensional effects.

We look first at Figure \[fig:fq\_Mh\_Mc\], which shows $\epsilon_{\rm sat}(m_{\rm h},m_{\rm cen})$. The gradient of $\epsilon_{\rm sat}$ is basically pointing entirely in the direction of $m_{\rm h}$ for $\log m_{\rm h} > 13.5$ (i.e., the contours are more or less vertical). For $\log m_{\rm h} < 13.5$ the pattern of the contours is more complex, but still there is no clear trend with $m_{\rm cen}$ visible. It is not clear whether the deviations from vertical lines reflect an actual signal or whether they are due to misclassifications of centrals in low mass groups and/or misidentification of low mass groups in general. We can conclude that at fixed halo mass, the stellar mass of the central galaxy seems to have no influence on the probability that a satellite is quenched beyond the effect of the halo mass (at least for $\log m_{\rm h} > 13.5$).

![Satellite quenching efficiency $\epsilon_{\rm sat}$ as a function of halo mass $m_{\rm h}$ and stellar mass of the central $m_{\rm cen}$.
The colors correspond to the values of $\epsilon_{\rm sat}$ as indicated in the color bar. The white contours indicate the (unweighted) number density of galaxies on a logarithmic scale. To produce the contour plots we have used running bins with size $0.2 \times 0.2$ in logarithmic parameter space and have additionally smoothed the resulting image by a Gaussian filter with standard deviation 0.3. Bins with fewer than 30 galaxies were discarded.[]{data-label="fig:fq_Mh_Mc"}](6.eps){width="48.00000%"}

[l|ccccc]{} & $\log m_{\rm sat}$ & $\log m_{\rm h}$ & $\log m_{\rm cen}$ & $\log(1+\delta)$ & $\log R$\
$\log m_{\rm sat}$ & $1.00$ & $0.04$ & $0.06$ & $0.14$ & $-0.08$\
$\log m_{\rm h}$ & $0.04$ & $1.00$ & $0.69$ & $0.41$ & $-0.08$\
$\log m_{\rm cen}$ & $0.06$ & $0.69$ & $1.00$ & $0.26$ & $-0.04$\
$\log(1+\delta)$ & $0.14$ & $0.41$ & $0.26$ & $1.00$ & $-0.52$\
$\log R$ & $-0.08$ & $-0.08$ & $-0.04$ & $-0.52$ & $1.00$

\[tab:correlations\]

More interesting are the two-dimensional plots between pairs of the three parameters which have relatively strong effects in the one-dimensional plots, i.e., $\log m_{\rm h}$, $\log(\delta+1)$, and $\log R$. These are shown in Figure \[fig:Mh\_delta\_r\]. Given the issues discussed earlier in the interpretation of the overdensity $\delta$, the thick white line in each panel indicates the contour for which the median richness of the satellites is $N_{\rm abs} = 5$, i.e., it separates the plane into regions that are dominated by rich groups, for which $\delta$ is measuring the local density, and small groups, for which $\delta$ is less informative. We can immediately observe the following trends.

![Satellite quenching efficiency $\epsilon_{\rm sat}$ as a function of the pairwise parameters $m_{\rm h}$, $\delta$, and $R$. The colors correspond to the values of $\epsilon_{\rm sat}$ as indicated in the color bar. The thin white contours indicate the (unweighted) number density of galaxies on a logarithmic scale and the thick white contour indicates the contour for which the median richness of the satellites is $N_{\rm abs} = 5$. To produce the contour plots we have used running bins with size $0.2 \times 0.2$ in logarithmic parameter space and have additionally smoothed the resulting image by a Gaussian filter with standard deviation 0.3. Bins with fewer than 30 galaxies were discarded.[]{data-label="fig:Mh_delta_r"}](7.eps){width="48.00000%"}

On the right side of the thick white line in the top panel (i.e., in rich groups), the gradient of $\epsilon_{\rm sat}$ has a strong component in the direction of $\delta$. In fact, the gradient is stronger in the direction of $\delta$ than in the direction of $m_{\rm h}$. However, on the left side of the thick white line, the dependence on $\delta$ is almost negligible. This behavior largely explains the origin of the change of slope of $\epsilon_{\rm sat}(\delta)$ in Figure \[fig:e\_s\_1D\]. A primacy of $\delta$ over $m_{\rm h}$ in the regime where $\delta$ is most meaningful was pointed out by [@peng2012]. On the right side of the thick white line in the middle panel, the dependence of $\epsilon_{\rm sat}$ on the group-centric distance $R$ is of similar importance to that on $\delta$. On the left side, where $\delta$ is less meaningful, however, $R$ is the dominating parameter (as was also found by @carollo2014).
However, on all three panels of Figure \[fig:Mh\_delta\_r\] the contours are not simply horizontal or vertical, and so we must conclude that all of $\delta$, $R$ and $m_{\rm h}$ are playing a role in driving quenching, with relative importance depending on the region of parameter space in consideration. This conclusion is further confirmed by a more quantitative analysis in the five-dimensional parameter space including $m_\ast$, $m_{\rm h}$, $m_{\rm cen}$, $\delta$, and $R$. To try to disentangle the correlations between different parameters, we identified pairs of galaxies aligned along a given dimension (i.e., which differed primarily in only one of the parameters). From all these pairs aligned along one dimension, we estimated an effective mean gradient along this dimension. We find that these gradients (normalized by the width of the parameter space given in Table \[tab:parameters\]) for $m_{\rm h}$, $\delta$, and $R$ are all of similar strength and inconsistent with zero at about $3\sigma$. On the other hand, the effective gradients for both $m_\ast$ and $m_{\rm cen}$ are consistent with zero.

Here, we briefly comment on what we mean when we say that parameter $p_1$ is more dominant than parameter $p_2$ in a certain regime. We mean that the gradient of $\epsilon_{\rm sat}(p_1,p_2)$, normalized to the typical range of relevant values of $p_1$ and $p_2$ in log scale, has a larger component in the direction of $p_1$ than of $p_2$. Since the axes in Figure \[fig:Mh\_delta\_r\] roughly correspond to these ranges, the dominant parameter is the one in whose direction most contour lines are crossed. However, due to the rather vague definition of “parameter space” and due to the fact that the gradient of $\epsilon_{\rm sat}$ is changing over the parameter space, it is difficult to precisely quantify this concept. A further difficulty arises because the measurement uncertainty in each of these three parameters almost certainly varies across the diagrams.

Most of these conclusions have been seen before in similar analyses, which have however been more limited or have not used the quenching efficiency formalism. One of the goals of this section has been to present these results in a single coherent and homogeneous way. To summarize, we conclude from this analysis that all of $\delta$, $R$, and $m_{\rm h}$ are playing a role in the environmentally driven quenching of satellites. The stellar mass of the satellite and of the central are not correlated with $\epsilon_{\rm sat}$, except to the extent that the latter may reflect halo mass. Finally, the sSFR of the central is a major factor, and it is to this effect that we now turn in the next section.

Galactic Conformity {#sec:galaxy_conformity}
===================

We showed in Figure \[fig:e\_s\_1D\] that one of the strong environmental dependencies of $\epsilon_{\rm sat}$ is the sSFR of the central, which is parameterized as $\Delta {\rm sSFR}_{\rm cen}$ (see Table \[tab:parameters\]), the offset of the sSFR of the central from the black line in Figure \[fig:SSFR\_mass\_diagram\]. In this section, we will analyze and discuss this effect in detail. Since the distribution of satellites strongly peaks around $\Delta {\rm sSFR}_{\rm cen} \simeq -1$ (i.e., $92\%$ of the satellites have a central that is quenched), we will only consider in our analysis whether $\Delta {\rm sSFR}_{\rm cen}$ for a given satellite is larger or smaller than zero (i.e., whether the central of a given satellite is star forming or quenched).
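Operationally, this split and the corresponding split satellite quenching efficiencies used below are again just conditional, weighted averages of the individual estimates $\epsilon_{{\rm sat},i}$ of Section \[sec:eps\_sat\]. A minimal Python sketch, continuing the illustrative names used in the sketches above, reads:

```python
import numpy as np

def conformity_split(eps_i, w_sat, dssfr_cen):
    """Average satellite quenching efficiency, split by the state of the central:
    star forming (Delta sSFR_cen > 0) versus quenched (Delta sSFR_cen <= 0)."""
    sf = dssfr_cen > 0.0
    eps_sf = np.sum(w_sat[sf] * eps_i[sf]) / np.sum(w_sat[sf])
    eps_q = np.sum(w_sat[~sf] * eps_i[~sf]) / np.sum(w_sat[~sf])
    return eps_sf, eps_q
```

Applying the same conditional average within bins of any of the parameters of Table \[tab:parameters\] yields the split curves discussed in the following subsections.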
The Meaning of Galactic Conformity {#sec:meaning_conformity}
----------------------------------

Since it was introduced by [@weinmann2006], the term “galactic conformity” has entered widespread use in the lexicon of galaxy evolution to describe the effect whereby quenched centrals are more likely to have quenched satellites (and vice versa). We think it is important to clarify the use and astrophysical implications of this term. We first distinguish between the probability that a particular galaxy is quenched, $P_{\rm q}$, and the observed fraction of similar galaxies that have actually been quenched, i.e., the outcome $f_{\rm q}$ that we have used previously in this paper. Generally, we would regard the latter as a good estimator of the former, neglecting complications such as the fact that $P_{\rm q}$ may have varied with time, whereas $f_{\rm q}$ is the integrated outcome observed at one particular epoch. However, galaxies are (probably) not intrinsically probabilistic systems, and therefore the probabilistic aspect of $P_{\rm q}$ must reflect the action of some number of “hidden variables” — to use the parlance of quantum mechanics — in the general sense of “missing information” which would be necessary to completely understand the causal physical process of galaxy quenching. Some of these variables may be known and their effect can, if desired, be removed by considering $P_{\rm q}$ as a function of those variables, i.e., $P_{\rm q}(p_1,p_2,\ldots)$ as estimated by $f_{\rm q}(p_1,p_2,\ldots)$. The action of all other relevant variables that are not explicitly treated in this way will be to introduce a “probabilistic” aspect to quenching. Some of the relevant variables may well be unknown or impractical to observe (or even reflect some complicated physical process that cannot be adequately described with a few global parameters). However, if we had complete knowledge of all the relevant parameters involved in quenching, then quenching would (presumably) no longer be probabilistic.

The term “galactic conformity” was introduced by [@weinmann2006] to describe the situation where there is a correlation between the state of the centrals and that of the satellites (i.e., that quenched centrals generally have quenched satellites and star-forming centrals have star-forming satellites), [*even when the two samples of satellites have been carefully matched in one or more parameters*]{}. In other words, the $f_{\rm q}(p_1,p_2,\ldots)$ for the satellites of quenched centrals is higher than the $f_{\rm q}(p_1,p_2,\ldots)$ of satellites of non-quenched centrals at the same values of $p_1,p_2,\ldots$. In the particular case of [@weinmann2006], the parameters that were matched were the halo mass and the luminosity of the satellites.

It is clear that an overall association of quenched centrals and satellites will arise rather trivially if the individual probabilities that centrals and satellites are quenched, i.e., $P_{\rm q,cen}$ and $P_{\rm q,sat}$, respectively, both depend on some external parameter whose value is shared by both the centrals and satellites. One could imagine, as did [@weinmann2006], that the chance for galaxies to be quenched depends on the mass of the parent DM halo, which will be the same for the central and all the satellites in a given halo. In this case, low mass halos would statistically have a lower quenched fraction of both centrals and satellites than would high mass halos, and therefore quenched centrals would generally be found with quenched satellites [cf. @wang2012].
If the effect of the halo mass was extremely strong, i.e., if there was a sharp threshold in halo mass above which all galaxies were quenched, then the resulting association of quenched satellites with quenched centrals would likewise be very strong, in that essentially all quenched centrals would have quenched satellites, and vice versa. However, in the simple situation described in the previous paragraph, if we now consider just a single value of halo mass or, equivalently, if we carefully match the distribution of halo masses of the satellites with quenched and non-quenched centrals, then we would find that the preferential association of quenched centrals and satellites would vanish. The distribution of halo masses of the satellites with quenched and non-quenched centrals would by construction be exactly the same, and therefore, if the quenching probability depended only on the halo mass, there would be no difference in the fraction of quenched satellites for the two sets of centrals, i.e., no “conformity”. Conformity would have been seen in a general set of centrals and satellites (as described in the previous paragraph), but would then disappear once we matched the samples of satellites in halo mass. The existence or non-existence of galactic conformity therefore depends critically on the construction and analysis of the galaxy sample and particularly on the choice of any parameters that have been matched between the samples of satellites with quenched and non-quenched centrals. The meaning of galactic conformity (in a given sample) is that the satellites somehow “know” the outcome of whether the central was quenched (and vice versa). This is equivalent to saying that there must be one or more “hidden variables” that are influencing the quenching of both centrals and satellites, but are not being matched in the sample in question. Furthermore, these hidden variable(s) must also be correlated (or even the same) for the centrals and satellites in a given halo. If both these hold, then the values of these “hidden common variable(s)” will be different for the quenched and non-quenched samples of galaxies, leading to the correlation of quenched satellites with quenched centrals that we call galactic conformity, even when other known variables have been matched. Given the extensive correlations between the variables that are encountered in descriptions of galaxies and their environments, a hidden variable could well be correlated with one of the variables that had previously been matched. However, in this case, it is clear that it is only that component of the hidden variable that is orthogonal to those previously matched variables which can cause any (remaining) conformity. If we were now to include a previously hidden variable (that was previously producing conformity) by additionally matching the values of this variable between the samples of satellites with quenched and non-quenched centrals, then the conformity would disappear (or at least be reduced if further hidden variables remained hidden). We stress that conformity should therefore be thought of as arising from the analysis rather than being some kind of absolute physical effect. The same set of galaxies will show conformity when analyzed in one way, but this will be reduced or eliminated as more variables are introduced and matched. 
Obvious possibilities for potential hidden common variables include variables that are (1) relatively easy to observe and therefore “unhide”, like halo mass or local overdensity (as in this paper), or (2) presently unobservable for most groups, like the gas entropy or the presence of shock heating or of cold gas flows, or (3) almost unobservable, such as the time that has elapsed since some particular event such as group formation. As an example of the last case, [@hearin2014] argued that the “two-halo conformity” (i.e., conformity on scales larger than the virial radius, which we have not studied in this paper) could arise naturally as a consequence of an “assembly bias”. To demonstrate this, they produced simulated galaxy mock catalogs for SDSS, in which — at a given stellar mass — SFRs were randomly drawn from the corresponding distribution of SDSS galaxies (at this mass) and assigned to the mock galaxies (at this mass) according to the relative “age” of their (sub)halos (i.e., the older the (sub)halo the lower the SFRs). The “age” of a (sub)halo was quantified by the (sub)halo property $z_{\rm starve}$, which correlates with the epoch of star formation cessation within the (sub)halo [for details, see @hearin2013]. They did not differentiate between centrals and satellites nor did they attribute any special role to the virial radius of the halos. As a result they observe a strong conformity signal within these mock catalogs on the scale 1-5 Mpc without the need of introducing any additional quenching for the satellites after they fall into the halos. Another important complication is that the (unknown) residuals that arise from observational scatter in the practical estimation of the observables (like halo mass) will effectively also act as a hidden variable. To summarize, the existence of galactic conformity in a given sample and in a given analysis tells us primarily that there are still additional hidden variables which must affect in some way the quenching of both central and satellite galaxies and whose value must be shared in some way by centrals and satellites. When a complete description of quenching (with noiseless data) is achieved, then conformity will have disappeared. Galactic Conformity in the Current Sample {#sec:current_conformity} ----------------------------------------- In the following, we denote the satellite quenching efficiency for satellites with a star-forming central by $\epsilon_{\rm sat,SF}$ and for satellites with a quenched central by $\epsilon_{\rm sat,q}$. Averaged over all parameters, we measure a mean satellite quenching efficiency $\langle \epsilon_{\rm sat,q} \rangle \simeq 0.44$ around quenched centrals and $\langle \epsilon_{\rm sat,SF} \rangle \simeq 0.17$ for the $\sim \! 8\%$ of satellites that are found around star-forming centrals. In other words, within the general (unmatched) satellite population and taking out the effects of stellar mass (by using $\epsilon_{\rm sat}$), the environmentally driven quenching of satellites is $2.6$ times stronger in satellites with quenched centrals than in those with star-forming centrals. However, in the context of the discussion in Section \[sec:meaning\_conformity\], this difference could arise simply because of the dependencies of $\epsilon_{\rm sat}$ on the other variables (i.e., $m_{\rm sat}$, $m_{\rm h}$, $m_{\rm cen}$, $\delta$, $R$) that is shown in Figure \[fig:e\_s\_1D\], if these variables are also affecting the quenching of the centrals. 
Accordingly, we construct a sample of satellites with quenched centrals that is matched to the sample of satellites with non-quenched centrals with respect to [*all*]{} five of these parameters. The result is shown in Figure \[fig:galaxy\_conformity\_matched\]. ![Quenched fraction $f_{\rm q}$ and satellite quenching efficiency $\epsilon_{\rm sat}$ for two matched samples of satellites. The red and the blue points correspond to satellites with quenched centrals and to satellites with star-forming centrals, respectively, which have been matched to each other with respect to stellar mass $m_{\rm sat}$ and all four remaining environmental parameters (i.e., $m_{\rm h}$, $m_{\rm cen}$, $\delta$, and $R$). The number of galaxies within the matched samples is indicated in the right panel. For the mean parameter values of the matched sample see Table \[tab:sample\_mean\].[]{data-label="fig:galaxy_conformity_matched"}](8.eps){width="48.00000%"} We can see immediately that, although the mean $\langle \epsilon_{\rm sat,q}\rangle \simeq 0.38$ of the matched sample and the mean $\langle \epsilon_{\rm sat,SF} \rangle \simeq 0.16$ of the matched sample are both slightly lower, there is still a clear difference between the two satellite quenching efficiencies. [l|ccccc]{} Total sample & $9.8 \pm 0.6$ & $13.6 \pm 0.6$ & $11.2 \pm 0.3$ & $1.0 \pm 0.5$ & $0.0 \pm 0.3$\ Matched sample & $9.8 \pm 0.6$ & $13.2 \pm 0.5$ & $11.0 \pm 0.2$ & $0.8 \pm 0.5$ & $0.1 \pm 0.2$ \[tab:sample\_mean\] In other words, after removing the effects of stellar mass and the four environmental variables through this matching of the samples, we still find that satellites around quenched centrals are $2.4$ times more likely to be environmentally quenched than those around non-quenched centrals, very similar to the factor of 2.6 obtained simply by averaging across all satellites without any matching. It should be noted that a consequence of our matching is that we are confined to a reduced range of the overall parameter space and so these numbers are not directly comparable. The (weighted) mean values and (weighted) standard deviations of the five environmental parameters in the matched sample of satellites are compared with the values for the overall, unmatched population of satellites in Table \[tab:sample\_mean\]. It is mainly the halo mass that is different. As discussed in Section \[sec:meaning\_conformity\], the persistence of conformity even after these five variables are matched implies that there must still be additional “hidden” variables at play that (1) affect the quenching of both satellites and centrals and (2) are distributed across the halo in the sense that satellites and centrals must share, at least to some degree, the values of these variables. We also noted in Section \[sec:meaning\_conformity\] that the conformity can be produced from residual observational errors in common variables that are playing a role in quenching (such as halo mass). However, we see no increase in conformity when we artificially introduce additional noise into these parameters and so we are quite confident that such errors represent a negligible contribution to the strong conformity effects presented in this section. Variations in the Strength of Conformity across the Parameter Space ------------------------------------------------------------------- In order to try to gain clues as to the nature of the missing variable(s), we can look at the strength of the conformity effect as functions of each of the identified variables in turn. 
In other words, we can simply split the average $\epsilon_{\rm sat}$ that was plotted against each of the five variables in Figure \[fig:e\_s\_1D\] into the contributions of those satellites with quenched and with non-quenched centrals, i.e., $\epsilon_{\rm sat,q}$ and $\epsilon_{\rm sat,SF}$, respectively. These are shown as the red and the blue solid lines, respectively, in Figure \[fig:e\_s\_1D\_conformity\]. In each panel, the black dotted line is the overall average $\epsilon_{\rm sat}$ transferred from Figure \[fig:e\_s\_1D\]. Given that the vast majority of satellites have quenched centrals, it is not surprising that this is very similar to the red line for the satellites of quenched centrals. The blue dashed line is what would be obtained for the satellites of non-quenched centrals if the $\epsilon_{\rm sat,q}$ is uniformly decreased by the factor of $\sim \! 2.5$ that was derived above as the ratio of the average values of $\epsilon_{\rm sat,q}$ and $\epsilon_{\rm sat,SF}$. If the observed blue solid line lies on this blue dashed line, then it implies that the strength of the conformity is “average”. If the blue solid line lies below the blue dashed line, i.e., with $\epsilon_{\rm sat,SF}$ approaching zero, it indicates a strong conformity effect at this position in the parameter space in the sense that the satellites of star-forming centrals are hardly being quenched at all by environmental processes. Lines lying above, with $\epsilon_{\rm sat,SF}$ approaching $\epsilon_{\rm sat,q}$, indicate that there is a weak conformity effect at this location, as the difference between the satellites of quenched and non-quenched centrals is becoming small. The data are quite noisy, because so few satellites have star forming centrals and so these plots should not be over-interpreted. In order to check the robustness of our result, we also plot $\epsilon_{\rm sat,SF}$ for the high purity sample of satellites (blue open circles), which was defined in Section \[sec:group\_catalog\]. $\epsilon_{\rm sat,SF}$ for this sample is very similar to $\epsilon_{\rm sat,SF}$ for the general sample of satellites (blue solid lines) and almost always within the error bars of the blue lines. This demonstrates that our conformity signal is, if at all, only weakly affected by uncertainties due to possible misclassification of centrals. As an aside, it should also be noted in the context of the discussion in Section \[sec:meaning\_conformity\] that the conformity that is evident on the different panels of Figure \[fig:e\_s\_1D\_conformity\] as the difference between the red and blue solid lines is the conformity that is associated with matching just the one parameter that is plotted as the horizontal axis in each panel. In other words the conformity seen in the upmost right panel is the conformity that remains once the halo mass (alone) is matched, as in [@weinmann2006], and so on. The strongest variation in conformity strength in Figure \[fig:e\_s\_1D\_conformity\] is with halo mass $m_{\rm h}$, where we see a very strong conformity effect at $\log m_{\rm h} \sim 12.5$ (where there are essentially no environmentally quenched satellites around star forming centrals) and a negligible conformity effect at $\log m_{\rm h} \sim 13.5$ (where the satellite quenching efficiency is $\epsilon_{\rm sat} \sim 0.4$, almost independent of the state of the central). Interestingly, we see little evidence in Figure \[fig:e\_s\_1D\_conformity\] that the strength of conformity changes with group-centric distance. 
If anything, conformity appears to be weaker at small $R$, whereas one might have expected to have seen the effect of the central most strongly imprinted on the satellites at small radii.

We can try to quantify the effect, or strength, of conformity in a number of ways. One way would be to start from $\epsilon_{\rm sat,SF}$ and then introduce a “conformity quenching efficiency” $\epsilon_{\rm conf}$ as an additional quenching term that acts on the surviving star-forming satellites when the central becomes quenched. It is then easy to show that $$\epsilon_{\rm conf} = \frac{\epsilon_{\rm sat,q}-\epsilon_{\rm sat,SF}}{1-\epsilon_{\rm sat,SF}} \:.$$ However, it may be more natural to consider the effect of conformity as a boosting of the quenching effect of the environment if the central is quenched, or equivalently, as a suppression of the effect of the environment if the central is still star forming. We may represent this suppression factor $\xi_{\rm conf}$ as $$\xi_{\rm conf} = \frac{\epsilon_{\rm sat,SF}}{\epsilon_{\rm sat,q}}\:,$$ which is simply the inverse of the corresponding boost factor. Conformity is therefore strong when $\xi_{\rm conf}$ is small, and weak when $\xi_{\rm conf}$ approaches unity. The dashed blue lines in Figure \[fig:e\_s\_1D\_conformity\] would therefore represent an average strength of conformity, $\xi_{\rm conf} \simeq 0.39$. We choose to consider the suppression $\xi_{\rm conf}$, rather than the boost $\xi_{\rm conf}^{-1}$, simply to have a parameter that varies between zero and unity, and to have $\epsilon_{\rm sat,SF}$, which is generally small and less accurately known than $\epsilon_{\rm sat,q}$, as the numerator of the ratio. It is a curious coincidence that the overall average value of $\xi_{\rm conf} \simeq 0.39$ is very similar to the overall average value of $\epsilon_{\rm sat} \sim 0.4$ [@vandenbosch2008; @peng2012 and this work], but the two quantities are physically quite distinct, and this coincidence is just that. It is important to appreciate that the strength of conformity could vary for several reasons. In terms of our previous discussion in Section \[sec:meaning\_conformity\], such a variation means that the contribution of the particular “hidden common parameter(s)” is varying.

![image](9.eps){width="70.00000%"}

The effect of conformity, parameterized by $\xi_{\rm conf}$, is shown in Figure \[fig:e\_s\_1D\_conformity\_efficiency\] and is obtained by dividing the blue lines in Figure \[fig:e\_s\_1D\_conformity\] by the red lines. Although we are looking at a highly derived quantity, the details of which should therefore be treated with some caution, it is noticeable that there is a large variation in the strength of conformity, with values of $\xi_{\rm conf}$ close to zero (strong conformity) in some regions of parameter space and close to unity (no conformity) in others.

![Conformity strength $\xi_{\rm conf} = \epsilon_{\rm sat,SF} / \epsilon_{\rm sat,q}$ as a function of stellar mass and the environmental parameters. The red and blue dashed lines correspond to $\epsilon_{\rm sat,SF}$ and $\epsilon_{\rm sat,q}$, respectively (see Figure \[fig:e\_s\_1D\_conformity\]). There is a large variation of the values of $\xi_{\rm conf}$ ranging from close to zero (strong conformity) to unity (no conformity).[]{data-label="fig:e_s_1D_conformity_efficiency"}](10.eps){width="47.00000%"}

The strength of the conformity effect appears to weaken as we increase the halo mass from $\log m_{\rm h} \sim 12.5$ to $\log m_{\rm h} \sim 13.5$ and beyond. How could this arise?
One possibility (but assuredly not the only one) would be that in lower mass halos around $\log m_{\rm h} \sim 12.5$, quenching is caused by a “hidden common parameter” associated with, say, a cut-off in gas supply that affects both centrals and satellites, so that conformity would be strong. If, however the presence of star formation in the very few star-forming centrals in higher mass halos $\log m_{\rm h} \sim 13.5$ was unrelated to a common cut-off of fuel, perhaps being rejuvenated during a merger of the central galaxy, then we would expect conformity to be much weaker or absent. This example emphasizes a fundamental difficulty with understanding conformity effects. It is already difficult enough to interpret trends in $\epsilon_{\rm sat}$ with our different parameters because of built-in correlations between those parameters. Interpreting trends in the strength of conformity $\xi_{\rm conf}$ with those same parameters is even harder because we are dealing with the effects of “hidden variables” whose identity is by definition unknown. For this reason we do not speculate further about the physical origin of some of the other trends, e.g., the increase in the strength of the conformity with increasing stellar mass of the satellite. In the context of the discussion in Section \[sec:meaning\_conformity\], it should be noted that the “one-dimensional” conformity strength plots in each panel of Figure \[fig:e\_s\_1D\_conformity\_efficiency\] give the strength of the conformity that is computed when matching only the single parameter plotted on the horizontal axis in each panel. This is because they are computed using the average values of $\epsilon_{\rm sat}$ from the corresponding panels in Figure \[fig:e\_s\_1D\_conformity\]. However, the close similarity noted in Section \[sec:current\_conformity\] between the overall conformity that is obtained from the average $\epsilon_{\rm sat}$ values for all satellites and that obtained from the highly restrictive subsample matched in all five parameters suggests that this may not be an important distinction. Discussion {#sec:discussion} ========== In the Sections \[sec:centrals\_satellites\]-\[sec:galaxy\_conformity\], we have developed our understanding of the environmental effects on galaxies in groups within the context of the “quenching efficiency” formalism of galaxies, in which the quenched fraction of a set of galaxies is compared with a control sample; in this case the general set of centrals of the same stellar mass, which are overwhelmingly dominated by singleton centrals. At least in the case of satellites, this control sample is presumably representative of what the galaxies would have been before they entered the group environment. The main observational results presented in this paper can be summarized as follows. First, the centrals of groups experience similar environmental effects as their satellites relative to the control sample of general centrals, i.e., the well-known distinction between centrals and satellites in their quenched fractions disappears when we consider the centrals of the groups containing the satellites. This result holds at least for the range of stellar mass that is sampled by the centrals within the groups, i.e., $m_\ast \gtrsim 10^{10.3}\: M_\odot$. While it is true that environmental effects within the galaxy population are dominated by satellites [e.g., @vandenbosch2008; @peng2012; @knobel2013; @kovac2014] this is because, within these groups, satellites far outweigh the single central per group. 
The demonstration that centrals also feel the same environmental quenching effects as satellites suggests that we can replace the idea of satellite quenching with “group quenching”.

Second, we have examined the dependence of the quenching of satellites on six parameters by looking first at the dependence of $\epsilon_{\rm sat}$ on the halo mass $m_{\rm h}$, the overdensity $\delta$, the group-centric distance $R$, the stellar masses of the satellite, $m_{\rm sat}$, and its central, $m_{\rm cen}$, and the sSFR of the central. The first three of these all affect $\epsilon_{\rm sat}$ quite strongly, but these parameters are inter-related in the data. We therefore also looked at the variation of $\epsilon_{\rm sat}$ in two-dimensional plots, using pairs of these parameters. Generally, we find similar results to what has been seen before, i.e., $\delta$, $R$, and $m_{\rm h}$ are all playing a role in the quenching of satellites. It is hard to clearly disentangle the effects of one from the other, and we can safely conclude only that these three have a much stronger effect on $\epsilon_{\rm sat}$ than either the stellar mass of the satellite or that of the central.

Third, we have explored in more detail than hitherto the meaning and occurrence of “galactic conformity”, i.e., the effect by which the quenching of satellites and that of their centrals are correlated. This is a strong effect in the sense that satellites of star-forming centrals are much less likely to be quenched than the very much larger population of satellites of quenched centrals. We have argued that the presence of conformity in a given sample and analysis is the reflection of the action of “hidden common variables” that (1) must affect the quenching of both satellites and centrals, (2) must somehow be “shared” by the centrals and satellites in a given group, and (3) must not have been an “observable” in the sense of having been controlled in the analysis, e.g., through matching (or binning) of that variable in the samples.

We have only been able to probe these environmental effects within the halo mass range in the [@yang2012] SDSS group catalog that we have been using, i.e., $\log m_{\rm h} \gtrsim 12.5$. The mean $\epsilon_{\rm sat}$ in all groups is falling as we approach this limit (from above), and has a value $\epsilon_{\rm sat} \sim 0.2$ at $\log m_{\rm h} = 12.5$, only a third of that found at $\log m_{\rm h} = 14.5$. The effects of conformity are, however, strong at the lower end of this mass range, and the satellites of star-forming centrals at $\log m_{\rm h} = 12.5$ already show no sign of environmental quenching. It should be noted that the rise of $\epsilon_{\rm sat}$ with halo mass is not driven by the changing fraction of star-forming to quenched centrals as the halo mass increases. The increase of $\epsilon_{\rm sat}$ with halo mass is seen in the satellites of both quenched and star-forming centrals and the decreasing importance of the star-forming centrals at higher halo masses has a negligible effect on the overall population because there are so few of them.

Over the whole halo mass range, there is evidence for a strong radial (or local overdensity) dependence of $\epsilon_{\rm sat}$, but not much evidence that the effects of conformity vary strongly with group-centric distance and overdensity. Satellite galaxies appear to experience a conformity effect that is about as strong out to $\sim \! 2R$ as it is closer to the group center, where the group-centric distance $R$ is defined in Equation (\[eq:R\]).
As far as we can tell, the effect of conformity extends across much of the halo, or at least affects most of the group members — we note that on average the virial radius typically corresponds to $2 \lesssim R \lesssim 3$. Our whole analysis has been based on the quenching efficiency formalism via $\epsilon_{\rm sat}$, which takes out the effects of mass quenching [@peng2010; @peng2012] of the galaxies by comparing the quenched fraction to that of the set of centrals (dominated by singleton centrals) at a given stellar mass. This approach not only allows us to see the environmental effects more clearly, but it is additionally physically motivated by the fact that these satellites were presumably once centrals of their own halos, and indeed in many cases will remain as “centrals” of their own subhalos. The observational evidence presented in this paper that centrals in the groups that contain the satellites also experience the same environmental effects as satellites opens the question of whether “mass quenching” and “satellite quenching” (or now more generally “group quenching”) are actually the same thing, or at least closely related. At first, they might be thought to be quite different. A robust result [@vandenbosch2008; @peng2010; @peng2012 this work] has been that the value of $\epsilon_{\rm sat}$, i.e., the strength of satellite quenching, is strikingly independent of the stellar mass of the satellite. In contrast, mass quenching is, by definition, strongly dependent on the stellar mass [@peng2012]. Indeed, our whole analysis has been based on the fact that the effects of stellar mass and of environment are multiplicative, i.e., that the overall quenched fraction $f_{\rm q}$ is separable in the two dimensions of mass and environment [@peng2010]. However, it should be appreciated that many of the quantities that might be associated with the quenching of satellites will, to first order, be independent of the stellar mass of the satellite. Put another way, at a given satellite stellar mass, the distributions of these variables across the satellite population will not depend strongly on the choice of the stellar mass. Examples of such variables would include the halo mass, the group-centric distance, overdensity (assuming there is no mass segregation in the group), and the time of infall. Indeed the first three of these are precisely the variables that we and others have shown most strongly affect the strength of satellite quenching. If satellite quenching is driven by one or more of these, then we would not expect $\epsilon_{\rm sat}$ to vary with stellar mass, provided (and this is an important and interesting caveat) the ability of a satellite to resist quenching and continue forming stars is not itself a function of its stellar mass. That the distributions of these parameters are indeed more or less independent of stellar mass within the current sample is shown in Figure \[fig:satellite\_distribution\] which plots the cumulative distributions of $m_{\rm h}$, $m_{\rm cen}$, $\delta$, and $R$ for the five bins of satellite stellar mass used in Figure \[fig:e\_s\_1D\]. Only the most massive satellites with $\log m_{\rm sat} > 11$ have significantly different distributions in these parameters, for the obvious reason that these very massive satellites will not exist in the lowest mass halos. This difference in the distribution of $m_{\rm h}$ for the most massive satellites then folds through to differences in $\delta$ because of the correlation between these parameters. 
The difference in the $R$ distribution could reflect some mass segregation within the groups. ![Cumulative distributions of the environmental parameters for our satellite sample in different stellar mass bins. Each color corresponds to a stellar mass bin as indicated in the top panel. It is obvious that the distributions are fairly similar for different mass bins except for the highest mass bin (i.e., $\log m_{\rm sat} > 11$). To compute these distributions, the satellites are weighted (see Section \[sec:basic\_section\]).[]{data-label="fig:satellite_distribution"}](11.eps){width="40.00000%"} Suggestive evidence that mass and environment quenching may be very closely linked comes from the convergence of the halo mass at which these effects occur. Satellite quenching starts to become important around $11.5 \lesssim \log m_{\rm h} \lesssim 12.5$, when the curves of $\epsilon_{\rm sat}(m_{\rm h})$ (linearly extrapolated) intersect the $\epsilon_{\rm sat} = 0$ axis in Figure \[fig:e\_s\_1D\_conformity\]. Thus, the onset of environmentally driven satellite quenching occurs at more or less the halo mass that is associated [from abundance matching, e.g., @moster2013; @behroozi2013] with the Schechter $M^\ast$ in stellar mass (i.e., $\log m_{\ast} \sim 10.8$) that characterizes mass quenching [@peng2010; @peng2012]. Observational evidence in favor of linking the two effects comes, for example, from the recent structural analysis of [@carollo2014], who pointed out that the structural mix of quenched satellites does not depend on the group-centric distance, even though the fraction of these quenched satellites that can be inferred to have been environment-quenched (as opposed to mass-quenched) changes substantially with distance, from essentially zero on the group outskirts to about one-third at the center of the group. This was taken to indicate that either the structural and morphological signatures of the two putative quenching channels are identical, or that neither is associated with a structural or morphological signature. In other words, the overall morphology-density relation is primarily a reflection of $\epsilon_{\rm sat}$ and not of variations within the quenched population due to different quenching channels (see @carollo2014 and discussion therein). Linking the quenching of centrals and satellites in this way, and the evident strong effect of conformity (especially around the halo masses at which quenching appears) that we have highlighted in this paper, starts to strongly constrain the physical origin of quenching. Recall that “conformity” is a reflection of the action of a hidden common variable that affects the quenching of both satellites and centrals and has a value that is “shared” between a central and its satellites. Furthermore, the quenching effect comes from a component of the hidden variable that is not correlated with any of the controlled parameters (five in the current study, including halo mass). Taken at face value, this would seem to argue against processes that are internal to individual galaxies, including internal structural changes and AGN activity, unless that information could be transmitted across the halo via a “hidden common variable”. Even then, we reiterate that the trend in $\epsilon_{\rm sat}$ with halo mass in Figure \[fig:e\_s\_1D\_conformity\] is not simply due to the effect of the changing fraction of quenched and star-forming centrals. 
In other words, some kind of “quenching at a distance” by an internal process operating in the central (or satellites) is not enough, because both $\epsilon_{\rm sat,q}$ and $\epsilon_{\rm sat,SF}$ increase with $m_{\rm h}$ as well as with $\delta$ and $R$. If the response of centrals and satellites to the environment is, after all, the same, then this would also suggest that any processes that would be unique to satellites, or to centrals, are unlikely to be dominating the environmental response of either. A significant caveat to this is that we have only been able to establish this similarity over a very small range of stellar mass. Conclusions {#sec:conclusion} =========== In this paper, we have re-examined the quenched fractions of galaxies in the SDSS DR7 spectroscopic catalog as a function of their group environment. We consider both the satellites and the centrals in the groups. The analysis was performed in the framework of the “quenching efficiency” formalism that removes the large effects of stellar mass on the quenched fractions through comparison with a control sample of galaxies at the same stellar mass. This control sample is the set of central galaxies that are mostly singletons and which can be considered to be representative of the likely progenitors of the satellites in the groups. The principal findings in this paper are as follows. 1. Whereas, at a given stellar mass, satellites are systematically more quenched than the control sample of general centrals (as has been seen before), we show that the centrals of these same groups are also more quenched than the control sample. In fact, as far as we can see, the centrals in these groups are experiencing essentially the same environmental quenching effects as the satellites when we compare centrals and satellites with the same environmental parameters, e.g., overdensity, group-centric distance, and halo mass. This suggests that the concept of “satellite quenching” that was introduced earlier [e.g., @vandenbosch2008; @peng2012], and which is describable by a satellite quenching efficiency $\epsilon_{\rm sat}(p_1,p_2,\ldots)$, should be generalized to “group quenching” that affects centrals and satellites in the same way (at least within the mass range that we can probe, i.e., $m_\ast \gtrsim 10^{10.3}\: M_\odot$). 2. The dependence of $\epsilon_{\rm sat}$ on different parameters for the satellites shows a wide range of behaviors. As has been seen before, $\epsilon_{\rm sat}$ is independent of the stellar mass of the satellite (we show it is also independent of the stellar mass of the central) but depends strongly on the inter-linked parameters of overdensity $\delta$ and group-centric distance $R$, with a further dependence on the halo mass $m_{\rm h}$. It is hard to isolate the effects of these, even on two-dimensional plots, and no simple dependences emerge across the whole of parameter space. Some of the difficulty may stem from ambiguities in the definition of these parameters and/or uncertainties in their measurement. 3. A very strong dependence is also seen on the sSFR of the central galaxy, a phenomenon called “galactic conformity” [@weinmann2006]. A larger fraction of the satellites of quenched centrals are quenched than those of star-forming centrals. This effect persists even when we carefully match the satellites in stellar mass $m_{\rm sat}$ and all four of the environmental parameters used in this study, $m_{\rm h}$, $m_{\rm cen}$, $\delta$, and $R$. 
The persistence of conformity, even after such matching, implies that the quenching of both the centrals and satellites must be strongly affected by one or more additional “hidden common variables” that have not been matched in the analysis and which contribute to the “probabilistic” aspect of quenching. These hidden variables must also be in some way “shared” by the satellites and central of a given group, i.e., they must either be equal (as would be the case for a single parameter describing the group) or at least correlated (e.g., as in local overdensity). The strength of conformity can be described by the ratio $\xi_{\rm conf}$ of the $\epsilon_{\rm sat}$ for those satellites with star-forming and quenched centrals. 4. This conformity strength $\xi_{\rm conf}$ has an average value of about 0.4, meaning that satellites of quenched centrals respond to the environment $\sim \! 2.5$ times more strongly than do those of star-forming centrals. This has the same value whether we simply average over all satellites or look at the five-parameter-matched sample. However, the value of $\xi_{\rm conf}$ varies with most of the parameters and has values over the whole interval between zero (very strong conformity) and almost unity (no conformity) across the parameter space. Conformity is strong at low halo masses (i.e., $m_{\rm h} \sim 10^{12.5}\: M_\odot$) but much weaker at high halo masses. However, a variation in the strength of conformity can have several causes and so we do not attempt to interpret these trends in detail. It is a curious coincidence that this mean value of $\xi_{\rm conf}$ is essentially the same as the mean value of $\epsilon_{\rm sat}$. 5. A very interesting question raised by these results is whether our new environmentally driven group quenching process could be essentially the same process as that which drives the mass quenching [@peng2010; @peng2012] that we have up to this point separated out from the analysis through the use of the $\epsilon_{\rm sat}$ formalism. It was suggested in [@carollo2014] that these could be different manifestations of the same underlying process, based on the observation that there seems to be no detectable difference in the structural properties of galaxies that were inferred to have undergone mass quenching and environment quenching, and that the apparent mass independence of satellite quenching could then arise if the distribution of host halo masses for satellites was largely independent of their stellar mass, as indicated by the independence of the satellite mass function from group halo mass (e.g., Peng et al. 2012). We showed here that the distributions of all of the environmental parameters that are most important for satellite quenching, i.e., $m_{\rm h}$, $\delta$, and $R$, are more or less independent of the stellar mass of a satellite (except for the most massive satellites, which are not found in low-mass halos). This would require that the response of a satellite to an environmental quenching “pressure” must, interestingly, be independent of its mass. A strong argument in favor of linking environment quenching and mass quenching is that the halo mass scale at which environment quenching is inferred to start, which we obtain by a small extrapolation of $\epsilon_{\rm sat}(m_{\rm h})$, is very similar to the halo mass that corresponds to the stellar mass that is associated with mass quenching. 6. 
The fact that the strong galactic conformity effect linking the quenching of centrals and satellites extends (without a strong variation in strength) out to group-centric distances of 2-3 $R$, i.e., comparable to the virial radius, suggests that the causes and/or effects of quenching cannot be local within galaxies and must rather extend across the whole extent of halos. This argues against internal galaxy processes driving quenching, unless the subsequent effects can be transmitted over large distances. Even then, it is clear that the quenching of the central alone is not sufficient (even with conformity) to explain the increase of satellite quenching with e.g., halo mass, since this is observed separately for the satellites of both quenched and star-forming centrals. Another interesting constraint is that, if the response of centrals and satellites to the environment is the same, as we have suggested here at least in the range of parameter space sampled by both, then it argues against the importance of satellite-specific physical processes in the quenching of galaxies. The validity of this conclusion at low stellar masses is, however, conjectural and is based only on the mass independence of the environmental quenching of satellites. This research was supported by the Swiss National Science Foundation. The idea that environmental quenching and mass quenching could be closely related was stimulated by a conversation with John Kormendy, which we gratefully acknowledge. We also thank Marcella Carollo for helpful discussions and the anonymous referee for their careful reading of the paper. [^1]: There are also indications that the state of a galaxy depends on environmental scales much larger than the virial radius [e.g., @lu2012]. For a general discussion of the concept of environment, we refer to, e.g., [@cooper2005], Mo et al. (2010; Section 15.5), [@carollo2013], and [@haas2012]. [^2]: For an alternative view of quenching durations, see, e.g., [@schawinski2014] and [@woo2014]. [^3]: <http://www.mpa-garching.mpg.de/SDSS/DR7/> [^4]: <http://gax.shao.ac.cn/data/Group.html>
--- abstract: 'Famously, a $d$-dimensional, spatially homogeneous random walk whose increments are non-degenerate, have finite second moments, and have zero mean is recurrent if $d \in \{1,2\}$ but transient if $d \geq 3$. Once spatial homogeneity is relaxed, this is no longer true. We study a family of zero-drift spatially non-homogeneous random walks (Markov processes) whose increment covariance matrix is asymptotically constant along rays from the origin, and which, in any ambient dimension $d \geq 2$, can be adjusted so that the walk is either transient or recurrent. Natural examples are provided by random walks whose increments are supported on ellipsoids that are symmetric about the ray from the origin through the walk’s current position; these *elliptic random walks* generalize the classical homogeneous Pearson–Rayleigh walk (the spherical case). Our proof of the recurrence classification is based on fundamental work of Lamperti.' author: - 'Nicholas Georgiou[^1] [^2]' - 'Mikhail V. Menshikov' - 'Aleksandar Mijatović[^3]' - 'Andrew R. Wade' date: 29 June 2015 title: 'Anomalous recurrence properties of many-dimensional zero-drift random walks' --- [*Key words:*]{} Non-homogeneous random walk; elliptic random walk; zero drift; recurrence; transience. [*AMS Subject Classification:*]{} 60J05 (Primary) 60J10, 60G42, 60G50 (Secondary) Introduction {#sec:intro} ============ A $d$-dimensional random walk that proceeds via a sequence of unit-length steps, each in an independent and uniformly random direction, is sometimes called a [*Pearson–Rayleigh*]{} random walk (PRRW), after the exchange in the letters pages of [*Nature*]{} between Karl Pearson and Lord Rayleigh in 1905 [@pr]. Pearson was interested in two dimensions and questions of migration of species (such as mosquitoes) [@pearson], although Carazza has speculated that Pearson was a golfer [@carazza p. 419]; Rayleigh had earlier considered the acoustic ‘random walks’ in phase space produced by combinations of sound waves of the same amplitude and random phases. The PRRW can be represented via partial sums of sequences of i.i.d. random vectors that are uniformly distributed on the unit sphere ${{\mathbb S}^{d-1}}$ in ${{\mathbb R}}^d$. Clearly the increments have mean zero, i.e., the PRRW has *zero drift*. The PRRW has received some renewed interest recently as a model for microbe locomotion [@berg; @nossal; @nw]. Chapter 2 of [@hughes] gives a general discussion of these walks, which have been well-understood for many years. In particular, it is well known that the PRRW is recurrent for $d \in \{1, 2\}$ and transient if $d \geq 3$. Suppose that we replace the spherically symmetric increments of the PRRW by increments that instead have some *elliptical* structure, while retaining the zero drift. For example, one could take the increments to be uniformly distributed on the surface of an ellipsoid of fixed shape and orientation, as represented by the picture on the right of Figure \[fig1\]. More generally, one should view the ellipses in Figure \[fig1\] as representing the *covariance* structure of the increments of the walk (we will give a concrete example later; the uniform distribution on the ellipse is actually not the most convenient for calculations). 
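As an aside on the classical dichotomy just recalled (the PRRW is recurrent for $d \in \{1, 2\}$ and transient for $d \geq 3$), the following small Monte Carlo sketch gives a crude finite-horizon illustration; it is written in Python with NumPy, is purely illustrative, and is not part of the analysis in this paper. The fraction of simulated paths that come back near the starting point within a fixed number of steps should be noticeably larger for $d=2$ than for $d=3$, although such a simulation is only a heuristic and proves nothing about recurrence.

```python
# Illustrative Monte Carlo sketch (Python/NumPy; not part of the paper): a
# crude finite-horizon proxy for the classical fact that the PRRW is
# recurrent for d in {1, 2} and transient for d >= 3.
import numpy as np

rng = np.random.default_rng(0)

def prrw_return_fraction(d, n_steps=10000, n_walks=200, radius=0.5):
    """Fraction of simulated PRRW paths that re-enter the ball of the given
    radius about the origin within n_steps (a heuristic, not a proof)."""
    hits = 0
    for _ in range(n_walks):
        g = rng.normal(size=(n_steps, d))
        steps = g / np.linalg.norm(g, axis=1, keepdims=True)  # unit-length steps
        path = np.cumsum(steps, axis=0)                       # positions X_1, ..., X_n
        if (np.linalg.norm(path[1:], axis=1) <= radius).any():
            hits += 1
    return hits / n_walks

for d in (2, 3):
    print(f"d = {d}: return fraction within horizon ~ {prrw_return_fraction(d):.2f}")
```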
[Figure \[fig1\]: schematic of increment supports at several points of the plane: circles of a common radius (*left*) versus ellipses of a fixed shape and orientation (*right*).] A little thought shows that the walk represented by the picture on the right of Figure \[fig1\] is essentially no different to the PRRW: an affine transformation of ${{\mathbb R}}^d$ will map the walk back to a walk whose increments have the same covariance structure as the PRRW. To obtain genuinely different behaviour, it is necessary to abandon spatial homogeneity. In this paper we consider a family of spatially *non-homogeneous* random walks with zero drift. These include generalizations of the PRRW in which the increments are not i.i.d. but have a distribution supported on an ellipsoid of fixed size and shape but whose orientation depends upon the current position of the walk. Figure \[fig2\] gives representations of two important types of example, in which the ellipsoid is aligned so that its principal axes are parallel or perpendicular to the vector of the current position of the walk, which sits at the centre of the ellipse. [Figure \[fig2\]: ellipses of a fixed shape centred at points around the origin, with the distinguished axis aligned radially (*left*) and transversally (*right*).] The random walks represented by Figure \[fig2\] are no longer sums of i.i.d. variables. These modified walks can behave very differently to the PRRW. For instance, one of the two-dimensional random walks represented in Figure \[fig2\] is *transient* while the other (as in the classical case) is recurrent. The reader who has not seen this kind of example before may take a moment to identify which is which. It is this *anomalous recurrence behaviour* that is the main subject of the present paper. In the next section, we give a formal description of our model and state our main results. We end this introduction with a brief comment on motivation. In biology, the PRRW is more natural than a lattice-based walk for modelling the motion of microscopic organisms, such as certain bacteria, on a surface. Experiment suggests that the locomotion of several kinds of cells consists of roughly straight line segments linked by discrete changes in direction: see, e.g., [@nossal; @nw]. The generalization to elliptically-distributed increments studied here represents movement on a surface on which either radial or transverse motion is inhibited. In chemistry and physics, the trajectory of a finite-step PRRW (also called a ‘random chain’) is an idealized model of the growth of weakly interacting polymer molecules: see, e.g., §2.6 of [@hughes]. The modification to ellipsoid-supported jumps represents polymer growth in a biased medium. 
For ${{\mathbf{x}}}\in {{\mathbb R}}^d \setminus \{ {{\mathbf{0}}}\}$, set $\hat {{\mathbf{x}}}:= {{\mathbf{x}}}/ \| {{\mathbf{x}}}\|$; also set $\hat {{\mathbf{0}}}:= {{\mathbf{e}}}_1$, for convenience. For definiteness, vectors ${{\mathbf{x}}}\in {{\mathbb R}}^d$ are viewed as column vectors throughout. We now define $X=(X_n , n \in {{\mathbb Z}_+})$, a discrete-time, time-homogeneous Markov process on a (non-empty, unbounded) subset ${{\mathbb X}}$ of ${{\mathbb R}}^d$. Formally, $({{\mathbb X}},{{\mathcal B}}_{{\mathbb X}})$ is a measurable space, ${{\mathbb X}}$ is a Borel subset of ${{\mathbb R}}^d$, and ${{\mathcal B}}_{{\mathbb X}}$ is the $\sigma$-algebra of all $B \cap {{\mathbb X}}$ for $B$ a Borel set in ${{\mathbb R}}^d$. Suppose $X_0$ is some fixed (i.e., non-random) point in ${{\mathbb X}}$. Write $$\Delta_n := X_{n+1} - X_n ~~ (n \in {{\mathbb Z}_+})$$ for the increments of $X$. By assumption, given $X_0, \ldots, X_n$, the law of $\Delta_n$ depends only on $X_n$ (and not on $n$); so often we ease notation by taking $n=0$ and writing just $\Delta$ for $\Delta_0$. We also use the shorthand ${{\mathbb P}}_{{\mathbf{x}}}[ \, \cdot \, ] = {{\mathbb P}}[ \, \cdot \, \! \mid X_0 = {{\mathbf{x}}}]$ for probabilities when the walk is started from ${{\mathbf{x}}}\in {{\mathbb X}}$; similarly we use $\operatorname{\mathbb{E}}_{{\mathbf{x}}}$ for the corresponding expectations. We make the following moments assumption: : There exists $p >2$ such that $\sup_{{{\mathbf{x}}}\in {{\mathbb X}}} \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \| \Delta \|^p ] < \infty$. The assumption ensures that $\Delta$ has a well-defined mean vector $\mu({{\mathbf{x}}}) := \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \Delta ]$, and we suppose that the random walk has *zero drift*: : Suppose that $\mu({{\mathbf{x}}}) = {{\mathbf{0}}}$ for all ${{\mathbf{x}}}\in {{\mathbb X}}$. The assumption also ensures that $\Delta$ has a well-defined covariance matrix, which we denote by $ M ({{\mathbf{x}}}) := \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \Delta \Delta^{\!{{\scalebox{0.6}{$\top$}}}} ], $ where $\Delta$ is viewed as a column vector. To rule out pathological cases, we assume that $\Delta$ is *uniformly non-degenerate*, in the following sense. : There exists $v > 0$ such that $\operatorname{tr}(M({{\mathbf{x}}})) = \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \| \Delta \|^2 ] \geq v$ for all ${{\mathbf{x}}}\in {{\mathbb X}}$. Note that assumption  is weaker than *uniform ellipticity*, which in this context usually means, for some ${\varepsilon}>0$, ${{\mathbb P}}_{{\mathbf{x}}}[ \Delta \cdot {{\mathbf{u}}}\geq {\varepsilon}] \geq {\varepsilon}$ for all ${{\mathbf{u}}}\in {{\mathbb S}^{d-1}}$ and all ${{\mathbf{x}}}$. Our main interest is in a recurrence classification. First, we state the following basic ‘non-confinement’ result. \[lem:lim\_sup\_infty\] Suppose that $X$ satisfies assumptions , and . Then $$\label{eqn:limsup=+infty} \limsup_{n\to\infty} \| X_n \| = +\infty, {\ \text{a.s.}}$$ We give the proof of Proposition \[lem:lim\_sup\_infty\] in Section \[sec:non-confinement\]; we actually prove more, namely that the hypotheses of Proposition \[lem:lim\_sup\_infty\] ensure that a version of Kolmogorov’s ‘other’ inequality holds. The fact ensures that questions of the escape of trajectories to infinity are non-trivial. 
Indeed, we will give conditions under which one or other of the following two behaviours (which are not *a priori* exhaustive) occurs: - $\lim_{n \to \infty} \| X_n \| = +\infty$, a.s., in which case we say that $X$ is *transient*; - $\liminf_{n \to \infty} \| X_n \| \leq r_0$, a.s., for some constant $r_0 \in {{\mathbb R}_+}$, when we say $X$ is *recurrent*. If $X$ is an irreducible time-homogeneous Markov chain on a locally finite state-space, these definitions reduce to the usual notions of transience and recurrence; in general state-spaces, our approach allows us to avoid unnecessary technicalities concerning irreducibility. In dimension $d=1$, it is a consequence of the classical Chung–Fuchs theorem (see [@cf] or Chapter 9 of [@kall]) that a spatially *homogeneous* random walk with zero drift is necessarily recurrent. However, this is *not* true for a spatially non-homogeneous random walk: as observed by Rogozin and Foss [@rf78], a counterexample is provided by a version of the ‘oscillating random walk’ of Kemperman [@kemperman] in which the increment law is one of two distributions (with mean zero but infinite second moment) depending on the walk’s present sign. Our conditions exclude these heavy-tailed phenomena, so that in $d=1$ recurrence is assured in our setting. \[t:zero\_drift\_implies\_recurrence\] Suppose that $d=1$. Suppose that $X$ satisfies assumptions , , and . Then $X$ is recurrent. Theorem \[t:zero\_drift\_implies\_recurrence\] is essentially contained in a result of Lamperti [@lamp1 Theorem 3.2]; we give a self-contained proof below. Theorem \[t:zero\_drift\_implies\_recurrence\] shows that in $d=1$, under mild conditions, the classical Chung–Fuchs recurrence classification for homogeneous zero-drift random walks extends to zero-drift non-homogeneous random walks. The purpose of the present paper is to demonstrate a natural family of examples in dimension $d \geq 2$ where this extension fails, and hence exhibit the following. There exist spatially non-homogeneous random walks whose increments are non-degenerate, have uniformly bounded second moments, and have zero mean, which are -3mm - transient in $d=2$; - recurrent in $d \geq 3$. Although certainly appreciated by experts, this fact is perhaps not as widely known as it might be. Zeitouni (pp. 91–92 of [@zeit]) describes an example of a transient zero-drift random walk on ${{\mathbb Z}}^2$, and states that the idea “goes back to Krylov (in the context of diffusions)”. Peres, Popov and Sousi [@pps] investigate the minimal number of different increment distributions required for anomalous recurrence behaviour. We now introduce our family of non-homogeneous random walks. Write $\| \, \cdot \, \|_{\rm op}$ for the matrix (operator) norm given by $\| M \|_{\rm op} = \sup_{{{\mathbf{u}}}\in {{\mathbb S}^{d-1}}} \| M {{\mathbf{u}}}\|$. The following assumption on the asymptotic stability of the covariance structure of the process along rays is central. : Suppose that there exists a positive-definite matrix function $\sigma^2$ with domain ${{\mathbb S}^{d-1}}$ such that, as $r \to \infty$, $${\varepsilon}(r) := \sup_{{{\mathbf{x}}}\in {{\mathbb X}}: \| {{\mathbf{x}}}\| \geq r} \| M( {{\mathbf{x}}}) - \sigma^2 ( \hat{{\mathbf{x}}}) \|_{\rm op} \to 0 .$$ A little informally, says that $M ({{\mathbf{x}}}) \to \sigma^2 (\hat {{\mathbf{x}}})$ as $\| {{\mathbf{x}}}\| \to \infty$; in what follows, we will often make similar statements, formal versions of which may be cast as in . 
Note that and together imply that $\operatorname{tr}(\sigma^2({{\mathbf{u}}}) ) \geq v >0$; next we impose a key assumption on the form of $\sigma^2$ that is considerably stronger. To describe this, it is convenient to introduce the notation ${\langle}\, \cdot \, , \! \, \cdot \, {\rangle}_{{{\mathbf{u}}}}$ that defines, for each ${{\mathbf{u}}}\in {{\mathbb S}^{d-1}}$, an inner product on ${{\mathbb R}}^d$ via $${\langle}{{\mathbf{y}}}, {{\mathbf{z}}}{\rangle}_{{{\mathbf{u}}}} := {{\mathbf{y}}}^{{\scalebox{0.6}{$\top$}}}\cdot \sigma^2 ( {{\mathbf{u}}}) \cdot {{\mathbf{z}}}= {\langle}{{\mathbf{y}}}, \sigma^2({{\mathbf{u}}})\cdot{{\mathbf{z}}}{\rangle}, ~~ \text{for }{{\mathbf{y}}}, {{\mathbf{z}}}\in {{\mathbb R}}^d .$$ : Suppose that there exist constants $U$ and $V$ with $0 < U < V < \infty$ such that, for all ${{\mathbf{u}}}\in {{\mathbb S}^{d-1}}$, $${\langle}{{\mathbf{u}}}, {{\mathbf{u}}}{\rangle}_{{{\mathbf{u}}}} = U, ~~\text{and}~~ \operatorname{tr}(\sigma^2({{\mathbf{u}}}) ) = V.$$ Informally, $V$ quantifies the total variance of the increments, while $U$ quantifies the variance in the radial direction; necessarily $U \leq V$. The assumption that $0 \neq U \neq V$ excludes some degenerate cases. As we will see, one possible way to satisfy condition is to suppose that the eigenvectors of $\sigma^2({{\mathbf{u}}})$ are all parallel or perpendicular to the vector ${{\mathbf{u}}}$, and that the corresponding eigenvalues are all constant as ${{\mathbf{u}}}$ varies; the level sets of the corresponding quadratic forms $q_{{\mathbf{u}}}({{\mathbf{x}}}) := {\langle}{{\mathbf{x}}},{{\mathbf{x}}}{\rangle}_{{\mathbf{u}}}$ for ${{\mathbf{u}}}\in {{\mathbb S}^{d-1}}$ are then ellipsoids like those depicted in Figure \[fig2\]. Our main result is the following, which shows that both transience and recurrence are possible for *any* $d \geq 2$, depending on parameter choices; as seen in Theorem \[t:zero\_drift\_implies\_recurrence\], this possibility of anomalous recurrence behaviour is a genuinely multidimensional phenomenon under our regularity conditions. \[thm:recurrence\] Suppose that $X$ satisfies –, with constants $0 < U < V$ as defined in . The following recurrence classification is valid. - If $2U < V$, then $X$ is transient. - If $2U > V$, then $X$ is recurrent. - If $2U = V$ and holds with ${\varepsilon}(r) = O(r^{-\delta_0})$ for some $\delta_0 > 0$, then $X$ is recurrent. Moreover, we show that in any of the above cases, $X$ is *null* in the following sense. \[thm:null\] Suppose that $X$ satisfies –, with constants $0 < U < V$ as defined in . Then, in any of the cases (i)–(iii) in Theorem \[thm:recurrence\], for any bounded $A \subset {{\mathbb R}}^d$, $$\label{eq:null} \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} {{\mathbf 1}{\{ X_k \in A \}}} = 0, {\ \text{a.s.}}\text{ and in } L^q \text{ for any } q \geq 1.$$ \[rem:UequalsV\] Theorems \[thm:recurrence\] and \[thm:null\] both remain valid if we permit $V = U >0$ in ; indeed, the condition $U < V$ is not used in the proof of Theorem \[thm:recurrence\] given below, so this case is recurrent, by Theorem \[thm:recurrence\](ii). The condition $U < V$ is used at one point to simplify the proof of Theorem \[thm:null\] given below, but a small modification of the argument also works in the case $U=V$. The remainder of the paper is organised as follows. 
In Section \[sec:ell-rws\] we describe a specific family of examples called *elliptic random walk models* that satisfy assumptions – and exhibit both transient and recurrent behaviour dependent on the parameters of the model. We also present some simulated data that depicts the random walks in both cases. In Section \[sec:non-confinement\] we prove a $d$-dimensional martingale version of Kolmogorov’s other inequality and use that to prove the non-confinement result (Proposition \[lem:lim\_sup\_infty\]). In Section \[sec:rw\] we prove the recurrence classification (Theorem \[thm:recurrence\]), and in Section \[sec:nullity\] we prove Theorem \[thm:null\]. In the appendix we prove recurrence in the one-dimensional case (Theorem \[t:zero\_drift\_implies\_recurrence\]). Finally, we remark that in work in progress we investigate diffusive scaling limits for random walks of the type described in the present paper; the diffusions that appear as scaling limits possess certain pathologies from the point of view of diffusion theory that make them interesting in their own right. Example: Elliptic random walk model {#sec:ell-rws} =================================== Let $d \geq 2$. We describe a specific model on ${{\mathbb X}}= {{\mathbb R}}^d$ where the jump distribution at ${{\mathbf{x}}}\in {{\mathbb R}}^d$ is supported on an ellipsoid having one distinguished axis aligned with the vector ${{\mathbf{x}}}$. The model is specified by two constants $a,b >0$. Construct $\Delta$ as follows. Given $X_0 = {{\mathbf{x}}}$, take ${{\bm{\zeta}}}$ uniform on ${{\mathbb S}^{d-1}}$ and set $$\label{eqn:Delta-d-dim} \Delta = Q_{\hat{{{\mathbf{x}}}}} D {{\bm{\zeta}}}$$ for $Q_{\hat{{{\mathbf{x}}}}}$ an orthogonal matrix representing a transformation of ${{\mathbb R}}^d$ mapping ${{\mathbf{e}}}_1$ to $\hat{{\mathbf{x}}}$, and $D = \sqrt{d} \diag { a, b , \ldots, b } $. See Figure \[fig:Delta\]. [Figure \[fig:Delta\]: construction of the increment: a point ${{\bm{\zeta}}}$ on the unit sphere is stretched by $D$ to $D{{\bm{\zeta}}}$, and the result is rotated so that the distinguished axis lies along $\hat{{\mathbf{x}}}$, giving the increment $\Delta$ at the current position ${{\mathbf{x}}}$.] (Recall that $\hat{{\mathbf{0}}}= {{\mathbf{e}}}_1$, so for $X_0 = {{\mathbf{0}}}$ we can take $Q_{\hat{{\mathbf{x}}}} = I$ and $\Delta = D{{\bm{\zeta}}}$.) Thus $\Delta$ is a random point on an ellipsoid that has one distinguished semi-axis, of length $a\sqrt{d}$, aligned in the $\hat{{\mathbf{x}}}$ direction, and all other semi-axes of length $b\sqrt{d}$. Note that the law of $\Delta$ is well defined owing to the spherical symmetry of the uniform distribution on ${{\mathbb S}^{d-1}}$ and the fact that only one axis of the ellipsoid is distinguished (for this reason it is enough to take any $Q_{\hat{{{\mathbf{x}}}}}$ satisfying $Q_{\hat{{{\mathbf{x}}}}} {{\mathbf{e}}}_1 = \hat{{\mathbf{x}}}$ in order to define $\Delta$; see also Remark \[rem:Hx\] below). 
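To make this construction concrete, here is a minimal sampling sketch in Python with NumPy; it is an illustration added here, not code from the paper. It takes for $Q_{\hat{{\mathbf{x}}}}$ the Householder reflection mapping ${{\mathbf{e}}}_1$ to $\hat{{\mathbf{x}}}$ (one admissible choice, since only $Q_{\hat{{{\mathbf{x}}}}} {{\mathbf{e}}}_1 = \hat{{\mathbf{x}}}$ is required), draws ${{\bm{\zeta}}}$ by normalizing a standard Gaussian vector, and compares the empirical mean and second-moment matrix of $\Delta$ with the zero-drift property and with the covariance matrix computed below.

```python
# Illustrative sketch (Python/NumPy; not from the paper): sample increments
# Delta = Q_xhat D zeta of the elliptic model and check the first two
# moments by Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)

def householder_to(xhat):
    """Orthogonal matrix Q with Q e_1 = xhat (a Householder reflection)."""
    d = len(xhat)
    v = np.eye(d)[0] - xhat
    nv2 = v @ v
    if nv2 < 1e-12:                      # xhat is (numerically) e_1 already
        return np.eye(d)
    return np.eye(d) - 2.0 * np.outer(v, v) / nv2

def sample_increments(x, a, b, n):
    """Draw n independent copies of Delta given X_0 = x."""
    d = len(x)
    r = np.linalg.norm(x)
    xhat = x / r if r > 0 else np.eye(d)[0]               # convention: 0-hat = e_1
    Q = householder_to(xhat)
    D = np.sqrt(d) * np.diag([a] + [b] * (d - 1))
    g = rng.normal(size=(n, d))
    zeta = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform on the sphere
    return zeta @ (Q @ D).T                               # rows are Delta^T

x = np.array([3.0, 4.0, 0.0])
xhat = x / np.linalg.norm(x)
a, b = 2.0, 1.0
deltas = sample_increments(x, a, b, n=200_000)
print("empirical mean (zero drift):", deltas.mean(axis=0))
print("empirical E[Delta Delta^T]:\n", deltas.T @ deltas / len(deltas))
print("a^2 xhat xhat^T + b^2 (I - xhat xhat^T):\n",
      a**2 * np.outer(xhat, xhat) + b**2 * (np.eye(len(x)) - np.outer(xhat, xhat)))
```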
Note also that $\Delta$ is not chosen to be uniformly distributed on the surface of the ellipsoid; this does not affect the range of asymptotic behaviour exhibited by the family of walks as $a$ and $b$ vary, but it does simplify the calculation of $M({{\mathbf{x}}})$. Indeed, we have $$M({{\mathbf{x}}}) = \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \Delta \Delta^{{\scalebox{0.6}{$\top$}}}] = \operatorname{\mathbb{E}}[ Q_{\hat{{{\mathbf{x}}}}} D {{\bm{\zeta}}}{{\bm{\zeta}}}^{{\scalebox{0.6}{$\top$}}}D Q_{\hat{{{\mathbf{x}}}}}^{{\scalebox{0.6}{$\top$}}}] = Q_{\hat{{{\mathbf{x}}}}} D \operatorname{\mathbb{E}}[ {{\bm{\zeta}}}{{\bm{\zeta}}}^{{\scalebox{0.6}{$\top$}}}] D Q_{\hat{{{\mathbf{x}}}}}^{{\scalebox{0.6}{$\top$}}}= \frac{1}{d} Q_{\hat{{{\mathbf{x}}}}} D^2 Q_{\hat{{{\mathbf{x}}}}}^{{\scalebox{0.6}{$\top$}}},$$ by linearity of expectation, and using the fact that $\operatorname{\mathbb{E}}[{{\bm{\zeta}}}{{\bm{\zeta}}}^{{\scalebox{0.6}{$\top$}}}] = \frac{1}{d}I$ for ${{\bm{\zeta}}}$ uniformly distributed on ${{\mathbb S}^{d-1}}$. Also, a calculation similar to the above confirms that $\mu({{\mathbf{x}}}) = {{\mathbf{0}}}$ for all ${{\mathbf{x}}}\in {{\mathbb R}}^d$, since $\operatorname{\mathbb{E}}[{{\bm{\zeta}}}] = {{\mathbf{0}}}$. Since $\| \Delta \|$ is bounded above by $\sqrt{d}\max\{a,b\}$, assumption holds. Clearly and hold, with $\sigma^2({{\mathbf{u}}}) =\frac{1}{d} Q_{{\mathbf{u}}}D^2 Q_{{\mathbf{u}}}^{{\scalebox{0.6}{$\top$}}}$ for ${{\mathbf{u}}}\in {{\mathbb S}^{d-1}}$. It is also a simple matter to check that and hold: the matrix $\sigma^2({{\mathbf{u}}})$ represented in coordinates for the orthonormal basis $\{ Q_{{\mathbf{u}}}{{\mathbf{e}}}_1 = {{\mathbf{u}}}, Q_{{\mathbf{u}}}{{\mathbf{e}}}_2, \dots , Q_{{\mathbf{u}}}{{\mathbf{e}}}_d \}$ is diagonal with entries $a^2, b^2, \dots, b^2$. Indeed, $$\begin{split} \sigma^2({{\mathbf{u}}}) = \frac{1}{d} Q_{{\mathbf{u}}}D^2 Q_{{\mathbf{u}}}^{{\scalebox{0.6}{$\top$}}}&= Q_{{\mathbf{u}}}[ b^2 I + (a^2-b^2) {{\mathbf{e}}}_1{{\mathbf{e}}}_1^{{\scalebox{0.6}{$\top$}}}] Q_{{\mathbf{u}}}^{{\scalebox{0.6}{$\top$}}}\\ &= a^2 {{\mathbf{u}}}{{\mathbf{u}}}^{{\scalebox{0.6}{$\top$}}}+ b^2( I - {{\mathbf{u}}}{{\mathbf{u}}}^{{\scalebox{0.6}{$\top$}}}), \end{split}$$ and therefore ${\langle}{{\mathbf{u}}}, {{\mathbf{u}}}{\rangle}_{{\mathbf{u}}}= {\langle}{{\mathbf{u}}}, \sigma^2({{\mathbf{u}}}) \cdot {{\mathbf{u}}}\, {\rangle}= a^2 > 0$ for all ${{\mathbf{u}}}\in {{\mathbb S}^{d-1}}$, and $\operatorname{tr}{(M({{\mathbf{x}}}))} = \operatorname{tr}{(\sigma^2(\hat{{\mathbf{x}}}))} = a^2 + (d-1)b^2 > 0$ for all ${{\mathbf{x}}}\in {{\mathbb R}}^d$. \[rem:Hx\] The seeming ambiguity in the definition of $\Delta$ due to the choice of $Q_{\hat{{\mathbf{x}}}}$ can be resolved by noting that $\Delta$ can be rewritten as $$\Delta = Q_{\hat{{\mathbf{x}}}} D Q_{\hat{{\mathbf{x}}}}^{{\scalebox{0.6}{$\top$}}}Q_{\hat{{\mathbf{x}}}} {{\bm{\zeta}}}= Q_{\hat{{\mathbf{x}}}} D Q_{\hat{{\mathbf{x}}}}^{{\scalebox{0.6}{$\top$}}}\tilde{{\bm{\zeta}}},$$ where $\tilde{{\bm{\zeta}}}= Q_{\hat{{\mathbf{x}}}} {{\bm{\zeta}}}$ is also uniform on ${{\mathbb S}^{d-1}}$ (this follows from the spherical symmetry of the uniform distribution on ${{\mathbb S}^{d-1}}$). 
Moreover, the symmetric matrix $H_{\hat{{\mathbf{x}}}} := Q_{\hat{{\mathbf{x}}}} D Q_{\hat{{\mathbf{x}}}}^{{\scalebox{0.6}{$\top$}}}$ is determined explicitly in terms of $\hat{{\mathbf{x}}}$: $$\begin{split} H_{\hat{{\mathbf{x}}}} = Q_{\hat{{\mathbf{x}}}} D Q_{\hat{{\mathbf{x}}}}^{{\scalebox{0.6}{$\top$}}}&= Q_{\hat{{\mathbf{x}}}}( b\sqrt{d} I + (a-b)\sqrt{d}{{\mathbf{e}}}_1{{\mathbf{e}}}_1^{{\scalebox{0.6}{$\top$}}}) Q_{\hat{{\mathbf{x}}}}^{{\scalebox{0.6}{$\top$}}}\\ &= b\sqrt{d} I + (a-b) \sqrt{d} \hat{{\mathbf{x}}}\hat{{\mathbf{x}}}^{{\scalebox{0.6}{$\top$}}}. \end{split}$$ Consequently, we could choose to specify $\Delta$ explicitly as $$\Delta = H_{\hat{{\mathbf{x}}}}\tilde{{\bm{\zeta}}}= b\sqrt{d}\tilde{{\bm{\zeta}}}+ (a-b)\sqrt{d}\hat{{\mathbf{x}}}{\langle}\hat{{\mathbf{x}}},\tilde{{\bm{\zeta}}}{\rangle},$$ with $\tilde{{\bm{\zeta}}}$ taken to be uniform on ${{\mathbb S}^{d-1}}$. As before, we find that $\operatorname{\mathbb{E}}_{{{\mathbf{x}}}}[\Delta] = H_{\hat{{\mathbf{x}}}} \operatorname{\mathbb{E}}[ \tilde{{\bm{\zeta}}}] = {{\mathbf{0}}}$ and $$\operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \Delta\Delta^{{\scalebox{0.6}{$\top$}}}] = H_{\hat{{\mathbf{x}}}} \operatorname{\mathbb{E}}[ \tilde{{\bm{\zeta}}}\tilde{{\bm{\zeta}}}^{{\scalebox{0.6}{$\top$}}}]H_{\hat{{\mathbf{x}}}} = \textstyle\frac{1}{d}H_{\hat{{\mathbf{x}}}}^2 = a^2\hat{{\mathbf{x}}}\hat{{\mathbf{x}}}^{{\scalebox{0.6}{$\top$}}}+ b^2( I - \hat{{\mathbf{x}}}\hat{{\mathbf{x}}}^{{\scalebox{0.6}{$\top$}}}).$$ Recall that we assume our random walk to be time-homogeneous, so that equation  in fact determines the distribution of $\Delta_n$ for all $n \geq 0$. Formally, we define ${{\bm{\zeta}}}_0, {{\bm{\zeta}}}_1,\dots$ a sequence of independent random variables uniformly distributed on ${{\mathbb S}^{d-1}}$, and for each $n \geq 0$ we define $\Delta_n$ conditional on $\{X_n = {{\mathbf{x}}}\}$ via $$\label{eqn:Delta-n-d-dim} \Delta_n = Q_{\hat{{{\mathbf{x}}}}} D {{\bm{\zeta}}}_n.$$ We call $X = (X_n, n \in {{\mathbb Z}_+})$ defined in this way an *elliptic random walk*. As a corollary to Theorems \[thm:recurrence\] and \[thm:null\], we get the following recurrence classification for the elliptic random walk model. For this model the ${\varepsilon}(r)$ in is identically zero so we get a complete classification that includes the boundary case. \[cor:ellipsoid-Lamperti\] Let $d \geq 2$ and $a, b \in (0,\infty)$. Let $X$ be an elliptic random walk on ${{\mathbb R}}^d$. Then $X$ is transient if $a^2 < (d-1)b^2$ and null-recurrent if $a^2 \geq (d-1)b^2$. In two dimensions we can explicitly describe the random walk as follows. For ${{\mathbf{x}}}\in {{\mathbb R}}^2$, ${{\mathbf{x}}}\neq {{\mathbf{0}}}$ with ${{\mathbf{x}}}= (x_1, x_2)$ in Cartesian components, set ${{\mathbf{x}}}^{{\mkern -1mu \scalebox{0.5}{$\perp$}}}:= (-x_2, x_1)$. Fix $a,b \in (0,\infty)$. 
Let $E_{{\mathbf{x}}}(a,b)$ denote the ellipse with centre ${{\mathbf{x}}}$ and principal axes aligned in the ${{\mathbf{x}}}$, ${{\mathbf{x}}}^{{\mkern -1mu \scalebox{0.5}{$\perp$}}}$ directions, with lengths $2\sqrt{2}a$, $2\sqrt{2}b$ respectively, given in parametrized form by $$\label{param} E_{{\mathbf{x}}}(a,b) := \left\{ {{\mathbf{x}}}+ \sqrt{2}a \frac{{{\mathbf{x}}}}{\| {{\mathbf{x}}}\|} \cos \phi + \sqrt{2}b \frac{{{\mathbf{x}}}^{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}{\| {{\mathbf{x}}}\|} \sin \phi : \phi \in (-\pi,\pi] \right\},$$ and for ${{\mathbf{x}}}= {{\mathbf{0}}}$ set $$E_{{\mathbf{0}}}(a,b) := \left\{ \sqrt{2}a\, {{\mathbf{e}}}_1 \cos \phi + \sqrt{2}b\, {{\mathbf{e}}}_2 \sin \phi : \phi \in (-\pi,\pi] \right\}.$$ The parameter $\phi$ in the parametrization (\[param\]) should be interpreted with caution: it is *not*, in general, the central angle of the parametrized point on the ellipse. Given $X_n = {{\mathbf{x}}}\in {{\mathbb R}}^2$, $X_{n+1}$ is taken to be distributed on $E_{{{\mathbf{x}}}}(a,b)$, ‘uniformly’ with respect to the parametrization (\[param\]). Precisely, let $\phi_0, \phi_1, \ldots$ be a sequence of independent random variables uniformly distributed on $(-\pi,\pi]$. Then, on $\{ X_n \neq {{\mathbf{0}}}\}$, $$\label{elljumps} X_{n+1} = X_n + \sqrt{2}a \frac{X_n}{\| X_n\|} \cos \phi_n + \sqrt{2}b \frac{X_n^{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}{\| X_n \|} \sin \phi_n ,$$ while, on $\{ X_n = {{\mathbf{0}}}\}$, $$\label{jump0} X_{n+1} = (\sqrt{2}a \cos \phi_n, \sqrt{2}b \sin \phi_n ) .$$ Figure \[fig:sim\] shows two sample paths of a simulation of the elliptic random walk in ${{\mathbb R}}^2$ in the two cases of recurrence and transience. In each picture the walk starts at the origin at the centre of the picture; time is represented by the variation in colour (from red to yellow, or from dark to light if viewed in grey-scale). ![Simulation of the elliptic random walk in ${{\mathbb R}}^2$ for the recurrent case $a > b$ (*left*) and the transient case $a < b$ (*right*).[]{data-label="fig:sim"}](radial3.pdf "fig:") ![Simulation of the elliptic random walk in ${{\mathbb R}}^2$ for the recurrent case $a > b$ (*left*) and the transient case $a < b$ (*right*).[]{data-label="fig:sim"}](transverse3.pdf "fig:") \[rem:ellipse\] (a) The process $X$ reduces to the classical PRRW when $a = b$: in that case it is spatially homogeneous, i.e., the distribution of the increment $X_{n+1}-X_n$ does not depend on $X_n$. For $a \neq b$ the random walk is not spatially homogeneous, and the jump distribution depends upon the projection onto the unit sphere of the walk’s current position. (b) As mentioned earlier, we choose to take increments as defined at , rather than increments that are uniform on the ellipse with respect to one-dimensional Lebesgue measure on $E_{{\mathbf{x}}}(a,b)$, purely for computational reasons. In fact, in two dimensions, since the Lebesgue measure on $E_{{\mathbf{x}}}(a,b)$ coincides with the measure induced by taking $\phi$ uniformly distributed on $(-\pi,\pi]$ when $a = b$, and the case $a=b$ is critically recurrent, the qualitative behaviour will be the same in either case: the walk will be transient for $a<b$ and recurrent for $a \geq b$. 
For higher dimensions, taking increments that are uniform with respect to the Lebesgue measure on $E_{{\mathbf{x}}}^d(a,b) := \{ Q_{\hat{{\mathbf{x}}}}D{{\mathbf{u}}}: {{\mathbf{u}}}\in {{\mathbb S}^{d-1}} \}$ will still specify a family of models that exhibit a phase transition, from transience (for $a/b$ small) to recurrence (for $a/b$ large) but the exact shape of the ellipsoid in the critical case (i.e., the smallest ratio $a/b$ for which the walk is recurrent) may be different. (c) It follows from that $$\begin{aligned} \|X_{n+1}\|^2 &= \|X_n\|^2 + 2\|X_n\|{\langle}\widehat{X}_n , \Delta_n {\rangle}+ \| \Delta_n \|^2 \nonumber\\ &= \|X_n\|^2 + 2\|X_n\|{\langle}{{\mathbf{e}}}_1, D {{\bm{\zeta}}}_n {\rangle}+ {\langle}{{\bm{\zeta}}}_n, D^2 {{\bm{\zeta}}}_n {\rangle}\nonumber\\ &= \|X_n\|^2 + 2a\sqrt{d}\|X_n\|{\langle}{{\mathbf{e}}}_1, {{\bm{\zeta}}}_n {\rangle}+ (a^2-b^2)d {\langle}{{\mathbf{e}}}_1 , {{\bm{\zeta}}}_n {\rangle}^2 + b^2d. \label{eqn:norm-X}\end{aligned}$$ In particular, for this family of models $(\|X_n\|,n \in {{\mathbb Z}_+})$ is itself a Markov process, since the distribution of $\|X_{n+1}\|$ depends only on $\|X_n\|$ and not $X_n$; however, in the general setting of Section \[sec:rws\], this need not be the case. One-dimensional processes with evolutions reminiscent to that given by have been studied previously by Kingman [@kingman] and Bingham [@bingham]. Those processes can be viewed, respectively, as the distance from its start point of a random walk in Euclidean space, and the geodesic distance from its start point of a random walk on the surface of a sphere, but in both cases the increments of the random walk have the property that the distribution of the jump vector is a product of the independent marginal distributions of the length and direction of the jump vector. In contrast, for the elliptic random walk the laws of $\|\Delta_n\|$ and ${\langle}\widehat X_n , \widehat\Delta_n {\rangle}$ are *not* independent (except when $a=b$). (d) \[rem:Stas\] The theory equally applies to the case where the ellipsoid specifying the jump distribution is oriented with some fixed angle $\alpha \in [0,\pi)$ with respect to the radial direction. If we define $\Delta = Q^\alpha_{\hat {{\mathbf{x}}}} D {{\bm{\zeta}}}$, where $Q^\alpha_{\hat {{\mathbf{x}}}}$ is an orthogonal matrix that maps ${{\mathbf{e}}}_\alpha := {{\mathbf{e}}}_1 \cos{\alpha} + {{\mathbf{e}}}_2 \sin{\alpha}$ to $\hat {{\mathbf{x}}}$, then we find that transience of $X$ is equivalent to $$(a^2 - b^2) \cos{2 \alpha} < (d-2) b^2.$$ Note that for $d=2$, $Q^\alpha_{\hat{{\mathbf{x}}}}$ and therefore $\Delta$ are well defined, but this is not so for higher dimensions. Nevertheless, for *any* collection of matrices $(Q^\alpha_{{{\mathbf{u}}}}; {{\mathbf{u}}}\in {{\mathbb S}^{d-1}})$ satisfying $Q^\alpha_{{\mathbf{u}}}{{\mathbf{e}}}_\alpha = {{\mathbf{u}}}$ for all ${{\mathbf{u}}}\in {{\mathbb S}^{d-1}}$ we get the same recurrence classification. This is because the distribution of $\| X_{n+1} \|$ given $X_n$ is determined through the angle $\alpha$ via $$\|X_n\|^2 + 2\sqrt{d}\|X_n\|(a {\langle}{{\mathbf{e}}}_1, {{\bm{\zeta}}}_n {\rangle}\cos{\alpha} + b {\langle}{{\mathbf{e}}}_2, {{\bm{\zeta}}}_n {\rangle}\sin{\alpha} ) + (a^2-b^2)d{\langle}{{\mathbf{e}}}_1,{{\bm{\zeta}}}_n{\rangle}^2 + b^2d,$$ and therefore assumption holds with $U = a^2\cos^2{\alpha} + b^2\sin^2{\alpha}$ and $V = a^2 + (d-1)b^2$. 
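To complement Figure \[fig:sim\], here is a short simulation sketch of the planar walk, using the two update rules displayed above for $X_n \neq {{\mathbf{0}}}$ and $X_n = {{\mathbf{0}}}$; it is written in Python with NumPy and is an illustration added here, not the code used by the authors. By Corollary \[cor:ellipsoid-Lamperti\] with $d=2$, the sampled radii should drift off to infinity when $a < b$ and should keep returning towards the origin when $a \geq b$, although a finite simulation can only suggest, not prove, this behaviour.

```python
# Illustrative simulation sketch (Python/NumPy; not the authors' code):
# the planar elliptic random walk, using the update rules displayed above.
import numpy as np

rng = np.random.default_rng(2)

def elliptic_walk_2d(a, b, n_steps):
    """Path, of shape (n_steps + 1, 2), of the planar elliptic walk from 0."""
    path = np.zeros((n_steps + 1, 2))
    x = np.zeros(2)
    for n in range(n_steps):
        phi = rng.uniform(-np.pi, np.pi)
        r = np.linalg.norm(x)
        if r == 0.0:                        # jump from the origin
            step = np.array([np.sqrt(2) * a * np.cos(phi),
                             np.sqrt(2) * b * np.sin(phi)])
        else:                               # jump from x != 0
            xhat = x / r
            xperp = np.array([-xhat[1], xhat[0]])
            step = (np.sqrt(2) * a * np.cos(phi) * xhat
                    + np.sqrt(2) * b * np.sin(phi) * xperp)
        x = x + step
        path[n + 1] = x
    return path

# Heuristic diagnostic only: contrast a > b (recurrent) with a < b (transient).
for a, b in [(1.5, 0.5), (0.5, 1.5)]:
    radii = np.linalg.norm(elliptic_walk_2d(a, b, 20000), axis=1)
    print(f"a={a}, b={b}: final radius {radii[-1]:8.1f}, "
          f"min radius over second half {radii[10000:].min():6.2f}")
```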
Non-confinement {#sec:non-confinement} =============== In this section we prove that the assumptions , , and imply that $\limsup_{n \to \infty} \|X_n\| = +\infty$, a.s. We first present a general result for martingales on ${{\mathbb R}}^d$ satisfying a “uniform dispersion” condition; the result can be viewed as a $d$-dimensional martingale version of *Kolmogorov’s other inequality* (see e.g. [@gut pp. 123, 502]). \[lem:d-dim-KOI\] Let $d \in {{\mathbb N}}$. Suppose that $(Y_n, n \in {{\mathbb Z}_+})$ is an ${{\mathbb R}}^d$-valued process adapted to a filtration $({{\mathcal G}}_n , n \in {{\mathbb Z}_+})$, with ${{\mathbb P}}[ Y_0 = {{\mathbf{0}}}\mid {{\mathcal G}}_0 ] =1$. Suppose that there exist $p>2, v>0, B <\infty$ such that for all $n \in {{\mathbb Z}_+}$, a.s., $$\begin{aligned} \operatorname{\mathbb{E}}[ \| Y_{n+1} - Y_n \|^p \mid {{\mathcal G}}_n ] &\leq B; \label{KOI:moments}\\ \operatorname{\mathbb{E}}[ \| Y_{n+1} - Y_n \|^2 \mid {{\mathcal G}}_n ] &\geq v; \label{KOI:unif-ellip} \\ \operatorname{\mathbb{E}}[ Y_{n+1} - Y_n \mid {{\mathcal G}}_n ] & = {{\mathbf{0}}}. \label{KOI:zero-drift} \end{aligned}$$ Then there exists $D <\infty$, depending only on $B$, $p$, and $v$, such that for all $n\in{{\mathbb Z}_+}$ and all $x\in {{\mathbb R}_+}$, $${{\mathbb P}}\Bigl[ \max_{0\leq \ell \leq n} \| Y_\ell \| \geq x {\; \Bigl| \;}{{\mathcal G}}_0 \Bigr] \geq 1 - \frac{D(1+x)^2}{n}, {\ \text{a.s.}}$$ Let $x>0$ and set $\tau = \min\{ n \geq 0 : \| Y_n \| \geq x \}$; throughout the paper we adopt the usual convention $\min \emptyset := \infty$. In analogy with previous notation, write $\Delta_n = Y_{n+1} - Y_n$ for the jump distribution, and let $$W_n = \begin{cases} Y_n &\text{if $\|Y_n\| \leq A(1+x)$},\\ Y_{n-1} + \widehat{\Delta}_{n-1}(A-1)(1+x) &\text{if $\|Y_n\| > A(1+x)$}, \end{cases}$$ where $A>1$ is a constant to be specified later. Note that $W_n$ is ${{\mathcal G}}_n$-measurable. Now, on $\{ \|Y_n\| \leq x \}$, $W_n = Y_n$ and $$\begin{aligned} \operatorname{\mathbb{E}}[W_{n+1} - W_n \mid {{\mathcal G}}_n ] = {} &\operatorname{\mathbb{E}}[ \Delta_n \mid {{\mathcal G}}_n ]\\ & {} + \operatorname{\mathbb{E}}[ \widehat{\Delta}_n\left((A-1)(1+x) - \|\Delta_n\|\right){{\mathbf 1}{\{\|Y_{n+1}\| > A(1+x)\}}} \mid {{\mathcal G}}_n ]. \end{aligned}$$ But $\{\|Y_{n+1}\| > A(1+x)\} \cap \{\|Y_n\| \leq x \}$ implies that $\|\Delta_n\| > (A-1)(1+x)$, and by , $\operatorname{\mathbb{E}}[\Delta_n \mid {{\mathcal G}}_n] = {{\mathbf{0}}}$. Hence, on $\{\|Y_n\| \leq x\}$, $$\begin{aligned} \bigl\| \operatorname{\mathbb{E}}[ W_{n+1} - W_n \mid {{\mathcal G}}_n ] \bigr\| &\leq \operatorname{\mathbb{E}}[ \|\Delta_n\| {{\mathbf 1}{\{ \|\Delta_n\| > (A-1)(1+x)\}}} \mid {{\mathcal G}}_n]\\ &\leq (A-1)^{-1}(1+x)^{-1} \operatorname{\mathbb{E}}[ \|\Delta_n\|^2 \mid {{\mathcal G}}_n ]\\ &\leq B'(A-1)^{-1}(1+x)^{-1}, {\ \text{a.s.}},\end{aligned}$$ where, by and Lyapunov’s inequality, $B' < \infty$ depends only on $B$ and $p$. 
Hence we can choose $A \geq A_0$ for some $A_0 = A_0 (B, p, v)$ large enough so that $$\label{eq:w-drift} \bigl\| \operatorname{\mathbb{E}}[ W_{n+1} - W_n \mid {{\mathcal G}}_n ] \bigr\| \leq (v/8)(1+x)^{-1}, \text{ on } \{ \| Y_n \| \leq x \} .$$ \[eq:w-square\] Also, on $\{\|Y_n\| \leq x\}$, by a similar argument, $$\begin{aligned} \operatorname{\mathbb{E}}[ \|W_{n+1} - W_n\|^2 \mid {{\mathcal G}}_n ] \bigr\| &= \operatorname{\mathbb{E}}[ \|\Delta_n\|^2 \mid {{\mathcal G}}_n ] \nonumber\\ &\quad + \operatorname{\mathbb{E}}[ \left( (A-1)^2(1+x)^2 - \|\Delta_n\|^2 \right){{\mathbf 1}{\{ \|Y_{n+1}\| > A(1+x) \}}} \mid {{\mathcal G}}_n ] \nonumber\\ &\geq \operatorname{\mathbb{E}}[ \|\Delta_n\|^2 \mid {{\mathcal G}}_n ] - \operatorname{\mathbb{E}}[ \|\Delta_n\|^2 {{\mathbf 1}{\{ \| \Delta_{n}\| > (A-1)(1+x) \}}} \mid {{\mathcal G}}_n ] \nonumber\\ &\geq v - (A-1)^{2-p}(1+x)^{2-p} \operatorname{\mathbb{E}}[ \|\Delta_n\|^p \mid {{\mathcal G}}_n ]\nonumber\\ &\geq v/2,\end{aligned}$$ for all $x \geq 0$ and $A \geq A_1$ for sufficiently large $A_1 = A_1 (B, p, v)$, using and . Now, set $Z_n = \| W_{n \wedge \tau} \|^2$. Then, on $\{ n < \tau \}$, by and , $$\begin{aligned} \operatorname{\mathbb{E}}[ Z_{n+1} - Z_n \mid {{\mathcal G}}_n ] &= \operatorname{\mathbb{E}}[ \|W_{n+1}\|^2 - \|W_n\|^2 \mid {{\mathcal G}}_n ]\\ &= \operatorname{\mathbb{E}}[ \|W_{n+1} - W_n\|^2 \mid {{\mathcal G}}_n ] + 2 \bigl{\langle}W_n , \operatorname{\mathbb{E}}[ W_{n+1} - W_n \mid {{\mathcal G}}_n ] \bigr{\rangle}\\ &\geq \frac{v}{2} - \frac{ 2 \|W_n\| v }{ 8(1+x) } \geq \frac{v}{2} - \frac{ v x }{ 4(1+x) } \geq \frac{v}{4}.\end{aligned}$$ Hence $Z_n - \sum_{k=0}^{n-1} v_k$ is a ${{\mathcal G}}_n$-adapted submartingale, where $$v_k = \frac{v}{4} {{\mathbf 1}{\{k < \tau\}}} \geq \frac{v}{4} {{\mathbf 1}{\{n < \tau\}}} , ~~ \text{for} ~ 0 \leq k < n .$$ By construction, $0 \leq Z_n \leq A^2(1+x)^2$, so $$0 = \operatorname{\mathbb{E}}[ Z_0 \mid {{\mathcal G}}_0 ] \leq \operatorname{\mathbb{E}}[ Z_n \mid {{\mathcal G}}_0 ] - \sum_{k=0}^{n-1} \operatorname{\mathbb{E}}[ v_k \mid {{\mathcal G}}_0] \leq A^2(1+x)^2 - \sum_{k=0}^{n-1} \frac{v}{4} {{\mathbb P}}[ n < \tau \mid {{\mathcal G}}_0 ],$$ which implies $n(v/4){{\mathbb P}}[ n < \tau \mid {{\mathcal G}}_0 ] \leq A^2(1+x)^2$. In other words, $${{\mathbb P}}\Bigl[ \max_{0\leq \ell \leq n} \| Y_\ell \| < x {\; \Bigl| \;}{{\mathcal G}}_0 \Bigr] \leq \frac{4A^2(1+x)^2}{v n} , {\ \text{a.s.}}\qedhere$$ Now we can give the proof of Proposition \[lem:lim\_sup\_infty\]. It is enough to show that for all $x \in {{\mathbb R}_+}$ the event $\{ \| X_n \| \geq x \}$ occurs infinitely often. For a given $x$, we will apply Lemma \[lem:d-dim-KOI\] to $Y_n = X_{m+n} - X_m$ with ${{\mathcal G}}_n = \sigma(X_0,\dots,X_{m+n})$; that result is applicable, since , and imply , and , respectively. Thus Lemma \[lem:d-dim-KOI\] shows that, for some finite $t = t(x)$, $$\label{eq:escape-chance} {{\mathbb P}}{\Bigl[ \max_{0 \leq \ell \leq t-1} \| X_{m+\ell} - X_m \| \geq 2x {\; \Bigl| \;}X_0,\dots,X_m \Bigr]} \geq \frac{1}{2}, {\ \text{a.s.}},$$ for all $m \geq 0$. For $k=1,2,\dotsc$, define the event $$A_k = \Bigl\{ \max_{0 \leq \ell \leq t-1} \|X_{(k-1)t + \ell} - X_{(k-1)t} \| \geq 2x \Bigr\},$$ and filtration ${{\mathcal G}}'_{k-1} = \sigma(X_0,\dots,X_{(k-1)t})$. Then $A_k \in {{\mathcal G}}'_k$, and, by , ${{\mathbb P}}{[ A_k \mid {{\mathcal G}}'_{k-1}]} \geq \frac{1}{2}$, a.s., for all $k$. An application of Lévy’s extension of the Borel–Cantelli lemma (see, e.g., [@kall Cor. 
7.20]) shows that $A_k$ occurs infinitely often, a.s. For each $k$ such that $A_k$ occurs, either - $\| X_{(k-1)t} \| \geq x$, or - $\| X_{(k-1)t} \| \leq x$ and $\|X_n \| \geq x$ for some $(k-1)t < n < k t$. Since one of these cases must occur for infinitely many $k$, we have that $\{ \| X_n \| \geq x\}$ occurs infinitely often, as required. Recurrence classification {#sec:rw} ========================= In this section we study the random walk $X_n$ and give the proof of the recurrence classification, Theorem \[thm:recurrence\]. The method of proof is based on applying classical results of Lamperti [@lamp1] to the ${{\mathbb R}_+}$-valued radial process given by $R_n := \| X_n \|$. The method rests on an analysis of the increments $R_{n+1} - R_n$ given $X_n = {{\mathbf{x}}}\in {{\mathbb X}}$; in general, $R_n$ is not itself a Markov process. The following notation will be useful. Given ${{\mathbf{x}}}\neq {{\mathbf{0}}}$ and ${{\mathbf{y}}}\in {{\mathbb R}}^d$, write $${{\mathbf{y}}}_{{\mathbf{x}}}:= \frac{{\langle}{{\mathbf{x}}}, {{\mathbf{y}}}{\rangle}}{\|{{\mathbf{x}}}\|} = {\langle}\hat {{\mathbf{x}}}, {{\mathbf{y}}}{\rangle},$$ so that ${{\mathbf{y}}}_{{\mathbf{x}}}$ is the component of ${{\mathbf{y}}}$ in the $\hat {{\mathbf{x}}}$ direction, and ${{\mathbf{y}}}- {{\mathbf{y}}}_{{\mathbf{x}}}\, \hat {{\mathbf{x}}}$ is a vector perpendicular to $\hat {{\mathbf{x}}}$. First we state a general result on the increments of $R_n$ for a Markov process $X_n$ on ${{\mathbb X}}$. Recall that we write $\Delta = X_1 - X_0$, and let $\Delta_{{\mathbf{x}}}$ be the radial component of $\Delta$ at $X_0 = {{\mathbf{x}}}$ in accordance with the notation described above; no confusion should arise with our notation $\Delta_n$ defined previously. We make an important comment on notation. When we write $O( \|{{\mathbf{x}}}\|^{-1-\delta} )$, and similar expressions, these are understood to be uniform in ${{\mathbf{x}}}$. That is, if $f : {{\mathbb R}}^d \to {{\mathbb R}}$ and $g : {{\mathbb R}_+}\to {{\mathbb R}_+}$, we write $f ( {{\mathbf{x}}}) = O ( g(\|{{\mathbf{x}}}\|) )$ to mean that there exist $C \in {{\mathbb R}_+}$ and $r \in {{\mathbb R}_+}$ such that $$\label{eqn:big-O} | f ({{\mathbf{x}}}) | \leq C g( \| {{\mathbf{x}}}\|) \text{ for all } {{\mathbf{x}}}\in {{\mathbb X}}\text{ with } \| {{\mathbf{x}}}\| \geq r .$$ \[lemma1\] Suppose that $X$ is a discrete-time, time-homogeneous Markov process on ${{\mathbb X}}\subseteq {{\mathbb R}}^d$ satisfying for some $p >2$. Then, for $R_n := \| X_n \|$, we have $$\label{bound} \sup_{{{\mathbf{x}}}\in {{\mathbb X}}} \operatorname{\mathbb{E}}[ | R_{n+1} - R_n |^p \mid X_n = {{\mathbf{x}}}] < \infty,$$ and the radial increment moment functions satisfy $$\begin{aligned} \label{mu1} \mu_1({{\mathbf{x}}}) & := \operatorname{\mathbb{E}}[ R_{n+1} - R_n \mid X_n = {{\mathbf{x}}}] = \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \Delta_{{\mathbf{x}}}] + \frac{ \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \| \Delta \|^2 - \Delta^2_{{{\mathbf{x}}}} ] } {2 \| {{\mathbf{x}}}\|} + O (\| {{\mathbf{x}}}\|^{-1-\delta} ) ,\\ \label{mu2} \mu_2({{\mathbf{x}}}) & := \operatorname{\mathbb{E}}[ (R_{n+1}- R_n)^2 \mid X_n = {{\mathbf{x}}}] = \operatorname{\mathbb{E}}_{{\mathbf{x}}}[ \Delta_{{\mathbf{x}}}^2 ] + O( \|{{\mathbf{x}}}\|^{-\delta} ) , \end{aligned}$$ as $\| {{\mathbf{x}}}\| \to \infty$, for some $\delta=\delta(p) > 0$. By time-homogeneity, it suffices to consider the case $n=0$. 
By the triangle inequality, $| R_{1} - R_0 | = \bigl| \| X_0 + \Delta \| - \| X_0 \| \bigr| \leq \| \Delta \|$, so that follows from . We prove and by approximating $$\label{eqn:|x+Delta|-|x|} \begin{split} \| {{\mathbf{x}}}+ \Delta \| - \| {{\mathbf{x}}}\| &= \sqrt{ {\langle}{{\mathbf{x}}}+\Delta , {{\mathbf{x}}}+ \Delta {\rangle}} - \|{{\mathbf{x}}}\|\\ &= \| {{\mathbf{x}}}\| \left[ \left( 1 + \frac{2 \Delta_{{\mathbf{x}}}}{\|{{\mathbf{x}}}\|} + \frac{\|\Delta\|^2}{\|{{\mathbf{x}}}\|^2} \right)^{1/2} - 1 \right] \end{split}$$ for large ${{\mathbf{x}}}$. Let $A_{{\mathbf{x}}}= \{ \| \Delta \| \leq \| {{\mathbf{x}}}\|^\beta \}$ for some $\beta \in (0,1)$ to be determined later. On the event $A_{{\mathbf{x}}}$ we approximate using Taylor’s formula for $(1+y)^{1/2}$, and on the event $A_{{\mathbf{x}}}^{{{\mathrm{c}}}}$ we bound using . Indeed, for all $y > -1$, Taylor’s theorem with Lagrange remainder shows that $$(1+y)^{1/2} = 1 + \frac{1}{2}y - \frac{1}{8}y^2(1+\gamma y)^{-3/2},$$ for some $\gamma = \gamma(y) \in [0,1]$, so on the event $A_{{\mathbf{x}}}$, $$\begin{aligned} \label{eqn:on-Ax} \| {{\mathbf{x}}}+ \Delta \| - \| {{\mathbf{x}}}\| &= \| {{\mathbf{x}}}\| \left( \frac{\Delta_{{\mathbf{x}}}}{\|{{\mathbf{x}}}\|} + \frac{\| \Delta \|^2}{2\| {{\mathbf{x}}}\|^2} - \frac{1}{8}\left( \frac{2 \Delta_{{\mathbf{x}}}}{\|{{\mathbf{x}}}\|} + \frac{\|\Delta\|^2}{\|{{\mathbf{x}}}\|^2} \right)^2 \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right) \right) \nonumber\\ &= \Delta_{{\mathbf{x}}}+ \frac{\| \Delta \|^2}{2\| {{\mathbf{x}}}\|} - \frac{\| {{\mathbf{x}}}\|}{8}\left( \frac{4 \Delta_{{\mathbf{x}}}^2}{\|{{\mathbf{x}}}\|^2} + \frac{\|\Delta\|^2}{\|{{\mathbf{x}}}\|^2}\left( \frac{4\Delta_{{\mathbf{x}}}}{\|{{\mathbf{x}}}\|} + \frac{\|\Delta\|^2}{\|{{\mathbf{x}}}\|^2}\right) \right) \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right)\nonumber\\ &= \Delta_{{\mathbf{x}}}+ \frac{\| \Delta \|^2}{2\| {{\mathbf{x}}}\|} - \frac{\Delta_{{\mathbf{x}}}^2}{2\| {{\mathbf{x}}}\|} \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right) + \frac{\|\Delta\|^2}{\|{{\mathbf{x}}}\|} O(\| {{\mathbf{x}}}\|^{\beta-1} )\nonumber\\ &= \Delta_{{\mathbf{x}}}+ \left( \frac{\| \Delta \|^2 - \Delta_{{\mathbf{x}}}^2 }{2\| {{\mathbf{x}}}\|} \right) \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right),\end{aligned}$$ where the error terms follow from the fact that $| \Delta_{{\mathbf{x}}}| \leq \|\Delta\| \leq \| {{\mathbf{x}}}\|^\beta$ for $\beta < 1$. On the other hand, $$\label{eqn:on-AxC} \bigl| \| {{\mathbf{x}}}+ \Delta \| - \|{{\mathbf{x}}}\| \bigr| {{\mathbf 1}{( A_{{\mathbf{x}}}^{{{\mathrm{c}}}} )}} \leq \| \Delta \| {{\mathbf 1}{( A_{{\mathbf{x}}}^{{{\mathrm{c}}}} )}} = \| \Delta\|^p \|\Delta\|^{1-p} {{\mathbf 1}{( A_{{\mathbf{x}}}^{{{\mathrm{c}}}} )}} \leq \| \Delta \|^p \| {{\mathbf{x}}}\|^{\beta(1-p)},$$ by the triangle inequality and the fact that $\| \Delta \| > \|{{\mathbf{x}}}\|^\beta$ on $A_{{\mathbf{x}}}^{{{\mathrm{c}}}}$. Since $$\| {{\mathbf{x}}}+ \Delta \| - \|{{\mathbf{x}}}\| = ( \| {{\mathbf{x}}}+ \Delta \| - \|{{\mathbf{x}}}\| ) {{\mathbf 1}{( A_{{\mathbf{x}}})}} + ( \| {{\mathbf{x}}}+ \Delta \| - \|{{\mathbf{x}}}\| ) {{\mathbf 1}{( A_{{\mathbf{x}}}^{{{\mathrm{c}}}} )}},$$ we can combine and to give $$\begin{aligned} \biggl| \| {{\mathbf{x}}}+ \Delta \| \biggr. 
&- \biggl.\|{{\mathbf{x}}}\| - \left[\Delta_{{\mathbf{x}}}+ \left( \frac{\| \Delta \|^2 - \Delta_{{\mathbf{x}}}^2 }{2\| {{\mathbf{x}}}\|} \right) \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right) \right] \biggr|\\ & =\biggl| \| {{\mathbf{x}}}+ \Delta \| - \|{{\mathbf{x}}}\| - \left[\Delta_{{\mathbf{x}}}+ \left( \frac{\| \Delta \|^2 - \Delta_{{\mathbf{x}}}^2 }{2\| {{\mathbf{x}}}\|} \right) \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right) \right] \biggr| {{\mathbf 1}{(A_{{\mathbf{x}}}^{{{\mathrm{c}}}} )}}\\ & \leq \| \Delta \|^p \| {{\mathbf{x}}}\|^{\beta(1-p)} + \left| \Delta_{{\mathbf{x}}}+ \left( \frac{\| \Delta \|^2 - \Delta_{{\mathbf{x}}}^2 }{2\| {{\mathbf{x}}}\|} \right) \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right) \right| {{\mathbf 1}{( A_{{\mathbf{x}}}^{{{\mathrm{c}}}} )}}\\ & \leq 2 \| \Delta \|^p \| {{\mathbf{x}}}\|^{\beta(1-p)} + \frac{\| \Delta \|^p}{2\| {{\mathbf{x}}}\|} \left( 1+ O(\| {{\mathbf{x}}}\|^{\beta-1} ) \right) \|{{\mathbf{x}}}\|^{\beta(2-p)} .\end{aligned}$$ Therefore, taking expectations and using , we obtain $$\mu_1({{\mathbf{x}}})= \operatorname{\mathbb{E}}_{{\mathbf{x}}}[\Delta_{{\mathbf{x}}}] + \frac{ \operatorname{\mathbb{E}}_{{\mathbf{x}}}[\| \Delta \|^2 - \Delta_{{\mathbf{x}}}^2 ]}{2\| {{\mathbf{x}}}\|} + O( \| {{\mathbf{x}}}\|^{\beta-2} ) + O( \| {{\mathbf{x}}}\|^{\beta(1-p)} ) + O( \| {{\mathbf{x}}}\|^{\beta(2-p) - 1} ).$$ Taking $\beta = 2/p$ makes all the error terms of size $O( \| {{\mathbf{x}}}\|^{-1-\delta})$ for some $\delta = \delta(p) > 0$, namely for $\delta = (p-2)/p$. For the second moment, we use the identity $$\begin{split} (\|{{\mathbf{x}}}+\Delta\| - \|{{\mathbf{x}}}\|)^2 &= \|{{\mathbf{x}}}+\Delta\|^2 - \|{{\mathbf{x}}}\|^2 - 2\|{{\mathbf{x}}}\|(\|{{\mathbf{x}}}+\Delta\|-\|{{\mathbf{x}}}\|)\\ &= 2\|{{\mathbf{x}}}\|\Delta_{{\mathbf{x}}}+ \|\Delta\|^2 - 2\|{{\mathbf{x}}}\|(\|{{\mathbf{x}}}+\Delta\|-\|{{\mathbf{x}}}\|), \end{split}$$ so that $$\mu_2({{\mathbf{x}}}) = 2\|{{\mathbf{x}}}\|\operatorname{\mathbb{E}}_{{\mathbf{x}}}[\Delta_{{\mathbf{x}}}] + \operatorname{\mathbb{E}}_{{\mathbf{x}}}[\|\Delta\|^2] - 2\|{{\mathbf{x}}}\|\mu_1({{\mathbf{x}}}) = \operatorname{\mathbb{E}}_{{\mathbf{x}}}[\Delta_{{\mathbf{x}}}^2] + O(\|{{\mathbf{x}}}\|^{-\delta}),$$ as required. With the additional assumptions , , and , we can use Lemma \[lemma1\] to prove the following result. \[lemma2\] Suppose that $X$ is a discrete-time, time-homogeneous Markov process on ${{\mathbb X}}\subseteq {{\mathbb R}}^d$ satisfying , , , and . Then, with $\mu_1, \mu_2$ defined at , , and ${\varepsilon}(r)$ defined at , there exists $\delta >0$ such that, as $\|{{\mathbf{x}}}\| \to \infty$, $$\label{eqn:mu_k-r} 2\|{{\mathbf{x}}}\| \mu_1({{\mathbf{x}}}) = V-U + O({\varepsilon}(\|{{\mathbf{x}}}\|)) + O(\|{{\mathbf{x}}}\|^{-\delta}), ~~ \mu_2({{\mathbf{x}}}) = U + O({\varepsilon}(\|{{\mathbf{x}}}\|)) + O(\|{{\mathbf{x}}}\|^{-\delta}).$$ By definition of ${\varepsilon}(r)$ at we have $\| M({{\mathbf{x}}}) - \sigma^2(\hat {{\mathbf{x}}}) \|_{\rm op} = O({\varepsilon}(\|{{\mathbf{x}}}\|))$ as $\|{{\mathbf{x}}}\| \to \infty$. 
Then implies that $$\begin{split} \operatorname{\mathbb{E}}_{{{\mathbf{x}}}}[ \|\Delta\|^2 ] &= \operatorname{tr}{(M({{\mathbf{x}}}))}\\ &= \operatorname{tr}{(\sigma^2(\hat{{\mathbf{x}}}))} + O({\varepsilon}(\|{{\mathbf{x}}}\|))\\ &= V + O({\varepsilon}(\|{{\mathbf{x}}}\|)), \end{split}$$ and $$\begin{split} \operatorname{\mathbb{E}}_{{{\mathbf{x}}}}[ \Delta_{{\mathbf{x}}}^2 ] &= {\langle}\hat{{\mathbf{x}}}, M({{\mathbf{x}}}) \cdot \hat{{\mathbf{x}}}{\rangle}\\ &= {\langle}\hat{{\mathbf{x}}}, \sigma^2(\hat{{\mathbf{x}}}) \cdot \hat{{\mathbf{x}}}{\rangle}+ O({\varepsilon}(\|{{\mathbf{x}}}\|))\\ &= U + O({\varepsilon}(\|{{\mathbf{x}}}\|)), \end{split}$$ and implies that $\operatorname{\mathbb{E}}_{{{\mathbf{x}}}} [\Delta_{{\mathbf{x}}}] = \operatorname{\mathbb{E}}_{{{\mathbf{x}}}} [ {\langle}\Delta , \hat{{\mathbf{x}}}{\rangle}] = {\langle}\mu({{\mathbf{x}}}) , \hat{{\mathbf{x}}}{\rangle}= 0$. Using these expressions in Lemma \[lemma1\] yields . Now we can complete the proof of Theorem \[thm:recurrence\]. We apply Lamperti’s [@lamp1] recurrence classification to $R_n = \|X_n\|$, the radial process. Proposition \[lem:lim\_sup\_infty\] shows that $\limsup_{n\to\infty}R_n = +\infty$, and Lemma \[lemma1\] tells us that is satisfied. Because the error terms in are uniform in ${{\mathbf{x}}}$, Lemma \[lemma2\] shows that for all $\eta > 0$ there exists $C < \infty$ such that $$2\|{{\mathbf{x}}}\|\mu_1({{\mathbf{x}}}) - \mu_2({{\mathbf{x}}}) \in [ V-2U - \eta, V-2U + \eta ]$$ for all ${{\mathbf{x}}}\in {{\mathbb X}}$ with $\|{{\mathbf{x}}}\|\geq C$. Therefore, it follows from Theorem 3.2 of [@lamp1] that $X$ is transient if $V-2U > 0$ and recurrent if $V-2U < 0$. For the boundary case, when $V-2U=0$, if ${\varepsilon}(r) = O(r^{-\delta_0})$ then $$2\|{{\mathbf{x}}}\|\mu_1({{\mathbf{x}}}) - \mu_2({{\mathbf{x}}}) = O(\|{{\mathbf{x}}}\|^{-\delta_1}),$$ for $\delta_1 = \min\{\delta,\delta_0\}$, which implies that $X$ is recurrent, again by Theorem 3.2 of [@lamp1]. Nullity {#sec:nullity} ======= In this section we give the proof of Theorem \[thm:null\]. In the transient case, this is straightforward. \[lem:null-trans\] In case (i) of Theorem \[thm:recurrence\], for any bounded $A \subset {{\mathbb R}}^d$, as $n \to \infty$, the null property holds. It is sufficient to prove in the case where $A = B_r := \{ {{\mathbf{x}}}\in {{\mathbb X}}: \| {{\mathbf{x}}}\| \leq r \}$. In case (i), $X$ is transient, meaning that $\| X_n \| \to \infty$ a.s., so that ${{\mathbf 1}{\{ X_n \in B_r \}}} \to 0$, a.s., for any $r \in {{\mathbb R}_+}$. Hence the Cesàro limit in is also $0$, a.s., and the $L^q$ convergence follows from the bounded convergence theorem. It remains to consider cases (ii) and (iii), when $X$ is recurrent. Thus there exists $r_0 \in {{\mathbb R}_+}$ such that $\liminf_{n \to \infty} \| X_n\| \leq r_0$, a.s. Let $\tau_r := \min \{ n \in {{\mathbb Z}_+}: X_n \in B_r \}$. It suffices to take $A = B_r$, $r > r_0$, so $X_n \in B_r$ infinitely often. We make the following claim, whose proof is deferred until the end of this section, which says that if the walk has not yet entered a ball of radius $R$ (for any $R >r$ big enough), the time until it reaches the ball of radius $r$ has tail bounded below as displayed. 
\[lem:null-estimate\] In cases (ii) and (iii) of Theorem \[thm:recurrence\], there exists a finite $r_1 \geq r_0$ such that for any $r>r_1$ and $R>r$ there exists a finite positive $c$ such that $$\label{eq:tail_lower_bound} {{\mathbb P}}[ \tau_r \geq n + m \mid X_0, \ldots, X_n ] \geq c m^{-1/2} , \text{ on } \{ n < \tau_{R} \} ,$$ for all sufficiently large $m$. Assuming this result, we can complete the proof of Theorem \[thm:null\]. In case (i), the result is contained in Lemma \[lem:null-trans\]. So consider cases (ii) and (iii). Fix $r$ and $R$ with $R > r > r_1$, with $r_1$ as in Lemma \[lem:null-estimate\]. Note that $\liminf_{n \to \infty} \| X_n\| \leq r_0 \leq r_1$, a.s. Set $\gamma_1 := 0$ and then define recursively, for $\ell \in {{\mathbb N}}$, the stopping times $$\eta_\ell := \min \{ n \geq \gamma_\ell : X_{n} \notin B_{R} \} , ~~~ \gamma_{\ell+1} := \min \{ n \geq \eta_\ell : X_{n} \in B_{r}\} ,$$ with the convention that $\min \emptyset := \infty$. Since $r > r_0$ and $\limsup_{n \to \infty } \| X_n \| = \infty$ (by Proposition \[lem:lim\_sup\_infty\]), for all $\ell \in {{\mathbb N}}$ we have $\eta_\ell <\infty$ and $\gamma_\ell < \infty$, a.s., and $$0 = \gamma_1 < \eta_1 < \gamma_2 < \eta_2 < \cdots .$$ In particular, $\lim_{\ell \to \infty} \gamma_\ell = \lim_{\ell \to \infty} \eta_\ell = \infty$, a.s. We now write ${{\mathcal F}}_n := \sigma (X_0, \ldots, X_n)$. We use Lemma \[lem:d-dim-KOI\] to show that the process must exit from $B_R$ rapidly enough. Indeed, if $\kappa$ is any finite stopping time, set $Y_n = X_{\kappa+n} - X_{\kappa}$ and ${{\mathcal G}}_n = {{\mathcal F}}_{\kappa + n}$. Then the assumptions , and show that the hypotheses of Lemma \[lem:d-dim-KOI\] are satisfied, since, for example, $$\operatorname{\mathbb{E}}[ \| Y_{n+1} -Y_n \|^p \mid {{\mathcal G}}_n ] = \operatorname{\mathbb{E}}[ \| X_{\kappa+n+1} -X_{\kappa+n} \|^p \mid {{\mathcal F}}_{\kappa+n} ] = \operatorname{\mathbb{E}}_{X_{\kappa +n}} [ \| \Delta \|^p ] ,$$ by the strong Markov property for $X$ at the finite stopping time ${\kappa + n}$. In particular, another application of Lemma \[lem:d-dim-KOI\], similarly to , shows that we may choose $n = n(R) \in {{\mathbb N}}$ sufficiently large so that $$\label{eq:big_ball_escape} {{\mathbb P}}\Bigl[ \max_{0 \leq \ell \leq n(R)} \| X_{\kappa + \ell} - X_\kappa \| \geq 2 R {\; \Bigl| \;}{{\mathcal F}}_{\kappa} \Bigr] \geq \frac{1}{2} , {\ \text{a.s.}},$$ an event whose occurrence ensures that if $X_\kappa \in B_R$, then $X$ exits $B_R$ before time $\kappa+n(R)$. Fix $k \in {{\mathbb N}}$. Then, an application of at stopping time $\kappa = \gamma_k$ shows that $${{\mathbb P}}[ \eta_k - \gamma_k > n (R) \mid {{\mathcal F}}_{\gamma_k} ] \leq {{\mathbb P}}\Bigl[ \max_{0 \leq \ell \leq n(R)} \| X_{{\gamma_k} + \ell} - X_{\gamma_k} \| < 2 R {\; \Bigl| \;}{{\mathcal F}}_{\gamma_k} \Bigr] \leq \frac{1}{2} , {\ \text{a.s.}}$$ Similarly, $$\begin{aligned} {{\mathbb P}}[ \eta_k - \gamma_k > 2 n (R) \mid {{\mathcal F}}_{\gamma_k} ] & = \operatorname{\mathbb{E}}\bigl[ {{\mathbf 1}{\{ \eta_k - \gamma_k > n(R) \}}} \operatorname{\mathbb{E}}[ {{\mathbf 1}{\{ \eta_k - \gamma_k > 2 n(R) \}}} \mid {{\mathcal F}}_{\gamma_k + n(R)} ] {\; \bigl| \;}{{\mathcal F}}_{\gamma_k} \bigr] \\ & \leq \frac{1}{2} {{\mathbb P}}[ \eta_k - \gamma_k > n (R) \mid {{\mathcal F}}_{\gamma_k} ] \leq \frac{1}{4} ,\end{aligned}$$ this time applying at stopping time $\kappa = \gamma_k + n(R)$ as well. 
Iterating this argument, it follows that ${{\mathbb P}}[ \eta_k - \gamma_k > m \cdot n(R) \mid {{\mathcal F}}_{\gamma_k} ] \leq 2^{-m}$, a.s., for all $m \in {{\mathbb N}}$. From here, it is straightforward to deduce that, for some constant $C < \infty$, for any $k \in {{\mathbb N}}$, $$\label{eq:escape_upper_bound} \operatorname{\mathbb{E}}[ \eta_k - \gamma_k \mid {{\mathcal F}}_{\gamma_k} ] \leq C , {\ \text{a.s.}}$$ On the other hand, the tail estimate implies that $$\label{eq:return_lower_bound} {{\mathbb P}}[ \gamma_{k+1} -\eta_k \geq m \mid {{\mathcal F}}_{\eta_k} ] \geq c m^{-1/2} , {\ \text{a.s.}},$$ for $c >0$ and all sufficiently large $m$. For any $n \in {{\mathbb N}}$, set $k(n) := \min\{ k \geq 2 : \gamma_k > n \}$, so that $\gamma_{k(n)-1} \leq n < \gamma_{k(n)}$ for $k(n) \in \{2,3,\ldots\}$. Note $k(n) < \infty$ and $\lim_{n \to \infty} k(n) = \infty$, a.s. Then we claim $$\begin{aligned} \label{eq:ratio} \frac{1}{n} \sum_{k=0}^{n-1} {{\mathbf 1}{\{ X_k \in B_{r} \}}} \leq \frac{\sum_{k =1}^{k(n)-1} \left( \eta_{k} - \gamma_k \right)}{\sum_{k =1}^{k(n)-2} \left( \gamma_{k+1} - \eta_k \right)} .\end{aligned}$$ This is easiest to see by considering two separate cases. First, if $\eta_{k(n) -1} < n < \gamma_{k(n)}$, $$\frac{1}{n} \sum_{k=0}^{n-1} {{\mathbf 1}{\{ X_k \in B_{r} \}}} \leq \frac{1}{\eta_{k(n)-1}} \sum_{k=0}^{\eta_{k(n)-1}} {{\mathbf 1}{\{ X_k \in B_{r} \}}},$$ which implies , since the set of $k$ less than $n$ for which $X_k \in B_{r}$ is contained in the set $\cup_{k=1}^{k(n)-1} [\gamma_k , \eta_{k})$. On the other hand, if $\gamma_{k(n)-1} \leq n \leq \eta_{k(n) -1}$, using the elementary inequality $\frac{a}{b} \leq \frac{a+c}{b+c}$ for non-negative $a,b,c$ with $a/b \leq 1$, we have $$\begin{aligned} \frac{1}{n} \sum_{k=0}^{n-1} {{\mathbf 1}{\{ X_k \in B_{r} \}}} \leq \frac{1}{\eta_{k(n) -1}} \left(\sum_{k=0}^{n-1} {{\mathbf 1}{\{ X_k \in B_{r} \}}} + (\eta_{k(n) -1} -n) \right) , \end{aligned}$$ which again gives . To estimate the growth rates of the numerator and denominator of the right-hand side of , we apply some results from [@hmmw]. First, writing $Z_m = \sum_{k=1}^{m-1} (\eta_k - \gamma_k)$ and ${{\mathcal G}}_m = {{\mathcal F}}_{\gamma_m}$, by we can apply Theorem 2.4 of [@hmmw] to the ${{\mathcal G}}_m$-adapted process $Z_m$ to obtain that for any ${\varepsilon}>0$, a.s., for all but finitely many $m$, $$\sum_{k=1}^{m -1} \left( \eta_{k} - \gamma_k \right) \leq m^{1+{\varepsilon}} .$$ On the other hand, writing $Z_m = \sum_{k=1}^{m-1} (\gamma_{k+1} - \eta_k)$ and ${{\mathcal G}}_m = {{\mathcal F}}_{\eta_m}$, by we can apply Theorem 2.6 of [@hmmw] to the ${{\mathcal G}}_m$-adapted process $Z_m$ to obtain that for any ${\varepsilon}>0$, for all $m$ sufficiently large, $$\sum_{k =1}^{m -1} \left( \gamma_{k+1} - \eta_k \right) \geq m^{2-{\varepsilon}} .$$ Now gives the almost-sure version of the result . The $L^q$ version follows from the bounded convergence theorem. It remains to complete the proof of Lemma \[lem:null-estimate\]. A more general, two-sided version of the inequality in Lemma \[lem:null-estimate\] is proved in [@hmw1 Theorem 2.4] but under slightly different assumptions. Because of this, we cannot apply that result directly; nevertheless, the proof techniques naturally transfer to our setting. In doing so, the arguments become simpler to apply, so we reproduce them here. 
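Before turning to the proof of Lemma \[lem:null-estimate\], the flavour of the bound can be illustrated numerically. The following Monte Carlo sketch is purely illustrative and not part of the argument; the choice of walk (a spatially homogeneous planar walk with zero-mean, identity-covariance Gaussian increments, so that $U=1$, $V=2$ and the walk is recurrent), the radii and the sample sizes are our own assumptions. It starts the walk outside $B_R$ and reports the empirical tail of the return time $\tau_r$, which should lie above a multiple of $m^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def return_time(start, r, max_steps):
    """Steps until the walk first enters the ball B_r, capped at max_steps."""
    x = np.array(start, dtype=float)
    for n in range(1, max_steps + 1):
        x += rng.standard_normal(2)     # zero drift, identity covariance: U = 1, V = 2
        if x @ x <= r * r:
            return n
    return max_steps + 1                # censored: tau_r exceeds max_steps

r, R, max_steps, trials = 2.0, 10.0, 10_000, 500
taus = np.array([return_time((R + 1.0, 0.0), r, max_steps) for _ in range(trials)])

for m in (100, 1_000, 10_000):
    print(f"m = {m:>6}: empirical P[tau_r > m] = {np.mean(taus > m):.2f}"
          f"  (compare m**-0.5 = {m ** -0.5:.3f})")
```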
By the Markov property for $X$ it is enough to prove the statement for $n=0$, namely that there exists finite $r_1 \geq r_0$ such that for any $r > r_1$ and $R>r$ there exists a finite positive constant $c$ such that, if $X_0 \not\in B_R$ then $${{\mathbb P}}[ \tau_r > m \mid X_0 ] \geq c m^{-1/2},$$ for sufficiently large $m$. We outline the two intuitive steps in the proof. First we show that the probability that $\max_{0 \leq k \leq \tau_r} \|X_k\|$ exceeds some large $x$ is bounded below by a constant times $1/x$. Second, we show that if the latter event does occur, with probability at least $1/2$ it takes the process time at least a constant times $x^2$ to reach $B_r$. Combining these two estimates will show that with probability of order $1/x$ the walk takes time of order $x^2$ to reach $B_r$, which gives the desired tail bound. Roughly speaking, the first estimate (reaching distance $x$) is provided by the optional stopping theorem and the fact that $\| X_k \|$ is a submartingale (cf. [@hmw1 Theorem 2.3]), and the second (taking quadratic time to return) is provided by a maximal inequality applied to an appropriate quadratic displacement functional (cf. [@hmw1 Lemma 4.11]). A technicality required for the first estimate is that to apply optional stopping, we need uniform integrability; so we actually work with a truncated version of $\| X_k\|$. We now give the details. Recall that $R_k = \|X_k\|$ and let ${{\mathcal F}}_k = \sigma(X_0,\dots,X_k)$. Lemmas \[lemma1\] and \[lemma2\], with the fact that $V >U$ by , imply that $$\label{eqn:diff-R-bound} \operatorname{\mathbb{E}}[ R_{k+1} - R_k \mid {{\mathcal F}}_k ] \geq \frac{2{\varepsilon}}{R_k} + o(R_k^{-1}) \geq \frac{{\varepsilon}}{R_k},$$ for all $R_k > r_1$, for sufficiently large $r_1 \geq r_0$ and some positive constant ${\varepsilon}$. Now, suppose that $r$ and $R$ satisfy $R> r>r_1$ and fix $x$ with $x \gg R$. Set $R_k^x := \min\{2x,R_k\}$ and $\sigma_x := \min\{k \geq 0 : R_k > x \}$. Since $X_k$ is a martingale, we have that $R_k$ is a submartingale, as is the stopped process $Y_k := R_{k \wedge \tau_r \wedge \sigma_x}$. In order to achieve uniform integrability, we consider the truncated process $Y_k^x := R_{k \wedge \tau_r \wedge \sigma_x}^x$ and show that this is a submartingale. For $k \geq \tau_r \wedge \sigma_x$, we have $Y_{k+1}^x - Y_k^x = 0$ so $\operatorname{\mathbb{E}}[Y_{k+1}^x -Y_k^x \mid {{\mathcal F}}_k ] = 0$. For $k < \tau_r \wedge \sigma_x$, $$Y_{k+1}^x - Y_k^x = R_{k+1} - R_k + (2x - R_{k+1}) {{\mathbf 1}{\{R_{k+1} > 2x\}}},$$ and the last term can be bounded in absolute value: $$\begin{split} | (2x - R_{k+1}) {{\mathbf 1}{\{R_{k+1} > 2x \}}} | &\leq | R_{k+1}-R_k| {{\mathbf 1}{\{R_{k+1} > 2x \}}} \\ &\leq |R_{k+1}-R_k| {{\mathbf 1}{\{|R_{k+1}-R_k| > x \}}}\\ &\leq |R_{k+1}-R_k|^p x^{1-p}, \end{split}$$ for $p>2$ as appearing in , since on $\{ k < \sigma_x\}$ we have $R_k < x$ and therefore $R_{k+1}> 2x$ implies that $|R_{k+1} - R_k| > x$. Applying  from Lemma \[lemma1\] we obtain $$\operatorname{\mathbb{E}}[ |(Y_{k+1}^x - Y_k^x) - (R_{k+1} - R_k) | \mid {{\mathcal F}}_k ] \leq Bx^{1-p},$$ for some $B<\infty$ not depending on $x$. Combining this with and again the fact that $R_k < x$ on $\{ k < \sigma_x\}$, we have that $$\operatorname{\mathbb{E}}[ Y_{k+1}^x - Y_k^x \mid {{\mathcal F}}_k ] \geq \frac{{\varepsilon}}{R_k} - Bx^{1-p} \geq \frac{{\varepsilon}}{x} - Bx^{1-p} \geq 0,$$ for sufficiently large $x$. 
Hence, for sufficiently large $x$, $Y_k^x$ is a uniformly integrable submartingale and therefore, given $X_0 \not\in B_R$, by optional stopping, $$\begin{split} R < R_0 = Y^x_0 \leq \operatorname{\mathbb{E}}[ Y_{\sigma_x \wedge \tau_r}^x \mid X_0 ] &= \operatorname{\mathbb{E}}[ Y_{\sigma_x}^x {{\mathbf 1}{\{\sigma_x < \tau_r\}}} \mid X_0 ] + \operatorname{\mathbb{E}}[ Y_{\tau_r}^x{{\mathbf 1}{\{\tau_r < \sigma_x\}}} \mid X_0 ] \\ &\leq 2x {{\mathbb P}}[ \sigma_x < \tau_r \mid X_0 ] + r. \end{split}$$ In other words, given $X_0 \not\in B_R$, $$\label{eqn:max-tail} {{\mathbb P}}\Big[ \max_{0 \leq k \leq \tau_r} R_k > x {\; \Bigl| \;}X_0 \Big] \geq \frac{R-r}{2x},$$ for all sufficiently large $x$. Now, consider $W_k := R_{\sigma_x+k} - R_{\sigma_x}$, adapted to ${{\mathcal G}}_k := {{\mathcal F}}_{\sigma_x+k}$. We have $$W_{k+1}^2 - W_k^2 = R_{\sigma_x+k+1}^2 - R_{\sigma_x+k}^2 - 2R_{\sigma_x}(R_{\sigma_x+k+1}-R_{\sigma_x+k}).$$ Using the fact that $R_k$ is a submartingale together with the strong Markov property for $X$ at the stopping time $\sigma_x+k$ yields $\operatorname{\mathbb{E}}[ R_{\sigma_x+k+1}-R_{\sigma_x+k} \mid {{\mathcal F}}_{\sigma_x+k} ] \geq 0 {\ \text{a.s.}}$, and Lemmas \[lemma1\] and \[lemma2\] again with the strong Markov property imply that $\operatorname{\mathbb{E}}[ R_{\sigma_x+k+1}^2-R_{\sigma_x+k}^2 \mid {{\mathcal F}}_{\sigma_x+k} ] \leq C {\ \text{a.s.}}, $ for some constant $C<\infty$; hence $\operatorname{\mathbb{E}}[W_{k+1}^2- W_k^2 \mid {{\mathcal G}}_k ] \leq C {\ \text{a.s.}}$, for some constant $C< \infty$. Then a maximal inequality [@mvw Lemma 3.1] similar to Doob’s submartingale inequality implies that, on $\{\sigma_x < \infty\}$, $${{\mathbb P}}\Big[ \max_{0 \leq k \leq n} W_k^2 \geq y {\; \Bigl| \;}{{\mathcal G}}_0 \Big] \leq \frac{Cn}{y}, \text{ for any } y >0.$$ In particular, we may choose ${\varepsilon}>0$ small enough so that $$\label{eqn:long-return} {{\mathbb P}}\Big[ \max_{0 \leq k \leq {\varepsilon}x^2} |R_{\sigma_x+k} - R_{\sigma_x}| \geq x/2 {\; \Bigl| \;}{{\mathcal F}}_{\sigma_x} \Big] \leq \frac{1}{2}, \text{ on } \{\sigma_x < \infty\}.$$ Combining the inequalities  and , we find that given $X_0 \not\in B_R$, $$\begin{split} {{\mathbb P}}\Big[\big\{\max_{0 \leq k \leq \tau_r} R_k > x \big\} &\cap \big\{ \max_{0 \leq k \leq {\varepsilon}x^2} |R_{\sigma_x+ k} - R_{\sigma_x}| < x/2 \big\} {\; \Bigl| \;}X_0 \Big]\\ &= \operatorname{\mathbb{E}}\Big[ {{\mathbf 1}{\{\sigma_x < \tau_r\}}} {{\mathbb P}}\big[ \max_{0 \leq k \leq {\varepsilon}x^2} |R_{\sigma_x+k} - R_{\sigma_x}| < x/2 {\; \bigl| \;}{{\mathcal F}}_{\sigma_x} \big] {\; \Bigl| \;}X_0 \Big]\\ &\geq \frac{1}{2} {{\mathbb P}}\Big[ \max_{0 \leq k \leq \tau_r} R_k > x {\; \Bigl| \;}X_0 \Big] \geq \frac{R-r}{4x}, \end{split}$$ for sufficiently large $x$, where the equality here uses the fact that $\{ \sigma_x < \tau_r \} \in {{\mathcal F}}_{\sigma_x}$. If both of the events $\{ \max_{0 \leq k \leq \tau_r} R_k > x \}$ and $ \{ \max_{0 \leq k \leq {\varepsilon}x^2} |R_{\sigma_x+k} - R_{\sigma_x}| < x/2 \}$ occur, then the process $X_k$ leaves the ball $B_x$ before time $\tau_r$ and takes more than ${\varepsilon}x^2$ steps to return to the ball $B_{x/2} \subset B_r$, and therefore $\tau_r > {\varepsilon}x^2$. Setting $m = {\varepsilon}x^2$ and $c = (R-r)\sqrt{{\varepsilon}}/4$ yields the claimed inequality. It is only in the proof of Lemma \[lem:null-estimate\] that we use the condition $U < V$ from . 
In the case $U=V$, inequality holds only for (any) ${\varepsilon}<0$, and not ${\varepsilon}>0$; thus to obtain a submartingale one should look at $( Y^x_k )^\gamma$ for $\gamma >1$. The modified argument yields a weaker version of , with $m^{-1/2}$ replaced by $m^{-(1/2)-\delta}$ for any $\delta >0$, but, as stated in Remark \[rem:UequalsV\], this is still comfortably enough to give Theorem \[thm:null\] (any exponent greater than $-1$ in the tail bound will do). We omit these additional technical details, as the case $U=V$ is outside our main interest. Recurrence in one dimension =========================== We use a Lyapunov function method with function $f(x) = \log ( 1 + |x| )$. \[l:zero\_drift\_implies\_recurrence\] Suppose that $X$ is a discrete-time, time-homogeneous Markov process on ${{\mathbb X}}\subseteq {{\mathbb R}}$. Suppose that for some $p>2$ and $v >0$, $$\begin{aligned} \sup_{x \in {{\mathbb X}}} \operatorname{\mathbb{E}}[ ( X_{n+1} - X_n )^p \mid X_n = x] & < \infty ; \\ \inf_{x \in {{\mathbb X}}} \operatorname{\mathbb{E}}[ ( X_{n+1} - X_n )^2 \mid X_n = x] & \geq v .\end{aligned}$$ Suppose also that for some bounded set $A \subset {{\mathbb R}}$, $$\operatorname{\mathbb{E}}[ X_{n+1} - X_n \mid X_n = x] = 0, \text{ for all } x \in {{\mathbb X}}\setminus A .$$ Then there exists a bounded set $A' \subset {{\mathbb R}}$ for which $$\operatorname{\mathbb{E}}[ f(X_{n+1} ) - f (X_n) \mid X_n = x ] \leq 0, \text{ for all } x \in {{\mathbb X}}\setminus A' .$$ Write $\Delta = X_{1} - X_0$ and $E_x = \{ | \Delta | < | x | \}$. We compute $$\begin{aligned} \operatorname{\mathbb{E}}[ f(X_{n+1} ) - f (X_n) \mid X_n = x ] & = \operatorname{\mathbb{E}}_x [ ( f(x + \Delta ) - f (x) ) {{\mathbf 1}{( E_x )}} ] \\ &{} ~~ {} + \operatorname{\mathbb{E}}_x [ ( f(x + \Delta ) - f (x) ) {{\mathbf 1}{( E_x^{{\mathrm{c}}})}} ] .\end{aligned}$$ On $\{ | \Delta | < | x | \}$ we have that $x$ and $x + \Delta$ have the same sign, so $$\begin{aligned} & {}~~~{} \operatorname{\mathbb{E}}_x [ ( f(x + \Delta ) - f (x) ) {{\mathbf 1}{( E_x )}} ] \\ & = \operatorname{\mathbb{E}}_x \left[ \log \left( \frac{ 1 + | x + \Delta |}{1+|x|} \right) {{\mathbf 1}{( E_x )}} \right] \\ & = \operatorname{\mathbb{E}}_x \left[ \log \left( 1 + \frac{ \Delta \operatorname{sgn}(x)}{1+|x|} \right) {{\mathbf 1}{( E_x )}} \right] \\ & \leq \left( \frac{ \operatorname{sgn}(x)}{1+|x|} \right) \operatorname{\mathbb{E}}_x [ \Delta {{\mathbf 1}{( E_x )}} ] - \frac{1}{6} ( 1+|x| )^{-2} \operatorname{\mathbb{E}}_x [ \Delta^2 {{\mathbf 1}{( E_x )}} ], \end{aligned}$$ using the inequality $\log (1+ y) \leq y - \frac{1}{6} y^2$ for all $-1 < y \leq 1$. Here, since $ \operatorname{\mathbb{E}}_x [ \Delta ] =0$ for $x \not\in A$, $$| \operatorname{\mathbb{E}}_x [ \Delta {{\mathbf 1}{( E_x )}} ] | \leq \operatorname{\mathbb{E}}_x [ | \Delta | {{\mathbf 1}{( E^{{\mathrm{c}}}_x )}} ] \leq \operatorname{\mathbb{E}}_x [ | \Delta |^p |x|^{1-p} ] = o(|x|^{-1} ) .$$ Similarly, $$\operatorname{\mathbb{E}}_x [ \Delta^2 {{\mathbf 1}{( E_x )}} ] \geq v - \operatorname{\mathbb{E}}_x [ \Delta^2 {{\mathbf 1}{( E^{{\mathrm{c}}}_x )}} ] \geq v- o(1) .$$ (Note that here, and in what follows, our notation follows the convention as described by ; consequently, in one dimension the error terms are understood to be uniform as either $x \to +\infty$, or $x \to -\infty$.) 
Finally we estimate the term $$\left| \operatorname{\mathbb{E}}_x [ ( f(x + \Delta ) - f (x) ) {{\mathbf 1}{( E_x^{{\mathrm{c}}})}} ] \right| \leq \operatorname{\mathbb{E}}_x \left[ \left( \log ( 1 + | \Delta | ) + \log ( 1 + 2 |\Delta | ) \right) {{\mathbf 1}{( E_x^{{\mathrm{c}}})}} \right] .$$ Here, $$\begin{aligned} \log ( 1 + 2 |\Delta | ) {{\mathbf 1}{( E_x^{{\mathrm{c}}})}} &= \log ( 1 + 2 |\Delta | ) |\Delta|^p|\Delta|^{-p} {{\mathbf 1}{( E_x^{{\mathrm{c}}})}} \\ &\leq |x|^{-p} \log ( 1 +2 |x| ) |\Delta|^p,\end{aligned}$$ for all $x$ with $|x|$ greater than some $x_0$ sufficiently large, using the fact that $y \mapsto y^{-p}\log(1+2y)$ is eventually decreasing. It follows that $$\left| \operatorname{\mathbb{E}}_x [ ( f(x + \Delta ) - f (x) ) {{\mathbf 1}{( E_x^{{\mathrm{c}}})}} ] \right| \leq 2|x|^{-p} \log ( 1 +2 |x| )\operatorname{\mathbb{E}}_x[|\Delta|^p] = o(|x|^{-2}).$$ Combining these calculations we obtain $$\begin{aligned} \operatorname{\mathbb{E}}[ f(X_{n+1} ) - f (X_n) \mid X_n = x ] & \leq \left( \frac{\operatorname{sgn}(x)}{1 +|x|} \right) o ( |x |^{-1} ) - \frac{1}{6} (1 + |x| )^{-2} ( v - o(1) ) + o( |x|^{-2} ) \\ & \leq - \frac{v}{6} ( 1+|x| )^{-2} + o (|x|^{-2} ) , \end{aligned}$$ which is negative for all $x$ with $|x|$ sufficiently large. Under assumptions , and , the hypotheses of Lemma \[l:zero\_drift\_implies\_recurrence\] are satisfied, so that for some $x_0 \in {{\mathbb R}}_+$, $$\operatorname{\mathbb{E}}[ f(X_{n+1} ) - f (X_n) \mid X_n = x ] \leq 0, \text{ for all $x \in {{\mathbb X}}$ with $|x| \geq x_0$},$$ where $f(x) = \log(1+|x|)$. We note that assumption implies that $\operatorname{\mathbb{E}}[|X_n|] < \infty$ for all $n$, and therefore $\operatorname{\mathbb{E}}[f(X_n)] < \infty$ for all $n$. Let $n_0 \in {{\mathbb N}}$ and set $\tau = \min \{n \geq n_0: |X_n| \leq x_0 \}$. Let $Y_n = f(X_{n \wedge \tau})$. Then $(Y_n, n \geq n_0)$ is a non-negative supermartingale, and hence there exists a random variable $Y_\infty \in {{\mathbb R}}_+$ with $\lim_{n\to\infty}Y_n = Y_\infty$, a.s. In particular, this means that $$\limsup_{n\to\infty}f(X_n) \leq Y_\infty, \text{ on } \{\tau = \infty\}.$$ Setting $\zeta = \sup \{ |x| : x \in {{\mathbb X}}, f(x) \leq Y_\infty\}$, which satisfies $\zeta<\infty$, a.s., since $f(x) \to \infty$ as $|x| \to \infty$, it follows that $\limsup_{n\to\infty}|X_n| \leq \zeta$ on $\{\tau = \infty \}$. However, under assumptions , and , Proposition \[lem:lim\_sup\_infty\] implies that $\limsup_{n\to\infty}|X_n| = +\infty$, a.s., so to avoid contradiction, we must have $\tau < \infty$, a.s. In other words, $${{\mathbb P}}\Big[ \inf_{n \geq n_0}|X_n| \leq x_0 \Big] = 1,$$ and since $n_0$ was arbitrary, it follows that $${{\mathbb P}}\Big[ \bigcap_{n_0 \in {{\mathbb N}}} \big\{\inf_{n \geq n_0}|X_n| \leq x_0\big\}\Big] = 1,$$ which gives the result. Acknowledgements {#acknowledgements .unnumbered} ================ Part of this work was supported by the Engineering and Physical Sciences Research Council \[grant number EP/J021784/1\]. An antecedent of this work, concerning only the elliptic random walk in two dimensions, was written down in 2008–9 by MM and AW, who benefited from stimulating discussions with Iain MacPhee (7/11/1957–13/1/2012). The present authors also thank Stas Volkov for a comment that inspired Remark \[rem:ellipse\]. [99]{} H.C. Berg, *Random Walks in Biology.* Expanded Edition, Princeton University Press, Princeton, 1993. N.H. Bingham, Random walk on spheres. *Z. Wahrschein. verw. Gebiete* [**22**]{} (1972), 169–192. 
B. Carazza, The history of the random-walk problem: considerations on the interdisciplinarity in modern physics. *Rivista del Nuovo Cimento Serie 2* [**7**]{} (1977) 419–427. K.L. Chung and W.H.J. Fuchs, On the distribution of values of sums of random variables. *Mem. Amer. Math. Soc.* [**6**]{} (1951) 12pp. A. Gut, *Probability: A Graduate Course*, Springer, Uppsala, 2005. O. Hryniv, I.M. MacPhee, M.V. Menshikov, and A.R. Wade, Non-homogeneous random walks with non-integrable increments and heavy-tailed random walks on strips. *Electr. J. Probab.* [**17**]{} (2012) article 59, 28pp. O. Hryniv, M.V. Menshikov, and A.R. Wade, Excursions and path functionals for stochastic processes with asymptotically zero drifts. *Stoch. Process. Appl.* [**123**]{} (2013) 1891–1921. B.D. Hughes, *Random Walks and Random Environments. Volume 1: Random Walks.* Clarendon Press, Oxford, 1995. O. Kallenberg, *Foundations of Modern Probability*. 2nd ed., Springer, New York, 2002. J.H.B. Kemperman, The oscillating random walk. *Stoch. Process. Appl.* [**2**]{} (1974) 1–29. J.F.C. Kingman, Random walks with spherical symmetry. *Acta Math.* [**109**]{} (1963) 11–63. J. Lamperti, Criteria for the recurrence and transience of stochastic processes I. *J. Math. Anal. Appl.* [**1**]{} (1960) 314–330. M.V. Menshikov, M. Vachkovskaia, and A.R. Wade, Asymptotic behaviour of randomly reflecting billiards in unbounded tubular domains. [*J. Statist. Phys.*]{} (2008) 1097–1133. R. Nossal, Stochastic aspects of biological locomotion. [*J. Statist. Phys.*]{} [**30**]{} (1983) 391–400. R.J. Nossal and G.H. Weiss, A generalized Pearson random walk allowing for bias. [*J. Statist. Phys.*]{} [**10**]{} (1974) 245–253. K. Pearson and Lord Rayleigh, The problem of the random walk. [*Nature*]{} [**72**]{} (1905) pp. 294, 318, 342. K. Pearson, *A Mathematical Theory of Random Migration*. Drapers’ Company Research Memoirs, Dulau and Co., London, 1906. Y. Peres, S. Popov, and P. Sousi, On recurrence and transience of self-interacting random walks. [*Bull. Brazilian Math. Soc.*]{} [**44**]{} (2013) 841–867. B.A. Rogozin and S.G. Foss, The recurrence of an oscillating random walk. *Theor. Probability Appl.* [**23**]{} (1978) 155–162. Translated from *Teor. Veroyatn. Primen.* [**23**]{} (1978) 161–169. O. Zeitouni, *Lecture Notes on Random Walks in Random Environment*. 9th May 2006 version, <http://www.wisdom.weizmann.ac.il/~zeitouni/>. [^1]: Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE, UK. [^2]: Heilbronn Institute for Mathematical Research, School of Mathematics, University of Bristol, University Walk, Bristol, BS8 1TW. [^3]: Department of Mathematics, Imperial College London, 180 Queen’s Gate, London SW7 2AZ, UK.
--- abstract: 'The giant radio galaxy M87 was observed at TeV energies with the Cherenkov telescopes of the H.E.S.S. collaboration (High Energy Stereoscopic System). The observations have been performed in the year 2003 during the comissioning phase and in 2004 with the full four telescope setup. The observations were motivated by the measurement of the HEGRA collaboration which reported a $4.7 \, \sigma$ excess of TeV $\gamma$-rays from the direction of M87. The results of the H.E.S.S. observations – indicating a possible variability of TeV $\gamma$-ray emission from M87 (compared to the HEGRA result) – are presented.' author: - 'M. Beilicke, R. Cornils, G. Heinzelmann, M. Raue, J. Ripken' - 'W. Benbow, D. Horns' - 'M. Tluczykont' - 'for the H.E.S.S. collaboration' title: 'Observation of the giant radio galaxy M87 at TeV energies with H.E.S.S.' --- Introduction ============ The giant radio galaxy M87 is located at a distance of $\sim16\,\mathrm{Mpc}$ ($z=0.00436$) in the Virgo cluster of galaxies. The angle between the parsec scale plasma jet – well studied at radio, optical and X-ray wavelengths – and the observer’s line of sight has been estimated to be in the order of $20^{\circ}-40^{\circ}$. The mass of the black hole in the center of M87 is of the order of $2-3 \, \cdot \, 10^{9}\,M_{\odot}$. M87 is discussed to be a powerful accelerator of high energy particles, possibly even up to the highest energies [@M87_UHECR_1; @M87_UHECR_2]. This makes M87 an interesting candidate for TeV $\gamma$-ray emission. M87 was observed with the HEGRA stereoscopic telescope system in 1998/1999 for a total of $77\,\mathrm{h}$ (after quality cuts) above an energy threshold of $730\,\mathrm{GeV}$. An excess of TeV $\gamma$-rays has been found with a significance of $4.7\,\sigma$ [@HEGRA_M87_1; @HEGRA_M87_2]. The integral flux was calculated to be $3.3\%$ of the flux of the Crab Nebula. M87 is of particular interest for observations at TeV energies: The large jet angle makes it different from the so far observed TeV emitting active galactic nuclei (AGN) which are of the blazar type, i.e. with their plasma jets pointing directly towards the observer. Various models exist to describe emission of TeV photons from M87. Leptonic models (i.e. inverse Compton scattering) are discussed in [@M87_Leptonic], whereas [@M87_Jets] consider the TeV $\gamma$-ray production in large scale plasma jets. From the experimental view, the TeV $\gamma$-ray production in large scale jets would be of particular interest since the extension of the M87 jet structure could be resolved at TeV energies with the typical angular resolution of stereoscopic Cherenkov telescope arrays of $\le 0.1^{\circ}$ per event. Hadronic models do also exist [@M87_Hadronic_1; @M87_Hadronic_2] as well as TeV $\gamma$-ray production scenarios correlated with the cosmic ray population of the radio galaxy [@M87_Hadronic_3]. Finally, the hypothesis of annihilating exotic particles (i.e. neutralinos) has been discussed by [@M87_Neutralino]. Observations with the H.E.S.S. telescopes have been initiated to confirm the HEGRA result and to further clarify the origin of the TeV $\gamma$-ray emission, The H.E.S.S. experiment ======================= ![image](HessT.eps){width="98.00000%"} The High Energy Stereoscopic System (H.E.S.S.) collaboration operates an array of four imaging atmospheric Cherenkov telescopes optimized for an energy range between $100 \, \mathrm{GeV}$ and $10\, \mathrm{TeV}$. 
The telescopes are located in the Khomas Highlands in Namibia ($23\mathrm{d}\,16'\,18''\,\mathrm{S}$, $16\mathrm{d}\,30'\,1''\,\mathrm{E}$) at a height of $1\,800\,\mbox{m}$ above sea level, see Fig. \[fig:Hess\]. Each telescope has a $107\,\mbox{m}^{2}$ tessellated mirror surface [@HESS_Mirror1; @HESS_Mirror2] and is equipped with a 960 photomultiplier tube (PMT) camera with a field of view of $\sim 5^{\circ}$ [@HESS_Camera]. The full four telescope array is operational since December, 2003. Since July 2003 the telescopes are operated in a coincident mode [@HESS_Trigger] assuring that at least two telescopes record images for each event which is important for an improved reconstruction of the shower geometry, and $\gamma$-hadron separation. More information about H.E.S.S. can be found in [@HESS_Status]. Observations of M87 with H.E.S.S. ================================= M87 has been observed with the H.E.S.S. Cherenkov telescopes between March and May, 2003 and February to May, 2004. The 2003 data were taken during the comissioning phase of the experiment with only two telescopes. The stereo events have been merged offline based on their individual GPS time stamps. The 2004 data were taken with the full four telescope array with the hardware coincidence trigger. The sensitivity of the full setup increased by more than a factor of two compared to the sensitivity of the instrument during the 2003 observation campaign on M87. The average zenith angle of the observations was $\sim 40^{\circ}$ for both years. Due to technical reasons one of the four telescopes was excluded from the analysis in the February/March 2004 observation period affecting $\sim 9 \, \mathrm{h}$ of the data by a slightly reduced sensitivity. Standard cuts on the data quality (stable weather and detector status) have been applied leaving a dead-time corrected observation time of $13\,\mathrm{h}$ for the 2003 data and $32\,\mathrm{h}$ for the 2004 data. After data calibration [@HESS_Calibration] and application of image cleaning tail-cuts Hillas parameters [@HILLAS_Parameters] are calculated for the individual recorded images. The geometric shower reconstruction (direction, energy, etc.) follows the standard H.E.S.S. analysis technique [@HESS_Analysis]. Results ======= Cuts which were optimized on Monte Carlo simulated sources comprising $10 \%$ of the flux of the Crab Nebula have been applied to the data including a tight angular cut of $\Delta \Theta^{2} < 0.0125 \, \mathrm{deg}^{2}$. Although the [*s*oftware stereo]{} telescope setup in 2003 would legitimate a separately optimized set of cuts, the same cuts as for the 2004 data were applied to the 2003 data[^1]. The distribution of excess events as a function of the squared angular distance $\Delta\Theta^{2}$ between the reconstructed shower direction and the nominal position of M87 is shown in Fig. \[fig:OnOff\] for the combined data set. An excess of $216 \pm 49$ events is obtained from the direction of M87 corresponding to a significance of $4.6 \, \sigma$. The sky map of the combined data set is shown in Fig. \[fig:SkyMap\]. The ring background model was used in which the background is determined from a ring region with a radius $r = 0.5^{\circ}$ centred around the putative source position. The position of the TeV excess as measured by H.E.S.S. is plotted in the radio map of M87 together with the TeV position reported by HEGRA (see Fig. \[fig:RadioTeVMap\]). 
The TeV excess is compatible with a point-source and its position was found to be compatible with the center of the extended structure of M87 as well as the position reported by the HEGRA collaboration within statistical errors. More observations are needed to reduce the statistical error on the derived position to further exclude regions within the extended structure of M87. ![Distribution of ON-source events (solid histogram) and normalized OFF-source events (filled histogram) vs. the squared angular distance $\Delta\Theta^{2}$ between the reconstructed shower direction and the nominal object position.[]{data-label="fig:OnOff"}](HESS_M87_OnOff.eps){width="50.00000%"} ![The TeV excess sky map showing a $3^{\circ} \times 3^{\circ}$ sky region centered around the position of M87. The number of events are integrated within the optimal point-source angular cut of $\Theta \le 0.11^{\circ}$ for each of the correlated bins. The background is estimated using the ring background model. The event-by-event angular resolution is indicated by the circle (beamsize).[]{data-label="fig:SkyMap"}](HESS_M87_SkyMap.eps){width="50.00000%"} ![The positions of the H.E.S.S. TeV excess (white cross) together with the position measured by HEGRA (filled cross) plotted in the radio map of M87 (adopted from [@M87RadioMap]). Within statistical errors, both positions are compatible with the M87 central region.[]{data-label="fig:RadioTeVMap"}](M87.eps){width="50.00000%"} ![image](HESS_M87_LightCurve.eps){width="80.00000%"} In order to calculate the integral flux above the energy threshold of the HEGRA measurement ($730 \, \mathrm{GeV}$) for the 2003 and 2004 H.E.S.S. observations a power-law spectrum $\mathrm{d}N/\mathrm{d}E \sim E^{-\Gamma}$ with a photon index of $\Gamma = 2.9$ (as reported in [@HEGRA_M87_2]) was assumed. The integral flux points are shown in Fig. \[fig:LightCurve\] together with the HEGRA measurement. Note, that the 2004 flux of M87 is below the $1 \%$ flux level of the Crab Nebula. Although the H.E.S.S. fluxes of 2003 and 2004 seem still to be compatible – taken the large statistical error of the 2003 measurement – the comparison of the H.E.S.S. 2004 flux with the HEGRA measurement indicates variable TeV $\gamma$-ray emission from M87 on time-scales of years. Summary & Conclusion ==================== The giant radio galaxy M87 has been observed with H.E.S.S. in 2003 during the comissioning phase and in 2004 with the full four telescope setup for a total of $45\,\mathrm{h}$ remaining after quality cuts. An excess of $216 \pm 49$ $\gamma$-ray events has been measured from the direction of M87 with a significance of $4.6\,\sigma$. This confirms the HEGRA measurement, although on a lower flux level. The TeV excess is compatible with a point-source and within statistics its position is located at the center of the extended M87 structure. The measured flux was found to be on the sub $1\,\%$ level of the flux from the Crab Nebula in the 2004 data which indicates flux variability if compared to the $3.3\,\%$ flux level in 1998/99 reported by the HEGRA collaboration. To confirm the indications of variability of the TeV $\gamma$-ray emission from M87 more observations are needed. Such a result would be very important since various models for the TeV $\gamma$-ray production in M87 could be ruled out. 
Mechanisms correlated with cosmic rays [@M87_Hadronic_3], large scale jet structures [@M87_Jets] and exotic dark matter particle annihilation [@M87_Neutralino] could not explain variability in the TeV $\gamma$-ray emission on these time-scales. The measurement of an accurate energy spectrum could further help to reduce the amount of possible models as well as a more precise location of the emission region; both goals require a measurement with higher event statistics as currently available by the H.E.S.S. 2003/2004 data set. M87 has been observed in a wide range of the electromagnetic spectrum. Simultaneous observations at other wavelengths – especially in X-rays, such as the Chandra monitoring of the HST-1 knot in the inner jet region [@Chandra_HST-1] – are of great importance since a correlation would further reveal the TeV $\gamma$-ray production mechanism of this AGN which is the first one not belonging to the blazar class. The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S. is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the French Ministry for Research, the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the CNRS, the U.K. Particle Physics and Astronomy Research Council (PPARC), the IPNP of the Charles University, the South African Department of Science and Technology and National Research Foundation, and by the University of Namibia. We appreciate the excellent work of the technical support staff in Berlin, Durham, Hamburg, Heidelberg, Palaiseau, Paris, Saclay, and in Namibia in the construction and operation of the equipment. [99]{} Aharonian, F., et al. (H.E.S.S. collab.), 2005, A&A, 430, 865 Bernlöhr, K. et al. (H.E.S.S. collab.), 2003, Astroparticle Physics, 20, 111 Cornils, R. et al. (H.E.S.S. collab.), 2003, Astroparticle Physics, 20, 129 Vincent, P., Denance, J.-P., Huppert, J.-F., et al. (H.E.S.S. collab.), 2003, Proc. of the 28$^{th}$ ICRC (Tsukuba), p.2887 Funk, S., Hermann, G., Hinton, J., et al. (H.E.S.S. collab.), 2004, Astroparticle Physics, 22/3-4, 285-296 Aharonian, F. et al. (H.E.S.S. collab.), 2004, Astroparticle Physics, 22, 109 Hillas, A.M.: 1985, Proc. of 19th ICRC (La Jolla), Vol.3, 445 Aharonian, F., et al. (HEGRA collab.), 2003, A&A, 403, L1 Götting, N. et al. (HEGRA collab.), 2003, The European Physical Journal C - Particles and Fields, see astro-ph/0310308 Bai, J.M., & Lee, M.G., 2001, ApJ, 549, L173 Stawarz, L. et al., 2003, ApJ, 597, 186-201 Protheroe, R.J. et al., 2003, Astroparticle Physics, Vol.19, Issue 4, 559 Reimer, A., et al., 2004, A&A 419, 89-98 Pfrommer, C. & Enslin, T.A., 2003, A&A, 407, L73 Baltz et al., 1999, Physical Review D, 61, 023514 Ginzburg, V. L. & Syrovatskii, S. L., 1964, “The origin of cosmic rays”, Pergamon Press, Oxford Biermann, et al., 2000, Nucl. Phys. B, Proc. Suppl., 87, 417 Owen, F.N., Ledlow, M.J., Eilek, J.A., et al., 2000, Proc. of The Universe at Low Radio Frequencies, ASP Conf. Ser., 199, [*s*ee astro-ph/0006152]{} Harris, D.E. et al., 2003, ApJ, 586, L41 [^1]: Investigations of improved analysis techniques optimized for faint sources are underway.
--- abstract: 'The performance of a Part-of-speech (POS) tagger is highly dependent on the domain of the processed text, and for many domains there is no or only very little training data available. This work addresses the problem of POS tagging noisy user-generated text using a neural network. We propose an architecture that trains an out-of-domain model on a large newswire corpus, and transfers those weights by using them as a prior for a model trained on the target domain (a data-set of German Tweets) for which there is very little annotations available. The neural network has two standard bidirectional LSTMs at its core. However, we find it crucial to also encode a set of task-specific features, and to obtain reliable (source-domain and target-domain) word representations. Experiments with different regularization techniques such as early stopping, dropout and fine-tuning the domain adaptation prior weights are conducted. Our best model uses external weights from the out-of-domain model, as well as feature embeddings, pretrained word and sub-word embeddings and achieves a tagging accuracy of slightly over 90%, improving on the previous state of the art for this task.' author: - Luisa März - Dietrich Trautmann - | Benjamin Roth\ CIS, University of Munich (LMU) Munich, Germany\ [{luisa.maerz, dietrich, beroth}@cis.lmu.de]{} bibliography: - 'DomainAdaptationMaerz.bib' title: 'Domain adaptation for part-of-speech tagging of noisy user-generated text' --- Introduction ============ Part-of-speech (POS) tagging is a prerequisite for many applications and necessary for a wide range of tools for computational linguists. The state-of-the art method to implement a tagger is to use neural networks [@hovy; @yang2018design]. The performance of a POS tagger is highly dependent on the domain of the processed text and the availability of sufficient training data [@schnabel2014flors]. Existing POS taggers for canonical German text already achieve very good results around 97% accuracy, e.g. [@schmid; @DBLP:journals/corr/PlankSG16]. When applying these trained models to out-of-domain data the performance decreases drastically. One of the domains where there is not enough data is online conversational text in platforms such as Twitter, where the very informal language exhibits many phenomena that differ significantly from canonical written language. In this work, we propose a neural network that combines a character-based encoder and embeddings of features from previous non-neural approaches (that can be interpreted as an inductive bias to guide the learning task). We further show that the performance of this already effective tagger can be improved significantly by incorporating external weights using a mechanism of domain-specific L2-regularization during the training on in-domain data. This approach establishes state-of-the-art results of 90.3% accuracy on the German Twitter corpus of @rehbein. Related Work ============ The first POS tagging approach for German Twitter data was conducted by @rehbein and reaches an accuracy of 88.8% on the test set using a CRF. They use a feature set with eleven different features and an extended version of the STTS [@stts] as a tagset. @Gimpel:2011:PTT:2002736.2002747 developed a tagset for English Twitter data and report results of 89.37% on their test set using a CRF with different features as well. POS tagging for different languages using a neural architecture was successfully applied by @DBLP:journals/corr/PlankSG16. 
The data comes from the Universal Dependencies project[^1] and mainly contains German newspaper texts and Wikipedia articles. The work of @barone investigates different regularization mechanisms in the field of domain adaptation. They use the same L2 regularization mechanism for neural machine translation as we do for POS tagging. Data ==== Tagset ------ The Stuttgart-Tübingen-TagSet (STTS, @stts) is widely used as the state-of-the-art tagset for POS tagging of German. @barzt show that the STTS is not sufficient when working with textual data from online social platforms, as online texts do not have the same characteristics as formal-style texts, nor are they identical to spoken language. Online conversational text often contains contracted forms, graphic reproductions of spoken language such as prolongations, interjections and grammatical inconsistencies, as well as a high rate of misspellings, omission of words, etc. For POS tagging we use the tagset of @rehbein, where (following @Gimpel:2011:PTT:2002736.2002747) additional tags are provided to capture peculiarities of the Twitter corpus. This tagset provides tags for @-mentions, hashtags and URLs. It also provides a tag for non-verbal comments such as **\*Trommelwirbel\*** (drum-roll). Additionally, complex tags for amalgamated word forms were used (see @Gimpel:2011:PTT:2002736.2002747). Overall, the tagset used in our target domain contains 15 tags more than the original STTS. Corpora ------- Two corpora with different domains are used in this work. One of them is the TIGER corpus and the other is a collection of German Twitter data. The texts in the TIGER corpus [@brants] are taken from the Frankfurter Rundschau newspaper and cover a two-week period in 1995. The annotation of the corpus was created semi-automatically. The basis for the annotation of POS tags is the STTS. The TIGER corpus is one of the standard corpora for German in NLP and contains 888,505 tokens. The Twitter data was collected by @rehbein over eight months in 2012 and 2013. The complete collection includes 12,782,097 distinct tweets, from which 1,426 tweets were randomly selected for manual annotation with POS tags. The training set is comparably small and holds 420 tweets, whereas the development and test sets hold around 500 tweets each (overall 20,877 tokens). Since this is the only available German annotated Twitter corpus, we use it for this work. Pretrained word vectors ----------------------- The usage of pretrained word embeddings can be seen as a standard procedure in NLP to improve the results with neural networks (see @hovy). FastText -------- FastText[^2] provides pretrained sub-word embeddings for 158 different languages and allows one to obtain word vectors for out-of-vocabulary words. The pretrained vectors for German are based on Wikipedia articles and data from Common Crawl[^3]. We obtain 97,988 different embeddings for the tokens in TIGER and the Twitter corpus, of which 75,819 were already contained in Common Crawl and 22,171 were inferred from sub-word units. Word2Vec -------- Spinningbytes[^4] is a platform for different applications in NLP and provides several solutions and resources for research. They provide word embeddings for different text types and languages, including Word2Vec [@mikolov2013efficient] vectors pretrained on 200 million German Tweets. Overall, 17,030 word embeddings from the Spinningbytes vectors are used (other words are initialized all-zero).
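To make the preparation of these resources concrete, the following sketch (our own illustration: the file names and the use of the `fasttext` and `gensim` packages are assumptions, not a description of the exact scripts used here) builds one embedding matrix per resource over a given vocabulary, backing off to FastText sub-word vectors for out-of-vocabulary tokens and leaving missing Word2Vec rows all-zero:

```python
import numpy as np
import fasttext                                    # the "fasttext" pip package
from gensim.models import KeyedVectors

# Placeholder paths; the actual German FastText/Word2Vec files are assumptions.
ft = fasttext.load_model("cc.de.300.bin")
w2v = KeyedVectors.load_word2vec_format("german_tweet_word2vec.bin", binary=True)

def build_embedding_matrices(vocab):
    """One row per vocabulary word: FastText backs off to sub-word units for
    OOV tokens, Word2Vec rows stay all-zero for words missing from the model."""
    ft_mat = np.zeros((len(vocab), ft.get_dimension()), dtype=np.float32)
    w2v_mat = np.zeros((len(vocab), w2v.vector_size), dtype=np.float32)
    for i, word in enumerate(vocab):
        ft_mat[i] = ft.get_word_vector(word)       # defined even for unseen words
        if word in w2v.key_to_index:
            w2v_mat[i] = w2v[word]
    return ft_mat, w2v_mat
```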
Character level encoder ----------------------- @lample show that the usage of a character level encoder is expedient when using bidirectional LSTMs. Our implementation of this encoder follows Hiroki Nakayama (2017)[^5], where character embeddings are passed to a bidirectional LSTM and the output is concatenated to the word embeddings. ![Final architecture of the neural model. Layers that are passed pretrained weights are hatched in gray. Dropout affected layers are highlighted in green.[]{data-label="final-graph"}](final-graph4.eps){width="230pt"} Experiments =========== This section describes the proposed architecture of the neural network and the conditional random field used in the experiments. For comparison of the results we also experiment with jointly training on a merged training set, which contains the Twitter and the TIGER training sets. Methods ------- ### Conditional random field baseline The baseline CRF of @rehbein achieves an accuracy of 82.49%. To be comparable with their work we implement a CRF equivalent to their baseline model. Each word in the data is represented by a feature dictionary. We use the same features as @rehbein proposed for the classification of each word. These are the lowercased word form, word length, number of uppercase letters, number of digits and occurrence of a hashtag, URL, @-mention or symbol. ### Neural network baseline The first layer in the model is an embedding layer. The next layers are two bidirectional LSTMs. The baseline model uses softmax for each position in the final layer and is optimized using Adam core with a learning rate of 0.001 and the categorical crossentropy as the loss function. ### Extensions of the neural network The non neural CRF model benefits from different features extracted from the data. Those features are not explicitely modeled in the neural baseline model, and we apply a feature function for the extended neural network. We include the features used in the non-neural CRF for hashtags and @-mentions. In addition, we capture orthographic features, e.g., whether a word starts with a digit or an upper case letter. Typically, manually defined features like these are not used in neural networks, as a neural network should take over feature engineering completely. Since this does not work optimally, especially for smaller data sets, we have decided to give the neural network this type of information as well. Thus we combine the advantages of classical feature engineering and neural networks. This also goes along with the observations of @DBLP:journals/corr/abs-1811-08757 and @W17-6304, who both show that adding conventional lexical information improves the performance of a neural POS tagger. All words are represented by their features and for each feature type an embedding layer is set up within the neural network in order to learn vectors for the different feature expressions. Afterwards all the feature embeddings are added together. As the next step we use the character level layer mentioned in section 3.6 [@lample]. The following vector sequences are concatenated at each position and form the input to the bidirectional LSTMs: - Feature embedding vector - character-level encoder - FastText vectors - Word2Vec vectors ### Domain Adaptation and regularization We train the model with the optimal setting on the TIGER corpus, i.e., we prepare the TIGER data just like the Twitter data and extract features, include a character level layer and use pretrained embeddings. 
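A condensed Keras-style sketch of this configuration is given below (a minimal illustration under our own assumptions: sequence lengths, layer sizes and the feature-embedding dimension are placeholders, and the code is not the authors' implementation). The summed feature embeddings, the character-level BiLSTM encoding and the two kinds of pretrained word vectors are concatenated at each position and fed into two bidirectional LSTMs with a softmax output per token:

```python
from tensorflow.keras import layers, models

MAX_LEN, MAX_WORD_LEN, N_CHARS, N_TAGS = 60, 20, 200, 69   # illustrative sizes

def build_tagger(feature_vocab_sizes, ft_mat, w2v_mat):
    words = layers.Input((MAX_LEN,), dtype="int32", name="word_ids")
    chars = layers.Input((MAX_LEN, MAX_WORD_LEN), dtype="int32", name="char_ids")
    feats = [layers.Input((MAX_LEN,), dtype="int32", name=f"feat_{i}")
             for i in range(len(feature_vocab_sizes))]

    # Frozen pretrained word vectors (FastText sub-word vectors, Twitter Word2Vec).
    ft_emb = layers.Embedding(*ft_mat.shape, weights=[ft_mat], trainable=False)(words)
    w2v_emb = layers.Embedding(*w2v_mat.shape, weights=[w2v_mat], trainable=False)(words)

    # One small embedding per hand-crafted feature; the feature embeddings are summed.
    feat_embs = [layers.Embedding(size, 10)(f)
                 for size, f in zip(feature_vocab_sizes, feats)]
    feat_emb = layers.Add()(feat_embs) if len(feat_embs) > 1 else feat_embs[0]

    # Character-level encoder: a BiLSTM over the characters of every word.
    char_emb = layers.TimeDistributed(layers.Embedding(N_CHARS, 25))(chars)
    char_enc = layers.TimeDistributed(layers.Bidirectional(layers.LSTM(25)))(char_emb)

    x = layers.Concatenate()([feat_emb, char_enc, ft_emb, w2v_emb])
    x = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(x)
    out = layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax"))(x)

    model = models.Model([words, chars] + feats, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")  # Adam, lr 0.001
    return model
```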
We extract the weights $\hat W$ that were optimized with TIGER. The prior weights $\hat W$ are used during optimization as a regularizer for the weights $W$ used in the final model (trained on the Twitter data). This is achieved by adding the penalty term $R_W$, as shown in Equation \[eq:regularization\], to the objective function (cross-entropy loss). $$\label{eq:regularization} R_W = \lambda ||W - \hat{W} ||_2^2$$ The regularization is applied to the weights of the two LSTMs, the character LSTM, to all of the embedding layers and to the output layer. As a second regularization mechanism we include dropout for the forward and the backward LSTM layers. We also add 1 to the bias of the forget gate at initialization, since this is recommended in @Jozefowicz:2015:EER:3045118.3045367. Additionally, we use early stopping. Since the usage of different regularization techniques worked well in the experiments of @barone, we also tried the combination of different regularizers in this work. Figure \[final-graph\] shows the final architecture of our model. NCRF++ ------ We also report results obtained by training the sequence labelling tagger of @yang2018ncrf, NCRF++. They showed that their architecture produces state-of-the-art models across a wide range of data sets [@yang2018design] so we used this standardized framework to compare it with our model. Results ======= Experimental Results -------------------- Table \[testresult\] shows the results on the Twitter test set. The feature-based baseline CRF outperforms the baseline of the neural net with more than 20 percentage points. After adding the feature information, the performance of the neural baseline is improved by 13 percentage points, which is understandable, because many German POS tags are case sensitive. **experiment** **accuracy** ------------------------------- -------------- *baseline crf* 0.831 *baseline neural model* 0.634 *neural model* *+features* 0.768 *+character embeddings* 0.796 *+pretrained word vectors* 0.845 *+l2 domain adaptation* 0.896 *+dropout* **0.903** *neural model joint training* 0.894 *final CRF of Rehbein 2013* 0.888 *NCRF++ system* 0.887 : Results on the test set using the time-distributed layer. []{data-label="testresult"} The model’s performance increases by another 3 percentage points if the character level layer is used. Including the pretrained embeddings, FastText and Word2Vec vectors, the accuracy is 84.5%, which outperforms the CRF baseline. Figure \[l2norm\] shows the impact of domain adaptation and fine-tuning the prior weight. The value of the $\lambda$ parameter in the regularization formula \[eq:regularization\] can control the degree of impact of the weights on the training. Excluding the pretrained weights means that $\lambda$ is 0. We observe an optimal benefit from the out-of-domain weights by using a $\lambda$ value of 0.001. This is in line with the observations of @barone for transfer-learning for machine translation.\ Overall the addition of the L2 fine-tuning can improve the tagging outcome by 5 percentage points, compared to not doing domain adaptation. A binomial test shows that this improvement is significant. This result confirms the intuition that the tagger can benefit from the pretrained weights. On top of fine-tuning different dropout rates were added to both directions of the LSTMs for the character level layer and the joint embeddings. 
A dropout rate of 75% is optimal in our scenario, and it increases the accuracy by 0.7 percentage points.\ The final 90.3% on the test set outperforms the result of @rehbein by 1.5 percentage points. Our best score also outperforms the accuracy obtained with the NCRF++ model. This shows that for classifying noisy user-generated text, explicit feature engineering is beneficial, and that the usage of domain adaptation is expedient in this context. Joint training, using all data (out-of-domain and target domain), obtains an accuracy score of 89.4%, which is about 1 percentage point worse than using the same data with domain adaptation. The training setup for the joint training is the same as for the other experiments and includes all extensions except for the domain adaptation.\ [Figure \[l2norm\]: tagging accuracy as a function of the prior weight $\lambda$; two curves, both peaking near $\lambda = 0.001$.] ![Total number of errors for the six most frequent POS-tags and different experimental settings[]{data-label="errortypes"}](error_types.eps){width="225pt"} Error Analysis -------------- The most frequent error types in all our systems were nouns, proper nouns, articles, verbs, adjectives and adverbs, as pictured in figure \[errortypes\]. By including the features the number of errors can be reduced drastically for nouns. Since we included a feature that captures upper and lower case, and nouns as well as proper nouns are written upper case in German, the model can benefit from that information. The pretrained word embeddings also help in classifying nouns, articles, verbs, adjectives and adverbs. Only the errors with proper nouns increase slightly. Compared to only including the features, the model benefits from adding both the character level layer and the pretrained word vectors, while the results for tagging proper nouns and articles are still slightly worse than the baseline. In contrast, the final experimental setup improves the results for every POS tag compared to the baseline (see figure \[errortypes\]): slightly in the case of articles and proper nouns, but markedly for the other tags. A comparison of the baseline errors and the errors of the final system shows that Twitter-specific errors, e.g. with @-mentions or URLs, can be reduced drastically. Only hashtags still pose a challenge for the tagger. In the gold standard, words with hashtags are not always tagged as such, but are sometimes classified as proper nouns. This is because such a token functions as a proper noun in the sentence. Thus the tagger has decision problems with these hashtags. Other types of errors, such as confusion of articles or nouns, are not Twitter-specific issues, but are a common problem in POS tagging and can only be fixed by general improvement of the tagger. Conclusion ========== We present a deep learning based fine-grained POS tagger for German Twitter data using both domain adaptation and regularization techniques. On top of an efficient POS tagger we implemented domain adaptation by using an L2-norm regularization mechanism, which improved the model’s performance by 5 percentage points.
Since this improvement is significant, we conclude that fine-tuning and domain adaptation techniques can successfully be used to improve the performance when training on a small target-domain corpus. Our experiments show that the combination of different regularization techniques is recommendable and can further improve already well-performing systems. The advantage of our approach is that we do not need a large annotated target-domain corpus, but only pretrained weights. Using a pretrained model as a prior for training on a small amount of data can be done within minutes and is therefore very practicable in real-world scenarios. [^1]: <http://universaldependencies.org> [^2]: <https://fasttext.cc> [^3]: <https://commoncrawl.org> [^4]: <https://www.spinningbytes.com> [^5]: <https://github.com/Hironsan/anago>
Quantum error correction schemes [@Shor; @Shor2; @Steane; @smolin; @Zurek; @Steane2; @Ekert; @newpaper] hold the promise of reliable storage, processing and transfer of quantum information. They actively ‘isolate’ a quantum system from perturbations, which would otherwise decohere the state [@Unruh; @Landauer]. How quickly this decoherence occurs depends to a large extent on what degrees of freedom are involved: single- or many-body, electronic, nuclear, etc. In principle, however, the development of quantum error correction allows one to decouple a quantum state from arbitrary few-particle perturbations. The decoupling in quantum error correction schemes is achieved by unitarily ‘rotating’ the state into one involving a larger number of degrees of freedom. In this larger space the information about the original state is recorded only in multi-particle correlations. Thus, if only a few particles undergo decohering perturbations, the multi-particle correlations are not destroyed, but only mixed amongst each other. After determining which few particle perturbation has occurred we can unmix the multi-particle correlations and hence reconstruct the original state. If, by contrast, decohering perturbations accumulate over too many particles then the multi-particle correlations are no longer isolated and the error correction begins to break down. In this paper an efficient coding circuit for arbitrary single-qubit errors is given. Its efficiency is quantified relative to a specific quantum computer model — Cirac and Zoller’s ion-trap model [@CZ]. Next, two schemes designed to protect against single-qubit phase-noise are studied. One scheme relies on the quantum Zeno effect [@Ch; @V] and uses two qubits to protect against ‘slow’ perturbations of the system; the other is a more conventional quantum error correction scheme [@Steane2; @Ekert; @ME] that requires three qubits to protect against arbitrary single-particle dephasing. The poor behavior of the Zeno schemes is discussed and explained. Efficient coding {#efficient-coding .unnumbered} ================ Various authors [@smolin; @Zurek; @newpaper] have presented circuits implementing a 5-qubit which protects one qubit of quantum information. This code is described as ‘perfect’ since it allows for the complete correction of arbitrary single qubit errors. (The term qubit [@Schu] represents the amount of ‘quantum’ information stored in an arbitrary two-state quantum system.) In this section a simpler version of the Laflamme [*et al*]{} coding circuit [@Zurek] is presented. We discuss the structure of the circuit and consider its efficiency. The measure of efficiency used [@Preskill] is the number of laser pulses required to implement the scheme on an ion-trap quantum computer. A second circuit yielding a slightly different version of this code was found by a computer search and is the most efficient circuit so far constructed for one-bit encoding. Fig. \[fig1\] shows our simplification of the 5-bit coding circuit of Laflamme [*et al*]{} [@Zurek]. 
This circuit uses single particle rotations $$\hat U = {1\over\sqrt{2}} \left(\begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array}\right) \;,$$ represented by the square ‘one-qubit’ gates in the circuit, and two-particle controlled-NOT gates $$\begin{picture}(10, 5) \put(-2,9){\circle*{2.5}} \put(-2,9){\thinlines\line(0,-1){18}} \makebox(-4,-9)[c]{\large$\oplus$} \end{picture} = % {{\cdot\atop |}\atop \oplus} = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array}\right) \equiv \begin{picture}(20, 5) \unitlength=1pt \put(30,15){\circle*{2.5}} \put(30,15){\thinlines\line(0,-1){16}} \put(30,15){\thinlines\line(1,0){18}} \put(30,15){\thinlines\line(-1,0){18}} \put(10,-9){\makebox(0,0){\framebox(15,15)[c]{\small$\hat U^\dagger$}}} \put(30,-9){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \put(50,-9){\makebox(0,0){\framebox(15,15)[c]{\small$\hat U$}}} \put(17.5,-9){\thinlines\line(1,0){5}} \put(37.5,-9){\thinlines\line(1,0){5}} \end{picture} ~~~~~~~~~~~~~ \label{eq1} \;;$$ here the $\oplus$ notation is chosen because of the equality of the controlled-NOT operation and the mathematical exclusive-OR operation. The conditional $\hat \sigma_z$ operation itself is given by $$\begin{picture}(20, 5) \unitlength=1pt \put(30,15){\circle*{2.5}} \put(30,15){\thinlines\line(0,-1){16}} % \put(30,15){\thinlines\line(1,0){18}} % \put(30,15){\thinlines\line(-1,0){18}} \put(30,-9){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} % \put(17.5,-9){\thinlines\line(1,0){5}} % \put(37.5,-9){\thinlines\line(1,0){5}} \end{picture} ~~~~~~~~~ \equiv \left(\begin{array}{cccr} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{array}\right) \label{eq1a} \;.$$ Here these elements are represented in the basis where $|0\rangle=({1\atop 0})$, $|1\rangle=({0\atop 1})$ and $$\begin{aligned} |00\rangle = (1, 0, 0, 0)^{\rm T} \;,\nonumber \\ |01\rangle = (0, 1, 0, 0)^{\rm T}\;,\nonumber \\ |10\rangle = (0, 0, 1, 0)^{\rm T}\;,\\ |11\rangle = (0, 0, 0, 1)^{\rm T}\;,\nonumber \end{aligned}$$ etc. Decoding is executed by running the coding circuitry backwards, finally recovering the original state after a few extra operations [@Zurek; @fn1]. \[c\][$~~|\psi\rangle$]{} \[c\][$|0\rangle$]{} \[cb\][$~~~~\,\hat U$]{} \[cb\][$~~~~~\hat U^\dagger$]{} =3.4in The circuit in Fig. \[fig1\] has an interesting structure: The coding initially entangles four auxiliary particles and then executes a double ‘classical’ code on the state to be protected. This ‘stores’ the degrees of freedom of $|\psi\rangle$ in the correlations of a five particle entangled state. The classical codes each consist of a simple triple redundancy of the qubit on the upper line of Fig. \[fig1\]: The first classical code may be interpreted as protecting against random bit-flips and the second — against random phase shifts. This double code was motivated by theorem 6 of Steane [@Steane2]. He found, however, that such double coding required a minimum of 7 qubits for a [*linear*]{} quantum code. The circuit in Fig. \[fig1\] produces a code that is not linear [@Zurek]. To see that the above circuit reproduces the Laflamme [*et al*]{} code it is sufficient to consider how this circuit acts on an arbitrary qubit $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. 
With the auxiliary qubits initially in the states $|0\rangle$ the circuit generates the superposition $$\begin{aligned} &&\phantom{+}\alpha\left(| 00000\rangle + | 00110 \rangle +| 01001 \rangle - | 01111 \rangle \right.\nonumber \\ &&~~~\,\left.+ | 10011 \rangle+ | 10101 \rangle + | 11010 \rangle - | 11100 \rangle \right) \nonumber \\ &&+\beta \left(| 00011 \rangle- | 00101 \rangle- | 01010 \rangle - | 01100 \rangle\right.\nonumber \\ &&~~~\,\left. -| 10000 \rangle + | 10110 \rangle +| 11001 \rangle + | 11111 \rangle\right) \\ &=& \phantom{+}\alpha\left(|b_2\rangle|00\rangle+|b_5\rangle|01\rangle +|b_7\rangle|10\rangle+|b_4\rangle|11\rangle\right) \nonumber\\ &&+\beta\left(|b_1\rangle|11\rangle-|b_6\rangle|10\rangle +|b_8\rangle|01\rangle-|b_2\rangle|00\rangle\right)\nonumber\end{aligned}$$ where the $|b_j\rangle$ are the 3-particle Bell states defined by Laflamme [*et al*]{} [@Zurek]. Here we have dropped the normalization constant. This code is clearly idential to the Laflamme [*et al*]{} code up to relabelling of the Bell states. How efficient is this coding scheme? One measure is obtained by asking how many ‘clock cycles’ are required to execute the scheme on a quantum computer. The most promising device, at least for relatively small numbers of two-state systems, is a linear ion-trap model suggested by Cirac and Zoller [@CZ]. In this model the basic clock frequency is limited by that of the center-of-mass mode of the trapped ions as they undergo coupled oscillations. This limitation arises from the requirement that the laser line-widths be narrower than the lowest vibrational mode of the ions, thus ensuring that only the correct energy levels are addressed by the laser pulses. The energy-time uncertainty relation, therefore, implies that the duration of the laser pulses must exceed the inverse frequency of this lowest vibrational mode. We then conclude that for ion-trap computers the number of laser pulses required to complete a particular algorithm is a reasonable measure of its efficiency [@fn2]. \[c\][$~~|\psi\rangle$]{} \[c\][$|0\rangle$]{} \[cb\][$~~~~\,\hat U$]{} \[cb\][$~~~~\,\hat \sigma_z$]{} \[cb\][$~~~~\,\hat U^\dagger$]{} =3.4in Rather than directly using the circuit in Fig. \[fig1\], we optimize it for the particular primitive instruction set of the ion-trap quantum computer in the manner shown in Fig. \[fig2\]. (Note that the three-qubit operations, the controlled double $\hat\sigma_z$ operations, require only four laser pulses each as demonstrated in the appendix.) A simple counting of the requirements for the circuit of Fig. \[fig2\] yields 26 laser pulses. By contrast the original circuit of Laflamme [*et al*]{} [@Zurek] appears to require at least 41 laser pulses. Another scheme using six 2-qubit controlled-NOT’s and five 1-bit gates was mentioned in Ref. ; its unoptimized form requires 35 laser pulses [@DiVincenzo]. It is worth noting here that short of trying all possible circuits it is not known in general how to determine the optimal circuit. Much of the optimization achieved here is actually hidden in the choice of the circuit in Fig. \[fig1\] where a number of rearrangements were tried by hand to yield the ‘simplest’ circuit. More efficient circuits than shown in Fig. \[fig2\] can be found by computer search. In fact, we show our current best in Fig. 
\[fig2a\], where we define two new one-bit operations: $$\hat V = {1\over\sqrt{2}} \left(\begin{array}{rr} 1 & -i \\ -i & 1 \end{array}\right) \;, ~~~~~~ \hat W = \hat V \hat U^\dagger \;.$$ As shown it requires only 24 laser pulses, not counting further speedups such as parallelizing the operation of several of its one-bit gates. \[c\][$~~|\psi\rangle$]{} \[c\][$|0\rangle$]{} \[cb\][$~~~~\,\hat U$]{} \[cb\][$~~~~\,\hat V$]{} \[cb\][$~~~~\,\hat W$]{} \[cb\][$~~~~\,\hat \sigma_z$]{} \[cb\][$~~~~\,\hat U^\dagger$]{} =3.4in An alternative method of error correction has been suggested by Vaidman [*et al*]{} [@V]. Its operation invoves a circuit which can provide only error detection [*not*]{} error correction; however, by sufficiently rapid operation of the circuit the quantum Zeno effect allows it to ‘turn off’ the relatively slow errors. Using the quantum Zeno effect it corrects for [*small*]{} single-particle perturbations of the system rather than the arbitrary single-particle errors of the standard schemes. Nonetheless, quantum Zeno error correction has the advantage of only requiring 4 qubits. Further, we find that the coding and decoding may each be executed using as few as 16 laser pulses with possibly only one extra for the auxiliary qubit resetting. How effective are these error correction schemes that rely on the quantum Zeno effect? We shall now evaluate their performance for correcting phase-diffusion noise. Zeno- versus standard quantum-error-correction {#zeno--versus-standard-quantum-error-correction .unnumbered} ============================================== In this section we compare the performance of Zeno and standard methods for quantum error correction. Rather than considering the schemes discussed in the previous section, however, we study simpler schemes which protect only against 1-qubit dephasing. In particular, we compare a compact 2-qubit code given by Chuang and Laflamme [@Ch], and independently by Vaidman [*et al*]{} [@V] versus a standard 3-qubit code [@Steane2; @Ekert; @ME]. The 2-qubit scheme relies on the quantum Zeno effect to correct for [*small*]{} deviations in the system’s state; whereas the 3-qubit code can correct for arbitrary 1-qubit dephasing. How do these schemes compare? Figs. \[fig3\] and \[fig4\] show complete coding and decoding circuits for both schemes. Clearly, the Zeno scheme uses fewer resources and requires fewer gates to operate so it has a distinct implementational advantage over the more conventional schemes. \[c\][$|\psi\rangle$]{} \[c\][$|0\rangle$]{} \[c\][${~~~}$1-qubit dephasing]{} \[cb\][$~~~~\hat U$]{} \[cb\][$~~~~\hat U^\dagger$]{} =3.4in \[c\][$|\psi\rangle$]{} \[c\][$|0\rangle$]{} \[c\][${~~~}$1-qubit dephasing]{} \[cb\][$~~~~\hat U$]{} \[cb\][$~~~~\hat U^\dagger$]{} =3.4in Our model for dephasing assumes that the phase in each qubit undergoes an independent random walk according to $$\alpha |0\rangle +\beta|1\rangle \rightarrow \alpha |0\rangle +\beta e^{i\phi(t)}|1\rangle \label{model} \;,$$ (up to normalization) where the perturbing phases $\phi(t)$ are given by the Ito stochastic calculus [@Gardiner] with: $$\begin{aligned} \phi(0)& =& 0 \nonumber \\ \langle\!\langle d\phi(t) \rangle\!\rangle &=& 0 \\ \langle\!\langle d\phi(t)\,d\phi(t') \rangle\!\rangle &=& 2 \delta(t-t')dt \nonumber \;,\end{aligned}$$ etc., where $d\phi(t)$ is the Ito differential and the doubled angle brackets represent stochastic averages. Eq. (\[model\]) therefore describes our model of the shaded regions in Figs. \[fig3\] and \[fig4\]. 
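As a small numerical sanity check of this noise model (our own illustration, not part of the analysis that follows), sampling the random-walk phase confirms that the off-diagonal element of a single dephasing qubit is damped by exactly $e^{-t}$:

```python
# Monte Carlo check of the phase-diffusion model: with <<dphi dphi>> = 2 dt the
# phase is Gaussian with variance 2t, so the off-diagonal element of the
# single-qubit density matrix is damped by |<<exp(i phi)>>| = exp(-t).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_steps, dt = 200_000, 100, 0.01        # total time t = 1.0

phi = np.zeros(n_samples)
for _ in range(n_steps):
    # Ito increment with variance 2*dt
    phi += rng.normal(0.0, np.sqrt(2.0 * dt), size=n_samples)

t = n_steps * dt
print("simulated damping:", np.abs(np.mean(np.exp(1j * phi))))   # ~ 0.368
print("predicted exp(-t):", np.exp(-t))                           # 0.36788...
```

The same damping factor reappears in the delayed-decoding calculations below.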
How do each of the above error correction circuits work if applied only after the dephasing has acted for a time $t$? Delaying the decoding circuit in Fig. \[fig3\] for a time $t$ after the coding yields $$\hat\rho_0 \equiv \left( \begin{array}{cc} |\alpha|^2 & \overline{\beta}\alpha \nonumber \\ \overline{\alpha}\beta & |\beta|^2 \nonumber \end{array}\right) \longrightarrow \left( \begin{array}{cc} |\alpha|^2 & e^{-t}\, \overline{\beta}\alpha \nonumber \\ e^{-t}\, \overline{\alpha}\beta & |\beta|^2 \nonumber \end{array}\right) \;,$$ here $\hat\rho_0$ is the initial density matrix for the qubit $|\psi\rangle$; i.e., there is no improvement using the Zeno error correction scheme for this model of noise even for short times! A similar result was noted by Chuang and Laflamme [@Ch]. By contrast, a delay for time $t$ in circuit \[fig4\] before decoding yields $$\begin{aligned} \hat\rho_0 &\longrightarrow& (2+3e^{-t}-e^{-3t})\, \hat\rho_0/4\nonumber \\ && + (2+ e^{-3t}-3e^{-t})\,\hat\sigma_x \,\hat\rho_0\, \hat\sigma_x /4 \;,\end{aligned}$$ with $\hat\sigma_x =\left({0\,1\atop 1\,0}\right)$ being one of the standard Pauli matrices. A measure of (relative) coherence between a pair of states is given by the absolute value of the off-diagonal terms in the density matrix $\hat \rho(t)$ [@Damp] $${\cal C}(t)\equiv \left| {\langle 1|\hat\rho(t)|0\rangle \over \langle 1|\hat\rho_0(t)|0\rangle} \right| \;.$$ The 2-qubit Zeno error correction scheme yields $${\cal C}_{\rm 2-qubit}(t) = e^{-t} \;,$$ whereas the standard 3-qubit scheme has a coherence bounded by its worst case $${\cal C}_{\rm 3-qubit}(t) \ge (3 e^{-t}-e^{-3t})/2 \;.$$ Finally, we note that $n$ evenly spaced repetitions in a time $t$ of an error correction scheme will yield an improved coherence ${\cal C}$ according to $${\cal C}^{\,n{\rm-shot}}(t) = \left[ {\cal C}(t/n)\right]^n \;.$$ The performance of the Zeno 2-qubit scheme, conventional 3-qubit scheme and a 10-fold repetition of the latter are displayed in Fig. \[fig5\]. \[c\][${\cal C}(t)~~$]{} \[c\][$t$]{} =3.4in Why does the Zeno error correction scheme fail to work for the noise model of Eq. (\[model\]) even at short times? Put simply, the random walk model for dephasing implies that the expected deviation of the phase grows as $\sqrt{t}$ instead of $t$. The error, therefore, accumulates too quickly for the Zeno scheme. The random walk model of noise really has two time scales: the typical time between random steps and the much longer dephasing time. The stochastic calculus approach assumes that the former of these time scales is so short as to be negligible. This means that in this model of noise no matter how quickly we operate the Zeno error correction scheme many stochastic steps have occurred. The averaging over these many random steps in phase produces a pertubabtion that overwhelms the linear correction to the state. However, the Zeno error correction schemes discussed above [@Ch; @V] require that the change in the system’s state be dominated by linear terms. The implications are that phase diffusion is [*not*]{} corrected by these Zeno error correction schemes unless they are repeatedly used at a rate faster than the typical time between random steps of the phase — the phase-diffusion time itself is already much too slow. How fast does this need to be in practice? 
That depends on the detailed source of the phase diffusion: For instance, it might be relatively slow (though still faster than the phase diffusion time) when the principal source of noise is external mechanical noise. Other models, however, are very much faster: Unruh’s [@Unruh] study of decoherence due to vacuum fluctuations in the electromagnetic field coupling to a qubit yielded a time-scale comparable to X-ray frequencies. It is worth mentioning two error ‘stabilization’ schemes which utilize the Zeno effect: Zurek [@Z2] has outlined a scheme which averages several copies of a computation, and Berthiaume [*et al*]{} [@B; @B2] have considered in some detail a scheme which projects several copies of a computation to the symmetric state. Because these schemes evenly spread errors over several copies of a computation rather than attempt to correct them, it may be that they circumvent the problem with dephasing discussed here. We leave this question open for further study. Quantum error correction of arbitrary single-qubit errors is rather costly in computing resources: a minimum of five qubits and possibly 24 laser pulses for coding (decoding being only slightly more expensive [@fn1]). This might be compared with the resources required to execute a moderate unprotected calculation; Beckman [*et al*]{} [@Preskill] show that the Shor algorithm could be implemented on 6 trapped ions using only 38 laser pulses to factor the number 15 [@fn3]. Alternate error correction schemes based on the quantum Zeno effect are much more efficient to implement. However, they fail for simple models of decoherence, such as the model of phase diffusion considered here. In conclusion, because error correction is virtually as expensive as the simplest error-correction-free computations, it appears unlikely that full quantum error correction will be implemented for computational purposes in the first few generations of quantum computers. Instead, quantum error correction will probably initially play an important role in the long-term storage of quantum information: implementing a true quantum memory. Appendix:\ Quantum networks on ion traps {#appendix-quantum-networks-on-ion-traps .unnumbered} ============================= In this appendix we describe how controlled double $\hat\sigma_z$ operations may be performed in four laser pulses on a Cirac-Zoller ion trap quantum computer [@CZ]. These operations are the [*three*]{}-qubit operations seen in Fig. \[fig2\].
Labeling the ground and excited states of ion $i$ as $|g\rangle_i$ and $|e\rangle_i$ respectively, and the Fock state of the center-of-mass vibrational mode of the trap as $|n\rangle_{\rm cm}$ we summarize two important operations: A suitably tuned $\pi$-pulse on ion $i$ yields the operation [@CZ; @Preskill] $$\hat W^{(i)}_{\rm phon} :\left\{ \begin{array}{rcl} |g\rangle_i |0\rangle_{\rm cm}&\rightarrow& \phantom{-i}|g\rangle_i |0\rangle_{\rm cm} \\ |g\rangle_i |1\rangle_{\rm cm}&\rightarrow& -i |e\rangle_i |0\rangle_{\rm cm} \\ |e\rangle_i |0\rangle_{\rm cm}&\rightarrow& -i |g\rangle_i |1\rangle_{\rm cm} \\ |e\rangle_i |1\rangle_{\rm cm}&\rightarrow& \phantom{-i}|e\rangle_i |1\rangle_{\rm cm} \;, \end{array} \right.$$ whereas, a differently tuned $2\pi$-pulse on ion $j$ yields [@CZ; @Preskill] $$\hat V^{(j)}: \left\{ \begin{array}{rcl} |g\rangle_j |0\rangle_{\rm cm}&\rightarrow& \phantom{-}|g\rangle_j |0\rangle_{\rm cm} \\ |g\rangle_j |1\rangle_{\rm cm}&\rightarrow& -|g\rangle_j |1\rangle_{\rm cm}\\ |e\rangle_j |0\rangle_{\rm cm}&\rightarrow& \phantom{-}|e\rangle_j |0\rangle_{\rm cm}\\ |e\rangle_j |1\rangle_{\rm cm}&\rightarrow& \phantom{-}|e\rangle_j |1\rangle_{\rm cm} \;. \end{array} \right.$$ Finally, another appropriately tuned $\pi$-pulse on ion $j$ yields [@CZ; @Preskill] $$\hat V^{(j)}_{\rm phon}: \left\{ \begin{array}{rcl} |g\rangle_j |0\rangle_{\rm cm}&\rightarrow& \phantom{-i}|g\rangle_j |0\rangle_{\rm cm} \\ |g\rangle_j |1\rangle_{\rm cm}&\rightarrow& -i|e'\rangle_j |0\rangle_{\rm cm}\\ |e\rangle_j |0\rangle_{\rm cm}&\rightarrow& \phantom{-i}|e\rangle_j |0\rangle_{\rm cm}\\ |e\rangle_j |1\rangle_{\rm cm}&\rightarrow& \phantom{-i}|e\rangle_j |1\rangle_{\rm cm} \;, \end{array} \right.$$ where $|e'\rangle_j$ is a [*different*]{} excited state of ion $j$. Using these operations and taking the trap’s vibrational mode intially in the ground state $|0\rangle_{\rm cm}$ we find $$\begin{aligned} \hat W^{(i)\dagger}_{\rm phon} \hat V^{(k)} \hat V^{(j)} \hat W^{(i)}_{\rm phon} :&& \label{theEq} \\ |\epsilon\rangle_i |\eta_1\rangle_j|\eta_2\rangle_k \rightarrow&& (-1)^{\eta_1\epsilon}(-1)^{\eta_2\epsilon} |\epsilon\rangle_i |\eta_1\rangle_j|\eta_2\rangle_k \nonumber \,.\end{aligned}$$ This completes the construction of the controlled double $\hat\sigma_z$ operation. We note that this construction requires only four laser pulses as opposed to the six required to perform the two controlled $\hat\sigma_z$ operations separately. In order to see how to generalize this approach let us introduce a different notation. We start by labeling the states to be acted on by $|\epsilon_1,\,\epsilon_2,\,\ldots,\, \eta_1,\,\eta_2,\,\ldots\rangle$ where the $\epsilon_j$ represents the $j$th control bit and $\eta_k$ represents the $k$th control bit. When only a single of either kind of bit occurs we drop the corresponding subscript. Then we introduce a [*space-time*]{} diagram of events on the ion-trap to replace the usual circuit notation. In these space-time diagrams the horizontal lines represent the world lines of the ions (in an exactly analogous way that they do in the usual circuits). Finally, we superpose on these world lines the events corresponding to an appropriately tuned laser on each ion. In this way Eq. 
(\[theEq\]) becomes: $$\begin{aligned} \begin{picture}(20, 45) \unitlength=1pt \put(30,40){\circle*{2.5}} \put(30,40){\thinlines\line(0,-1){16}} \put(30,16){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \put(30,8.5){\thinlines\line(0,-1){8.5}} \put(30,-8){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \end{picture} ~~~~~~~ &\equiv&~~~~~ \begin{picture}(0, 45) \unitlength=1pt \put(-10,40){\thinlines\line(1,0){10}} \put(40,40){\thinlines\line(1,0){30}} \put(110,40){\thinlines\line(1,0){10}} \put(20,40){\makebox(0,0){$\hat W_{\rm phon}$}} \put(90,40){\makebox(0,0){$\hat W_{\rm phon}^\dagger$}} \put(-10,16){\thinlines\line(1,0){35}} \put(55,16){\thinlines\line(1,0){65}} \put(40,16){\makebox(0,0){$\hat V$}} \put(-10,-8){\thinlines\line(1,0){55}} \put(75,-8){\thinlines\line(1,0){45}} \put(60,-8){\makebox(0,0){$\hat V$}} \end{picture} \hskip 1.8in \\ \nonumber \\ &=& ~(-i)^\epsilon (-1)^{\bar \eta_1\epsilon} (-1)^{\bar\eta_2\epsilon} (+i)^\epsilon \nonumber \\ &=& ~(-1)^{\eta_1\epsilon} (-1)^{\eta_2\epsilon} \nonumber \;,\end{aligned}$$ where $\bar\eta\equiv(1-\eta)$. Reading from left to right this circuit decomposes to $\hat W^{(1)\dagger}_{\rm phon} \hat V^{(3)} \hat V^{(2)} \hat W^{(1)}_{\rm phon}$ where now we must explicitly add the numbers of the ions. Since these circuits only involve conditional phase changes it is sufficient to ask what phases accumulate as we operate the various pulses. We see that whenever there is an [*even*]{} number of phases to be flipped (i.e., an even number of $\hat V$ pulses) that the phases accumulated from pulses on the control bits are unwanted and need to be cancelled by applying the inverse operation the second time around. In particular, here we apply $\hat W_{\rm phon}^\dagger$ the second time since we applied $\hat W_{\rm phon}$ the first. Similarly, below where we make use of $\hat V_{\rm phon}$ for a second and further control bit we must use $\hat V_{\rm phon}^\dagger$ the second time whenever there are an even number of bits to have their phases flipped (i.e., an even number of $\hat V$’s). 
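The four-pulse construction can also be checked numerically; the following sketch (our own illustration, with $g\to 0$, $e\to 1$ and the phonon number as the last index, which are conventions we introduce here) builds the pulse operators defined above on three ions plus the center-of-mass mode and verifies the controlled double $\hat\sigma_z$ identity:

```python
# Numerical check of the four-pulse controlled double sigma_z construction.
# Basis states are |ion1 ion2 ion3>|n>_cm with g -> 0, e -> 1, n in {0, 1}.
import numpy as np
from itertools import product

def idx(e1, e2, e3, n):
    return ((e1 * 2 + e2) * 2 + e3) * 2 + n

def w_phon(ion):
    """Pi-pulse on `ion`: |g1> -> -i|e0>, |e0> -> -i|g1>, other states fixed."""
    U = np.zeros((16, 16), dtype=complex)
    for e1, e2, e3, n in product(range(2), repeat=4):
        e, col = (e1, e2, e3)[ion], idx(e1, e2, e3, n)
        if (e, n) in [(0, 0), (1, 1)]:               # unchanged
            U[col, col] = 1.0
        else:                                         # swap excitation <-> phonon
            new = list((e1, e2, e3)); new[ion] = 1 - e
            U[idx(*new, 1 - n), col] = -1j
    return U

def v(ion):
    """2*pi-pulse on `ion`: |g>|1>_cm picks up a minus sign."""
    U = np.eye(16, dtype=complex)
    for e1, e2, e3, n in product(range(2), repeat=4):
        if (e1, e2, e3)[ion] == 0 and n == 1:
            U[idx(e1, e2, e3, n), idx(e1, e2, e3, n)] = -1.0
    return U

# Pulse sequence W_phon(1)^dagger V(3) V(2) W_phon(1), rightmost acting first.
seq = w_phon(0).conj().T @ v(2) @ v(1) @ w_phon(0)
for eps, eta1, eta2 in product(range(2), repeat=3):
    basis = np.zeros(16, dtype=complex)
    basis[idx(eps, eta1, eta2, 0)] = 1.0               # phonon starts in |0>_cm
    expected = ((-1) ** (eta1 * eps)) * ((-1) ** (eta2 * eps))
    assert np.allclose(seq @ basis, expected * basis)
print("four-pulse controlled double sigma_z verified")
```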
We now give two more example constructions: $$\begin{aligned} \begin{picture}(20, 95) \unitlength=1pt \put(30,88){\circle*{2.5}} \put(30,64){\circle*{2.5}} \put(30,40){\circle*{2.5}} \put(30,40){\thinlines\line(0,-1){16}} \put(30,40){\thinlines\line(0,+1){48}} \put(30,16){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \put(30,8.5){\thinlines\line(0,-1){8.5}} \put(30,-8){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \end{picture} ~~~~~~~ &\equiv&~~~~~ \begin{picture}(0, 45) \unitlength=1pt \put(-10,88){\thinlines\line(1,0){7.5}} \put(15,88){\makebox(0,0){$\hat W_{\rm phon}$}} \put(32.5,88){\thinlines\line(1,0){75}} \put(125,88){\makebox(0,0){$\hat W_{\rm phon}^\dagger$}} \put(142,88){\thinlines\line(1,0){7.5}} % \put(-10,64){\thinlines\line(1,0){22.5}} \put(30,64){\makebox(0,0){$\hat V_{\rm phon}$}} \put(47.5,64){\thinlines\line(1,0){45}} \put(110,64){\makebox(0,0){$\hat V_{\rm phon}^\dagger$}} \put(127.5,64){\thinlines\line(1,0){22}} % \put(-10,40){\thinlines\line(1,0){37.5}} \put(45,40){\makebox(0,0){$\hat V_{\rm phon}$}} \put(62.5,40){\thinlines\line(1,0){15}} \put(95,40){\makebox(0,0){$\hat V_{\rm phon}^\dagger$}} \put(112.5,40){\thinlines\line(1,0){37}} % \put(-10,16){\thinlines\line(1,0){57.5}} \put(60,16){\makebox(0,0){$\hat V$}} \put(72.5,16){\thinlines\line(1,0){77}} % \put(-10,-8){\thinlines\line(1,0){72.5}} \put(75,-8){\makebox(0,0){$\hat V$}} \put(87.5,-8){\thinlines\line(1,0){62}} \end{picture}~~~~~ \hskip 1.8in \\ \nonumber \\ &=& ~(-i)^{\epsilon_1} (-i)^{\bar\epsilon_2\epsilon_1} (-i)^{\bar\epsilon_3\epsilon_2\epsilon_1} (-1)^{\bar \eta_1\epsilon_3\epsilon_2\epsilon_1} \nonumber \\ &&~~~~ \times(-1)^{\bar\eta_2\epsilon_3\epsilon_2\epsilon_1} (+i)^{\bar\epsilon_3\epsilon_2\epsilon_1} (+i)^{\bar\epsilon_2\epsilon_1} (+i)^{\epsilon_1} \nonumber \\ &=& ~(-1)^{\eta_1\epsilon_3\epsilon_2\epsilon_1} (-1)^{\eta_2\epsilon_3\epsilon_2\epsilon_1} \nonumber \;,\end{aligned}$$ which corresponds to the series of laser pulses $$\hat W_{\rm phon}^{(1)\dagger} \hat V_{\rm phon}^{(2)\dagger} \hat V_{\rm phon}^{(3)\dagger} \hat V^{(5)} \hat V^{(4)} \hat V_{\rm phon}^{(3)} \hat V_{\rm phon}^{(2)} \hat W_{\rm phon}^{(1)} \;,$$ and as a final example: $$\begin{aligned} \begin{picture}(20, 95) \unitlength=1pt \put(30,88){\circle*{2.5}} \put(30,64){\circle*{2.5}} \put(30,40){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \put(30,64){\thinlines\line(0,-1){16}} \put(30,64){\thinlines\line(0,+1){24}} \put(30,32.5){\thinlines\line(0,-1){8.5}} \put(30,16){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \put(30,8.5){\thinlines\line(0,-1){8.5}} \put(30,-8){\makebox(0,0){\framebox(15,15)[c]{$\hat\sigma_z$}}} \end{picture} ~~~~~~~ &\equiv&~~~~~ \begin{picture}(0, 45) \unitlength=1pt \put(-10,88){\thinlines\line(1,0){7.5}} \put(15,88){\makebox(0,0){$\hat W_{\rm phon}$}} \put(32.5,88){\thinlines\line(1,0){55}} \put(105,88){\makebox(0,0){$\hat W_{\rm phon}$}} \put(122,88){\thinlines\line(1,0){7.5}} % \put(-10,64){\thinlines\line(1,0){22.5}} \put(30,64){\makebox(0,0){$\hat V_{\rm phon}$}} \put(47.5,64){\thinlines\line(1,0){25}} \put(90,64){\makebox(0,0){$\hat V_{\rm phon}$}} \put(107.5,64){\thinlines\line(1,0){22}} % \put(-10,40){\thinlines\line(1,0){42.5}} \put(45,40){\makebox(0,0){$\hat V$}} \put(57.5,40){\thinlines\line(1,0){72}} % \put(-10,16){\thinlines\line(1,0){57.5}} \put(60,16){\makebox(0,0){$\hat V$}} \put(72.5,16){\thinlines\line(1,0){57}} % \put(-10,-8){\thinlines\line(1,0){72.5}} \put(75,-8){\makebox(0,0){$\hat V$}} \put(87.5,-8){\thinlines\line(1,0){44}} \end{picture}~~~~~ 
\hskip 1.8in \\ \nonumber \\ &=& ~(-i)^{\epsilon_1} (-i)^{\bar\epsilon_2\epsilon_1} (-1)^{\bar\eta_1\epsilon_2\epsilon_1} \nonumber \\ &&~~~~ \times (-1)^{\bar\eta_2\epsilon_2\epsilon_1} (-1)^{\bar\eta_3\epsilon_2\epsilon_1} (-i)^{\bar\epsilon_2\epsilon_1} (-i)^{\epsilon_1} \nonumber \\ &=& ~(-1)^{\eta_1\epsilon_2\epsilon_1} (-1)^{\eta_2\epsilon_2\epsilon_1} (-1)^{\eta_3\epsilon_2\epsilon_1} \nonumber \;,\end{aligned}$$ which corresponds to the series of laser pulses $$\hat W_{\rm phon}^{(1)} \hat V_{\rm phon}^{(2)} \hat V^{(5)} \hat V^{(4)} \hat V^{(3)} \hat V_{\rm phon}^{(2)} \hat W_{\rm phon}^{(1)} \;.$$ 0.2truein The authors thank the ISI for giving them the opportunity to collaborate; SLB appreciated the support of a Humboldt fellowship and discussions with N. Cohen and D. DiVincenzo. Permanent address. P. W. Shor, Phys. Rev. A [**52**]{}, R2493 (1995). A. R. Calderbank and P. W. Shor, “Good quantum error-correcting codes exist,” preprint quant-ph/9512032. A. Steane, Error correcting codes in quantum theory, submitted to Phys. Rev. Lett. J.A. Smolin, “Purification of mixed entrangled states and quantum channel capacity”, Joint Mathemtatics Meetings of the AMS, January 1996, Orlando. This work was later published in [@newpaper]. R. Laflamme, C. Miquel, J. P. Paz and W. H. Zurek, Phys. Rev. Lett. [**77**]{}, 198 (1996). A. Steane, “Multiple particle interference and quantum error correction,” preprint quant-ph/9601029, to appear in Proc. Roy. Soc. London. A. Ekert and C. Macchiavello, “Quantum error correction for communication,” preprint quant-ph/9602022. C. H. Bennett, D. P. DiVincenzo, J. A. Smolin and W. K. Wootters, “Mixed state entanglement and quantum error correction,” quant-ph/9604024. W. G. Unruh, Phys. Rev. A [**51**]{}, 992 (1995). R. Landauer, Philos. Trans. R. Soc. London, Ser. A [**353**]{}, 367 (1996). J. I. Cirac and P. Zoller, Phys. Rev. Lett. [**74**]{}, 4091 (1995). I. L. Chuang and R. Laflamme, “Quantum Error Correction by Coding,” quant-ph/9511003. L. Vaidman, L. Goldenberg and S. Wiesner, “Error prevention scheme with four particles,” quant-ph/9603031. S. L. Braunstein, “ Quantum error correction of dephasing in 3 qubits,” quant-ph/9603024. R. Jozsa and B. Schumacher, J. Mod. Opt. [**41**]{}, 2343 (1994); B. Schumacher, Phys. Rev. A [**51**]{}, 2738 (1995). D. Beckman, A. N. Chari, S. Devabhaktuni and J. Preskill, “Efficient networks for quantum factoring,” Caltech preprint CALT-68-2021, quant-ph/9602016. Just how many extra operations are required to complete decoding depends on whether it is done by added circuitry, or by external observation of the state of the auxiliary qubits. When the necessity of re-initializing the auxiliary qubits is included it appears that as few as three extra ‘clock cycles’ should be required for the external-observation schemes since most of the operations act only one single particles and so could probably be executed in parallel. Indeed, it is likely that many of the operations can be run in parallel. For instance, if the block of unitary operations at the beginning and ending of Fig. \[fig2\] are executed in parallel then the entire circuit would be completed in only 20 clock cycles. Optimization depends on the detailed structure of the circuit which was not given in Ref. . C. W. Gardiner, [*Handbook of Stochastic Methods*]{} (Springer Verlag, Berlin, 1983); L. Arnold, [*Stochastic Differential Equations*]{} (Wiley, New York, 1973). S. L. Braunstein, Phys. Rev. A [**45**]{}, 6803 (1992). 
The smallest number for which Shor’s algorithm strictly applies is 15. W. H. Zurek, Phys. Rev. Lett. [**53**]{}, 391 (1984). A. Berthiaume, D. Deutsch and R. Jozsa, in [*Proceedings of the Workshop on Physics and Computation — PhysComp94*]{} (IEEE Computer Society Press, Dallas, Texas). A. Barenco, A. Berthiaume, D. Deutsch, A. Ekert, R. Jozsa and C. Macchiavello, “Stabilization of quantum computations by symmetrization,” quant-ph/9604028.
--- abstract: 'Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent’s intent (e.g. goal) is unobservable to the others. In particular, finding time efficient paths often requires anticipating interaction with neighboring agents, the process of which can be computationally prohibitive. This work presents a decentralized multiagent collision avoidance algorithm based on a novel application of deep reinforcement learning, which effectively offloads the online computation (for predicting interaction patterns) to an offline learning procedure. Specifically, the proposed approach develops a value network that encodes the estimated time to the goal given an agent’s joint configuration (positions and velocities) with its neighbors. Use of the value network not only admits efficient (i.e., real-time implementable) queries for finding a collision-free velocity vector, but also considers the uncertainty in the other agents’ motion. Simulation results show more than 26% improvement in paths quality (i.e., time to reach the goal) when compared with optimal reciprocal collision avoidance (ORCA), a state-of-the-art collision avoidance strategy.' author: - 'Yu Fan Chen, Miao Liu, Michael Everett, and Jonathan P. How [^1]' bibliography: - 'biblio.bib' title: | **Decentralized Non-communicating Multiagent Collision Avoidance\ with Deep Reinforcement Learning** --- Acknowledgment {#acknowledgment .unnumbered} ============== This work is supported by Ford Motor Company. [^1]: Laboratory of Information and Decision Systems, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, USA
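For intuition, the decision step sketched in the abstract, querying the learned value network to select a velocity, can be written schematically as follows (function and attribute names are hypothetical placeholders, not the authors' implementation):

```python
# Sketch of a one-step value-network query for collision avoidance.
# `value_net` is assumed to map the agent's joint configuration with its
# neighbors to an estimate related to the time to reach the goal;
# `propagate` is an assumed forward-simulation helper.
import numpy as np

def choose_velocity(joint_state, value_net, propagate, candidate_velocities,
                    dt=0.1, gamma=0.97):
    best_v, best_score = None, -np.inf
    for v in candidate_velocities:
        next_state = propagate(joint_state, v, dt)    # forward-simulate by dt
        if next_state.in_collision:
            continue                                   # discard unsafe samples
        # One-step lookahead: immediate reward plus discounted value estimate.
        score = next_state.reward + gamma ** dt * value_net(next_state)
        if score > best_score:
            best_v, best_score = v, score
    return best_v
```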
--- abstract: 'Let $A,B$ be disjoint sets of sizes $n$ and $m$. Let ${\mathcal Q}$ be a family of quadruples, having $2$ elements from $A$ and $2$ from $B$, such that any subset $S \subseteq A \cup B$ with $|S|=7$, $|S \cap A| \geq 2$ and $|S \cap B| \geq 2$ contains one of the quadruples. We prove that the smallest size of ${\mathcal Q}$ is $(1/16 + O(1/n) + O(1/m)) n^2 m^2$ as $n,m\to\infty$. We also solve asymptotically a more general two-partite Turán problem for quadruples.' author: - Alexander Sidorenko title: | On Turán problems\ for Cartesian products of graphs --- An $r$-[*graph*]{} is a pair $H=(V(H),E(H))$ where $V(H)$ is a finite set of vertices, and the edge set $E(H)$ is a collection of $r$-subsets of $V(H)$. A subset of vertices is called [*independent*]{} if it contains no edges of $H$. The [*independence number*]{} $\alpha(H)$ is the largest size of an independent subset. The classical Turán number $T(n,k,r)$ is the minimum number of edges in an $n$-vertex $r$-graph $H$ with $\alpha(H) < k$. Consequently, $\binom{n}{r} - T(n,k,r)$ is the largest number of edges in an $n$-vertex $r$-graph that does not contain a complete subgraph on $k$ vertices. The ratio $T(n,k,r) / \binom{n}{r}$ is non-decreasing when $n$ increases, so the limit $t(k,r) = \lim_{n\to\infty} T(n,k,r) / \binom{n}{r}$ exists. The exact values of $T(n,k,2)$ were found by Mantel [@Mantel:1907] for $k=3$, and by Turán [@Turan:1941] for all $k$: $$T(n,k,2) \; = \; mn - \frac{m(m+1)}{2} \: (k-1) \;\;\;\;\;{\rm if}\;\; m \leq \frac{n}{k-1} \leq m+1 \; .$$ In particular, $t(k,2) = 1/(k-1)$. Not a single value $t(k,r)$ is known with $k>r>2$. Giraud [@Giraud:1990] discovered an elegant construction for $r=4$, $k=5$ which yields $t(5,4) \leq 5/16$. This construction was generalized by de Caen, Kreher and Wiseman [@Caen:1988] in the following way. Consider two disjoint sets $A=\{a_1,a_2,\ldots,a_n\}$, $B=\{b_1,b_2,\ldots,b_m\}$, and a $0,1$-matrix $X=[x_{ij}]$ of size $n \times m$. Let $E_{40}$ be the set of all quadruples within $A$, $E_{04}$ be the set of all quadruples within $B$, and $E_{22}$ be the set of quadruples $\{a_i,a_j,b_p,b_q\}$ such that $x_{ip} + x_{iq} + x_{jp} + x_{jq}$ is even. It is easy to see that in the $4$-graph $H=(A \cup B,\: E_{40} \cup E_{04} \cup E_{22})$ any subset of $5$ vertices contains at least one edge. If $n$ and $m$ are approximately equal, and the entries of $X$ are selected randomly and independently, with equal probability of being $0$ or $1$, the expected number of edges in $H$ is $\frac{5}{16} \binom{n+m}{4} + O((n+m)^3)$. A more specific choice of $X$ related to Hadamard matrices provides the best known upper bounds for $T(n,5,4)$ (see [@Sidorenko:1995]). If $n,m\to\infty$, then $\big( \frac{1}{2} + o(1) \big) \binom{n}{2}\binom{m}{2}$ is the minimal size of a system of quadruples with $2$ elements from $A$ and from $B$ such that every quintuple with $3$ elements from $A$ and $2$ from $B$, or vice versa, contains at least one of the quadruples. In [@Caen:1990], de Caen, Kreher and Wiseman defined a broader set of Turán type problems which we describe here for quadruples setting only. Let $Q(n,m,a,b)$ denote the smallest size of a system of quadruples with $2$ elements from $A$ and from $B$ such that every $(a+b)$-set with $a$ elements from $A$ and $b$ from $B$ contains at least one of the quadruples. Obviously, $Q(n,m,a,b) \leq T(n,a,2) \cdot T(m,b,2)$. 
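The claim that every $5$-subset of $A \cup B$ contains an edge of $H$ is easy to confirm by brute force on small instances; the following sketch (our own illustration, not part of the cited constructions) builds the $4$-graph for a random matrix $X$ and checks all $5$-sets:

```python
# Brute-force check of the de Caen--Kreher--Wiseman construction on a small
# instance: every 5 vertices of H = (A u B, E40 u E04 u E22) span an edge.
from itertools import combinations
import random

n = m = 5
A = [('a', i) for i in range(n)]
B = [('b', p) for p in range(m)]
X = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]

edges = set()
edges.update(frozenset(q) for q in combinations(A, 4))        # E40
edges.update(frozenset(q) for q in combinations(B, 4))        # E04
for (_, i), (_, j) in combinations(A, 2):                      # E22
    for (_, p), (_, q) in combinations(B, 2):
        if (X[i][p] + X[i][q] + X[j][p] + X[j][q]) % 2 == 0:
            edges.add(frozenset([('a', i), ('a', j), ('b', p), ('b', q)]))

for five in combinations(A + B, 5):
    assert any(frozenset(quad) in edges for quad in combinations(five, 4)), \
        "found a 5-set with no edge"
print("every 5-set contains an edge")
```

The check is immediate for small $n$ and $m$.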
It was proved in [@Caen:1990] that $Q(n,n,3,3) = q \binom{n}{2}^2 + o(n^4)$ where $1/4 \geq q \geq (3-\sqrt{5})/4 \approx 0.1910$. In this note, we consider the following problem. Let ${\mathcal P}$ be a set of pairs $(a,b)$ with $a,b\geq 2$. Let $A$ and $B$ be disjoint sets with $|A|=n$ and $|B|=m$. Denote by $Q(n,m,{\mathcal P})$ the minimum size of a system of quadruples with $2$ elements from $A$ and from $B$ such that for every $(a,b)\in{\mathcal P}$, every $(a+b)$-set with $a$ elements from $A$ and $b$ from $B$ contains at least one of the quadruples. The cases ${\mathcal P} = \{(2,3),(3,2)\}$ and ${\mathcal P} = \{(3,3)\}$ were studied in [@Caen:1988] and [@Caen:1990], respectively. We will focus on the cases ${\mathcal P}_{k2} = \{(2,k+1),(k+1,2)\}$ and ${\mathcal P}_{k3} = \{(2,k+1),(3,k),(k,3),(k+1,2)\}$. To solve them, we need to generalize the above-mentioned construction with $2 \times 2$ submatrices. [**Definition.**]{} A $2 \times 2$ matrix $[x_{ij}]$ over an additive abelian group is called [*fair*]{} if $x_{11} + x_{22} = x_{12} + x_{21}$. \[th:fair2\] Every $2 \times (k+1)$ matrix $[x_{ij}]$ over ${\mathbb{Z}}_k$ contains a fair $2 \times 2$ submatrix. Among the $k+1$ values $x_{1i} - x_{2i}$ one can find two equal ones. If $x_{1i} - x_{2i} = x_{1j} - x_{2j}$, then columns $i$ and $j$ form a fair submatrix. \[th:fair3\] If $k$ is even, every $3 \times k$ matrix $X=[x_{ij}]$ over ${\mathbb{Z}}_k$ contains a fair $2 \times 2$ submatrix. It is obvious that a fair submatrix remain fair after adding the same row to every row of $X$. Since one may subtract the first row from each of the three rows, we can assume that $x_{11}=x_{12}=\ldots=x_{1k}=0$. If there are two equal entries in the second or in the third row, then $X$ has a fair submatrix. Hence, we can assume that both the second and the third row contain every element of ${\mathbb{Z}}_k$ exactly once. Let $S_n$ be the sum of entries in row $n$, then $S_2 = S_3$. If there are two columns $i$ and $j$ such that $x_{2i} - x_{3i} = x_{2j} - x_{3j}$, then $X$ has a fair submatrix in columns $i,j$ and rows $2,3$. Hence, we can assume that $k$ values $x_{1i} - x_{2i}$ represent the $k$ distinct elements of ${\mathbb{Z}}_k$, but then $S_2 - S_3 = \sum_{i=1}^k (x_{2i} - x_{3i}) = 0+1+\ldots+(k-1) \equiv k/2 \pmod{k}$, a contradiction. Let $G_k$ be a graph whose vertices are functions $f: {\mathbb{Z}}_k\to{\mathbb{Z}}_k\:$. A pair of vertices $\{f,g\}$ forms an edge in $G_k$ if $f-g$ is a bijection. restates the fact that $G_k$ has no triangles when $k$ is even. For odd $k$, the problem of counting triangles in $G_k$ has been solved asymptotically in [@Eberhard:2015]. Let $p(k)$ be the smallest prime factor of $k$. The $p(k)$ functions $f_0,f_1,\ldots,f_{p(k)-1}$, where $f_i(j) = i \cdot j \pmod{k}$, form a complete subgraph in $G_k$. It is very tempting to conjecture that $p(k)$ is indeed the size of the largest clique in $G_k$. We know that this is true for even $k$ and for prime $k$. Computer search confirms that this is also true for $k=9$. \[th:main\] $ Q(n,m,{\mathcal P}_{k2}) = \big( \frac{1}{4k} + O(\frac{1}{n}) + O(\frac{1}{m}) \big) n^2 m^2 $, and if $k$ is even, $ Q(n,m,{\mathcal P}_{k3}) = \big( \frac{1}{4k} + O(\frac{1}{n}) + O(\frac{1}{m}) \big) n^2 m^2 $ as $n,m\to\infty$. Let $A$ and $B$ be disjoint sets of sizes $n$ and $m$. Let ${\mathcal Q}$ be a system of quadruples such that every $(k+3)$-set with $2$ elements from $A$ and $k+1$ from $B$ contains at least one of the quadruples. 
Obviously, for each pair $\{u,v\}\subseteq A$, the number of quadruples in ${\mathcal Q}$ that contain $\{u,v\}$ is at least $T(m,k+1,2)$, hence $Q(n,m,{\mathcal P}_{k2}) \geq \binom{n}{2} T(m,k+1,2)$, and similarly, $Q(n,m,{\mathcal P}_{k2}) \geq \binom{m}{2} T(n,k+1,2)$, which yields $ Q(n,m,{\mathcal P}_{k3}) \geq Q(n,m,{\mathcal P}_{k2}) \geq \big( 1/(4k) + O(1/n) + O(1/m) \big) n^2 m^2 $ as $n,m\to\infty$. To prove the upper bound, consider an $n \times m$ matrix $X=[x_{ij}]$ over ${\mathbb{Z}}_k$. Let $A=\{a_1,a_2,\ldots,a_n\}$, $B=\{b_1,b_2,\ldots,b_m\}$, and let ${\mathcal Q}$ consist of quadruples $\{a_i,a_j,b_p,b_q\}$ such that rows $i,j$ and columns $p,q$ in $X$ produce a fair $2 \times 2$ submatrix. By \[th:fair2\], $Q(n,m,{\mathcal P}_{k2}) \leq |{\mathcal Q}|$, and if $k$ is even, by \[th:fair3\], $Q(n,m,{\mathcal P}_{k3}) \leq |{\mathcal Q}|$. If entries of $X$ are selected randomly, independently and uniformly over ${\mathbb{Z}}_k$, the expected value of $|{\mathcal Q}|$ is $\frac{1}{k}\binom{n}{2}\binom{m}{2}$ which provides the required upper bound. The result mentioned in the abstract follows from \[th:main\] when $k=4$ and ${\mathcal P} = {\mathcal P}_{43}$. Turán problems, where the extremal configurations depend on random maps defined on the set of pairs of vertices, have been studied in a recent series of articles by Rödl and his coauthors (see the concluding remarks in [@Reiher:2018]). Another “partite” version of Turán problem and its connection to the classical problem has been studied by Talbot [@Talbot:2007]. Acknowledgments {#acknowledgments .unnumbered} =============== The author would like to thank two anonymous referees for their careful reading and valuable suggestions. [99]{} D. de Caen, D. L. Kreher, and J. Wiseman: [ *On constructive upper bounds for the [T]{}urán numbers $T(n,2r+1, r)$, Congr. Numer.*]{}, [**65**]{} (1988) 277–280. D. de Caen, D. L. Kreher, and J. Wiseman: [ *A [T]{}urán problem for [C]{}artesian products of hypergraphs, J. Combin. Math. and Combin. Comp.*]{}, [**8**]{} (1990) 17–25. S. Eberhard, F. Manners, and R. Mrazović: [*Additive triples of bijections, or the toroidal semiqueens problem*]{}, https://arxiv.org/abs/1510.05987. G. R. Giraud: [ *Remarques sur deux problèmes extrémaux, Discrete Math.*]{}, [**84**]{} (1990) 319–321, https://doi.org/10.1016/0012-365X(90)90138-8. W. Mantel: [ *Vraagstuk [XXVIII]{}, Wiskundige Opgaven met de Oplossingen*]{}, [**10**]{} (1907) 60–-61. C. Reiher, V. Rödl, and M. Schacht: [ *On a [T]{}urán problem in weakly quasirandom $3$-uniform hypergraphs. J. European Math. Soc.*]{}, [**20**]{} (2018) 1139–1159, https://doi.org/doi:10.4171/jems/784. A. Sidorenko: [ *What we know and what we do not know about [T]{}urán numbers, Graphs Combin.*]{}, [**11**]{} (1995) 179–199, https://doi.org/10.1007/BF01929486. J. Talbot: [ *Chromatic [T]{}urán problems and a new upper bound for the [T]{}urán density of ${\mathcal K}_4^{-}$, European J. Combin.*]{}, [**28**]{} (2007) 2125–2142, https://doi.org/10.1016/j.ejc.2007.04.012. P. Turán: [ *Egy gráfelméleti szélsöértékfeladatrol, Mat. Fiz. Lapok*]{}, [**48**]{} (1941) 436–-453.
--- abstract: 'Despite outstanding success in vision amongst other domains, many of the recent deep learning approaches have evident drawbacks for robots. This manuscript surveys recent work in the literature that pertain to applying deep learning systems to the robotics domain, either as means of estimation or as a tool to resolve motor commands directly from raw percepts. These recent advances are only a piece to the puzzle. We suggest that deep learning as a tool alone is insufficient in building a unified framework to acquire general intelligence. For this reason, we complement our survey with insights from cognitive development and refer to ideas from classical control theory, producing an integrated direction for a lifelong learning architecture.' author: - 'Jay M. Wong' bibliography: - 'developmental.bib' - 'deep-learning.bib' - 'grupen-master.bib' - 'vc.bib' - 'robots.bib' date: 'Received: date / Accepted: date' title: | Towards Lifelong Self-Supervision:\ A Deep Learning Direction for Robotics --- Introduction ============ As roboticists and scientists, is our goal to engineer a solution to work for a particular task in a specified domain or is it to build a system that has the capacity to acquire general intelligence (a notion characterized by [@Legg2007])? An historic example is that success in aviation, with autonomous aerial navigation does not immediately imply success in a task like autonomous driving, which shares some similarities. Or assume that we solve autonomous driving tomorrow—likely an engineering effort like changing our highways, roads, and infrastructure with increased sensory [@Ng2016] will not generalize especially well to robotic tasks in mobile manipulation. Should we then add additional sensors to all possible environments (e.g. residential homes) where autonomous systems are likely to operating in? How is it then to operate in novel, unknown, or disastrous environments? Or in space? In fact, it is under inspirations from these disastrous and unstructured domains, that have given rise to recent technological advances with the DARPA Robotics Challenge (e.g. [@Johnson2015; @Feng2015; @Feng2015b; @Kohlbrecher2015; @Yi2015; @Kuindersma2016; @Dellin2016]). This manuscript presents rather a different direction in thinking, where instead of engineering and redesigning systems to perform competently in novel tasks and domains, perhaps a system that can bootstrap its lifetime of experiences can quickly learn useful solutions in these new areas—we refer to the process by which this long-term knowledge repertoire is acquired as *lifelong learning*. Lifelong learning should not address only novel domains, but also should consider optimizing behavior at existing tasks. Let’s consider the following hypothetical scenario. An autonomous system (e.g. robot, mobile manipulator, unmanned aerial vehicle (UAV), etc.) returns from a mission or accomplishes some task. We are now out of things to provide it. Likely in many cases, the robot is left somewhere in corner of a laboratory until there are subsequent tasks to accomplish. But what if instead, it uses this downtime as an emergent possibility for continuous progress?—and continue to operate, either refining its inherent representations of the world (which have generally, to this date, been hand-defined by human operators) or optimizing its inherent motion primitives (e.g. tuning internal control parameters that perhaps due to wear and tear are now highly suboptimal). 
What if it can learn to build complex motor behaviors, that may prove useful in future missions from exploiting existing structure in its primitives? A misconception is that what we are referring to as lifelong learning does not necessarily imply that the system learns from scratch. It is *not* an end-to-end approach for motor development or task solving. In other words, it does not necessarily imply learning *motor torques* directly from raw sensory input. Instead, its underlying purpose is generalization and *structural bootstrapping*, a term coined by [@Worgotter2015], where existing knowledge is exploited for generalization to novel activities. We draw insight from cognitive development to learn complex motor behavior and structural representations by an account of intrinsic and extrinsic properties (e.g. environmental uncertainties) influencing the system in ways beyond engineering analyses. Lifelong learning implies that systems learn over a lifetime of complex tasks and domains, achieving generalized solutions. This form of generalizability for both domain and task is extremely important for designing robust, high-performance systems. Interestingly, a study by [@Pinto2016] found that convolutional networks achieved higher performance grasping when trained on both grasping and pushing tasks when compared to grasping alone, suggesting that inter-task representation sharing helps build a better understanding of the environment overall. This manuscript provides an initial survey on recent advances in deep learning pertaining to the robotics domain and complements this review with inspirations in cognitive development and control theory outlining that the coalescence of these ideas may pave way for lifelong learning robots. We indicate that deep learning alone is likely incapable of solving all problems in a unified framework. Instead, we discuss a connectionist approach in lifelong self-supervision, drawing ideas from these other areas to tackle the acquisition of general motor intelligence. We formulate a direction by discussing how systems can intrinsically motivate themselves to attack the problem of building accurate representations of structures in the world *and* the development of complex motor behavior *simultaneously*. This particular direction incorporates the use of hierarchies of neural networks under the popularized notion of *deep learning*. We suggest that the use of deep learning as an approximation tool allows robots to encode complex functions that describe physical phenomena concerning interactions with the world and sensory-driven control. From Computer Vision to Robotics -------------------------------- Deep learning has exhibited major success stories in the computer vision domain. This particular tool, popularized by [@Krizhevsky2012] in the ImageNet competition, showed significant promise when learned latent feature representations by a neural network that incorporated a series of convolution, pooling, and densely connected neurons outperformed existing hand crafted feature representations that have otherwise been the standard. The support from computational machinery (GPUs) allowed for efficient parallelization necessary for training neural network structures with massive datasets. Since then, the computer vision community has produced a plethora of deep learning research and have plateaued close to human level capabilities in recognition by building deeper and deeper networks [@Szegedy2015], fine tuning, and introducing extra features (e.g. 
surface normals) [@Madai2016]. However, that is not to say that these powerful, state-of-the-art demonstrations and their solutions are immediately applicable to mobile perceptual systems. There exist a number of fundamental differences between these two domains that hinder trivial compatibility. First, the deep learning solutions in computer vision are generally supervised, and supervised learning is very constrained: in many computer vision tasks, a particular input is associated with a single, correct output. Yet quite evidently, this is not the case in robotics—robots do more than classification. They must perform actions in the world. They must build representations of things they sense and act on these sensory signals, whereas computer vision systems do not necessarily act. Classification helps in identifying the entities in the world, but to accomplish tasks, robots must perform actions and manipulate such entities. The connection between perception and action is essential in building perceptual systems for the real world. On Overgeneralization --------------------- An immediate drawback of these computer vision architectures is that studies have found that image classification suffers from an unfortunate overgeneralization (“fooling”) phenomenon. These classification tasks generally take as input a single sensor modality, in many cases RGB. Where deep learning tools fail is when adversarial RGB examples are constructed in attempts to fool these networks into very incorrect predictions, as presented in studies by [@Szegedy2013; @Nguyen2015; @Carlini2016]. In these works, hill-climbing and gradient ascent methods were used to evolve images that the network matched to very incorrect classes with high probability. Although there is work on making these networks robust to such malicious attacks (e.g. [@Bendale2015; @Wang2016b])[^1], we theorize that the addition of multiple modalities (with more than vision alone) may alleviate such intriguing and devastating phenomena, as the real world obeys certain structures that are locally constraining [@Kurakin2016]. For instance, it is considerably more difficult to fool a predictor that reasons with depth and tactile information using physical adversarial entities. Furthermore, by the universal approximation theorem [@Hornik1989], multilayer feedforward networks are capable of representing arbitrarily complex functions, even those that are robust to such adversarial anomalies. It then becomes a formulation problem in which models must be trained with an adversarial objective [@Goodfellow2015a]. Other work elaborates that networks should also be realized with proper regularization [@Tanay2016]. In fact, a particular variant of neural networks that implicitly enforces regularization may be beneficial. For instance, *DivNet* is an approach that attempts to model neuronal diversity, allowing for efficient auto-pruning, in turn reducing network size and providing inherent regularization [@Mariet2015]. Another remedy realized a particular network layer called the *competitive overcomplete output layer* to mitigate this overgeneralization problem of neural networks [@Kardan2016]. Such a layer forces outputs to compete explicitly with each other, resulting in tight-fitting decision regions around the training data. On Deep Reinforcement Learning ------------------------------ Impressive success stories have been shown in game domains that integrate perception and action.
Namely, work by Google DeepMind has pushed this particular frontier and revolutionized the intersection between deep learning and reinforcement learning. [@Mnih2013; @Mnih2015] built a single learning framework that could learn to play a large number of Atari games beyond human-level competence through a trial-and-error approach, where the only information given to the system was several game frames, the game score, and a discrete control set. A neural network was used to approximate the $Q$ values of a discrete set of actions from several gameplay frames, and the highest-valued action was executed, resulting in a competent gameplay policy. The idea of using a neural network as an approximation to a value function is not itself profound or novel. Dating back two and a half decades, Tesauro attempted to tackle the game of Backgammon—a board game with approximately $10^{20}$ states, making traditional table-based reinforcement learning infeasible. Instead, a layered neural network trained by backpropagation was used to approximate the value function describing board positions and probabilities of victory. Various incarnations of his algorithm showed that both raw encodings and hand-crafted features derived from human task knowledge could be used to learn competent gameplay policies [@Tesauro1992; @Tesauro1995; @Tesauro1995b]. Furthermore, other researchers in the past have leveraged recurrent neural networks to learn $Q$ values using a history of features to form policies [@Jurgen1991; @Lin1992; @Meeden1993]. Recently, the defeat of 18-time Go world champion, Lee Sedol, by DeepMind’s AlphaGo system established a major milestone for deep learning frameworks. Their success was not simply attributed to deep reinforcement learning but also to clever integration with Monte Carlo tree search [@Silver2016]—this shows that deep learning alone may not be the solution to all problems, but that, as a tool, deep learning may be used in complement with other algorithms to produce very powerful results. On Domain Transfer ------------------ Unfortunately, a reason why many of these game domains are successful is that the domain is fully observable. Despite there being some studies that inject partial observability and attempt to tackle this problem with augmented memory structures [@Heess2015], the algorithms developed through gameplay should not be considered game-changing success stories for physical dynamical systems. The real world obeys physics and uncertainties that cannot be perfectly modeled in simulation, and as a result policies learned in simulation have difficulty generalizing to real robot systems. For instance, despite millions of training steps in over hundreds of hours of simulation, visuomotor policies learned on a Baxter simulator fail when given real-world observations on the physical platform [@Zhang2015]. Interestingly, [@James2016] were able to transfer visuomotor policies learned in simulation to a real system; however, their approach required massaging the scene by occluding complex areas with a physical black box, hiding wires, and mimicking the simulation setting. The introduction of *progressive neural networks* shows promise by exploiting deep composition of features amongst columns of domain-specific networks. However, immediate drawbacks are that the approach assumes known task boundaries and exhibits quadratic parameter growth, since a new column is necessary for each novel domain.
Despite alleviating the issue, these networks still need to be trained in the new domain to achieve competence [@Rusu2016]. In later work, [@Rusu2016b] demonstrated domain transfer of features learned in MuJoCo simulation with a Kinova arm to the physical system—the reaching task they showed, however, was highly constrained to a small region of static space and still required several hours of training in the real world. A generalized method for domain alignment has recently been presented to mitigate performance loss when adapting robot visuomotor representations from synthetic to real environments [@Tzeng2015]. The technique, however, requires paired synthetic and real views of the same scene to adapt the deep visual representations. On Self-Supervision ------------------- As a result, there is no escaping the fact that robots must collect their own training data in the real world—to tackle this, various approaches by [@Pinto2015; @Levine2016; @Wong-RSS2016] have been proposed for self-supervision. These studies hint at methods in which robots can label their own experiences and collect potentially massive datasets pertaining to a single task. However, they provide no notion of learning beyond the task at hand—conceptually they are incapable of exhibiting lifelong learning. To address this, we suggest a direction in which robots acquire completely unsupervised visuomotor skills derived from a basis of inherent primitives that are reinforced by the world to generate behaviors that adhere to physical properties of the environment. This hierarchical set of motor behaviors should then be coupled with an intrinsically motivated structure learning module to allow continuous affordance and interaction outcome prediction regarding entities in the world—as a result, this produces permanent artifacts that can be reused for future tasks. Furthermore, the intrinsic motivator must both promise the acquisition of continuously refined forward models and the capacity for the development of arbitrarily complex motor behaviors. This paper is in agreement with [@Silver2013], in which the machine learning community is urged to seriously consider the nature of systems that have the capacity to learn over a lifetime of experiences rather than for some specified task or domain. Conceptually, this direction of thinking is relatable to the notion of *deep developmental learning* primarily proposed by [@Sigaud2016], in which sensorimotor control must first transform raw sensations into a predictive process—we refer to this as a forward affordance model. They also outlined the challenges of integrating behavioral optimization and a curiosity mechanism for deep developmental systems. We suggest attacking these simultaneously through the continuous refinement of motion primitives and the exploitation of their combinatoric sequences and compositions to achieve motor development. To address the latter, we suggest intrinsic motivators driven by information-theoretic measures with regard to the forward affordance model’s predictions of world state. Lastly, we quickly address a *certain debate* dating back two decades between planning and reflexive architectures for the design of robot behavior [@Brooks-FAI87; @Brooks1987]. While we agree that hierarchical behavioral responses eliminate the need for planning, we take a stance that is much like our position on deep learning—that is, to reiterate, a grandiloquent singular architecture is likely infeasible in many senses.
Behavioral responses need not be learned when set rules and instructions are provided (e.g. autonomous missions and manuals where there are precise trajectories through the task)—perhaps this is where we should consider planning for task solving. As such, we firmly believe that there is no single individual, or rather, none of what Rodney Brooks calls *theists*, that is truly correct. Instead, we find that the excitement is in building a system that marries the many *theisms* and relates technologies that are both classical and of recent “hype.” This leads us to a unifying, perhaps even holistic, direction in thinking. To our knowledge, this paper is the first attempt at outlining a lifelong learning direction by integrating deep learning, cognitive development, and classical control theory. Implications for Robotics ========================= In the simplest sense, deep learning is an algorithmic tool that leverages neural networks as nonlinear function approximators in which weighted connections between input and output neurons are trained via error backpropagation. Doing so encodes a function that minimizes the disparity between prediction and truth by building latent representations in the hidden layers. The deep learning domain has been expanding rapidly since its major success in vision, popularized by the outstanding results in the ImageNet competition [@Krizhevsky2012], producing a plethora of general reviews on deep neural networks. As a result, we omit the basics of neural networks, convolutions, autoencoding, regularization, recurrency, and related concepts in this manuscript. The reader is referred to the following general reviews [@Bengio2009; @LeCun2015; @Goodfellow2016] for a thorough overview. Instead, we will critique a number of deep learning frameworks most applicable to the *robotics* domain and outline the drawbacks and skepticism that arise with these recent works. Detection, Estimation, and Tracking ----------------------------------- Tools popularized by the computer vision community have generally been leveraged to tackle individual components in the robotics domain. A number of these individual triumphs used neural networks as a means of approximating otherwise very complex and highly nonlinear functions pertaining to estimation and scene understanding. For instance, tasks relating to rule-based navigation, like autonomous driving in particular, may require understanding the identities of objects in the world. Labeling and scene understanding can be regarded as a segmentation problem given visual input. As such, SegNet, a semantic pixel-wise segmentation encoder-decoder network, was presented to achieve competitive predictive capability [@Badrinarayanan2015]. Other methods attempted to solve a similar task but either relied on generating object proposals [@Noh2015; @Hariharan2015] or required multi-stage training [@Socher2011; @Zheng2015]. Still, these segmentation results were shown to be especially robust for detection problems. In particular, [@Pinheiro2015] built a system to predict segmentation masks given input patches by using *DeepMask*, a neural network that generated such proposals. These proposals were then passed to an object classifier, producing state-of-the-art segmentation results. Extensions to this work gave way to a bottom-up/top-down segmentation refinement network that was capable of generating high-fidelity object masks with a $50\%$ speedup.
The network, *SharpMask*, leveraged features from all layers by first generating a coarse mask prediction and then refining this mask in a top-down fashion [@Pinheiro2016]. Studies have shown that pre-trained convolutional neural network features were useful for RGB-D object recognition and pose estimation [@Schwarz2015]. A consequence of this was a flurry of research on using deep architectures for detection and pose estimation. An approach for pedestrian detection using unsupervised multi-stage feature learning was shown by [@Sermanet2013]. Meanwhile, results indicated high accuracy in human pose estimation with *DeepPose*—likely this is due to deep neural networks capturing context and reasoning in a holistic manner [@Toshev2014]. PoseNet, a convolutional neural network camera pose regressor, was presented with impressive robustness to difficult lighting, motion blur, and different camera intrinsics [@Kendall2015]. This was later extended to a Bayesian model able to provide localization uncertainty [@Kendall2015b], establishing a critical step forward for mobile robots, especially given its close ties to algorithms that operate under uncertainty. In work by [@Wilkinson2015], a pretrained convolutional network was used to predict object descriptions and aspect definitions pertaining to sensory geometries in relation to objects. Unfortunately, class and object descriptors were selected from arbitrary pretrained AlexNet layers, and the overall framework relied on a number of thresholds that are difficult to define. Others continue to investigate the use of these tools to learn useful features for contexts like laser-based odometry estimation [@Nicolai-RSS2016]. In research by [@Byravan2016], deep networks were used to segment rigid bodies in the scene and predict motions of these entities in SE(3). As hyperparametric approximators, these networks have found success in the tracking regime, in which recurrent neural networks were used to filter raw laser measurements and were shown to infer object location and identity in both visible and occluded scenes [@Ondruska2016]. This technique is described as a neural network analogue of Bayesian filtering. In addition, by learning to track with a large set of unsupervised data, a new task like semantic classification could be learned by exploiting rich internal structure through inductive transfer [@Ondruska-RSS2016]. An approach was presented by [@Song2015] where 3D bounding boxes of objects were generated through a methodology they call *Deep Sliding Shapes*. Given Kinect images, they learned a multiscale 3D region proposal network that is fully convolutional and identifies interesting regions in the scene. Then, an object recognition network was learned to perform 3D box regressions. A sensory-fusion architecture that incorporated the use of LSTMs to capture temporal dependencies has been presented to anticipate and fuse information from multiple sensory modalities. This Fusion RNN was demonstrated as part of a maneuver anticipation pipeline that outperformed the state of the art on a benchmark consisting of a dataset of $1180$ miles of natural driving [@Jain2016]. Similarly, [@Krishnan2015] developed a deep network capable of approximating a broad class of Kalman filters, extending them to arbitrarily complex transition dynamics and emission distributions.
Yet, despite outstanding results in classification, identification and pose estimation, and semantic segmentation, systems that perform actions in the world still require a connection from detected entities in space to motor commands. In part, these research solutions only attempt to develop robust perceptual interfaces to autonomous systems, but however, a key, perhaps, paramount module is one that reasons over sensations and executes useful motor control. As such, perception alone may not be the answer, but somewhere in the intersection of perception, cognition, and action. From Perception to Motor Control -------------------------------- A number of studies looked into introducing the predictive power of neural networks in place of traditional feature extracting perceptual pipelines to solve detection and control problems with physical robot experiments. In particular, in place of hand-designed features like those of [@Kragic2003; @Maitin2010] for grasping, [@Lenz2015] presented a deep architecture to learn useful feature representations for grasp detection. A two-step cascaded network system was shown where top detections were re-evaluated by the second network, allowing for quick pruning of unlikely candidate grasps. The network operated on RGB-D input and successfully generalized to execute grasps on both a Baxter and PR2 robot. Likewise, a grasp detection system was demonstrated by [@Wang2016], that mapped RGB-D images to gripper grasping pose by first segmenting the graspable objects from the scene using geometric features (for both objects and gripper). They then applied a convolutional network to the graspable objects which used a structure penalty term to optimize the connections between modalities. Similarly, in work presented by [@vanHoof2016], deep autoencoders were used to learn compact latent representations for reinforcement learning to form policies describing tactile skills. These feedback policies were learned directly from high-dimensional space under iterative on-policy exploration and vastly outperformed a baseline policy learned directly from the raw sensor data. Contrary to these works, instead of predicting the single best grasp pose from a given image, [@Johns2016] demonstrated a convolutional network that predicted a score for every possible grasp pose, such a value function described what they denote as a *grasp function*. They discussed that such a method can attribute to robust grasping by smoothing this grasp function with a function describing pose uncertainty. Although in their demonstrations, it appears this particular approach achieved some-$80\%$ grasp success rate, fundamental assumptions are that the object is isolated in the scene and the grasping device is a parallel jaw gripper. In a different approach, [@Varley2016] showed that convolutional networks can be used for shape completion given an observed point cloud. In their method, the network learned to predict a complete mesh model of objects (filling in the occluded regions of the scene), which was then smoothed and used to support grasp planning. The use of convolutional neural networks as a means for automatic feature extraction has been employed in imitation learning paradigms where actions are learned for an autonomous navigation task directly from raw visual data. The network is encoded with no initial knowledge of the task, targets, or environment in which it is acting in. 
In a simulated study, [@Hussein2016] showed that using *deep active learning* can significantly improve the imitated policy with a small number of samples—this is accomplished by the network querying a teacher for the correct action to take in situations of low confidence. Unfortunately, this framework relies on the fact that there is a teacher present with competent knowledge of the domain and appropriate actions. A framework using a time-delay deep neural network was shown by [@Noda2013] that both fused multimodal sensory information and learned sensorimotor behaviors simultaneously. They demonstrated that a single network was able to encode six object manipulation behaviors dependent on temporal sequence changes in the environment and the displayed object. Since robots operate in dynamic and partially observable environments, selecting the best action is nontrivial, as it depends on the time history of interactions (or sequences of past actions). As such, ways to learn these optimal policies generally rely on a trial-and-error paradigm via reinforcement learning. For example, this is especially present in the recent successful demonstrations of Atari gameplay [@Mnih2013; @Mnih2015]. Despite being demonstrated through an artificial agent, the Deep $Q$ Networks presented have immediate implications for robot control; however, they may not be effective in physical domains, especially when rewards are sparse and efficient exploration becomes essential. A method proposed by [@Lipton2016] demonstrated exploration by Thompson sampling, where using Monte Carlo samples from a Bayes-by-Backprop neural network provided improvement over the standard DQN approach that relied on either $\epsilon$-greedy or Boltzmann exploration. In a particular study, [@Finn2016a] proposed to use neural networks as a tool to learn arbitrarily complex and nonlinear cost functions for inverse optimal control problems, allowing systems to learn from demonstration using efficient sample-based approximations. Their methods were demonstrated on simulated tasks as well as on a mobile manipulator. Another study presented a belief-driven active object recognition system that first used a pretrained AlexNet to derive a belief state. A Deep $Q$ Network was then incorporated to actively examine objects by selecting actions (in-hand manipulations) that minimized overall classification errors, resulting in an efficient policy for recognizing objects with high levels of accuracy [@Malmir2016]. Instead of training the action selection network over the pretrained convolutional network, this system was later extended to be trainable end-to-end [@Malmir2015]. In contrast to the large body of work that uses convolution to extract useful feature representations at the output layer of a neural network and uses these features to associate control, [@Ku2016] demonstrated a different approach. They showed that using intermediate features of a convolutional network was sufficient for gross and finer-grained manipulation supporting palm and finger grasps. The technique localized features corresponding to high activations given point clouds of simple household objects through *targeted backpropagation*. Using this, they presented a hierarchical controller composed of finger and palm pre-posture positions on the R2 robot; however, as in the work by [@Wilkinson2015], the specific layer from which to obtain information is still human-defined.
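To make the idea of reading out such intermediate convolutional activations concrete, the following is a minimal sketch (not the pipeline of [@Ku2016] or [@Wilkinson2015]) that hooks an arbitrary, human-chosen layer of a stock torchvision AlexNet and locates its strongest spatial response; the layer index and the random input are illustrative assumptions only.

```python
# Minimal sketch: read an intermediate convolutional activation map via a forward
# hook and find its strongest response. The choice of layer is arbitrary here --
# precisely the kind of human-defined decision criticized above.
import numpy as np
import torch
import torchvision.models as models

net = models.alexnet()        # random weights; a pretrained model would be used in practice
activations = {}

def save_activation(module, inputs, output):
    activations["feat"] = output.detach()

net.features[8].register_forward_hook(save_activation)   # assumed mid-level conv layer

image = torch.randn(1, 3, 224, 224)                       # stand-in for an RGB observation
with torch.no_grad():
    net(image)

feat = activations["feat"][0]                              # shape: (channels, H, W)
c, y, x = np.unravel_index(int(feat.argmax()), feat.shape)
print(f"strongest response: channel {c} at spatial location ({y}, {x})")
```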
### From Pixel to Motion – {#from-pixel-to-motion .unnumbered} Recently, an end-to-end strategy for visuomotor control popularized by [@Levine2015] has shown promise for deep learning in robotics. Using optimal control policies as a supervisory signal for neural networks, they demonstrated task learning relevant to local spatial features obtained through convolution. More importantly, this showed promise for an end-to-end training approach to obtain visuomotor policies, producing a network that commanded motor torques directly from raw visual input. [@Finn2015] proposed the use of deep spatial autoencoders to acquire informative feature points that correspond to task-relevant positions. The method learns to associate motions with these points using an efficient locally-linear reinforcement learning method—because the resulting policies are based on these learned feature points, the robot is capable of dynamic manipulation in a closed-loop manner. Similar approaches to visuomotor control have been demonstrated by [@Tai2016], who learned end-to-end exploration policies on a mobile robot, and by researchers at NVIDIA for self-driving cars, where an autonomous vehicle was driven by vision alone through an end-to-end system [@Bojarski2016]. It appears that learning motor control directly from raw sensory signals induces robustness and produces a control solution that is otherwise too complex to hand-design. Likely, action outcomes are not deterministic and pose estimation further introduces uncertainties, whereas these convolutional learning strategies aim to resolve motor commands straight from raw percepts—learning both useful feature representations and control policies simultaneously. An issue with end-to-end methods, and deep reinforcement learning in general, is that they demand extremely large sets of data. For such reasons, Guided Policy Search (GPS) [@Levine2015] attempts to bias training to reduce the number of instances needed and to acquire visuomotor policies by means of a supervised learning problem. A reset-free GPS algorithm was introduced by [@Montgomery2016b] to address its requirement for a consistent set of initial states. Meanwhile, [@Chebotar2016] later extended GPS to account for highly discontinuous contact dynamics through a path integral optimizer and on-policy sampling to increase the diversity of instances, which they argued was crucial for high generalizability. In the original formulation of GPS, the learning problem is decomposed into a number of stages. First, full-state information is used to create locally-linear approximations to the dynamics around nominal trajectories; then optimal control is used to find locally-linear policies along those trajectories. Lastly, it uses supervised learning with a Euclidean loss objective to create complex nonlinear versions of these policies that reproduce similar optimized trajectories. In other words, GPS iteratively optimizes local policies (concerning specific task instances) which are then used to train a global policy that is general across instances. However, to do this, it requires data on the physical system—robots must collect data for training to refine the network originally trained by guided policies in the form of optimal trajectories. To collect massive sets of data, one may consider having the robot obtain its own experiences without the need for meticulous human labeling or supervision.
Addressing the problem of self-supervision, work by [@Pinto2015] showed that robots can self-supervise themselves to learn visuomotor skills without manual labels. In their experiments, they demonstrated remarkable robustness where a Baxter robot self-labeled $50,000$ grasp examples in over $700$ hours of manipulation. Under similar inspirations, [@Levine2016] used a distributed system consisting of $14$ robot manipulators to collect a massive dataset ($800,000$ grasps) over the course of two months for grasping and eye-hand coordination. However, both of these self-supervised methods considered specifying a heuristic to classify grasp examples. Unfortunately, these heuristics are human-specific and somewhat arbitrary. [@Wong-RSS2016] developed a comparable self-supervision approach with a key distinction being the use of feedback from closed-loop motion primitives as a supervisory signal rather than these human-specific parameters. Still, a fundamental problem for all of these studies is that they are demonstrated and tailored for a single specific, predefined task. A study by [@Pinto2016] found that learning over a number of tasks helps discover richer representations of the environment, thus outperforming models that have otherwise been trained on a single task alone. But an open research problem is to consider methods by which robots can motivate themselves to select useful tasks to learn from. In a study with ideas analogous to adversarial training, [@Pinto2016b] demonstrated that having a protagonist-antagonist paradigm resulted in more effective learning of visuomotor policies. They discovered that having an antagonist robot that aimed to prevent the protagonist from grasping, resulted in learning higher performance grasping, due to the necessity to learn a robust policy to overcome this adversary. In summary, they emphasize that not all data is the same. Contrary to the massive $800,000$ grasping dataset presented by [@Levine2016], they found systems that attack harder examples tend to achieve faster convergence and higher performance. ### On Abstract Parametrized Skills – {#on-abstract-parametrized-skills .unnumbered} End-to-end methods produce amazing results by learning visuomotor torque-level control policies straight from raw pixel information. However, two immediate drawbacks are critical for autonomous systems. Firstly, the robot can only learn very task specific motions rather than abstract notions of skills and representations reuseable throughout its lifetime. In particular, GPS allows the robot to quickly encode visuomotor policies by guided trajectories acquired through optimization, but since they operate over joint torques it is difficult to decipher abstract skill boundaries[^2]. Secondly, by operating over joint torques, the network loses control guarantees and parametrization insight that abstract skills derived from control-theoretic approaches may provide[^3]. By learning at the lowest level of motor units (e.g. in configuration space over joint torques), systems need a massive set of examples and training steps to cover this space. For instance, even for a simple 2D spatial reaching tasks in a confined $40cm\times30cm$ area using a $6$ DOF Kinova arm with three fingers was said to require over $50$ million steps via an end-to-end (raw pixel to joint space mapping) paradigm [@Rusu2016b]. For such reasons, learning over a parameterized space abstracts away these basis motor units and we theorize will accelerate learning. 
The notion of control guarantees is of chief importance from an industrial and product-delivery perspective. Especially in scenarios where human factors are involved, or in other high-fidelity situations, systems that acquired expertise through learning over a history of experience must exhibit certified guarantees. Indeed, the maximum-activation visualization approach, “deep dream,” can be used to identify convolutional features in an attempt to make sense of the network’s latent representations [@Mahendran2016], but we are concerned with a stronger sense of guarantee, especially regarding control derived from the output of these networks. In autonomous driving, for example, the system must be analyzed so that one can establish certified guarantees, up to sensor noise, that regardless of input the vehicle will never attempt to issue commands that result in collision with entities in the world. Such analysis is extremely difficult if the commands are over low-level specifics like wheel torques. Rather, analysts can reason easily in Cartesian space—perhaps, then, the robot should learn sets of abstract skills that operate with goals that are easily interpretable for validation and certification. As such, the direction we should look into may live closer to the realm of acquiring such parametrized skills. While there exists a plethora of work on learning these skills (e.g. [@Konidaris2009; @DaSilva2012; @Masson2015]), we believe that, to better formulate a principled approach for lifelong learning cognitive systems, we should investigate learning through the lens of cognitive psychology—this allows us to better understand the development of cognition and action in living organisms. Insight from this becomes fundamental for drawing computational analogs of developmental processes to better design artificial learning systems. Complex Sensorimotor Hierarchies ================================ The artificial neural network architecture as an explanatory means to a connectionist model of cognition and action is not a concept that resides solely in computation. In fact, a number of cognitive psychologists showed intrigue when these networks were in their infancy, dating back two decades [@Rumelhart1998]. Most prominently, [@Thelen1996] describe such models as exciting in the sense that there exists only process. They indicate that the essence of behavior is distributed among numerous individual units that together, in the strengths of their connections, describe behavior. They are plastic and modify themselves through dynamical processes with the world. In particular, [@Thelen1996] argue against the notion that these “neural networks contain some privileged icon of behavior, abstracted from complex motivation and environmental contexts in which it is performed.” In other words, the theory that networks encode a particular context-independent behavior (something like a Central Pattern Generator, CPG) is, in their view, entirely incorrect. They emphasize that behavior is context-specific, even in the case of CPGs—many studies that elicit such behavior and draw such conclusions are based on an impoverished form of induced behavior. In this regard, it may be true that behavior exists in some innate form that, when given appropriate stimuli, will generate seemingly high-level actions—resulting in the misclassification of there being such generators.
Thelen and Smith conclude that the development of these into complex behavior is entangled in motivation and context-specificity. Following this notion, the work that has been discussed to this point fails to tackle this intertwined cobweb of action, environment, and motivation. Although reinforcement learning paradigms do describe the acquisition of action through the system’s interaction with entities in the world in context-specific situations, many of these studies fail to indicate principled motivators. Rewards are generally task-specific and user-defined—*not* an inherent property derived by the system itself in its interpretation of *situational contexts*. Admittedly, even promising developmental frameworks like the ones outlined by [@Mugan2012; @Grupen2005] are guilty of ad hoc reward structures. Most importantly, however, through [@Grupen2005]’s outline of *figurative schemata* that organize into the development of robot behavior, they emphasize the need for control knowledge to be represented in a manner that supports generalization. Similar aspirations are found in the *action schema* framework [@Platt-ICDL06]. The ability to construct and reuse learned behaviors in a general manner is of fundamental importance—thus, we share a similar view on the acquisition of motor behavior. These views were originally derived from the proposal under [@Piaget1952]’s account, where human infants exhibit a sensorimotor stage that lasts approximately $24$ months while producing control knowledge that supports generalization and reuse. Such reuse and organization implies underlying hierarchies of motor behavior. In comparable work, [@Heess2016] emphasized the importance of hierarchical controllers that operate at different time scales in support of modularity and generalizability. Their work showed a promising step forward in locomotive skill transfer between a number of simulated bodies with many degrees of freedom, wherein high-level controllers modulated low-level motor skills which emerged from pretraining. Other work has looked into building *implicit plans* or macro-actions by interaction alone using a recurrent neural network structure [@Mnih2016]. However, these works do not necessarily discuss how these temporal hierarchies of options, skills, or macro-actions interact with intrinsic motivators, where systems derive their own reward paradigms and build continuously extended motor hierarchies. [@Kulkarni2016] presented a hierarchical-DQN framework which integrated hierarchical value functions and intrinsic motivation by having a top-level function learn policies over intrinsic goals and a lower level learn policies to achieve these goals. They suggested that intrinsic motivation be derived from the space of entities and relations, which is sufficiently bounded and finite in Atari games but may exhibit explosive growth in the physical world. Future work indicated a connection to deep generative models—wherein, in this paper, we derive intrinsic motivation from a deep generative dynamics model. In the following subsections, we summarize work that provides systems with the ability to learn complex sensorimotor hierarchies resulting from experience and interaction with the world. Complex action-related behaviors are expressed as motor hierarchies emerging through the combinatoric sequencing and composition of actions that, at the lowest level, are learned by associating sensory input to resolve motor primitives.
As such, we begin at the lowest level of motor development, on how a robot can associate sensor input to activate closed-loop motor primitives. Next, we investigate learning to bootstrap these primitives to develop more complex behaviors. And lastly, we incorporate techniques present in the literature to suggest the learning of control goals that evolve primitive reflexes to intentional goal-oriented behaviors. In Section \[sec:affordance\], we describe an intrinsically motivating paradigms that leverages the system’s ability in its understanding of the world and of its own inherent actions and representations through *control contexts*—such motivators are to drive the processes that govern the development of these sensorimotor behaviors and cognitive representations. A plausible unifying framework is illustrated in Figure \[fig:overall\] by piecing together various selected studies currently in the literature. ![A candidate framework marrying various learning modules that have been individually presented in the literature[]{data-label="fig:overall"}](images/framework.pdf){width="48.50000%"} We now quickly elaborate on each of these pieces and on their respective subsections in this manuscript. Section \[subsec:prims\] describes how to activate motor primitives given sensory input, deriving a *control context*—it discusses an approximation to the function $f_{\gamma_i^T}: s\mapsto \gamma_{\phi_i|^\sigma_\tau}^T$. Section \[subsec:complex\] investigates how to build hierarchies of complex behaviors by combining existing controllers to construct new ones under the current state description $\Gamma^T(s) = \gamma_1^T \cup \gamma_2^T \cup \cdots \cup \gamma_n^T $. This is described by learning the policy $\pi_{n+1}^*: \Gamma^T(s) \mapsto \{\phi_1|^\sigma_\tau, \phi_2|^\sigma_\tau, \cdots, \phi_n|^\sigma_\tau\}$. The networks proposed here approximates the value function $f_{\pi_{n+1}^*}: \Gamma^T(s) \mapsto V^{\pi^*_{n+1}}(\Gamma^T(s))$, where $\Gamma^T$ is the deep control context at time $T$ encompassing all control state descriptions $\gamma^T$. Section \[subsec:param\] describes methods to learn continuous control parameters for controllers $\phi_1|^\sigma_\tau, \phi_2|^\sigma_\tau, \cdots, \phi_n|^\sigma_\tau$, thus approximating the function $f_{\rho_i}: s \mapsto \rho_{\phi_i|^\sigma_\tau}$. Since both the learning of $f_{\pi_{n+1}^*}$ and $f_{\rho,\phi_i|^\sigma_\tau}$ is by trial and error, it requires that environmental reward is defined. We refer to the system’s level of understanding regarding entities in the world and of its own actions to establish reward. As such, Section \[subsec:dynamics\] discusses learning to predict the transition dynamics of the world by approximating $f_{W}: s, \rho_{\phi_i|^\sigma_\tau} \mapsto s'$. Section \[subsec:reward\] outlines a plausible reward structure derived from affordance predictions. And finally, Section \[sec:planning\] describes how a system can exploit these learned behaviors and representations to solve useful tasks and mission in the world. Activating Motion Primitives {#subsec:prims} ---------------------------- In their revolutionary book, [@Thelen1996] outlines the misinformations of contemporary theories in cognitive development, suggesting that it is not the case that cognitive and motor skills emerge linearly through development but these primitive forms of behaviors are inherently ingrained. They are reinforced by interaction with the environment to seek dynamically stable solutions. 
As a result, skills emerge from context-specific situations that are afforded by the world. It appears that local circuitry within the spinal cord mediates a number of closed-loop sensorimotor reflexes—for instance, the spinal stretch reflex [@Purves2001]. In fact, it has been observed that all developing human infants exhibit similar chronicles of reflexive motor behaviors. The Central Nervous System is organized according to *movement patterns* [@Aronson81]—with its most basic form being the reflex[^4]. Under the notion of epigenetic developmental theory, primitive reflexes, expressed as neuro-anatomical structures, are the basic building blocks of behavior. In this regard, complex behavior emerges from combinatoric sequences of primitive control in response to reinforcement by the world. These primitive forms of behavior collectively describe an epigenetic computational basis by which complex control actions are derived [@Grupen2005]. The ability to convert these developmental reflexes into intentional actions remains a fundamental cognitive process [@Zelazo1983] and has been briefly explored in work by [@Wong-RSS2016], in which a deep network controlled closed-loop motion primitives. This was accomplished through the activation of output neurons describing control state derived from time-varying dynamics. In short, this reduces sensorimotor control to a supervised learning problem in which the state of the world, described through sensor modalities (vision was shown in their study), is used to predict the probability of controller state transitions. Consequently, the additional complexity in the output neurons simplified learning, reducing the required network size and training data. The primitive controllers used in their study are all instances of negative-feedback stabilized systems. In other words, these motor primitives are regulators that minimize the error between sensed and desired values. Consider a simple proportional-derivative heading ($\theta$) controller that yields a single-degree-of-freedom dynamical system represented by the second-order differential equation in canonical form, $$\ddot{\theta} + 2\zeta\omega_n\dot{\theta} + \omega_n^2\theta = 0$$ where $\zeta = B/(2\sqrt{KI})$ is the damping ratio and $\omega_n=\sqrt{KI}$ is the natural frequency. It is assumed here that $K$, $B$, and $I$ are all constants representing the proportional gain, the derivative gain, and the scalar moment of inertia around the rotation axis, respectively, under a canonical spring-mass-damper model [@Hebert2015]. Closed-loop feedback controllers such as these have well-understood proofs of stability. For instance, a proportional-derivative (PD) controller such as the one described here is provably stable [@Lyapunov1992]. This property is established for systems formulated as harmonic oscillators similar to the above second-order differential equation. Convergence results have been proven by [@Coelho_JRS97] for closed-loop controllers on regular convex prismatic objects when two closed-loop controllers are executed in a particular sequence. Experiments were later shown on a robot manipulator agreeing with such convergence guarantees [@Coelho-Thesis01][^5]. Likewise, optimal controllers described by regulators like the linear quadratic regulator (LQR) and its variants fall into this categorization as well. In fact, dynamically balancing robots like the uBot platforms [@Kuindersma09:KnuckleWalking; @Ruiken2013] implement a variant of LQR.
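As a small worked example of the second-order error dynamics above, the following sketch simply integrates the heading controller forward in time using the expressions for $\zeta$ and $\omega_n$ given in the text; the gains and inertia are illustrative assumptions, not values from any cited platform.

```python
# Worked example of the canonical second-order heading-error dynamics:
#   theta_ddot + 2*zeta*omega_n*theta_dot + omega_n^2*theta = 0
# with zeta and omega_n derived from assumed gains K, B and inertia I.
import numpy as np

K, B, I = 4.0, 2.0, 1.0                  # proportional gain, derivative gain, inertia (assumed)
omega_n = np.sqrt(K * I)                 # natural frequency, per the expression in the text
zeta = B / (2.0 * np.sqrt(K * I))        # damping ratio (0.5 here: underdamped)

theta, theta_dot, dt = 1.0, 0.0, 0.01    # initial heading error of 1 rad
for _ in range(2000):                    # integrate 20 seconds of closed-loop behavior
    theta_ddot = -(2.0 * zeta * omega_n * theta_dot + omega_n**2 * theta)
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt

print(f"zeta={zeta:.2f}, omega_n={omega_n:.2f}, heading error after 20 s: {theta:.4f} rad")
```

The converging error trajectory of this toy regulator is the kind of time-varying dynamic signal that the Control Basis framework discussed below uses to characterize controller state.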
To reiterate the notion of learning abstractions, we strongly suggest that, rather than learning a balancing policy from scratch, a better direction is to consider such a closed-loop controller as a skill and to employ learning architectures at the skill level of abstraction. Namely, in the spirit of the flurry of work on autonomous vehicles, we emphasize that motion planners and path-tracking procedures generally fall under this umbrella notion as well. In particular, path tracking in navigation, in the most primitive sense, leverages a heading and a longitudinal controller like those described by [@Hebert2015]. Potential-field methods for path planning [@Ge2002; @Wang2000; @Khatib85], like harmonic function path planning [@Connolly1993], have shown success dating back over two decades of research. Especially with recent advances in GPU parallelization for efficient relaxation and the logarithmic transformations to prevent diminishing gradients [@Wray2016], these classical methods still remain powerful path generators. An elegant control-theoretic, relaxation-based method for velocity planning presented by [@Hebert2015] is shown to run in time linear in the path length, resulting in minimal path deviations and a maximal performance envelope. Many of these navigation solutions are fast and robust—so instead of replacing them completely with a learned approximation, we suggest treating the resulting plans like motion primitives or inherent behaviors in the context of forming complex visuomotor hierarchies. Likewise, solutions from other motion planners like RRTs or A\* variants can be interpreted as motion primitives, given that their trajectories can be used to describe some transient, converged, or goal-completion evaluation. Under this broad umbrella, even dynamic motion primitives (DMPs) [@Ijspeert_NIPS03] and other powerful optimization-based methods for locomotion, like those developed during the DARPA Robotics Challenge (e.g. [@Feng2015b; @Kuindersma2016]), fall into this categorization of plausible motor primitives. Motion primitives like these can be formalized under the *Control Basis framework*, in which the interaction between the embodied system and the environment is modeled as a dynamical system, allowing the robot to evaluate the status of its actions as a state describing a time-varying control system. These controllers $\phi|^\sigma_{\tau}$ consist of a combination of potential functions ($\phi \in \Phi$), sensory inputs ($\sigma \subseteq \Sigma$), and motor resources ($\tau \subseteq \mathcal{T}$) [@Huber1996]. Controllers achieve their objective by descending along gradients in the potential function $\nabla\phi(\sigma)$ with respect to changes in the value of the motor variables $\partial{{\boldsymbol{u_\tau}}}$, described by the error Jacobian $J=\partial\phi(\sigma)/\partial{{\boldsymbol{u_\tau}}}$. References to low-level motor units are computed as $\Delta u_\tau = \kappa J^\# \Delta\phi(\sigma)$, where $\kappa$ is a control gain, $J^\#$ is the pseudoinverse of $J$ [@Nakamura1990], and $\Delta\phi(\sigma)$ describes the difference between the reference and actual potential [@Sen2014]. The time history, or trajectory, of the dynamics ($\phi, \dot{\phi}$) that result from interactions with the environment while executing controllers has been shown to have predictive capability regarding the state of the environment.
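The control-basis update above can be made concrete with a small numeric sketch; the potential, sensor error, and Jacobian below are toy assumptions for illustration, not quantities from the cited work.

```python
# One control-basis style update, Delta u = kappa * J# * Delta phi, for a toy
# quadratic potential over a 2-D sensed error and an assumed 3-DOF motor Jacobian.
import numpy as np

kappa = 0.5
sigma = np.array([0.30, -0.20])          # current feature error (sensed minus reference)
phi = 0.5 * sigma @ sigma                # quadratic potential phi(sigma)
dphi_dsigma = sigma                      # gradient of phi with respect to the sensor signal

J_sigma_u = np.array([[0.8, 0.1, 0.0],   # assumed sensitivity of sigma to three motor variables
                      [0.0, 0.6, 0.3]])
J = dphi_dsigma @ J_sigma_u              # error Jacobian d(phi)/d(u), one row of length 3

delta_phi = 0.0 - phi                    # drive the potential toward its minimum (reference 0)
delta_u = kappa * np.linalg.pinv(J[None, :]) @ np.array([delta_phi])
print("motor reference update Delta u:", delta_u.ravel())
```

Iterating such updates while logging $(\phi, \dot{\phi})$ yields exactly the kind of controller trajectory whose discrete status (transient, converged, quiescent) is classified just below.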
It was originally shown by [@Coelho1998] that dynamics elicited by abstract actions in the form of controllers serve as important identifiers for the current *control context*—one of a finite set of dynamic models that capture system behavior. The state description $\gamma^t$ for a particular control action $\phi|_\tau^\sigma$ at time $t$ is derived directly from the dynamics ($\phi, \dot{\phi}$) of the controller given a specified control goal $g$ such that, $$\begin{aligned} \gamma^t(\phi|_\tau^\sigma) = \begin{aligned} \begin{cases} \mbox{\textsc{Undefined}}: &\phi \mbox{ has undefined reference} \\ \mbox{\textsc{Transient}}: &|\dot{\phi}| > \epsilon\\ \mbox{\textsc{Converged}}: &|\dot{\phi}| \le \epsilon\mbox{ and } \phi \mbox{ reaches } g \\ \mbox{\textsc{Quiescent}}: &|\dot{\phi}| \le \epsilon\mbox{ and } \phi \mbox{ fails to reach } g \end{cases} \end{aligned} \end{aligned}$$ Collectively, the state descriptions over all abstract actions describe the control context. This collection can be thought of as a compressed variant of sensorimotor contexts [@Hemion2016b] that is specifically grounded to particular control interactions through the function approximator $f_{\gamma^T}:s \mapsto \gamma^T_{\phi|^\sigma_\tau}$; the hierarchy in this case, however, is achieved by learning new control programs to discover more abstract contexts. Motor units that operate at the torque or joint level are abstracted away into high-level parameterized (continuous and goal-oriented) motion controllers that achieve particular objectives. As such, a surrogate of control context is produced by inferring the time-varying dynamics—specifically, Figure \[fig:control-context\] illustrates the concatenation of state descriptions over several $\gamma^T$-networks originally presented by [@Wong-RSS2016] to formulate a *deep control context*, which has implications for intriguing reinforcement learning methods that have otherwise been uncommon in this setting. ![A *deep control context* defined over the collection of control state predictions from each $\gamma^T$-network[]{data-label="fig:control-context"}](images/stacked.pdf){width="47.50000%"} Composition of Motor Behaviors {#subsec:complex} ------------------------------ Learning these $\gamma^T$-networks results in a set of sensorimotor policies for activating motor primitives—such a policy is a sensory-driven predictive process that activates primitives when appropriate state descriptions are predicted by the network. A fundamental drawback is that this reduces the set of available behaviors of a system to some static set preordained by the initial primitive “reflexes”—in fact, it is exactly the size of the primitive set. This is not the case in development, however, since the set of skills and their competence grows [@Thelen1996]. As such, both robots and humans must learn how to make use of their innate primitives to build new, complex motor behaviors. Previously, we discussed how infants derive complex actions through developmental chronicles of emergence and inhibition of primitives. In the same sense that complex motor behaviors emerge from combinatoric sequences of primitive reflexes in the human infant, a computational composition and sequencing of primitive closed-loop motor units forms high-level control policies. This idea is not new. The idea of acquiring hierarchies of motor policies through reinforcement learning over a basis of existing motor controllers dates back two decades to work by Huber et
al (See, [@Huber-ICRA96; @Huber-IJCAI97; @Grupen2005]). This framework was first shown on a quadruped walking robot, where it quickly learned walking gait behaviors given a set of primitive kinematic conditioning controllers in a state space constrained by valid tripod stances. They outlined the learning of a locomotive gait schemata over a basis of quasistatic kinematic conditioning controllers with time-varying constraints represented as artificial maturation processes and domain requirements. Unfortunately, these constraints are defined a priori and are inherent in their developmental scheduler or assembler. Rewards are closely tied to phases of development and known task specifications. For instance, in particular developmental stage the reward attributed to the maximization of angular velocity. Regardless, the system was able to learn a compilation of control knowledge through environmental, task relevant rewards under traditional $Q$-learning over the control context or concatenated state descriptions over the basis of innate control programs. Emerging schemata consisted of rotate, translate, and maze traversal behaviors. Expressiveness was increased by using a method to allow for concurrent control composition [@Grupen2005]. Later, this was extended to a number of other applications, for instance in robot manipulation and grasping domains producing a series of profound and sophisticated control paradigms namely in work by [@Platt_IROS02; @Platt_ICRA04; @Platt-Humanoids05; @Platt-ICDL06; @Platt-TRO10]. To clarify, the forming of new control policies can be formulated under the *options* framework [@Sutton-AIJ99; @Stolle2002], where primitive sensorimotor skills are seen as possible high-level abstracted options. New control policies do not emerge by sequencing innate controllers alone, but also can emerge by *combining* many controllers, or simply operating inferior control programs inside the nullspace of primal ones. An important aspect to the creation of new control policies is the notion of *null space composition*, a mechanism for systematically specifying composite controllers derived from combinations of other controllers. To our knowledge, this was first introduced in work by [@Huber-ICRA96] and popularized in manipulation by the work of Platt. This composition uses the *subject-to*, $\triangleleft$, operator to project the control gradient of subsequent, subordinate controllers into the null space of a superior controller, allowing all controllers to attempt to achieve their objectives if there exists resources allowed by those superior in a sequential priority fashion. Thus, new control policies can be expressed in terms of existing motor behaviors. 
[@Platt2006] derives a general formulation for the composition of $k$ controllers in numerical priority under the effector space $Y$ as the following, $$\begin{aligned} \begin{aligned} \nabla (\phi_k\,\triangleleft\, ,\hdots, \triangleleft \,\phi_1) & \equiv \, \nabla_y\phi_1 + [\mathcal{N}(\nabla_y\phi_1^T)]\nabla_y \phi_2 \\ &+ \Big[\mathcal{N}\Big(\genfrac{}{}{0pt}{0}{\nabla_y\phi_1^T}{\nabla_y\phi_2^T}\Big)\Big]\nabla_y\phi_3 \\ & \,\,\, \vdots \\ & + \Bigg[\mathcal{N}\Bigg(\begin{array}{c} \nabla_y\phi_1^T \\[-5pt] \vdots \\[-2pt] \nabla_y\phi_{k-1}^T \end{array}\Bigg)\Bigg]\nabla_y\phi_k \end{aligned} \vspace{-15pt} \end{aligned}$$ The null space of the $(k-1)$ controllers are computed by concatenating all control gradients as, $$\begin{aligned} \begin{aligned} \mathcal{N}\Bigg(\hspace{-5pt}\begin{array}{c} \nabla\phi_1^T (y)\\[-5pt] \vdots \\[-2pt] \nabla\phi_{k-1}^T(y) \end{array}\hspace{-5pt}\Bigg)= I - \Bigg(\hspace{-5pt}\begin{array}{c} \nabla\phi_1^T(y) \\[-5pt] \vdots \\[-2pt] \nabla\phi_{k-1}^T(y) \end{array}\hspace{-5pt}\Bigg)^{\hspace{-4pt}\#} \Bigg(\hspace{-5pt}\begin{array}{c} \nabla\phi_1^T(y) \\[-5pt] \vdots \\[-2pt] \nabla\phi_{k-1}^T(y) \end{array}\vspace{-10pt}\Bigg) \end{aligned} \end{aligned}$$ where $I$ is identity. The above produces a null-space $\mathcal{N}$ that is tangent to all $k-1$ gradients. The idea of operating multiple control programs through compositions in null-space paves way for coarticulate control paradigms. This notion relies on the fact that in many cases controllers are redundant, having more resources than necessary to achieve a particular objective (e.g. redundant in the sense of degrees of freedom). As such, these excess resources can be used by inferior controllers to achieve secondary objectives simultaneously by operating under the constraint of not interfering with the primary objectives [@Rohanimanesh2004]. The optimal sequencing of controllers described by some policy $\pi$ to solve a particular task is generally achieved through table-based reinforcement learning using the control context as a state representation. Yet, the use of neural networks for classical control problems like the peg-in hole insertion task dates back to the early nineties. For instance, work by [@Gullapalli1992] used multilayered feedforward networks to generate real valued outputs as control parameters. Simple networks such as these can easily approximate the value functions that describe the optimal policy $\pi$ that expresses a trajectory through control contexts and closed-loop (primitive and composite) controllers. In fact, this is a simplified version of DeepMind’s Atari approach [@Mnih2013; @Mnih2015] where current state is used to approximate a value function, however the key distinction is that by describing current state via control contexts, this reduces the input dimensionality drastically. The options framework has been demonstrated with DQNs where option heads were integrated into the output end of the networks. [@Osband2016] used these “bootstrap heads” with different motivations allowing the network to output a distribution over Q-functions. Similarly, this was later extended by [@Arulkumaran2016] with an additional supervisory network, allowing the policy to be decomposed into combination of simpler sub-policies. Instead, since we learn over control contexts the input dimensionality is extremely reduced and leads to faster learning. 
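To make the subject-to composition above concrete, the following is a small numeric sketch of projecting a subordinate controller's gradient into the null space of a superior one; the gradient vectors are invented for illustration and do not come from any cited system.

```python
# Numeric sketch of the subject-to operator (phi_2 "subject to" phi_1) in a shared
# 3-D effector space: the subordinate gradient only acts where the superior
# controller is indifferent. Gradient values are made up for illustration.
import numpy as np

grad_phi1 = np.array([1.0, 0.0, 0.0])    # superior controller's objective gradient
grad_phi2 = np.array([0.5, 1.0, 0.0])    # subordinate controller's objective gradient

A = grad_phi1[None, :]                   # stacked superior gradients (here just one row)
N = np.eye(3) - np.linalg.pinv(A) @ A    # null-space projector tangent to grad_phi1
composite = grad_phi1 + N @ grad_phi2    # combined descent direction

print("null-space projector:\n", N)
print("composite gradient:", composite)  # -> [1.0, 1.0, 0.0]
```

Here the component of the subordinate gradient that would interfere with the superior objective (its first coordinate) is removed before the two are summed.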
If one were to consider the use of these $\gamma^T$-networks as a perceptual interface to activate motor primitives, null-space composition is achieved simply by taking the controller $\phi_i|^\sigma_\tau$ corresponding to the control state $\gamma_i^T$ predicted by the network and using the null-space operator between multiple instances of these (e.g. the composition of networks $f_{\gamma,\phi_j|^\sigma_\tau}$ subject to $f_{\gamma,\phi_i|^\sigma_\tau}$ is trivially $\phi_j|^\sigma_\tau \triangleleft \phi_i|^\sigma_\tau$). Each controller $\phi|^\sigma_\tau$ requires a control reference that implies an error to minimize. In many cases, these goals are human-defined. For instance, [@Hart2009thesis] uses hue saturation to select interesting areas in the scene. However, deep learning architectures have recently been shown to be powerful tools for extracting candidate features in the world. Leveraging the predictive $\gamma^T$-networks provides indications of stimuli in the world from which useful control goals, or in other words control references, can be derived and fed into these controllers $\phi|^\sigma_\tau$. In fact, the null-space composition in this case is mostly unchanged and operates almost identically. A forward propagation over each $\gamma^T$-network predicts a control context or state description that indicates activations for each primitive. In the following section, we investigate the literature in reinforcement learning, both deep and classical, for methods that learn these control references or goal parameters that take on continuous values. Continuous Control Parameterization {#subsec:param} ----------------------------------- The $\gamma^T$-network shown in [@Wong-RSS2016] makes an unfortunate assumption that the motion primitive described by a closed-loop controller $\phi|^\sigma_\tau$ inherently defines a static control goal $g$. This particular form of goal is derived from cognitive development insight, allowing networks to learn *when* to execute particular abstract skills represented as reflexive actions. However, the question that remains is the encoding of *how* these skills should be performed. Infants readily bootstrap their innate reflexive repertoire to quickly learn the functionality of their end effectors as entities conform to their hands via the palmar grasp reflex. Static parametrized goals allow for quick association-based learning, but do not generalize well to future tasks that require goals outside of innate reflex descriptions. As a result, both infants and robots must learn useful parameters to their inherent motor behaviors in response to stimuli in the world. The $\gamma^T$-networks can be extended to predict varying parameterizations of control goals by a concatenation of state and action parameters after the convolutional layers, a technique that is used in many predictive networks like those presented in [@Levine2015; @Finn2016b]; a sketch of this pattern is given below. In fact, work by [@Takahashi2017] implemented this extension, producing a variant of $\gamma^T$-networks that accounts for varying control goal parameters. Learning control parameters is generally accomplished via a trial-and-error approach, for instance, through popularized deep reinforcement learning methods [@Mnih2013; @Mnih2015]. Unfortunately, a major drawback of the approach used to tackle learning control policies for the Atari games is that the network’s output is fixed over the set of discrete control actions.
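The sketch referenced above illustrates one plausible way to wire such a parameter-conditioned predictor: a convolutional trunk whose features are concatenated with candidate control-goal parameters before a fully connected head. The layer sizes, input resolution, and output encoding are illustrative assumptions and not the published $\gamma^T$-network architecture.

```python
import torch
import torch.nn as nn

class ParamConditionedGammaNet(nn.Module):
    """Predicts a controller's convergence/activation state from an image and a
    candidate control-goal parameter vector (all sizes are illustrative)."""
    def __init__(self, param_dim=4, n_states=3):
        super().__init__()
        self.trunk = nn.Sequential(                 # convolutional feature trunk
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 features
        )
        self.head = nn.Sequential(                  # concatenate image features + parameters
            nn.Linear(32 * 4 * 4 + param_dim, 128), nn.ReLU(),
            nn.Linear(128, n_states),               # logits over control states
        )

    def forward(self, image, params):
        feats = self.trunk(image)
        return self.head(torch.cat([feats, params], dim=1))

# net = ParamConditionedGammaNet()
# logits = net(torch.randn(8, 3, 64, 64), torch.randn(8, 4))
```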
Quite evidently, robots must be able to perform continuous parameterized actions in the real world instead of fixed discretized control actions. To address this, [@Lillicrap2015] presented an actor-critic approach to learn over continuous domains with neural networks using a deep variant of the deterministic policy gradient originally proposed by [@Silver2014]. Similarly, [@Balduzzi2015] also extended the original deterministic policy gradient algorithms with a network that explicitly learned $\partial Q/\partial a$—unfortunately though, their methods were only shown on low-dimensional domains. [@Heess2015b] introduced stochastic value gradients to learn stochastic policies through a $Q$-critic. They found that stochastic control can be supported by treating the stochasticity in the Bellman equation as a deterministic function with external, Gaussian noise. From this, they revealed a “reparametrization” trick, similar to that of [@Kingma2013]. Meanwhile, [@Wawrzynski2013] trained stochastic policies using an experience replay buffer within the actor-critic framework. The use of the replay buffer has been essential to ensure that data samples are approximately independent and identically distributed. This particular technique was popularized by [@Mnih2015], with the original insight dating back twenty years to work by [@Lin1993]. The trust region policy optimization approach proposed by [@Schulman2015] directly builds a stochastic neural network policy without this decomposition and does not require the learning of an action-value network. It appears to produce near-monotonic improvements but requires careful selection of updates to the policy parameters to prevent large divergences from the existing policy. Furthermore, it has been theorized that this technique appears to be less data-efficient. Work by [@Hausknecht2015] has demonstrated a solution to reinforcement learning in continuous parameterized action spaces, where they successfully trained a RoboCup soccer agent that scored more reliably than the 2012 champion. Value function estimation approaches like these for continuous domains generally use two networks to represent the policy and the value function individually [@Schulman2015; @Lillicrap2015]. There has been work, however, to reformulate the original $Q$-learning scheme into an elegant form that can be ported to the continuous setting—namely, work by [@Gu2016] has attempted to show this by learning a single network that outputs both the policy and the value function. Their work was based on the dueling networks shown by [@Wang2015], which decompose learning into two streams corresponding to the state value and the advantages of each action—though that work was presented in the discrete setting. These *dueling networks* were built on the formulation of *double $Q$-networks*. The motivation behind double $Q$-networks is that $Q$-learning, even in the tabular setting, exhibits overoptimism due to estimation errors; [@vanHasselt2015] found that decomposing the max operation in the target into two value functions, corresponding to action selection and action evaluation, not only reduced this particular overoptimism but also led to a performance increase. However, approaches that rely on a model-free reinforcement learning method like traditional $Q$-learning have yet another major drawback. They are tragically inefficient with experience. For example, the learned Atari policies required millions of gameplay examples to converge to competence.
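As a rough illustration of the two decompositions just described, the sketch below shows a double-$Q$ style target, where the online network selects the action and a frozen target network evaluates it, alongside a dueling head that recombines a state value and per-action advantages. The network modules are placeholders, and the aggregation uses the commonly used mean-subtracted form.

```python
import torch
import torch.nn as nn

def double_q_target(online_net, target_net, reward, next_state, gamma=0.99, done=None):
    """Double DQN backup: the online network selects the argmax action,
    the (frozen) target network evaluates it."""
    with torch.no_grad():
        next_actions = online_net(next_state).argmax(dim=1, keepdim=True)        # selection
        next_values = target_net(next_state).gather(1, next_actions).squeeze(1)  # evaluation
        mask = 1.0 - done if done is not None else 1.0
        return reward + gamma * mask * next_values

class DuelingHead(nn.Module):
    """Dueling decomposition Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, feat_dim, n_actions):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)
        self.advantage = nn.Linear(feat_dim, n_actions)

    def forward(self, features):
        v, a = self.value(features), self.advantage(features)
        return v + a - a.mean(dim=1, keepdim=True)
```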
Consequently, in the past a number of model-based methods like Dyna [@Sutton1990; @Sutton1991], Prioritized Sweeping [@Moore1993], and Queue-Dyna [@Peng1993] have been suggested to make more efficient use of training examples while increasing computation. Many of these select $k$ samples to use for updates as opposed to the single update of traditional $Q$-learning. To do this, these methods use experience not only to learn an optimal policy, but also to construct a transition $\hat{T}$ and reward $\hat{R}$ model, meanwhile updating the values of $k$ additional state-action pairs. As summarized by [@Kaelbling1996], Dyna does this by selecting the $k$ pairs randomly, while the latter two methods prioritize the selection of pairs in “regions of interest.” Dyna is shown to converge ten times faster than traditional $Q$-learning, and the prioritized methods are roughly two-fold faster than Dyna—thus making much better use of experiences for learning. Quite recently, these ideas, stemming from two decades of research, have been reincarnated in a deep learning framework that incorporates model-based acceleration for deep reinforcement learning with continuous control parameters. First, [@Gu2016] reformulated $Q$-learning in the continuous setting into *normalized advantage functions*, an alternative to policy gradient and actor-critic methods, which decomposes the quality term $Q$ into a state value term $V$ and an advantage term $A$ (a sketch of this decomposition is given below). This particular insight has been explored by others in the past [@Baird1993; @Harmon1996; @Wang2015]. Next, they showed that policy learning can be accelerated by taking a learned model of the dynamics, simulating synthetic plausible outcomes via *imagination rollouts*, and appending the experiences to the replay buffer. Doing so increases the efficiency of data usage and is a likely candidate for learning the control parameters needed for $\gamma^T$-networks. Interestingly, these imagination rollouts according to learned dynamics models correspond to a form of $\lambda$-return, where given this model, we can simulate a number of $n$-step trajectories by traversing this aspect transition network. We then weight these $n$-step backups, yielding a compound backup—such an update has been shown to make more efficient use of experiences [@SuttonBarto98]. In a distributed approach, [@Gu2016b] introduced a parallelizable learning algorithm leveraging the normalized advantage functions, to be used across multiple robots which can pool their policy updates asynchronously, resulting in accelerated learning. In almost all frameworks to date, experiences were selected uniformly from the replay buffer. In fact, this may not be ideal since individual samples may have varying degrees of significance. As a result, [@Schaul2015] proposed *prioritized experience replay*, which replays significant transitions more frequently rather than at the same frequency they were originally experienced—this was shown to improve the state of the art, and perhaps could be a potential candidate for model-based acceleration sampling. Similar work by [@Zhai2016] used prioritized sampling to bias experience selections. Model-referenced updates accelerate learning by simulating synthetic on-policy futures. In essence, this describes a forward dynamics model of the interactions, perhaps even over a lifetime of interaction experiences.
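The following is a simplified sketch of the normalized advantage function decomposition referenced above, with $Q(s,a)=V(s)+A(s,a)$ and a quadratic advantage around a greedy continuous action; for brevity it assumes a diagonal, positive precision matrix rather than the full Cholesky parameterization used in the original work.

```python
import torch
import torch.nn as nn

class SimpleNAF(nn.Module):
    """Q(s, a) = V(s) - 0.5 * (a - mu(s))^T P(s) (a - mu(s)),
    with P(s) diagonal and positive for simplicity."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)           # state value V(s)
        self.mu = nn.Linear(hidden, action_dim)     # greedy continuous action mu(s)
        self.log_p = nn.Linear(hidden, action_dim)  # log of the diagonal precision entries

    def forward(self, state, action):
        h = self.body(state)
        v = self.value(h).squeeze(-1)
        mu = self.mu(h)
        p = self.log_p(h).exp()                     # positive diagonal of P(s)
        advantage = -0.5 * (p * (action - mu) ** 2).sum(dim=-1)
        return v + advantage, mu                    # Q(s, a) and the argmax action

# q, greedy_action = SimpleNAF(10, 3)(torch.randn(8, 10), torch.randn(8, 3))
```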
Evidently, this approach fits the dynamics in a locally-linear fashion, as in Guided Policy Search, and uses the model as a reference for imagination rollouts, placing the cumulatively constructed synthetic experiences into the replay buffer for updates. There is, in fact, a connection between these dynamics models, which contribute to the increased efficiency of deep reinforcement learning methods, and algorithms that provide artificial curiosity. Quite fortunately, these representations that concern the transition dynamics and reward have immediate ties, and are analogous, to an interaction-based knowledge repertoire that adheres to a series of task planning and intrinsic motivation frameworks. Unsupervised Affordances {#sec:affordance} ======================== An interaction-based knowledge repertoire is an important structure that can be incorporated into a number of algorithms that necessitate forward models or predictions of future state. Fortunately, the learning of this action-centric knowledge is a close analog of the transition dynamics that is *already* a component of the model-based acceleration techniques presented to make efficient use of experiences in reinforcement learning paradigms. As such, this structural acquisition has immediate ties to intrinsic motivators that seek to provide artificial curiosity to autonomous systems. Intrinsic motivators allow systems to build their *own* representations that reflect the inherent uncertainties of the system. Curiosity is important for learning systems—without a sense of curiosity, learning becomes very task-specific and one-dimensional—the robot cannot choose to learn novel skills, only a task-specific motion for a human-defined task. For such reasons, one might consider introducing some form of curiosity [@Frank2015] into the approaches previously discussed. We believe that the development of motor behavior should be driven by intrinsic motivators in order to learn task-generalizable skills that can be reused in the future. While a thorough survey of intrinsic motivators is outside the scope of this paper (for further readings, see [@Barto2013]), we acknowledge a number of researchers who have applied schemes to autonomously learn new skills [@Utgoff2002; @Barto2004] and representations [@Oudeyer2007; @Hart2009]. Instead, we envision a system that accomplishes this *simultaneously*. For this, we look to insight from cognitive development. In fact, a number of studies have shown that motor development in biological systems greatly influences the development of perception and cognition. For instance, [@Piaget1953; @Piaget1954] described motor skills as a mechanism that drives development in other domains by generating new sensorimotor experiences, and further studies have described cognition and perception as embodied phenomena—grounded to the body and its actions [@Gibson1988]. Other studies have shown that infants are highly sensitive to action-outcome relations and are capable of learning contingencies between their own behavior and outcomes in the world. In particular, [@Libertus2010] showed that sensory-motor experiences motivate infants to reproduce manipulation outcomes and foster reaching and grasping skills. Evidently, infants learn models through manipulation and interaction, forming a sense of behavioral organization[^6].
Such organization is an interplay between representation and motor development, driven by curiosity—which, we regard, should adhere to model-referenced objectives that aim to explain the complex dynamic phenomena in the world. A common, perhaps ubiquitous, representation describes action-related contexts regarding entities in the world. Such a representation is of chief importance to cognitive systems. In fact, [@Hemion2016] argues that instead of having contemporary computational reinforcement learning agents learn each individual skill entirely from scratch, the development of a world model can be used to support adaptive behavior and learning for cognitive systems. Predicting the Dynamics of the World {#subsec:dynamics} ------------------------------------ Originating from ecological approaches to visual perception, the central concept of the Gibsonian perceptual framework is the notion of an *affordance*, an observable environmental context that invokes a variety of latent interactions. Affordances emphasize an agent-world relationship and constitute an interactionist account of perception, as they reflect environmental signals in relation to an agent’s ability to act on those signals [@Chemero03]. In the strongest sense, Gibson’s theory of *direct perception* holds that the transformation from signal to behavior is expressed directly by neural projections that evolve to recognize opportunities for context-specific actions [@Reed96; @Turvey92]. Such theories emphasize that percepts themselves provide a direct index into all the “action possibilities latent in the environment” [@GibsonJJ77]; thus, applicable actions and related outcomes are immediately recognized without necessarily identifying the object itself. The Gibsonian notion of affordances describes action possibilities and can be seen as a surrogate of world state, induced by sensory input and interaction. Gibson’s theory of affordance advocates for modeling the environment directly in terms of the actions it affords. These representations are idiosyncratic and reflect only those actions that can be generated by the agent. Research has been done to investigate the autonomous acquisition of such affordance representations with intrinsic motivators. For instance, multiple intrinsic reward functions have been proposed to learn the transition dynamics of a particular task [@Hester2015]. Others have looked into domain-independent intrinsic rewards, like novelty or certainty, for learning adaptive, non-stationary policies based on data gathered from experience [@Hart2009; @Sequeira2014]. In particular, *model exploration programs* have been presented by [@Hart2009], but the methods reported lacked multimodal sensor integration and did not produce knowledge structures that are easily transferable to other tasks. A *multimodal structure learning* paradigm was proposed by [@Wong-ICDL2016], extending the ideas initially presented by Hart—in their studies, they leveraged a promising representation describing affordances in terms of *aspect nodes*. The graphical structure, called an *aspect transition graph*, encodes Markovian state as nodes and actions as edges in a multi-graph [@Ku-ECCV14]. Such a model is generally used in object identification tasks by planning in belief space (rolling out a population of these forward models) to select informative actions [@Sen2014; @Ruiken2016].
Methods for robots to autonomously acquire these models have been described in work by [@Wong-ICDL2016; @Ruiken2016], where systems are intrinsically motivated by a variant of the differential variance function originally proposed by [@Hart2009] to acquire complete graphical representations of objects—associating actions and futures derived from all possible interactions under controlled settings. Evidently though, the aspect transition graph model has two major disadvantages. One is that in the studies on learning these graphs [@Hart2009; @Wong-ICDL2016; @Ruiken2016], it is assumed that a sample mean and variance approximate the true underlying transition distribution. Unfortunately, it is not the case that this distribution is necessarily Gaussian $\mathcal{N}(\mu, \Sigma)$; for instance, it may be arbitrarily complex and multimodal. The next large criticism is that these models assume some discretization granularity of the sensory input space into aspect models. The definition of what constitutes an *aspect* is task-dependent and difficult to manage in a task-generic way. Both of these issues can be addressed by approximating this arbitrarily complex representation in high dimensionality—this approximation over the Markovian state describing what constitutes an aspect, and over the potential outcomes derived from interactions, gives rise to a new model, a *deep aspect transition network*. This network is otherwise an extension of the original aspect transition graphical structure, with the key distinction being that it captures interactions over many possible granularities, deriving a continuous form of aspect state.[^7] A fundamental extension to the graphical structure is that the deep variants make no assumptions about aspect boundaries; rather, aspect nodes take on continuous state descriptions. The acquisition of a model that explains the dynamics of entities in the world and their evolution through interaction (the representation we referred to as an *aspect transition network*) is closely related to work that attempts to predict physics, structure, and futures given the current state. Despite applicable research in deep filtering and sensor fusion approaches [@Krishnan2015; @Jain2016], making predictions in the original sensory space likely holds promise for robustness in planning algorithms, since actions are directly derived from future scenes. Many encoder-decoder networks hope to achieve this by upsampling to generate predictions in visual (more specifically, sensory) space. Unfortunately, the prediction in visual space may be nonsense. For instance, recall the hill-climbing algorithms that fool networks into predicting visually implausible images. As such, a particular line of work has looked into guarantees that the prediction has physical properties that are relevant and meaningful in real life. [@Goodfellow2014] proposed the use of *generative adversarial networks* (GANs), which have been widely accepted as a tool to generate visually plausible predictions that fall into the realm of reality, rather than blurred or meaningless output. This is accomplished by training two models: a discriminative model $D$ and a generative model $G$, where $G$ is trained to maximize the probability of $D$ making a mistake—this framework is analogous to a minimax two-player game. Building on this work, [@Mathieu2015] incorporated the adversarial training techniques into their convolutional network architectures to deal with the blur resulting from a standard mean squared error loss.
They showed that their network was capable of predicting vivid future scenes under an image gradient difference loss function given a set of input sequences. Learning the transition dynamics of entities in the world has an immediate correlation with an understanding of intuitive physics. Work by [@Lerer2016] incorporated a variant of the *DeepMask* network [@Pinheiro2016] that was altered to support multi-class predictions and replicated a number of times to predict the segmentation trajectory over multiple time steps in the future for a falling block prediction task. A ResNet-34 was trained as the trunk of the convolutional network, and their approach, *PhysNet*, was shown to outperform all other methods for predicting the future locations of the falling blocks. To instill spatial invariance in neural networks, work by [@Jaderberg2015] introduced a differentiable *Spatial Transformer* module that can be applied to convolutional networks, allowing them to explicitly and actively transform internal feature maps. [@Jain2015] presented a generic framework to model time-space interactions using spatio-temporal graphs with a recurrent neural network architecture. The use of spatio-temporal structures imposes high-level intuitions that allow for improvements in modeling human motion and predicting object interactions. Prediction work by [@Oh2015] has made several interesting discoveries in predicting futures in the gameplay domain (namely in the game Space Invaders). They showed that a feedforward network was better at predicting precise movements of objects, whereas recurrent structures consistently made translation errors of a few pixels. Their hypothesis attributes this to the failure of precise spatio-temporal encodings in the recurrent setting; however, they found that recurrent structures were better at predicting events that have long-term dependencies. Long-term sequential movements of objects as a result of an applied force vector at a particular location in the image were learned by a deep neural network while taking into account the geometry and appearance of the scene by using convolutional and recurrent layers in the network [@Mottaghi2016]. Others looked into building models for action-conditioned video prediction that explicitly model the motion of pixels rather than predicting the future as a whole. This is achieved by predicting distributions over pixel motion from previous frames, and as a result, the model is partially invariant to occlusions. Their model was trained on a dataset of $50,000$ robot interaction videos and resulted in the learning of a “visual imagination”—a concept of predicting different futures based on the robot’s courses of action [@Finn2016b]. Similarly, [@Santana2016] trained a realistic, action-conditioned vehicle simulator using generative adversarial networks. These video prediction mechanisms share the same action-conditioned form of function approximation as *aspect transition networks*, given by $f_{W}: s, \rho_{\phi_i|^\sigma_\tau} \mapsto s'$; a minimal sketch of this form is given below. Further work presented by [@Agrawal2016] allowed a robot to gather over $400$ hours of experience by poking different objects over $50,000$ times. They learned both an inverse and a forward model of the dynamics—the inverse model provided supervision to build informative visual features, which then were used by the forward model to predict the interaction outcomes. These models were then used for multi-step decision making.
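A minimal sketch of the action-conditioned form $f_{W}: s, \rho \mapsto s'$ is given below: an encoder compresses the scene, the control parameters are injected as a per-channel bias, and a decoder emits the predicted next frame. The architecture, input size, and conditioning scheme are illustrative assumptions, not any of the published video-prediction models.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Minimal action-conditioned predictor f_W: (image s, parameters rho) -> image s'."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.condition = nn.Linear(action_dim, 64)  # inject the action as a per-channel bias
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, image, action):
        z = self.encoder(image)
        z = z + self.condition(action)[:, :, None, None]  # broadcast over spatial dims
        return self.decoder(z)

# predicted_next = ForwardModel()(torch.randn(8, 3, 64, 64), torch.randn(8, 4))
```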
Deriving Environmental Reward {#subsec:reward} ----------------------------- A likely candidate for environmental reward is one that is computed through an information-theoretic interpretation of the affordance prediction networks corresponding to the system’s understanding of interaction and the dynamics of the world. Assume the robot interacts with the world by executing some set of control programs and obtains interaction tuples $\langle s, \Phi, P, S'\rangle$ describing the initial state $s$ and the control programs that were executed $\phi_i|^\sigma_\tau \in \Phi$ with parameters $\rho_i \in P$, resulting in future states $s_i' \in S'$—in turn this outlines some experience dataset described by $\mathcal{D}_t = \{e_1, e_2, \cdots, e_n\}$ where each experience tuple consists of $e_i = \langle s, \phi_k|^\sigma_\tau, \rho_k, s'\rangle$. Now, consider the class of reward structures that use uncertainty and degree of understanding to penalize or promote the selection of new actions and allow new behaviors to emerge. A prominent example of structures of this nature is the differential variance intrinsically motivated function originally proposed by [@Hart2009]. Such a function motivates systems to perform the actions that they are most uncertain about, allowing them to exploit this reward to build new behaviors. [@Wong-ICDL2016] proposed to use this function to learn complete affordance models, showing that the system consumes rewards as representations become more accurate. Unfortunately, these metrics imply a Gaussian distribution assumption that likely fails when adapting to high-dimensional sensory spaces; candidate surrogates are information-theoretic functions that prescribe distance metrics between predictions and ground truth. As such, consider $$I_{f_W} = H(f_W(s, \rho_{\phi_i|^\sigma_\tau}))+H(s')-H(f_W(s, \rho_{\phi_i|^\sigma_\tau}), s')$$ where $I_{f_W} = I(f_W(s, \rho_{\phi_i|^\sigma_\tau}); s')$ expresses the mutual information between the prediction of what the outcome should be according to the network $f_W$, given state $s$ and control parameters $\rho_{\phi_i|^\sigma_\tau}$, and the actual outcome $s'$—in essence, a measure of the similarity between the prediction $f_W(s, \rho_{\phi_i|^\sigma_\tau})$ and the result $s'$. We consider this a candidate reward scheme and express its mechanics under two plausible scenarios. ### Scenario 1, Low Mutual Information: {#scenario-1-low-mutual-information .unnumbered} In the case that there is low mutual information between the output of the affordance network $f_W$ and the true outcome $s'$, this is attributed to two likely culprits: either the network approximating $f_W$ is not converged, in which case the system does not have a good understanding of the underlying dynamics of the world, or the action networks corresponding to $f_{\gamma^T}, f_{\rho}, f_{\pi^*_{N}}$ do not approximate the appropriate control parameters or falsely predict activations of primitives in the world. Either way, these networks are continuously trained with the current dataset $\mathcal{D}_t$. ### Scenario 2, High Mutual Information: {#scenario-2-high-mutual-information .unnumbered} In the case that there is high mutual information between these quantities, the system has acquired good approximations to interaction outcomes via $f_W$ given its current set of controllers, expressed as both primitives $\phi_i|^\sigma_\tau \in \Phi$ and complex encodings $\pi^*_j \in \Pi^*$, and those behaviors have likely found useful control goals described by the approximator $f_{\rho_i}$.
As such, this becomes a state of either habituation or emergent behavior—when high mutual information is observed, a likely course of action to continue the development of complex behaviors and interaction-based data collection is to spawn a new network $f_{\pi^*_{n+1}}$ with the sole purpose of attempting to learn new behavioral control sequences and compositions that result in unexpected transition dynamics in the world. Simply, the control policy is rewarded when it learns sequences of actions that fool the dynamics prediction network $f_W$ into new states, while the $f_W$ network uses these novel interactions to refine its inherent representation. This direction of thinking is adapted from a form of adversarial training where the affordance network tries to best predict the outcome of futures under interactions while the behavioral network attempts to learn new control parameters that the affordance network fails to predict—hence, new behaviors develop that broaden the system’s understanding of the world. From this fact, the reward given to the predictor $f_W$ and the reward given to the developing behavioral policy network $f_{\pi^*_{n+1}}$ cannot be and should not be the same. In fact, they exhibit an inverse variation phenomenon; therefore, the reward for new behavioral policy networks must be derived from the prediction network’s ill-performance. An example reward structure that obeys this particular property is the Kullback-Leibler (KL) divergence given by $\mathcal{D}(f_W(s, \rho_{\phi_k|^\sigma_\tau})||s') = \sum_{i,j} f_W(s, \rho_{\phi_k|^\sigma_\tau})_{i,j} \log\left(f_W(s, \rho_{\phi_k|^\sigma_\tau})_{i,j}/s'_{i,j}\right)$—this particular quantity is ubiquitously used as a measure of the similarity between two distributions, describing relative entropy. Such metrics generally concern controlling the exploration and exploitation tradeoffs in learning architectures [@Levine2015]. For instance, a number of approaches have used the KL-divergence to control policy update step sizes [@Peters2010; @Levine2014; @Schulman2015; @Akrour2016]. A key concern with this approach is that the stability of the learned control policy under an ever-changing dynamics model may be compromised, especially when computing rewards in response to its predictive accuracy. To address this, it is wise to consider a target network paradigm like that of [@Mnih2015], except instead of freezing the target $Q$ network for stability, one should consider freezing the affordance network $f_W$ when computing rewards for the corresponding behavioral policy network. Otherwise, these rewards would be drastically non-stationary and may have large implications for convergence.
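To ground the two candidate reward signals discussed above, the sketch below estimates the mutual information between a prediction and the observed outcome from discretized histograms, and computes a KL-style divergence that could reward the behavioral policy when the (frozen) predictor is fooled. The discretization, normalization, and the hypothetical `f_W` call are assumptions for illustration only.

```python
import numpy as np

def mutual_information(pred, actual, bins=16):
    """Histogram-based estimate of I(pred; actual) = H(pred) + H(actual) - H(pred, actual)."""
    joint, _, _ = np.histogram2d(pred.ravel(), actual.ravel(), bins=bins)
    joint = joint / joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    return entropy(px) + entropy(py) - entropy(joint.ravel())

def kl_reward(pred, actual, eps=1e-8):
    """KL(pred || actual) over normalized pixel intensities; large values reward the
    behavioral policy for producing outcomes the frozen predictor misjudged."""
    p = pred.ravel() + eps
    q = actual.ravel() + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

# Hypothetical usage with a frozen affordance network f_W:
# prediction = f_W(state, params)
# r_policy = kl_reward(prediction, next_state)        # reward for f_pi*
# r_model = mutual_information(prediction, next_state)  # progress signal for f_W
```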
Because these representations and control policies are all derived by the robot through intrinsic motivation, they encode inherent uncertainties that allow for robust plans and execution of actions. So evidently, it is up to task planners to find plans over these control skills and transition dynamics such that the robot will solve useful problems[^8]. Planning generally assumes some form of forward dynamics model of how actions affect the state of the world—in this particular scenario, we suggest that a learned aspect transition network $f_W$ will serve purposefully, as it is a representation of forward dynamics. In other works, basic push motion planning was achieved using dynamics in the form of video prediction and visual foresight [@Finn2016c]. In another study by [@Tamar2016], *value iteration networks* were presented as an architecture that provides systems with the capability of *learning to plan* by embedding a “planning module” within a fully differentiable neural network. Interestingly, research has shown that aspect geometry alone is sufficient for describing a number of complex robotic tasks [@Ruiken2016b]. In particular, object identification and assembly tasks can be reconfigured into a model-referenced belief-space planner. The aspect definition prescribes sensory geometries to define a Markovian state, described by an aspect node in a geometric structure outlining the geometric constellations under some field of view—thus encoding latent affordances of entities in the world. The specific geometries of features embedded in the environment can be used to drive belief-space architectures into task-specific solutions. Simply, in this setting the artifacts produced by the approximations are the networks $f_{\gamma^T}, f_{\pi^*}, f_{\rho}$ and $f_W$—respectively the networks for control state prediction, complex behavioral policies, continuous control parameters, and world transition dynamics—which can be used during planning rollouts (i.e. a Monte Carlo simulation of trajectories according to the action-conditioned transition dynamics $f_W$ via parameters from $f_\rho$); a sketch of such a rollout is given below. We decompose this task solution into a mathematical representation outlined by a (partially observable) Markov Decision Process. Firstly, the Markovian state is given by the robot’s perception or raw sensory input. The set of actions in this case collectively describes the set of all control state predictors, control parameter learning networks, and complex policies given by the three sets of networks: $f_{\gamma^T}, f_{\pi^*}, f_{\rho}$. And lastly, the transition dynamics are encoded through the aspect transition network $f_W$, with actions being constrained through the $\gamma^T$-networks’ predictions, which decide the likely control states at any given time instance. In planning, one may simply perform rollouts over the Markovian state and predict likely candidate actions that can be executed. Many planning approaches at this point resort to sampling techniques in conjunction with RRTs, preimage backchaining [@Kaelbling2011], or large populations of dynamics models [@Ruiken2016]. Instead, we have control state networks that specifically describe candidate control programs—for each of these actions, we roll out the candidate future states under the aspect transition network. A similar affordance-model-referenced *Active Belief Planner* [@Ruiken2016] has been presented recently to solve object identification tasks.
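The planning rollout referenced above might look like the following minimal sketch: candidate primitives are filtered by the $\gamma^T$-networks, parameterized via $f_\rho$, and propagated through the learned dynamics $f_W$, with the best-scoring simulated trajectory returned. Every callable here (`forward_model`, `gamma_nets`, `param_net`, `score`) is a hypothetical placeholder.

```python
import random

def rollout(state, forward_model, gamma_nets, param_net, score, depth=3, samples=32):
    """Monte Carlo rollouts over the learned dynamics: at each step, only
    primitives the gamma-networks judge applicable are expanded."""
    best_plan, best_value = None, float("-inf")
    for _ in range(samples):
        s, plan, value = state, [], 0.0
        for _ in range(depth):
            feasible = [i for i, g in enumerate(gamma_nets) if g(s)]  # prune infeasible actions
            if not feasible:
                break
            a = random.choice(feasible)
            rho = param_net(s, a)           # continuous control parameters for primitive a
            s = forward_model(s, a, rho)    # predicted next state under f_W
            plan.append((a, rho))
            value += score(s)               # task-specific evaluation of the predicted state
        if value > best_value:
            best_plan, best_value = plan, value
    return best_plan
```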
In fact, their planner uses these affordance representations as forward models during problem-solving behavior in non-ideal contexts that include sensor noise, suboptimal lighting, missing information, and extraneous information arising from scenes that can contain multiple objects in initially unknown arrangements. Unfortunately, their method requires that large populations of transition models be explicitly described—this population is precisely the structure that would be useful to approximate using $f_W$, the aspect transition network. Evidently, this rollout can be performed by a number of generic planners that expand states according to their future dynamics. As such, a number of planners like A\*, All-Domain Execution and Planning Technology (ADEPT) [@Ricard2003], or Hierarchical Planning in the Now (HPN) [@Kaelbling2011] can solve for task-relevant plans while leveraging learned artifacts for additional robustness. A fundamental problem with planners in general consists of the planning horizon and the branching factor prescribed by the possible number of actions at any given state. Hierarchy attempts to reduce the planning horizon by only planning to the first executable primitive. Still, hierarchical planning scales exponentially (in the number of operators to the shallowest abstraction level). Fortunately, the control state description networks $f_{\gamma^T}$ and complex behavioral policy networks $f_{\pi^*}$ drastically help reduce this planning complexity by decreasing the depth at which the planner must roll out, owing to the consideration of more complex motions. Secondly, the possible actions at any given state are constrained to those that are physically plausible, which is immediately evaluated by the approximator $f_{\gamma^T_i}$ for each control program $\phi_i|^\sigma_\tau$—quiescent actions should then not be considered. Interestingly, another problem that arises is that as a cognitive system increases its skill set, this in turn has monotonically increasing effects on the branching factor of planners. One may consider only feasible transitions to attack this phenomenon—the feasibility of actions requires either geometric evaluations or explicitly defined dynamics models. For instance, HPN uses generators that reason over the geometry of entities in the world [@Kaelbling2011]. The concept of an aspect transition graph under planning frameworks does a good job of ensuring that only feasible actions are planned for by exploiting the likely transitions under learned object models [@Ruiken2016]. Similarly, the network variants are good candidates for quickly evaluating potential control contexts and the feasibility of particular action parameters. Importantly, by no means do the control policies and transition dynamics learned through lifelong intrinsic motivation limit the use of more sophisticated planners. Dealing with unstructured and dynamic environments becomes a fundamental problem. Therein, a number of frameworks have been adjusted to operate over uncertainties, for example the Active Belief Planner [@Ruiken2016] and the Belief-space HPN [@Kaelbling2013]. A key insight is that many of these video prediction paradigms (as described in Section \[sec:affordance\]) can be interpreted as affordance models that predict all possible futures given interaction with the world—a promising advantage of using generative adversarial networks. As such, this inherently encodes a belief over the many possible outcomes that may result from interaction and can be leveraged in planning.
For instance, one can operate in the belief space of futures by observing the manifold on which the many futures lie. It appears that deep generative adversarial networks, like the future prediction networks trained adversarially, obey certain arithmetic [@Radford2015] and as a result can be used to discover such a futures manifold. Conclusion ========== This paper has provided an initial survey of recent advances in deep learning applicable to mobile perceptual systems, namely those pertaining to the robotics domain. We discuss a series of challenges that arise when these methods are applied to physical embodied systems, challenges that are otherwise unseen in strictly vision and simulation domains. And we have outlined the recent advances in detection, control, and future prediction problems that are most relevant to robotics and candidate planners in a new learning direction. These advances were structured in this manuscript in such a way that implies a future direction in self-supervision and lifelong learning. Piecing together these individual research ideas, we indicate that the technologies may currently be ripe to design a lifelong self-supervised system to learn complex behaviors in the real world. As such, the acquisition over an extended period of learning can be leveraged in numerous robotics tasks, both in research and industry, by coupling these learned artifacts with existing task planners. As [@Silver2013] mentions, it is time to move on from task-specific machine learning—instead, learn over an extensive repertoire, over numerous tasks, in order to acquire general intelligence. However, with physical hardware, a concern revolving around exploration of control actions comes into play. One of the most fundamental concerns with learning systems considers the question of what constitutes a *safe* exploration paradigm. Rather, how does one ensure that the system does not perform catastrophic actions during exploration? Safe reinforcement learning paradigms have been outlined by [@Thomas-Thesis2015], discussing algorithms to search for new and refined policies while ensuring that the probability of bad policies is minimized. In these works, [@Thomas2015] presented a method using the trajectories of other policies that were executed in the past to efficiently, with high confidence, perform off-policy evaluations to gauge exploration candidates. With such an approach, it becomes possible to evaluate the performance of new policies without explicit execution. Perhaps the future for self-supervised systems lies in the connection to metrics that safeguard the hardware while effectively evaluating possible actions. Measures like these should be considered in order to build a system that learns over a lifetime of experiences. Recently, the emergence of *deep symbolic reinforcement learning* may offer a promising architecture by combining recent breakthroughs in deep reinforcement learning with classical symbolic artificial intelligence [@Garnelo2016]. Simply, a neural network backend is used to extract useful symbolic representations, which are then used by a symbolic frontend for action selection. Although the work is still in its infancy, a fundamental drawback is that, like the formulation of DQN, it requires that the system has a specified task that it is trying to solve from which reward can be evaluated. In fact, the resulting artifact of this is a meta-policy composed of sub-policies under a specified task—these sub-policies are, however, locally optimal under any combination of interactions between entities.
There are two evident issues, consisting of scaling, due to the nature of considering all possible interactions, and troubles with global optima. However, insight from the two-network approach, learning a value and a policy network, may help support some of these immediate issues. Perhaps as this idea develops, it may be considered as a module in lifelong self-supervision, due to its promising connections with symbolic hierarchical planning [@Kaelbling2013]. An important aspect of reinforcement learning is the capability of transfer, both between systems and between task domains. Work by [@Devin2016] provided insight on the decomposition of network policies into robot-specific and task-specific modules that supported transfer between tasks and different robot morphologies (e.g. varying in number of links and joints). Interestingly, under the *control basis formulation*, parametrized controllers already support generalization from robot to robot, with the assumption that the new system has sufficient motor resources to achieve the same control objectives. As such, the high-level behavioral networks, those that are composed of primitive parametrized controllers, inherit this form of generalizability as well. A good review by [@Lake2016] discusses the fundamental cognitive problems with building systems that expertly accomplish tasks by pattern recognition alone. To build cognitive systems that learn and think like people, they suggest that these systems must have the capability to support both explanation and understanding. Systems must be able to understand intuitive physics, have the capacity to learn to learn, and build grounded generalizations that span new tasks and situations—such a view is similar to the direction presented in this paper. Our survey is particularly tailored towards the connection of these ideas with physical robot systems. We suggest that perhaps the goal is not to build a system that exactly mimics human cognition and learning, but instead to draw *insight* and *computational analogs* from ideas in cognitive development. Robots are not humans and do not necessarily have to learn at their granularity nor produce the same artifacts through learning. But we regard that studying the development of cognition in biological systems may be crucial in building algorithms for artificial systems. In this manuscript, we outlined powerful nonlinear approximation tools with inspirations from cognitive development and control theory to produce a direction in which lifelong learning frameworks can be applied to autonomous systems that continuously acquire a hierarchy of complex motor behaviors in addition to a dynamics representation of interactions in the world. A number of the ideas presented in this manuscript were influenced by the computational development of action and representation going back to [@Grupen2005]. Their work, however, assumes there exists some preordained developmental guideline under the notion of a *Developmental Assembler* that provides design constraints and developmental schedules. Such entities assign task-specific rewards that are the fundamental motivators for the acquisition of complex behaviors. In the case of our review, we outlined an example reward paradigm that computes rewards that adhere to the system’s internal predictions of the world and of its evolution through interaction—tying together the dynamic modeling of action-related complexes in the world.
Further, we surveyed numerous deep learning advances pertaining to robotics and found a close connection between many deep reinforcement learning paradigms and classical concurrent control schemes under the *control basis* formulation. With this survey, we would like to acknowledge that deep learning should be considered a hyperparametric approximation tool that alone is likely not capable of attacking all problems. And as such, we foresee a direction in which these powerful approximators are integrated with closed-loop control and optimization paradigms, driven by principled motivators, and inspired by insight from cognitive development to realize a robust lifelong self-supervised system. Acknowledgements {#acknowledgements .unnumbered} ================ Although the author’s affiliations are with the Charles Stark Draper Laboratory, Inc., any opinions, findings, conclusions, or recommendations expressed in this material are solely those of the author(s) and do not necessarily reflect the views of the organization. I would like to thank Takeshi Takahashi and Roderic A. Grupen for their expertise, enthusiasm, and insight on preliminary work with control state prediction networks that led up to this direction of thinking. Lastly, and most importantly, the stance, the structure, and the direction of this manuscript would not be in their current form without the critiques and “skepticisms” of my good friend Mitchell Hebert. [^1]: This particular phenomenon is categorized under *open set recognition*. [@Bendale2015] proposed a new model layer, OpenMax, that estimates the probability of input coming from unknown classes and rejects fooling adversarial examples. [^2]: In fact, a coalition of researchers during a Robotics: Science and Systems (RSS) workshop entitled *Are the Sceptics Right? Limits and Potentials of Deep Learning in Robotics* (June 2016) in Michigan, USA has argued that it may not be ideal to learn end-to-end. They indicate that in the same way that we would never want to learn sort when we have quicksort, it makes little sense to learn low-level torque activations when we understand kinematics. [^3]: It is entirely untrue to state that Guided Policy Search algorithms have *no* guarantees. Actually, the original formulation has asymptotic local convergence guarantees. Later, [@Montgomery2016] reformulated GPS under approximate mirror descent to find convergence guarantees in simplified convex and linear settings and bounded guarantees in the nonlinear setting. [^4]: [@Grupen2005] expresses that *packaged movement patterns* “reside in the central and peripheral nervous system and range from involuntary responses to cortically mediated visual reflexes \[and\] contribute to the organization of behavior at the most basic level by constituting a sensorimotor instruction set for the developing organism.” [^5]: In the next subsection, we will discuss composite controllers, which have similar convergence guarantees as outlined and proven by [@Platt-TRO10]. [^6]: The developmental process from neonate to approximately a year in age consists of a precisely-timed chronicle of emergence and inhibition of primitive, postural, or bridge reflexes that contribute to the organized development of complex behavior and skill acquisition [@Law2011]. With age, myelination occurs in the infant, resulting in increased controllable degrees of freedom and resolution in motor activity [@Oudeyer2013].
This form of maturation is especially prominent when fine motor control starts to emerge, in cases such as the pincer grasp reflex. When the infant develops appropriate skills, it begins to play and interact with the environment and objects in it through exploratory activity [@Oudeyer2007]. Increased motor acuity and refined motor skills are important for development in general and affect what kinds of information can be extracted from the environment. As motor skills develop, complicated representations of the world can also be constructed through the additional information provided by these actions [@Libertus2010]. [^7]: The choice to derive these transition networks from affordances defined through aspect graphs is due to convenience. We are aware of and acknowledge the numerous affordance representations in the literature, like Object Action Complexes (OACs) [@Kruger2011], to name a few. [^8]: While a complete review of all task planning frameworks is outside the scope of this paper, we discuss how lifelong learning artifacts can integrate with a number of selected planners to accomplish task-relevant solutions.
[TWO STEP MECHANISM OF $\eta, \ \eta',\ \omega, \ \phi $ MESON]{} [PRODUCTION IN $pD\to ^3HeX$ REACTION]{} L.A. Kondratyuk [^1]\ [*Institute of Theoretical and Experimental Physics, B.Cheremushynskaya 25, Moscow 117259, Russia*]{} Yu.N. Uzikov [^2]\ [*Laboratory of Nuclear Problems, Joint Institute for Nuclear Research, Dubna, Moscow reg. 141980, Russia*]{} Abstract {#abstract .unnumbered} ======== The differential cross sections of $pD\to ^3He X$ reactions, where $X=\eta, \ \eta'\ ,\omega, \ \phi $, are calculated on the basis of a two-step mechanism involving the subprocesses $pp\to d\pi ^+$ and $\pi^+n\to Xp $. It is shown that this model describes well the form of the energy dependence of the available experimental cross sections at the final c.m.s. momentum $p^*=0.4-1$ GeV/c for the $\eta $ and at $p^*=0-0.5$ GeV/c for the $\omega $ meson, as well as the ratios $R(\eta'/\eta) $ and $R(\phi/\omega)$. The absolute value of the cross section is underestimated by an overall normalization factor of about 3 for $\eta, \ \eta '$ and by nearly one order of magnitude for $\omega$ and $\phi$. The spin-spin correlations are predicted for the reaction ${\vec D}{\vec p}\to ^3He X$ in the forward-backward approximation for the elementary amplitudes. PACS: 25.10.+s, 25.40.Qa, 13.60.Le, 14.40.Aq KEYWORDS: $pd$ interaction, $\eta,\ \eta', \ \omega, \ \phi $ meson production. 1\. Reactions $pD\to ^3He X$, where $X$ denotes a meson heavier than the pion, are of great interest for several reasons. Firstly, a high momentum transfer ($\sim 1$ GeV/c) to the nucleons takes place in these processes. Secondly, an unexpectedly strong energy dependence of $\eta $ meson production was observed near the threshold [@berg88]. In this respect the possible existence of quasi-bound states in the $\eta -^3He$ system is discussed in the literature [@kuwilk]- [@kugree]. Thirdly, production of the $\eta, \eta', \phi $ mesons, whose wave functions contain valence strange quarks, raises a question concerning the strangeness of the nucleon and the mechanism of Okubo-Zweig-Iizuka rule violation [@kuellis],[@kuwurz]. As a result of this discussion, an experimental investigation of the reaction ${\vec D} {\vec p}\to ^3He\phi$ has been proposed [@kukhac] in Dubna. Finally, preliminary experimental data on $\eta '$ and $\phi $ meson production in the reactions $pD\to ^3He \eta'$ and $pD\to ^3He \phi$ near the thresholds are available at present [@kusaturn] at meson c.m.s. momenta $p^*\sim 20\ MeV/c$. New experimental data on the $pD\to ^3He \omega $ reaction were obtained recently in Ref. [@kuwurz95] above the threshold. In this connection the mechanism of the reactions in question seems to be very important. 2\. The first attempt to describe the reaction $pD\to ^3He \eta $ on the basis of the three-body mechanism [@kulagel] displayed the important role of the intermediate pion beam in this process. As was mentioned for the first time in Ref. [@kukilian], at the threshold of the reaction $pD\to ^3He \eta $ the two-step mechanism including the two subprocesses $pp\to d \pi ^+$ and $\pi^+n\to \eta p $ is favoured. The advantage of this mechanism is that at the threshold of this reaction, and at zero momenta of Fermi motion in the deuteron and the $^3He$ nucleus, the amplitudes of these subprocesses are practically on the energy shell. It is easy to check that this peculiarity (the so-called velocity matching, or kinematic miracle) also takes place above the threshold if the c.m.s.
angle $\theta_{c.m.}$ of the $\eta $ meson production with respect to the proton beam is $\theta_{c.m.}\sim 90^o$. For the $\omega, \ \eta'$ and $\phi $ mesons the velocity matching takes place above the corresponding thresholds only at $\theta_{c.m.}\sim 50^o-90^o$, depending on the meson mass and the energy of the incident proton. The two-step model of the $pD\to ^3He \eta $ reaction was developed in Refs. [@kuklu], [@kuwilkf]. The progress in comparison with the microscopic model [@kulagel] was the comprehension of the very important role of the final state interaction in the $\eta -^3He $ system near the threshold. Recently Fäldt and Wilkin [@kuwilkfpl] found that the two-step model can describe the form of the threshold cross section of the $pD\to ^3He X^0$ reaction as a function of the mass of the produced meson $X^0= \eta, \omega, \eta',\phi $. The new points of the present work are the following. (i) We extend the two-step model [@kuklu] to the production of $\eta, \omega, \eta'$ and $\phi $ mesons above the thresholds (at final c.m.s. momenta $p^*$ of several hundred MeV/c). Previously, only $\eta $ meson production was investigated above the threshold, by Laget and Lecolley in the microscopic model. (ii) We consider, in a more general form than in Ref. [@kuwilkfpl], the influence of spin effects. In particular, we predict the pD spin-spin correlation for the reaction ${\vec D}{\vec p}\to ^3He X$ in the energy region of the proposed Dubna experiment [@kukhac]. 3\. Proceeding from the 4-dimensional technique of nonrelativistic graphs, one gets the following expression for the amplitude of the $pD\to ^3He X $ reaction in the framework of the two-step model corresponding to the Feynman graph in Fig. 1 of Ref. [@kuklu] $$A(pD\to ^3He X)= C{\sqrt 3\over{ 2m}} A_1(pp\to d\pi^+) A_2(\pi^+n\to Xp) {\cal F}(P_0,E_0), \label{1}$$ where $A_1$ and $A_2$ are the amplitudes of the $pp\to d\pi^+$ and $\pi^+n\to Xp$ processes respectively, $m$ is the nucleon mass, and $C=3/2$ is the isotopic spin factor taking into account the sum over isotopic spin projections in the intermediate state. This factor is the same for all isoscalar mesons $X$ under discussion. The form factor has the form $$\label{2} {\cal F} (P_0,E_0)=\int {d^3q_1\over {(2\pi )^3}} {d^3q_2\over {(2\pi )^3}} {\Psi_D({\bf q}_1) \Psi_{\tau }^*({\bf q}_2)\over {E_0^2-({\bf P}_0+{\bf q}_1+{\bf q}_2)^2 +i\epsilon}}.$$ Here $\Psi_D({\bf q}_1)$ is the deuteron wave function and $ \Psi_{\tau }({\bf q}_2)$ is the $^3He$ wave function in momentum space for the $d+p$ channel; $E_0$ and ${\bf P}_0$ are the energy and momentum of the intermediate $\pi $ meson at zero Fermi momenta in the nuclear vertices, ${\bf q}_1={\bf q}_2=0$: $$E_0=E_X+{1\over 3}E_{\tau }-{1\over 2}E_D, \ \ {\bf P}_0=-{2\over 3}{\bf P}_{\tau } - {1\over 2} {\bf P}_D, \label {3}$$ where $E_j$ is the energy of the $j$-th particle in the c.m.s., and ${\bf P}_D$ and ${\bf P}_{\tau }$ are the relative momenta in the initial and final states respectively, with $|{\bf P}_{\tau }|\equiv p^*$. In comparison with [@kuwilkf] we do not restrict ourselves to the linear approximation over ${\bf q}_1$ and ${\bf q}_2$ in the $\pi $ meson propagator and take into account the dependence on the Fermi momenta exactly. This results in a faster decrease of $|{\cal F} (P_0,E_0)|$ with increasing mass of the produced meson than in Ref. [@kuwilkf].
Amplitude (\[1\]) is related to the differential cross section of the $pD\to ^3He X$ reaction by the following expression $$\label {4} {d\sigma\over {d\Omega }}={1\over{64\pi ^2}}{1\over s_{pd}} {|{\bf P}_{\tau }|\over {|{\bf P}_D|}} {\overline {|A(pD\to ^3He X)|^2}} ={|{\bf P}_{\tau }|\over {|{\bf P}_D|}}|f(pD\to ^3HeX)|^2,$$ where $\sqrt{s_{pD}}$ is the invariant $p+D$ mass. The amplitudes $A_1(pp\to d\pi^+)$ and $A_2(\pi^+n\to Xp)$ are similarly related to the corresponding cross sections. When deriving Eq. (\[1\]) one factored the amplitudes of the elementary subprocesses $A_1$ and $A_2$ outside the integral sign over ${\bf q_1}$ and ${\bf q_2}$ at the point ${\bf q}_1={\bf q}_2=0$ and then replaced them by the amplitudes of the corresponding free processes. Neglect of the off-energy-shell effects is expected to be correct under the velocity-matching conditions. Taking into account the off-shell and Fermi motion effects in the optimal approximation [@kugurvitz], one obtains numerical results very close to the approximation (\[1\]) if the energy dependence of the cross sections of the elementary processes is smooth enough. The cross section can always be presented in the following formally separable form $$\label{5} {d\sigma\over {d\Omega }}=R_S K |{\cal F}(P_0,E_0)|^2 {d\sigma\over {d\Omega }} (pp\to d\pi^+) {d\sigma\over {d\Omega }}(\pi^+n\to Xp)$$ where $K$ is the kinematic factor defined according to Eq. (21) in Ref. [@kuklu] for the differential cross section developed in a spinless approximation. (Indeed, the factor $K$ from Ref. [@kuklu] is multiplied here by the factor $(9/8)^2$ in order to obtain the correct normalization condition for the vertex function $d+p\to ^3He$.) The additional factor $R_S$ in Eq. (\[5\]), which is absent in Ref. [@kuklu], takes into account spins and generally depends on the mechanism of the reaction. It is important to remark that the approximation (\[1\]) does not generally lead to the condition $R_S=const$ because of the complicated spin structure of the amplitudes $A_1(pp\to d\pi^+)$ and $A_2(\pi^+n\to Xp)$. The analysis is simpler at the angles $\theta _{c.m.}=0^o$ and $ 180^o$. In this case the production of a pseudoscalar meson in $\pi^+ n\to Xp$ in the forward-backward direction is described by only one invariant amplitude. The processes $pp\to d\pi^+$ and $\pi^+n\to \omega(\phi )p$ are determined by two forward-backward invariant amplitudes $a_i$ and $b_i$ according to the following expressions [@kugermw90] $$\label{6} \hat{A}_1(pp\to d\pi^+) = a_1 {\bf e n} +i b_1 {\bf \sigma } [{\bf e}\times {\bf n}],$$ $$\label{6a} \hat{A}_2(\pi^+n\to p\omega ) = a_2 {\bf e \sigma} + b_2 ({\bf \sigma n})({\bf e \sigma }),$$ where ${\bf n}$ is the unit vector along the incident proton beam, ${\bf e}$ is the polarization vector of the spin-1 particle $(d, \omega, \phi)$, and ${\bf {\sigma }}$ denotes the Pauli matrices. According to our numerical calculations, the contribution of the $D$-component of the nuclear wave functions to the modulus squared of the form factor $|{\cal F}(P_0,E_0)|^2$ is less than $\sim 10$ % for the deuteron and less than $\sim 1$ % for $^3He$.
Using the $S$-wave approximation for the nuclear wave functions and taking into account Eqs. (\[6\],\[6a\]), we have found the following expressions for the spin factor $R_S$ of the spin-averaged cross section in the two-step model: $$\label{7} R_{0 }={1\over 3}\left ( {1\over 2}|a_1|^2+{2\over 3}|b_1|^2-{2\over 3} Re(a_1b_1^*)\right ) \left [{1\over 2}|a_1|^2+|b_1|^2\right ]^{-1}$$ for the pseudoscalar mesons, and $$R_{1 }={1\over 3}\left [{1\over 2}|a_1|^2(3|a_2 |^2+\gamma )+{2\over 3} (|a_2 |^2+\gamma )Re(a_1b^*_1)+{2\over 3}|b_1|^2(5|a_2|^2+\gamma )\right ]$$ $$\label{7a} \times \left [{1\over 2}(|a_1|^2+2|b_1|^2)(3|a_2|^2+\gamma) \right ]^{-1}$$ for the vector mesons, where $\gamma= |b_2|^2+2Re(a_2^*b_2)$. As follows from Ref. [@kugermw90], at the threshold of $\eta $ meson production, $T_p\sim 0.9$ GeV, one has $|b_1|/|a_1|\sim 0.1$, which allows one to put $R_0=1/3$ [@kuwilkf],[@kuwilkfpl]. Unfortunately, experimental data on the spin structure of the $pp\to d\pi^+$ and $\pi^+n\to \omega (\phi )p$ amplitudes at energies $T_p\geq 1400$ MeV are not available. Thus, the exact absolute magnitude of the spin factors and of the cross sections is rather uncertain. We have found numerically from Eqs. (\[7\]-\[7a\]) that the values $R_0$ and $R_1$ vary in the range from 1/9 to 4/9 when the complex amplitudes $a_i$ and $b_i$ vary arbitrarily. A remarkable peculiarity of the condition $|a_1|\gg |b_1|$ is that in this case the spin factor for vector mesons, $R_1$, does not depend on the behaviour of the amplitudes $a_2$ and $b_2$ and, in accordance with Eq. (\[7a\]), equals $R_1=1/3$. This value is very close to the maximal one, $R_S^{max}= 4/9$. As will be shown below, the assumption $|a_1|\gg |b_1|$, which provides the condition $R_0=R_1={1\over 3}= const$, is compatible with the main features of the observed cross sections for $\eta, \omega$ and $\eta '$ meson production. The numerical calculations are presented below at $R_0=R_1={1\over 3}$. The numerical calculations are performed using the RSC wave function of the deuteron [@alberi]. The parametrization [@kuzuyu] of the overlap integral between the three-body wave function of the $^3He$ nucleus and the deuteron is used for the wave function of $^3He$, $\Psi_{\tau }$, in the channel $d+p$. The value $S_{pd}^{\tau }=1.5$ is taken for the deuteron spectroscopic factor in $^3He$ [@schiav]. The numerical results are obtained in the $S$-wave approximation for the spin-averaged cross sections, taking into account the D-component of the deuteron for the spin correlations. The form factor (\[2\]) can be expressed through the S- and D-components of the nuclear wave functions $\varphi_l$ by the following integrals: $${\cal F}_{Lll'}(P_0,E_0)={1\over 4\pi}\int _0^\infty j_L(P_0r)\exp{(iE_0r)} \varphi^{\tau}_l(r)\varphi ^d_{l'}(r)r\ dr; \label{8}$$ the normalization integral $\int _0^\infty [\varphi_0^2(r)+ \varphi_2^2(r)]r^2dr$ equals 1 for the deuteron and $S_{pd}^\tau $ for $^3He$. In the S-wave approximation we have ${\cal F}(P_0,E_0)={\cal F}_{000}$. The parametrization [@ritchie] is used here for the differential cross section of the $pp\to d\pi^+$ reaction. The experimental data on the total cross section of the reactions $\pi^+n\to p\eta (\eta',\omega,\phi) $ are taken from Ref. [@cern83], and isotropic behaviour of the differential cross section is assumed here.
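Once the radial wave functions are specified, Eq. (\[8\]) is a one-dimensional integral that can be evaluated by standard quadrature. The sketch below (Python with SciPy) is illustrative only: the exponential radial functions are placeholders standing in for the RSC deuteron wave function [@alberi] and the $^3He$ overlap parametrization [@kuzuyu] actually used here, and they are not normalized to the conditions stated above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

# Placeholder S-wave radial functions (Hulthen-like shapes), NOT the RSC / overlap
# parametrizations used in the paper; the ranges (in fm^-1) are illustrative only.
def phi_d(r, a=0.23, b=1.4):       # deuteron S-wave, r in fm
    return np.exp(-a * r) - np.exp(-b * r)

def phi_tau(r, a=0.4, b=1.8):      # 3He (d+p channel) overlap S-wave, r in fm
    return np.exp(-a * r) - np.exp(-b * r)

def form_factor_000(P0, E0, rmax=40.0):
    """Numerical estimate of Eq. (8) with L = l = l' = 0 (S-wave); P0, E0 in fm^-1."""
    re_part = lambda r: spherical_jn(0, P0 * r) * np.cos(E0 * r) * phi_tau(r) * phi_d(r) * r
    im_part = lambda r: spherical_jn(0, P0 * r) * np.sin(E0 * r) * phi_tau(r) * phi_d(r) * r
    re, _ = quad(re_part, 0.0, rmax, limit=200)
    im, _ = quad(im_part, 0.0, rmax, limit=200)
    return (re + 1j * im) / (4.0 * np.pi)

# |F_000|^2 for a few illustrative intermediate-pion momenta (1 GeV/c ~ 5.07 fm^-1).
for P0 in (1.0, 2.0, 3.0):
    F = form_factor_000(P0=P0, E0=1.5)
    print(f"P0 = {P0:.1f} fm^-1   |F_000|^2 = {abs(F)**2:.3e}")
```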
Fig. 2 shows the results of calculations of the modulus squared of the form factor $|{\cal F}_{000}(P_0,E_0)|^2$ for the production of $\eta, \ \eta', \ \omega, \ \phi$ mesons at the angle $180^0$ as a function of the kinetic energy $T_p$ of the incident proton in the laboratory system. One can see from this figure that the value of $|{\cal F}_{000} (P_0,E_0)|^2$ decreases exponentially with increasing $T_p$, and the slope in the logarithmic scale is the same for all mesons in question. It is important to remark that at a given energy $T_p$ the value of the form factor $|{\cal F}_{000}(P_0,E_0)|^2$ is practically the same for all mesons whose production thresholds in the reaction $pD\to ^3HeX$ are below $T_p$. Therefore the difference in the production probability of different mesons in the two-step model is mainly due to the difference of the $\pi^+n\to Xp$ amplitudes. The results of calculations of the differential cross sections are presented in Fig. 3 in comparison with the experimental data at $\theta_{c.m.} =180^0$ from Refs. [@kusaturn], [@kuwurz95] and at $\theta_{c.m.}=60^0$, $T_p=3$ GeV from Ref. [@kubrody]. We do not discuss here the region near the $\eta $ threshold and the related problem of the $\eta -^3He $ final state interaction, which was investigated in detail previously [@kuklu],[@kuwilkf]. It follows from Fig. 3,[*a*]{} that the form of the energy dependence of the calculated cross section for the $pD\to ^3He \eta$ reaction at energies sufficiently higher than the threshold, $T_p\ge 1.3$ GeV ($p^*=0.4 - 1.0 $ GeV/c), is in qualitative agreement with the experimental data at $\theta_{c.m.} =180^0$, and to a lesser degree at $\theta _{c.m.}=60^o$. To obtain the absolute value of the cross section we need the normalization factor $N=3$, which is close to $N=2.4$ found in Ref. [@kuwilkfpl] at the threshold. According to our calculations (Fig. 3,[*b*]{}), the cross section of $\eta '$ meson production near the threshold $(p^*=22$ MeV/c) and at $T_p=3$ GeV agrees with the experimental data in absolute value with the same factor $N=3$ as for the $\eta $ meson. As one can see from Fig. 4, the shape of the modulus squared $|f|^2$ of the $pD\to ^3He\omega $ reaction amplitude as a function of the momentum $p^*$ agrees well with the form observed in the experimental data [@kuwurz95] in the range $p^*=0 - 500$ MeV/c. It should be noted that the ratio $R(\phi/\omega )=|f(pD\to ^3He \phi)|^2/|f(pD\to ^3He \omega)|^2$ near the corresponding thresholds predicted by the model, $R^{th}=0.052$, is in agreement with the experimental value $R^{exp}=0.07\pm 0.02$. However, the calculated absolute value of the cross section for the vector mesons is essentially smaller than the experimental one. At the threshold ($p^*\sim 20$ MeV/c) the normalization factor $N$ is 5.9 for the $\omega $ meson and 6.6 for the $\phi $ meson. To describe the absolute magnitude of the cross section in the range $100$ MeV/c $\leq p^*\leq 400$ MeV/c one needs the normalization factor $N=9.6$. On the other hand, at $T_p=3$ GeV for $\theta_{c.m.}=60^o$ [@kubrody] the calculated cross section coincides with the experimental value in absolute magnitude. The above-mentioned agreement with the experimental data in the form of the energy dependence for $\eta, \ \omega $ (and $\eta '$) and in the ratios $\eta'/\eta$ and $\phi/\omega$ supports the assumption that the spin factors $R_0 $ and $R_1$ are approximately constant in the corresponding energy regions. Therefore the assumption $|a_1|\gg |b_1|$ seems to be reasonable enough.
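The statements about the spin factors, namely that $R_0$ and $R_1$ of Eqs. (\[7\])-(\[7a\]) stay between $1/9$ and $4/9$ for arbitrary complex amplitudes and that $R_1$ reduces to $1/3$ once $b_1=0$, are easy to spot-check numerically. A minimal sketch (Python; the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def R0(a1, b1):
    """Spin factor of Eq. (7) for pseudoscalar mesons."""
    num = 0.5 * abs(a1)**2 + (2/3) * abs(b1)**2 - (2/3) * (a1 * np.conj(b1)).real
    den = 0.5 * abs(a1)**2 + abs(b1)**2
    return num / (3 * den)

def R1(a1, b1, a2, b2):
    """Spin factor of Eq. (7a) for vector mesons."""
    gamma = abs(b2)**2 + 2 * (np.conj(a2) * b2).real
    num = (0.5 * abs(a1)**2 * (3 * abs(a2)**2 + gamma)
           + (2/3) * (abs(a2)**2 + gamma) * (a1 * np.conj(b1)).real
           + (2/3) * abs(b1)**2 * (5 * abs(a2)**2 + gamma))
    den = 0.5 * (abs(a1)**2 + 2 * abs(b1)**2) * (3 * abs(a2)**2 + gamma)
    return num / (3 * den)

def rand_amp(n):
    return rng.normal(size=n) + 1j * rng.normal(size=n)

a1, b1, a2, b2 = (rand_amp(100000) for _ in range(4))
print("R0 range:", R0(a1, b1).min(), R0(a1, b1).max())   # stays within [1/9, 4/9]
print("R1 range:", R1(a1, b1, a2, b2).min(), R1(a1, b1, a2, b2).max())
print("R1 with b1 = 0:", R1(a1, 0.0, a2, b2)[:3])        # -> 1/3, independent of a2, b2
```

The extreme values $1/9$ and $4/9$ are reached only for special amplitude ratios, so a random scan lands strictly inside the interval.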
This allows us to give a definite prediction for spin-spin correlations in the reaction ${\vec p}{\vec D}\to ^3HeX$ with a polarized deuteron and proton. Using Eqs. (\[6\], \[6a\]) and taking into account the D-component of the deuteron wave function, we find under the above condition $b_1=0$ that the cross section of vector meson production in the case of polarized colliding particles can be obtained from Eq. (\[5\]) by the following replacement: $$R_1|{\cal F}(P_0,E_0)|^2 \to {1\over 3}\Biggl \{ |{\cal F}_{000}|^2 (1- {\bf P}_p\cdot {\bf P}_D) -\sqrt{2}Re({\cal F}_{000}{\cal F}_{202}^{\ *}) \left [{\bf P}_p\cdot{\bf P}_D- {3\over {2{\bf P}_0^2}} [{\bf P}_0\times {\bf P}_p]\cdot [{\bf P}_0\times {\bf P}_D]\right ]$$ $$\label{11} +|{\cal F}_{202}|^2 \left [1+{1\over 4}{\bf P}_p\cdot{\bf P}_D-{3\over {2{\bf P}_0^2}} [{\bf P}_0\times {\bf P}_p]\cdot[{\bf P}_0\times {\bf P}_D] +{9\over{ 4 {\bf P}_0^4}} ([{\bf P}_0\times {\bf P}_p]\cdot{\bf P}_0)([{\bf P}_0\times {\bf P}_D] \cdot{\bf P}_0) \right ] \Biggr \},$$ where ${\bf P}_p$ and ${\bf P}_D$ are the polarization vectors of the proton and deuteron respectively, and the momentum ${\bf P}_0$ is defined in Eq. (\[3\]). It is assumed here that the tensor polarization of the deuteron is zero. Using this result, we obtain the following expression for the spin-spin asymmetry in the case ${\bf P}_D\perp {\bf P}_0$ and ${\bf P}_p\perp {\bf P}_0$: $$\label{12} {\Sigma}_1= {d\sigma (\uparrow \uparrow)-d\sigma (\uparrow \downarrow)\over{ d\sigma (\uparrow \uparrow)+d\sigma (\uparrow \downarrow)}}= -{{|{\cal F}_{000}|^2-|{\cal F}_{202}|^2- {1\over \sqrt{2}}Re({\cal F}_{000}{\cal F}_{202}^{\ *})}\over {|{\cal F}_{000}|^2+|{\cal F}_{202}|^2}},$$ where $d\sigma (\uparrow \uparrow)$ and $d\sigma (\uparrow \downarrow)$ are the cross sections for parallel and antiparallel orientations of the polarization vectors of the proton and deuteron. We have found numerically from Eq. (\[12\]) that near the threshold $\Sigma_1(\phi)=-0.95 $, and that $\Sigma_1$ goes to $-1$ very quickly above the threshold. A very similar result is obtained for the $\omega $ meson: $\Sigma_1(\omega) =-0.92$. Neglecting the D-component of the deuteron wave function, we obtain one and the same result for vector and pseudoscalar mesons: $\Sigma_1=\Sigma_0=-1$. 5\. In conclusion, the two-step mechanism, favoured in the case of $\eta $ meson production near the threshold and at $\theta _{c.m.}\sim 90^o$ owing to the kinematical velocity matching, turns out to be very important also beyond the matching conditions, namely both above the threshold of $\eta $ meson production and in the cases of the $\eta',\ \omega,$ and $\phi $ mesons. Despite its simplicity, the two-step model describes fairly well the shape of the energy dependence of the available experimental cross sections for $\eta$ and $\omega$ production, as well as the ratios $\eta '/\eta $ and $\phi /\omega $ at the thresholds. The absolute value of the cross sections is not described by this model; the largest discrepancy was found for the vector mesons. The normalization factor for the $\omega $ meson, $N= 9.6$, is considerably greater than the value $N=2.4 $ established by Fäldt and Wilkin [@kuwilkfpl] at the thresholds of all mesons under discussion. The reasons for the deficiency in the absolute value of the predicted cross section may be the contribution of non-deuteron states in the subprocess $pp\to \pi NN$ at the first step and the full spin structure of the elementary amplitudes beyond the approximations (\[6\],\[6a\]) and (\[1\]).
For example, the contribution of the two-nucleon state with the spin 0 will modify in a different way the amplitudes of the pseudoscalar and vector meson production in the reaction $pD\to ^3HeX$. The experiments with polarized particles [@kukhac] can give a new information about the mechanism in question. The authors are grateful to M.G.Sapozhnikov for useful discussions. This work was supported in part by grant $N^o$ 93-02-3745 of the Russian Foundation For Fundamental Researches. [99]{} J. Berger et al., [ Phys.Rev.Lett.]{} [**61**]{} (1988) 919; P. Berthet et al., [Nucl.Phys.]{} [**A443**]{} (1985) 589. C. Wilkin, [ Phys. Rev.]{}[ C47]{} (1993) R938. L.A. Kondratyuk, A.V. Lado and Yu.N. Uzikov, [ Yad. Fiz.]{} [**57**]{} (1995) 524. S.Wycech, A.M. Green, J.A. Niskanen, in: Abstr. Book of XV European Few Body Conf. (June 3-4, 1995, Valencia, Spain) p.196; Belyaev V.B. et al. [*ibid*]{}. J.Ellis, M.Karliner, D.E.Kharzeev, M.G.Sapozhnikov, Phys.Lett.[**B353**]{} (1995) 319. R.Wurzinger. Preprint LNS/Ph/94-21, 1994. B. Khachaturov et al. Talk in Int. Symp. DEUTERON-95, Dubna (4-7 July,1995) R.Bertini et al. Preprint LNS/Ph/94-16. R.Wurzinger, R. Siebert, J.Bisplinghoff et al., Phys.Rev.[**C51**]{} (1995) R443. J.M. Laget, J.F. Lecolley, [ Phys. Rev. Lett.]{} [**61**]{} (1988) 2069. K. Kilian and H. Nann, Preprint KFA, Juelich.1989. G. Fäldt and C. Wilkin, Nucl.Phys. A587 (1995) 769. G. Fäldt and C. Wilkin, Phys. Lett.B354 (1995) 20. J.Keyne et al., Phys.Rev. [**D14**]{} (1976) 28. S.A. Gurvitz, Phys. Rev. [**C22**]{} (1980) 725; S.A. Gurvitz, J.-P. Dedonder and R.D. Amado, Phys. Rev. [**C19**]{} (1979) 142; J.F. Germond and C. Wilkin, J.Phys.[**G16**]{} (1990) 381. G. Alberi, L.P. Rosa and Z.D. Thome , [ Phys. Rev. Lett. ]{} [**34**]{} (1975) 503. M.A.Zusupov, Yu.N. Uzikov and G.A.Yuldasheva, [ Izv.AN KazSSR,ser. fiz.-mat.]{} [N 6.]{} (1986) 69. R. Schiavilla, V.R. Pandaripande and R.B. Wiringa, Nucl.Phys. [**A449**]{} (1986) 219. B.G. Ritchie, [ Phys.Rev.]{} [**C44**]{} (1991) 533. V.Flaminnio, W.G.Moorhead, D.R.O.Morrison et al., CERN-HERA, 83-01. H.Brody, E.Groves, R.Van Berg et al., Phys.Rev. [**D9**]{} (1974) 1917. Figure captions {#figure-captions .unnumbered} =============== Fig.1. Two-step mechanism of the reaction $pD\to ^3HeX$. Fig.2. Calculated modulus squared of form factor $|{\cal F}_{000}(P_0,E_0)|^2$ (\[2\],\[8\]) as a function of kinetic energy of the proton in laboratory system $T_p$ for $\eta ,\ \eta',\ \omega, \ \phi$ meson production at $\theta_{c.m.}=180^0$. Fig.3. Differential cross sections of the $pD\to ^3He\eta (\omega ,\ \eta',\ \phi )$ reactions as a function of lab. kinetic energy of proton $T_p$. The curves show the results of calculations at $R_S={1\over 3}$ for different angles $\theta _{c.m.} $ multiplied by the appropriate normalization factor $N$.\ - $pD\to ^3He\eta $: $180^\circ $ (full line, $N=3$), $90^\circ$ (dashed curve, $N=3$), circles are experimental data: $\circ $ - $\theta_{c.m.}=180^0$ Ref.[[@berg88]]{} ; $\bullet $ - $\theta_{c.m.}=60^0$ Ref.[[@kubrody]]{};\ - $pD\to ^3He\eta ' $ at $\theta_{c.m.}=180^0$ (full, N=3) and $\theta_{c.m.}=60^0$ (dashed, N=3); the circles are experimental data for the $\eta '$ production: $\circ $ - $\theta_{c.m.}=180^0$ Ref. 
[[@kusaturn]]{}; $\bullet $ - $\theta_{c.m.}=60^0$ Ref.[[@kubrody]]{}; the dotted line shows the result of the calculation for the $pD\to ^3He\phi$ reaction at $\theta_{c.m.}=180^o$, normalized by the factor $N=6.6$ to the experimental point ($\triangle $) from Ref. [@kusaturn];\ - the same as above but for the reaction $pD\to ^3He\omega $ with normalization factor $N=1$; circles ($\circ $) are the experimental data from Ref. [[@kuwurz95]]{}. Fig.4. The modulus squared of the amplitude of the $pD\to ^3He\omega $ reaction defined by Eq.(\[4\]) as a function of the c.m.s. momentum of the $\omega $ meson, $p^*$. The curve is the result of the calculation at $R_1={1\over 3}$ multiplied by the factor $N= 9.6$; the circles ($\circ $) are the experimental data [@kuwurz95]. \[fig1\] \[fig2\] \[fig3\] [^1]: e-mail address:[email protected] [^2]: e-mail address: [email protected]\ [*Former address*]{}: Department of Physics, Kazakh State University, Timiryazev Str.,47, Almaty 480121, Republic of Kazakhstan\
--- bibliography: - 'Doctoral\_thesis\_Sushma\_1807.bib' --- Abstract {#abstract .unnumbered} ======== This thesis consists of two independent parts: random matrices, which form the first one-third of this thesis, and machine learning, which constitutes the remaining part. The classical Wishart matrix has been defined only for the values $\beta = 1,2$ and $4$ (corresponding to the real, complex and quaternion cases respectively), where $\beta$ indicates the number of real matrices needed to define a particular type of Wishart matrix. The moments and inverse moments of Wishart matrices have both theoretical and practical importance. In the works of Graczyk, Letac and Massam (2003, 2004), Matsumoto (2012), and Collins et al. (2014), a certain additional condition is assumed in order to derive a formula for finite inverse moments of Wishart matrices. Here, we address the necessity of this additional condition. In general, we consider the question of having finite inverse moments for two bigger classes of Wishart-type matrices: the $(m,n,\beta)$-Laguerre matrices, defined for continuous values of $\beta >0$, and compound Wishart matrices for the values $\beta=1$ (real) and $2$ (complex). We show that the $c$-th inverse moment of a $(m,n,\beta)$-Laguerre matrix is finite if and only if $c < (m-n+1)\beta/2$, for $\beta >0$. Moreover, we deduce that the $c$-th inverse moment of a compound Wishart matrix is finite if and only if $c < (m-n+1)\beta/2$, for $\beta =1,2$. The definition of a compound Wishart matrix in the quaternion case ($\beta =4$) is not yet settled, so the condition for the finiteness of inverse moments in this case is left as future work. The second part of the thesis is devoted to the subject of the universal consistency of the $k$-nearest neighbor rule in general metric spaces. The $k$-nearest neighbor rule is a well-known learning rule and one of the most important. Given a labeled sample and a new point $x$, the $k$-nearest neighbor rule first finds ‘$k$’ data points in the sample which are closest to $x$ based on a distance function, and then predicts the label of $x$ as being the most commonly occurring label among the picked ‘$k$’ labels. There is an error if the predicted label is not the same as the true label. A learning rule is universally weakly consistent if the expected (average) learning error converges to the smallest possible error for the given problem (known as the Bayes error). According to the 2006 result of C[é]{}rou and Guyader, the $k$-nearest neighbor rule is universally weakly consistent in every metric space equipped with a probability measure satisfying the strong differentiation property. A 1983 result announced by Preiss states a necessary and sufficient condition for a metric space to satisfy the strong differentiation property for all finite Borel measures. This is the condition of being metrically sigma-finite dimensional in the sense of Nagata. Thus, in every sigma-finite dimensional metric space in the sense of Nagata, the $k$-nearest neighbor rule is universally weakly consistent. The main aim of this part of the thesis is to prove the above result by direct means of statistical learning theory, bypassing the machinery of real analysis. Our proof is modeled on the classical proof by Charles Stone for the Euclidean space. However, the main tool of his proof, the geometric Stone lemma, only makes sense in the presence of a finite dimensional linear structure.
The lemma gives an upper bound on the number of points in a sample for which a given point can serve as one of the $k$-nearest neighbors. We search for an analogue of the geometric Stone lemma for metrically (sigma) finite dimensional spaces in Nagata’s sense, making a number of interesting discoveries on the way. While in the absence of distance ties there is a straightforward analogue of the lemma, it is provably false in the presence of ties, and besides, we show that the distance ties in general metrically finite-dimensional (even zero-dimensional) spaces are unavoidable. At the same time, it turns out that the upper bound in the Stone lemma, although unbounded, grows slowly in $n$ (as the $n$-th harmonic number), which allows to deduce the universal consistency. Further, we establish strong consistency in a metrically finite dimensional space, under the additional condition of zero probability of ties. In the Euclidean case, the result is known in the general case, but historically, it was also first proved in the absence of ties. We leave the question of validity of the result in a metrically sigma-finite dimensional space as an open question. Finally, we work out in detail the necessity part of the proof of the Preiss theorem above. The original note by Preiss only briefly outline the ideas of the proof in a few lines, and to work out sufficiency, Assouad and Quentin de Gromard had written a 61-page long article. The details of the necessity part appear in our thesis for the first time. Declaration {#declaration .unnumbered} =========== I hereby declare the thesis entitled “Topics in Random Matrices and Statistical Machine Learning” has been undertaken by me and reflect my original work. All sources of knowledge used have been duly acknowledged. I declare that this thesis has never been submitted and/or published for any award to any other institution before. Sushma Kumari (18 July 2018) Acknowledgments {#acknowledgments .unnumbered} =============== This thesis is a result of support and guidance of many people. I would like to extend my sincere thanks to all of them. Firstly, I would like to express my sincere gratitude to my supervisors Dr. Beno[î]{}t Collins and Dr. Vladimir G. Pestov for their guidance and encouragement. I greatly acknowledge the support of Dr. Collins during my PhD. I am very grateful to Dr. Pestov for his guidance and critical comments on this research work over emails and calls irrespective of the 12-hour time difference between Brazil and Kyoto. I sincerely acknowledge the help by Dr. Hiroshi Kokubu and the financial support of Kyoto Top Global Unit (KTGU) for Brazil overseas trip. I greatly acknowledge the hospitality of Department of Mathematics, Kyoto University and financial support of JICA-IITH Friendship program for giving me an opportunity to pursue a doctoral course at Kyoto University. I would like to thank my thesis committee members for their support and suggestions. My special thanks goes to my best friend, Mr. Akshay Goel, for the umpteen number of discussions we had over varied topics of mathematics. I would like to thank my super-friends Ms. Jasmine Kaur and Ms. Akanksha Yadav for keeping with me in my good and bad days. I thank my fellow colleagues Gunjan, Prashant, Reddy, Mathieu, Felix for being there. Last but not the least, a very special thanks to my lovely family: mumy, papa and bhai (mother, father and elder brother). Their constant assurance and faith in me has made me come so far. 
Dedication {#dedication .unnumbered} ========== to my family... List of Notations {#list-of-notations .unnumbered} ================= Here, we list the main notations, which are used in both parts of the thesis but in different context. Part I $$\begin{aligned} \beta \ \ \ \ \ \ & \text{ parameter to define matrix ensemble} \ \\ A,B \ \ \ \ \ \ & \text{matrix} \ \\ Q \ \ \ \ \ \ & \text{ compound Wishart matrix} \ \\ \lambda,\xi \ \ \ \ \ \ & \text{ eigenvalues } \ \\ \chi_{s} \ \ \ \ \ \ & \text{ chi distribution with parameter s } \ \\ n, m \ \ \ \ \ \ & \text{ size of a matrix} \end{aligned}$$ Part II $$\begin{aligned} \beta \ \ \ \ \ \ & \text{ dimension of a metric in Nagata sense} \ \\ A,B \ \ \ \ \ \ & \text{measurable sets } \ \\ \Omega \ \ \ \ \ \ & \text{separable metric space} \ \\ Q \ \ \ \ \ \ & \text{metric space, } Q \subseteq \Omega \ \\ n, m \ \ \ \ \ \ & \text{sample size, sub-sample size} \ \\ \chi_{M}(x) \ \ \ \ \ \ & \text{characteristic function of some set $M$,} \\ & \text{equals to 1 if $x \in M$, else 0 } \vspace{0.3cm}\end{aligned}$$ Some of the frequently used notations in the thesis are: $$\begin{aligned} \mathbb{I}_{\{x_i \in A\}} \ \ \ \ \ & \text{indicator function, equals to 1 if $x_i$ is in} \\ & \text{set $A$, otherwise 0} \ \\ \sharp\{A\} \ \ \ \ \ & \text{cardinality of set $A$} \ \\ \rho \ \ \ \ \ & \text{metric}\ \\ B(x,r), \bar{B}(x,r),S(x,r) \ \ \ \ \ & \text{open ball, closed ball, sphere respectively, at $x$ and radius $r$} \end{aligned}$$ List of Figures {#list-of-figures .unnumbered} =============== Figure \[fig:unconnected\] : An illustration for an unconnected family of balls Figure \[fig:finite\_dimension\] : An illustration that real line has metric dimension 2 Figure \[fig:nagata\_dimension\] : An illustration that real line has Nagata dimension 1 Figure \[fig:ball-covering\_dimension\] : An illustration that real line has ball-covering dimension 2 Figure \[fig:classification\] : Illustration of a binary classification problem, classifying new data points into ‘rectangle’ and ‘black dot’ Figure \[fig:consistent rule\] : Illustration of a weakly consistent rule and a smart learning rule [@Hatko_2015] Figure \[fig:bayes\_rule\] : Defining the Bayes classifier $g^{*}$ based on the values of regression function $\eta$ [@Cerou_Guyader_2006] Figure \[fig:knn\_classification\] : Illustration of the $k$-nearest neighbor rule with voting ties for $k=2$ and distance ties for $k=3$ Figure \[fig:cones\] : A cone of angle $\pi/6$ [@Devroye_Gyorfi_Lugosi_1996] Figure \[fig:stone\_lemma\] : Illustration of geometric Stone’s lemma in Euclidean spaces (adapted from Fig. 5.5 of [@Devroye_Gyorfi_Lugosi_1996]) Figure \[fig:stone\_lemma\_fails\_construction\] : Illustration of construction of a sample for which Stone’s lemma fails An Overview {#an-overview .unnumbered} =========== I started my ‘research life’ in the second year of my Masters at Indian Institute of Technology, Hyderabad under the guidance of Dr. Balasubramaniam Jayaram. I worked with Dr. Jayaram on finding a yardstick to empirically measure the concentration of various distance functions. We published a paper entitled ‘Measuring Concentration of Distances-An Effective and Efficient Empirical Index’ in IEEE-TKDE. In 2015, I joined as a doctoral student under Dr. Beno[î]{}t Collins to pursue my newly developing interest in random matrices. 
In the first year of my PhD, I worked on the problem of finding a necessary and sufficient condition to have finite inverse moments for $(m,n,\beta)$-Laguerre matrices. Based on this work, a paper [@Kumari_2018] entitled ‘Finiteness of Inverse Moments of $(m,n,\beta)$-Laguerre matrices’ has been accepted in Infinite Dimensional Analysis, Quantum Probability, and Related Topics. As I had some research experience in machine learning, after discussing with Dr. Collins, I decided to work in machine learning for the remaining time of my PhD. Dr. Collins introduced me to Dr. Vladimir Pestov; both of them were colleagues at the University of Ottawa. Although I had known Dr. Pestov through his works on concentration of measure, which I studied during my masters, the wish to work with him was made possible by Dr. Collins and Kyoto University. Dr. Pestov is my PhD co-supervisor, and we studied the universal consistency of the $k$-nearest neighbor rule in metrically sigma-finite dimensional spaces. We hope to convert this joint work, which constitutes the second part of this thesis, into a scientific paper in the near future. [*Overview of the thesis*]{} This thesis is based on two different areas of mathematics, broadly known as random matrix theory and statistical machine learning. The first part of the thesis examines the finiteness of inverse moments of $(m,n,\beta)$-Laguerre matrices and the second part investigates the universal consistency of the $k$-nearest neighbor rule in metrically sigma-finite dimensional spaces. According to the literature, the theory of random matrices is often employed in different areas of machine learning, such as dimensionality reduction and random projections. Some recent works of Romain Couillet and others [@Liao_Couillet_2017; @Mai_Couillet_2017; @Louart_Liao_Couillet_2017] present the emerging applications of random matrices in machine learning. However, the two topics discussed in this thesis are entirely independent of each other. *Part I* The extensive study of random matrices, specifically Wishart matrices, is credited to the pioneering work of John Wishart [@Wishart_1928] in 1928. John Wishart studied the real Wishart matrices in relation to the sample covariance matrices from a multivariate Gaussian distribution. The complex Wishart matrices were introduced by N. R. Goodman [@Goodman_1963]. Originally, the classical Wishart ensemble was defined only for the parameter $\beta = 1,2$ and $4$, corresponding to real, complex and quaternion Wishart matrices respectively. A while later, in 2002, Dumitriu and Edelman [@Dumitriu_Edelman_2002] generalized the classical Wishart ensemble to a tri-diagonal matrix ensemble called the $(m,n,\beta)$-Laguerre ensemble, defined for general values of $\beta >0$ and having a similar eigenvalue distribution. Another generalization of the Wishart matrices, called compound Wishart matrices, for the values $\beta = 1$ and $2$, was introduced by Roland Speicher [@Speicher_1998]. Let $A$ be a random matrix; then for an integer $c > 0$, $\mathbb{E}\{{ \rm Tr }A^{c} \}$ and $\mathbb{E}\{{ \rm Tr }A^{-c} \}$ are called the $c$-th moment and the $c$-th inverse moment of $A$, respectively. Letac and Massam [@Letac_Massam_2004] were the first to compute all the general moments of Wishart and inverse Wishart matrices of the form $\mathbb{E}\{Q(S)\}$ and $\mathbb{E}\{Q(S^{-1})\}$ in both the real and complex cases, where $Q$ is a polynomial depending only on the eigenvalues of the corresponding matrix $S$ or $S^{-1}$.
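As a quick illustration of these quantities, the $c$-th moment and inverse moment can be estimated by straightforward simulation. A minimal sketch (Python); the convention $A = A_1 + iA_2$ with real standard Gaussian $A_1, A_2$ follows the definition of a complex Wishart matrix given later in Chapter \[chap:Random matrices\], and the function name is ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_moments(m, n, c, trials=2000):
    """Monte Carlo estimates of E[Tr P^c] and E[Tr P^{-c}] for a complex Wishart
    matrix P = A^* A, with A = A_1 + i A_2 and A_1, A_2 i.i.d. standard real
    Gaussian matrices (the convention used later in the thesis)."""
    mom = inv = 0.0
    for _ in range(trials):
        A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
        evals = np.linalg.eigvalsh(A.conj().T @ A)
        mom += np.sum(evals**c)
        inv += np.sum(1.0 / evals**c)
    return mom / trials, inv / trials

# c = 2 < m - n + 1 = 5: both running averages settle as `trials` grows.
print(empirical_moments(m=8, n=4, c=2))
# c = 2 >= m - n + 1 = 2: E[Tr P^{-c}] is infinite, so the second estimate never stabilizes.
print(empirical_moments(m=5, n=4, c=2))
```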
Later, Sho Matsumoto [@Matsumoto_2012] gave the formula for all the general moments and inverse moments of Wishart matrices using the Weingarten function. The explicit expression for the inverse moments of a compound Wishart matrix was obtained by Collins et al. in [@Collins_Matsumoto_Saad_2014]. In [@Collins_Matsumoto_Saad_2014; @Graczyk_Letac_Massam_2003; @Letac_Massam_2004; @Matsumoto_2012], an additional condition such as $c < m-n+1$ for $\beta =2$ (in the complex case) and $c < (m-n+1)/{2}$ for $\beta =1$ (in the real case) was assumed to compute the finite $c$-th inverse moment of Wishart and compound Wishart matrices. Interestingly, it is not known whether this additional condition is necessary to have finite inverse moments. This work is motivated by this question. We consider this property, of having finite inverse moments, in general for the broader family of $(m,n,\beta)$-Laguerre matrices. In this thesis, we present a necessary and sufficient condition for the finite inverse moments of $(m,n,\beta)$-Laguerre matrices and compound Wishart matrices to exist. The main contribution of the first part of the thesis is as follows: (i) Let $S$ be a $(m,n,\beta)$-Laguerre matrix, then $$\begin{aligned} \mathbb{E}\{ {\rm Tr}(S^{-c})\} \text{\ is finite if and only if \ } c < \frac{(m -n +1)\beta}{2}. \end{aligned}$$ This finiteness condition is derived from the eigenvalue distribution. Since Wishart matrices and $(m,n,\beta)$-Laguerre matrices have the same eigenvalue distribution for $\beta = 1,2$ and $4$, the finiteness condition in particular also holds for Wishart matrices. As a natural consequence, we also give a necessary and sufficient condition for compound Wishart matrices to have finite inverse moments. *Part II* The $k$-nearest neighbor rule is one of the simplest, oldest and yet most popular learning rules in statistical machine learning. To predict a label for $x$, the $k$-nearest neighbor rule first finds ‘$k$’ labeled data points, among a given labeled sample of $n$ data points, which are closest to $x$ with regard to some distance function, not necessarily a metric, and takes a majority vote among the selected ‘$k$’ labels. A large part of the theory developed for the $k$-nearest neighbor rule is for metric spaces, due to their well-understood properties. The first proof of universal weak consistency of the $k$-nearest neighbor rule in a finite dimensional Euclidean space $\mathbb{R}^d$ was given by Charles Stone [@Stone_1977] in 1977. He showed that the expected misclassification error converges in probability to the smallest possible error (also known as the Bayes error) as the sample size grows. Stone listed three important conditions that are sufficient to yield universal weak consistency of a learning rule in any finite dimensional normed space. Indeed, these conditions have more general importance: two of Stone's conditions hold for any separable metric space whenever $n,k \rightarrow \infty$ and $k/n \rightarrow 0$. The proof of Stone's theorem was based on a geometrical argument, the so-called (geometric) Stone's lemma. The basic idea is to partition $\mathbb{R}^d$ into $L$ sets with some special convexity properties and show that a point cannot serve as one of the $k$ nearest neighbors of more than $kL$ sample points, where $L$ is a constant depending only on the dimension $d$ and the norm.
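For concreteness, the rule described above can be written in a few lines. The sketch below (Python) uses uniform random tie-breaking on equal distances, which is only one of several possible conventions discussed later; the function names are ours:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def knn_predict(x, sample_points, sample_labels, k, dist):
    """Predict a label for x by majority vote among its k nearest sample points."""
    d = np.array([dist(x, p) for p in sample_points])
    # Random tie-breaking on equal distances: permute first, then stable-sort.
    perm = rng.permutation(len(d))
    order = perm[np.argsort(d[perm], kind="stable")]
    votes = [sample_labels[i] for i in order[:k]]
    # Majority vote (voting ties resolved by Counter's internal ordering here).
    return Counter(votes).most_common(1)[0][0]

# Toy example on the real line with the usual metric.
points = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
labels = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(0.15, points, labels, k=3, dist=lambda a, b: abs(a - b)))  # -> 0
print(knn_predict(1.05, points, labels, k=3, dist=lambda a, b: abs(a - b)))  # -> 1
```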
The proof of the geometric Stone's lemma relies heavily on the structure of $\mathbb{R}^d$ and thus is limited to finite dimensional Euclidean spaces or, more generally, finite dimensional normed spaces [@Duan_2014]. The third condition by Stone is called Stone's lemma and is known to be true only for finite dimensional normed spaces. After almost three decades, C[é]{}rou and Guyader [@Cerou_Guyader_2006] proved, developing the ideas of Devroye [@Devroye_1981], that the $k$-nearest neighbor rule is universally weakly consistent in a broader class of metric spaces, namely those satisfying the weak Lebesgue-Besicovitch differentiation property. It is known (1983, David Preiss [@Preiss_1981]) that a complete separable metric space satisfies the strong Lebesgue-Besicovitch differentiation property if and only if the space is metrically sigma-finite dimensional. Therefore, the $k$-nearest neighbor rule is universally weakly consistent in a complete separable and metrically sigma-finite dimensional space. It was left open by Preiss whether sigma-finite metric dimension of a space is necessary for the weak Lebesgue-Besicovitch differentiation property to hold. Mattila [@Mattila_1971] showed that for a given measure the strong and weak Lebesgue-Besicovitch differentiation properties may not be equivalent. Strengthening the conclusion of Stone's theorem, Devroye et al. [@Devroye_Gyorfi_Krzyzak_Lugosi_1994] proved that universal weak consistency and universal strong consistency are equivalent in Euclidean spaces. Our focus is primarily on metric spaces with finite and sigma-finite metric dimension. The following flow diagram illustrates the bridge between universal consistency, the differentiation property and the dimension of a metric. [Flow diagram: the nodes are (1) finite metric dimension, (2) sigma-finite metric dimension, (3) strong LB-differentiation property, (4) weak LB-differentiation property, (5) universal strong consistency and (6) universal weak consistency; the arrows are labelled "Preiss", "Assouad & Gromard", "C[é]{}rou & Guyader" and "no ties".] In the above diagram, the thick double line represents our results. We already have the following implications: (i) Preiss, Assouad and Gromard: $2 \Leftrightarrow 3$. (ii) C[é]{}rou and Guyader: $4 \Rightarrow 6$. (iii) Always true: $1 \Rightarrow 2$, $3 \Rightarrow 4$ and $5 \Rightarrow 6$. Our principal goal is to investigate the universal consistency of the $k$-nearest neighbor rule in a metrically sigma-finite dimensional space, from the machine learning perspective.
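The implication structure just listed can also be recorded as a small directed graph and the reachable properties computed mechanically; the sketch below (Python) encodes only the implications stated above, with node numbers matching the diagram (the helper function is ours):

```python
from collections import deque

NODES = {
    1: "finite metric dimension",
    2: "sigma-finite metric dimension",
    3: "strong LB-differentiation property",
    4: "weak LB-differentiation property",
    5: "universal strong consistency",
    6: "universal weak consistency",
}
# Implications (i)-(iii) above; 2 <=> 3 is encoded as two arrows.
EDGES = {1: {2}, 2: {3}, 3: {2, 4}, 4: {6}, 5: {6}, 6: set()}

def consequences(start):
    """All properties reachable from `start` by following the stated implications."""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in EDGES[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return sorted(seen - {start})

for node in (1, 2):
    reached = ", ".join(NODES[v] for v in consequences(node))
    print(f"from '{NODES[node]}': {reached}")
# e.g. finite metric dimension => sigma-finite dimension => strong and weak
# differentiation => universal weak consistency.
```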
The classical method to establish universal consistency is to use Stone's theorem. While generalizing Stone's theorem to a metrically finite dimensional space, we made a number of interesting observations, presented in Chapter \[chap:Consistency in a metrically finite dimensional spaces\]. Furthermore, the study of strong consistency in Euclidean spaces was fundamentally initiated by Devroye around $1981$ [@Devroye_1981]. Strong consistency was first proved under the assumption of absolute continuity of measures (that is, zero probability of ties) [@Devroye_Gyorfi_1985], while universal strong consistency was established under an appropriate tie-breaking method much later [@Devroye_Gyorfi_Krzyzak_Lugosi_1994]. In fact, a much stronger statement is true: the notions of weak and strong consistency for the $k$-nearest neighbor rule are equivalent in Euclidean spaces [@Devroye_Gyorfi_Lugosi_1996]. When working with strong consistency in the presence of ties, a good tie-breaking method is needed, as the solution becomes much more complicated. To the best of our knowledge, there are almost no developments on the strong consistency of the $k$-nearest neighbor rule in metric spaces other than Euclidean spaces. After a brief review of the literature on the universal consistency of the $k$-nearest neighbor rule, we see that there are numerous directions for theoretical work. We have accomplished some of them in this thesis. We outline our contributions for Part II of this thesis as follows: 1. (*$3 \Rightarrow 2$*): Preiss sketched the proof of $2 \Leftrightarrow 3$ very briefly, without any details; the sufficiency part was completed by Assouad and Gromard [@Assouad_Gromard_2006]. A thorough explanation of the necessity of sigma-finite metric dimension has not been given before. We give a detailed proof of the necessity part of Preiss’ result (refer to Section \[sec:A result by Preiss\]), that is, that the strong Lebesgue-Besicovitch differentiation property holds only if the space is metrically sigma-finite dimensional. 2. We generalize Stone’s lemma to metric spaces with finite Nagata dimension under the assumption of no distance ties. As a consequence, we give an alternate proof of the weak consistency of the $k$-nearest neighbor rule in metric spaces with finite Nagata dimension under the additional assumption of no distance ties (refer to Section \[sec:Consistency without distance ties\]). We also present some examples reflecting the problem with distance ties. One of the major issues is that Stone’s lemma fails in the presence of distance ties, which indicates that the classical method of using Stone’s theorem may not be the right way to prove universal consistency in such metric spaces (refer to Section \[sec:Consistency with distance ties\]). 3. (*$2 \Rightarrow 6$*): We reestablish the universal weak consistency of the $k$-nearest neighbor rule in a metrically sigma-finite dimensional space under the random uniform tie-breaking method. Stone’s lemma fails in the presence of distance ties, so we give another geometric lemma to work with distance ties. Using this lemma, and not the differentiation-property argument, we give a direct and simpler proof of the universal weak consistency of the $k$-nearest neighbor rule (refer to Section \[sec:Consistency with distance ties\]). This may provide insight into establishing universal consistency for other learning rules in settings where the Lebesgue-Besicovitch differentiation property and other real analysis techniques are not readily applicable. 4.
(*$1 \Rightarrow 4$* partially): We also prove the strong consistency of the $k$-nearest neighbor rule in a separable space which has finite metric dimension, under the assumption of zero probability of distance ties (refer to Section \[sec:Strong consistency\]). This is a new result in this direction, as all the previous results on strong consistency in [@Devroye_Gyorfi_Lugosi_1996] are limited to Euclidean spaces. 5. Davies [@Davies_1971] has constructed an example of a compact metric space of diameter 1 and two distinct Borel measures which give equal values to all closed balls of radius $<1$. Davies’ example fails the differentiation property and therefore, by the result of C[é]{}rou and Guyader, the $k$-nearest neighbor rule is not consistent on Davies’ example. We modify the two Borel measures constructed by Davies to show the inconsistency of the $k$-nearest neighbor rule on Davies’ example directly, without using the differentiation argument. This thesis is organized in the following way. In Part I, Chapter \[chap:Random matrices\] introduces the $(m,n,\beta)$-Laguerre matrices, Wishart and compound Wishart matrices and their joint eigenvalue distributions, while in Chapter \[chap:Finiteness of inverse moments\] a necessary and sufficient condition to have finite inverse moments is derived. In Part II, Chapter \[chap:Dimension of a metric\] introduces the various notions of metric dimension and the differentiation property, followed by our proof of the necessity part of Preiss’ result. Further, Chapter \[chap:Statistical machine learning\] gives an introduction to mathematical concepts in statistical machine learning, and then the $k$-nearest neighbor rule is presented in Chapter \[chap:The $k$-nearest neighbor rule\] with a proof of Stone’s theorem. In Chapter \[chap:Consistency in a metrically finite dimensional spaces\] and Chapter \[chap:Future Prospects\] we present our main results and some possible future directions based on them. Random Matrices {#chap:Random matrices} =============== In this chapter, we introduce the Wishart matrices and two of their generalizations, namely $(m,n,\beta)$-Laguerre matrices and compound Wishart matrices. We also briefly discuss the joint eigenvalue densities of Wishart matrices and $(m,n,\beta)$-Laguerre matrices. Wishart matrix {#sec:Wishart matrix} -------------- A matrix at least one of whose entries is a random variable is called a random matrix. Consider the experiment of tossing a coin and let $\Omega = \{H,T\}$ denote the outcomes. Define a set of random variables $M_{ij}$ from $\Omega$ to $\{0,1\}$ such that $M_{ij}(H) = 0$ and $M_{ij}(T) = 1$ for $1\leq i,j \leq 2$. Then, $$\begin{aligned} M = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \end{aligned}$$ is a $2 \times 2$ random matrix and $ \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} $ is one of the realizations of $M$ (with the tosses for different entries made independently). Random matrix theory is a subject which evolved mainly because of its applications. Random matrices are relevant in numerous fields ranging from number theory and physics to mathematics and machine learning. Based on the distribution of their entries, random matrices are categorized into ensembles. The Wigner ensemble, the Gaussian orthogonal (unitary) ensemble and the Wishart ensemble are the most studied, with entries from the Gaussian distribution. The exceptional properties of the Gaussian distribution make these ensembles special in their areas of application.
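The coin-toss example above is easy to simulate; the sketch below (Python) assumes one independent toss per entry, which is the natural reading given the realization quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_coin_matrix(n=2):
    """An n x n random matrix whose entries are 0 (heads) or 1 (tails),
    with one independent toss per entry (an assumption, see the text)."""
    return rng.integers(0, 2, size=(n, n))

print(random_coin_matrix())   # one realization, e.g. [[0 1], [1 1]]
```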
Random matrix theory shares its main interests with probability and matrix theory, such as studying the limiting distribution of the eigenvalues. The study of random matrices has progressed very rapidly, and there are now many books available, depending on which aspects of random matrices you want to explore. [@Mehta_2004; @Forrester_2010] are the classical texts, whereas [@Tao_2013] is a nice way to get introduced to random matrices. Other books like [@Couillet_Debbah_2011] and [@Foucart_Rauhut_2013] introduce the applications of random matrices in machine learning and compressed sensing with an adequate flavor of pure mathematics. This thesis focuses only on Wishart matrices and their generalizations. Let $\mathcal{K}_{m,n}$ denote the space of all $m \times n$ random matrices having independent and identically distributed entries from the standard Gaussian distribution. Consider four random matrices $A_1, \allowbreak A_2$, $C_1,C_2 \in \mathcal{K}_{m,n}$. (i) The matrix $P_1 = A_{1}^{*}A_1$ is called a *real Wishart* matrix, where $A_{1}^{*}$ is the transpose of $A_1$. (ii) Let $A = A_1 + i A_2$. The matrix $P_2 = A^{*}A $ is called a *complex Wishart* matrix, where $A^{*}$ denotes the conjugate transpose of $A$. (iii) Let $C = C_1 + i C_2$. Then a matrix of the form $$\begin{aligned} P_4 = \begin{pmatrix} A & C \\ -C & A \end{pmatrix}^{*} \begin{pmatrix} A & C \\ -C & A \end{pmatrix} \end{aligned}$$ is called a *quaternion Wishart* matrix. Therefore, a Wishart matrix is of the form $P_{\beta}= A^*A$, defined for the parameter $\beta =1,2$ and $4$ corresponding to real, complex and quaternion Wishart matrices, respectively. Here, the parameter $\beta$ denotes the number of real matrices required to define a Wishart matrix; for example, we require 2 real matrices $A_1$ and $A_2$ to define a complex Wishart matrix. Without explicitly stating it, we always assume $m \geq n$ in the first part of this thesis. Suppose $P_{\beta}$ is a positive-definite matrix; then it has $n$ real and positive eigenvalues. Let $0 < \lambda_1\leq \ldots \leq \lambda_n$ be the eigenvalues of $P_{\beta}$; the exact value of $\beta$ will be specified whenever necessary. The joint eigenvalue density of a Wishart matrix [@Wishart_1928] can be stated as follows: $$\begin{aligned} \label{eqn:jpdf_eigenvalues_wishart} \displaystyle h_{\beta}(\lambda_1,\lambda_2,\dots,\lambda_n) \ = \ Z_{m,n}^{\beta} \prod_{i=1}^{n}\lambda_{i}^{\alpha -1} e^{\left(- \frac{1}{2}\sum_{i=1}^{n}\lambda_i\right)} \prod_{k<j}(\lambda_{j} - \lambda_{k})^{\beta},\end{aligned}$$ where $Z_{m,n}^{\beta}$ is the normalization constant and can be explicitly computed. The equation is defined only for the values $\beta =1,2$ and $4$. In simpler words, corresponding to the values $\beta =1,2$ and $4$, there are real, complex and quaternion Wishart matrices which have the eigenvalue density function $h_{\beta}(\lambda_1,\lambda_2,\dots,\lambda_n)$. Then, the natural question is whether there is any Wishart-type random matrix which has a similar joint eigenvalue density for all values of $\beta > 0$. This was answered by Dumitriu and Edelman [@Dumitriu_Edelman_2002], who constructed a tri-diagonal Wishart-type matrix having the same joint eigenvalue density as in the equation above. The $(m,n,\beta)$-Laguerre matrix is presented in Section \[sec:mnb\_matrix\].
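The real and complex constructions just described translate directly into code. A minimal sketch (Python; the function names are ours), which also confirms numerically that the $n$ eigenvalues are real and positive:

```python
import numpy as np

rng = np.random.default_rng(2)

def real_wishart(m, n):
    """P_1 = A_1^T A_1 with A_1 an m x n matrix of i.i.d. standard real Gaussians."""
    A1 = rng.normal(size=(m, n))
    return A1.T @ A1

def complex_wishart(m, n):
    """P_2 = A^* A with A = A_1 + i A_2, A_1 and A_2 i.i.d. standard real Gaussian matrices."""
    A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    return A.conj().T @ A

for P in (real_wishart(6, 3), complex_wishart(6, 3)):
    evals = np.linalg.eigvalsh(P)     # n real eigenvalues, positive almost surely for m >= n
    print(np.all(evals > 0), np.round(evals, 3))
```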
Compound Wishart matrix {#sec:Compound Wishart matrix} ----------------------- A different yet interesting generalization of Wishart matrices is given by compound Wishart matrices, which were introduced by Roland Speicher [@Speicher_1998]. A compound Wishart matrix is defined only for $\beta=1$ and $2$, corresponding to real and complex compound Wishart matrices, respectively. Let $B$ be an $m \times m$ complex deterministic matrix and let $A$ be an $m \times n$ complex random matrix with independent and identically distributed entries from a standard complex Gaussian distribution; then $$\begin{aligned} Q & = A^{*}BA\end{aligned}$$ is called a complex *compound Wishart* matrix. The matrix $B$ is known as the shape parameter. \[def:compound\_wishart\] The matrix $Q$ is a complex Wishart matrix if $B$ is the identity matrix. We note the following observation for $Q$ when $B$ is a positive definite Hermitian matrix. If $B$ is a positive definite Hermitian matrix, then $B$ has an eigenvalue decomposition, that is, $B = UDU^*$, where $D = {\rm diag}(\xi_1,\dots,\xi_m)$ with $0< \xi_1 \leq \dots \leq \xi_m$, and $U$ is a unitary matrix consisting of eigenvectors of $B$. As $U^{*} A$ has the same distribution as $A$, the matrix $Q$ has the same distribution as $A^{*}DA$. In this thesis, we assume that $B$ is positive definite and Hermitian, so that the matrix $Q$ has real and positive eigenvalues. Real compound Wishart matrices can be defined analogously. The $(m,n,\beta)$-Laguerre matrix {#sec:mnb\_matrix} --------------------------------- Ioana Dumitriu and Alan Edelman generalized the Wishart matrix to the $(m,n,\beta)$-Laguerre matrix so that the equation is defined for all positive values of $\beta$. By the process of bi-diagonalization, a Wishart matrix can be reduced to its corresponding $(m,n,\beta)$-Laguerre matrix for $\beta =1,2$ and $4$. In this section, we study the $(m,n,\beta)$-Laguerre matrix and its joint eigenvalue distribution. Let $X$ be a bi-diagonal matrix with mutually independent non-zero entries on the diagonal and the adjacent off-diagonal, following the distribution $$\begin{aligned} \displaystyle X \sim \begin{pmatrix} \chi_{ m \beta} & \chi_{(n-1)\beta} & & & \\ & \chi_{(m-1)\beta}& \chi_{(n-2)\beta}& & \\ & \ddots & \ddots & & \\ & & \chi_{(m-n+2)\beta} & \chi_{\beta}& \\ & & & \chi_{(m-n+1) \beta}& \end{pmatrix}, \end{aligned}$$ where $\chi_{s}$ is the chi distribution with parameter $s$. The tri-diagonal matrix $S = X^*X$ is called a *$(m,n,\beta)$-Laguerre matrix*. As $S$ is an $n \times n$ positive-definite Hermitian matrix, there are $n$ real and positive eigenvalues. Dumitriu and Edelman have computed the explicit expression for the joint eigenvalue density function of $(m,n,\beta)$-Laguerre matrices, which can be stated as the following theorem. \[jpdf\_eigenvalues\] Suppose $\beta > 0$ and let $0 < \lambda_{1} \leq \lambda_2 \leq \dots \leq \lambda_{n}$ be the $n$ ordered eigenvalues of the $(m,n,\beta)$-Laguerre matrix $S$.
The joint eigenvalue density function is $$\begin{aligned} \label{eqn:jpdf_eigenvalues} \displaystyle h_{\beta}(\lambda_1,\lambda_2,\dots,\lambda_n) \ = \ Z_{m,n}^{\beta} \prod_{i=1}^{n}\lambda_{i}^{\alpha -1} e^{\left(- \frac{1}{2}\sum_{i=1}^{n}\lambda_i\right)} \prod_{k<j}(\lambda_{j} - \lambda_{k})^{\beta} \mathbb{I}_{\{\lambda_1 \leq \ldots \leq \lambda_n \} } ,\end{aligned}$$ where $\alpha = (m-n+1)\beta/2$ and the normalization constant given by $$\begin{aligned} \displaystyle Z_{m,n}^{\beta} \ = \ \frac{2^{-mn\beta/2}}{n!} \prod_{j=1}^{n} \frac{ \Gamma{\left(1+\frac{\beta}{2}\right)} }{\Gamma{\left(1+\frac{\beta }{2}j\right)} \Gamma{\left( \frac{\beta}{2}(m-n+j)\right)}}.\end{aligned}$$ The indicator function $\mathbb{I}_{\{\lambda_1 \leq \ldots \leq \lambda_n \} }$ in the equation ensures that the density function $h_{\beta}$ is non-zero if and only if the eigenvalues ${\lambda_1,\ldots,\lambda_n}$ are ordered. We omit writing $\mathbb{I}_{\{\lambda_1 \leq \ldots \leq \lambda_n \} }$ in the remaining sections as we work only with ordered eigenvalues. The joint eigenvalue density function of a real, complex and quaternion Wishart matrix is same as the joint eigenvalue density function of a $(m,n,\beta)$-Laguerre matrix for the values of $\beta = 1,2$ and $4$, respectively. It is important to acknowledge the fact that the Wishart ensemble have invariance properties. It means that a complex Wishart matrix is invariant under unitary conjugation, that is, a complex Wishart matrix $P_2$ has same distribution as $U^{*}P_2U$ for any non-random unitary matrix $U$. Similarly, a real Wishart matrix and a quaternion Wishart matrix is invariant under orthogonal and symplectic conjugation, respectively. However, while generalizing the Wishart matrix ensemble to $(m,n,\beta)$-Laguerre ensemble in order to have the equation  well-defined for all positive values of $\beta$, we lose this invariance property. Finiteness of Inverse Moments {#chap:Finiteness of inverse moments} ============================= We start this chapter by discussing the research problem and the motivation behind it. We compute the gap probability for the smallest eigenvalue of a $(m,n,\beta)$-Laguerre matrix and then, we present our results for $(m,n,\beta)$-Laguerre matrices and compound Wishart matrices followed by some remarks. Motivation {#subsec:motivation} ---------- Here, we examine the formula for the inverse moments of complex Wishart matrices as given by Graczyk et al. in [@Graczyk_Letac_Massam_2003]. The moments and inverse moments of some random matrices, in particular, Wishart matrices can be expressed as sums of Weingarten functions, as can be seen in Theorem \[thm:moment\_letac\]. Weingarten functions were first introduced by Don Weingarten in 1978 and further studied in depth by Collins [@Collins_2003]. Weingarten functions are used to compute the integrals of product of matrix coefficients over unitary groups with respect to Haar measure . This section has been adapted from [@Collins_Matsumoto_Saad_2014], we refer to [@Collins_Matsumoto_Saad_2014] for detailed understanding of the technical terms. Let $c$ be a positive integer. Consider a set of $l$ positive integers $\eta = (\eta_1,\ldots,\eta_l)$ such that $\sum_{i=1}^{l} \eta_{i} = c$ then $\eta$ is called a partition of $c$. Let $\mathcal{S}_c$ be the symmetric group defined on $[c]=\{1,2,\dots,c\}$. 
Every permutation $\sigma \in \mathcal{S}_c$ can be decomposed uniquely into cycles of lengths $\eta = (\eta_1,\eta_2,\dots,\eta_l)$, such that $\eta$ is a partition of $c$ associated with $\sigma$. Let $p(\sigma)$ denote the length of the vector $\eta$. The identity permutation in $\mathcal{S}_c$ is denoted by $\sigma_{e}$. For $z \in \mathbb{C}$ and a partition $\lambda = (\lambda_1,\ldots,\lambda_m)$ of $c$, define $${\psi}^{\lambda}(z) = \prod_{i=1}^{m} \prod_{j=1}^{\lambda_i} (z + j - i).$$ Let $\chi^{\lambda}$ denote the irreducible characters of $\mathcal{S}_c$. Given a complex number $z$ and a permutation $\sigma \in \mathcal{S}_c $, the unitary Weingarten function is $$\begin{aligned} \label{eqn:weigngarten_function} \displaystyle {\rm Wg}(\sigma,z) \ = \ \frac{1}{c!} \ \sum_{ \substack{\lambda \\ {\psi}^{\lambda}(z) \neq 0 }} \ \frac{\chi^{\lambda}(\sigma_{e})}{ {\psi}^{\lambda}(z)} \chi^{\lambda}(\sigma) ,\end{aligned}$$ where the sum is over all partitions $\lambda$ of $c$ such that ${\psi}^{\lambda}(z) \neq 0 $. The following result states the inverse moment formula for a complex Wishart matrix. \[thm:moment\_letac\] Let $P_2$ be an $n \times n$ complex Wishart matrix and let $\pi = (12\ldots c)$ be a cycle in $\mathcal{S}_{c}$. If $c < (m -n + 1)$, then $$\begin{aligned} \displaystyle \mathbb{E}\{ { \rm Tr}(P_{2}^{-c}) \} = \ (-1)^{c} \sum_{\sigma \in \mathcal{S}_{c}} {\rm Wg}(\pi {\sigma}^{-1};n-m)n^{p(\sigma)},\end{aligned}$$ where ${ \rm Wg}(\pi \sigma^{-1};n-m) $ is the unitary Weingarten function described in the equation above. In Theorem \[thm:moment\_letac\], a sufficient condition $c < (m-n+1)$ is assumed to define the $c$-th inverse moment of a complex Wishart matrix. A similar condition has been assumed in [@Collins_Matsumoto_Saad_2014; @Letac_Massam_2004; @Matsumoto_2012] to find the inverse moments of real and complex Wishart matrices and of compound Wishart matrices. We wonder if this condition is necessary too. We aim to find a necessary and sufficient condition to have finite inverse moments for the bigger class of $(m,n,\beta)$-Laguerre matrices, so that the finiteness condition then also holds for Wishart matrices. Statement of problem: : given a $(m,n,\beta)$-Laguerre matrix $S$ and a compound Wishart matrix $Q$, find functions ${g_1(m,n,\beta)}$ and ${g_2(m,n,\beta)}$ such that (i) $\ { \displaystyle \mathbb{E}\{ {\rm Tr}(S^{-c})\} < \infty \text{ \ \text{ if and only if} \ } c < g_1(m,n,\beta) }.$ (ii) $\ { \displaystyle \mathbb{E}\{ {\rm Tr}(Q^{-c})\} < \infty \text{ \ \text{ if and only if} \ } c < g_2(m,n,\beta) }.$ We make a note in advance that the finiteness of the inverse moments of a $(m,n,\beta)$-Laguerre matrix depends on the behavior of its smallest eigenvalue near zero. So, we first estimate the gap probability of the smallest eigenvalue of a $(m,n,\beta)$-Laguerre matrix in the following section. Gap probability near zero {#subsec:gap\_probability} ------------------------- Let $ 0 < \lambda_{1} \leq \dots \leq \lambda_n$ be the ordered eigenvalues of the $(m,n,\beta)$-Laguerre matrix $S$; hence $\lambda_1$ denotes the smallest eigenvalue of $S$ in the remaining sections of Part I of this thesis. The term *gap probability near zero* means the probability that no eigenvalue of the matrix lies in a neighborhood of zero. Let $a\in (0,1]$.
Then, $$\begin{aligned} \displaystyle \mathbb{P}(\text{no eigenvalues} \in (0,a)) \ = \ & \ \mathbb{P}(\lambda_1 \notin (0,a)) \ \ \ \\ \displaystyle \ = \ & \ 1 - \mathbb{P}(\lambda_1 < a).\end{aligned}$$ We prove in the following that the probability of the smallest eigenvalue being less than $a$ is asymptotically equivalent to some constant times $a^{\alpha}$, where the constant depends only on $m,n$ and $\beta$. \[lem:gap\_prob\] Let $S$ be a $(m,n,\beta)$-Laguerre matrix. Then for $a \leq 1$, we have $$\begin{aligned} \mathbb{P}(\lambda_1 < a) \ \underset{a\rightarrow 0}{\approx} \ C^{\beta}_{m,n} a^{\alpha}, \end{aligned}$$ where $\alpha = (m-n+1)\beta/2$ and $C^{\beta}_{m,n}$ is a non-zero constant depending only on $m,n$ and $\beta$. In simpler words, when $a$ is small enough the probability of $\lambda_1$ belonging to the interval $(0,a)$ is asymptotically equivalent to $C^{\beta}_{m,n} a^{\alpha}$ at the neighborhood of $0$. From the equation , we have $ \mathbb{P}(\lambda_1 < a) = $ $$\begin{aligned} & Z_{m,n}^{\beta} \int_{0}^{a} \int_{\lambda_1}^{\infty} \ldots \int_{\lambda_{n-1}}^{\infty}\prod_{i=1}^{n}\left(\lambda_{i}^{\alpha -1} \right)\prod_{k<j}(\lambda_{j} - \lambda_{k})^{\beta} e^{\left(-\frac{1}{2}\sum_{i=1}^{n}\lambda_i\right)} d\lambda_{n} \ldots d\lambda_1.\end{aligned}$$ By the change of variables $(\lambda_1,\lambda_2,\dots,\lambda_n) = (a x_1,a x_1 + x_2,\dots,a x_1 + x_2 + \dots x_n)$, we have $$\begin{aligned} \mathbb{P}(\lambda_1 < a) & = Z_{m,n}^{\beta} a^{\alpha} \int_{0}^{1} \int_{0}^{\infty} \dots \int_{0}^{\infty}f_{a}(x_1,\ldots,x_n)dx_{n} \ldots dx_1, \end{aligned}$$ where, $$\begin{aligned} f_{a}(x_1,\dots,x_n) & = x_{1}^{\alpha-1} e^{(-nx_1 a/2)} \prod_{i=2}^{n}\bigg(ax_1 + \sum_{j=2}^{i} x_j\bigg)^{\alpha-1} \prod_{j=2}^{n} x_{j}^{\beta} e^{-(n-j+1)x_j /2}\\ & \ \ \ \ \prod_{k=3}^{n}\bigg( \prod_{i=1}^{k-2} \bigg( \sum_{j=i+1}^{k} x_j \bigg)^{\beta} \bigg) .\end{aligned}$$ The function $f_a$ has a point-wise limit, $$\begin{aligned} \lim_{a \rightarrow 0} f_{a}(x_1,\dots,x_n) & = x_{1}^{\alpha-1}\prod_{i=2}^{n}\left(\sum_{j=2}^{i} x_j\right)^{\alpha-1} \prod_{k=3}^{n}\left( \prod_{i=1}^{k-2} \left( \sum_{j=i+1}^{k} x_j \right)^{\beta} \right) \\ & \ \ \ \ \prod_{j=2}^{n} x_{j}^{\beta} e^{-(n-j+1)x_j/2} \ \\ & := f(x_1,\dots,x_n).\end{aligned}$$ Now, we try to find a dominating function for $f_a$. By simple calculations and using the fact that $a \leq 1$, we have the following bounds for the expressions in the function $f_{a}(x_1,\dots,x_n)$. 1. $ \! \begin{aligned}[t] \prod_{i=2}^{n}\left(ax_1 + \sum_{j=2}^{i} x_j\right)^{\alpha-1} & \leq \prod_{j=2}^{n} {x_j}^{-1} \ \prod_{i=2}^{n}\left(1 + \sum_{j=2}^{i} x_j\right)^{\alpha} \ \\ & \leq \prod_{j=2}^{n}{x_j}^{-1} \left(1 + \sum_{j=2}^{n} x_j\right)^{(n-1)\alpha}. \end{aligned} $ 2. $ \! \begin{aligned}[t] \prod_{k=3}^{n}\left( \prod_{i=1}^{k-2} \left( \sum_{j=i+1}^{k} x_j \right)^{\beta} \right) & \leq \prod_{k=3}^{n}\left( \prod_{i=1}^{k-2} \left( 1 + \sum_{j=2}^{n} x_j \right)^{\beta} \right) \ \\ & \leq \left( 1 + \sum_{j=2}^{k} x_j \right)^{((n-1)(n-2)\beta /2) + (n-1)\alpha}. 
\end{aligned} $ From the above computations, we have an upper bound for $f_a$, $$\begin{aligned} \label{eqn:eq1} f_{a}(x_1,\dots,x_n) & \leq x_1^{\alpha -1} \prod_{j=2}^{n} {x_j}^{\beta -1}e^{-(n-j+1)x_j/2} \left( 1 + \sum_{j=2}^{n} x_j \right)^{((n-1)(n-2)\beta /2) + (n-1)\alpha}.\end{aligned}$$ For any positive real numbers $w_1,w_2$ and $p$, the following inequality holds $$\begin{aligned} (w_1 + w_2)^{p} \ \leq \ 2^{p} (w_1^{p} + w_2^{p}). \end{aligned}$$ Set $p \ = \ (n-1)\alpha + (n-1)(n-2)\beta /2 > 0$. Using the above inequality in the equation \[eqn:eq1\], we obtain a dominating function for $f_a$, $$\begin{aligned} g(x_1,\dots,x_n) \ = \ & \ {x_1}^{\alpha-1} \ \prod_{j=2}^{n} {x_j}^{\beta-1} e^{-(n-j+1)x_j /2} \bigg(2^{p} (1+x_2)^{p} + \sum_{j=3}^{n} 2^{(j-2)p} {x_j}^{p} \bigg), \end{aligned}$$ such that $$f_{a}(x_1,\dots,x_n) \ \leq \ g{(x_1,\dots,x_n)}.$$ The function $g$ is a finite sum of integrable functions and hence is integrable. By the Dominated Convergence Theorem, we have $$\begin{aligned} & \int_{0}^{1} \int_{0}^{\infty} \dots \int_{0}^{\infty}f_{a}(x_1,\dots,x_n)dx_{n}\dots dx_1 \\ & \underset{a\rightarrow 0}{\approx} \int_{0}^{1} \int_{0}^{\infty} \dots \int_{0}^{\infty}f(x_1,\dots,x_n)dx_{n}\ldots dx_1 \\ & = \ z_{m,n}^{\beta}, \end{aligned}$$ which is strictly positive by the positivity of $f$. Hence, we have $$\begin{aligned} \hspace{2.1cm} \mathbb{P}(\lambda_1 < a) & \underset{a\rightarrow 0}{\approx} \ a^{\alpha} Z_{m,n}^{\beta} z_{m,n}^{\beta} \ \\ & \ = \ a^{\alpha} C^{\beta}_{m,n}. \end{aligned}$$ A finiteness condition {#sec:A finiteness condition} ---------------------- Moments of a random matrix are useful in various theoretical and practical settings, and it is often useful to know whether such a quantity is finite without computing it explicitly. Here we present one such finiteness condition, which determines whether the $c$-th inverse moment of a $(m,n,\beta)$-Laguerre matrix is finite using only the three parameters $m$, $n$ and $\beta$. ### Preliminaries {#subsec:Def_and_not} In this part, we simply state, without proofs, the results that are needed to establish the finiteness condition. The proofs of these lemmas can be found in standard texts. Let $\mathbb{M}_{m,n}$ denote the space of $m\times n$ matrices with entries from either $\mathbb{R}$ or $\mathbb{C}$, which will be clear from the context. For $m=n$, we simply write $\mathbb{M}_{n}$ for $\mathbb{M}_{n,n}$. Let ${\rm Tr}(A)$ denote the un-normalized trace of the matrix $A \in \mathbb{M}_{n}$. Let $A,B \in \mathbb{M}_m$. We write $A \preceq B$ if $A$ and $B$ are Hermitian matrices and $B-A$ is positive semi-definite. The relation $\preceq$ is a partial order, known as the Loewner partial order. The following lemma is used to prove the finiteness condition for the inverse moments of a compound Wishart matrix. \[lem:partila\_order\_loewner\] Suppose $A,B$ are two $m \times m$ Hermitian matrices. Let $\sigma_1(A) \leq \sigma_2(A) \leq \dots \leq \sigma_m(A)$ and $\sigma_1(B) \leq \sigma_2(B) \leq \dots \leq \sigma_m(B)$ be the ordered eigenvalues of the matrices $A$ and $B$, respectively. If $A \preceq B$, then 1. $S^*AS \preceq S^*BS$ for $S \in \mathbb{M}_{m,n}$. 2. $\sigma_i(A) \leq \sigma_{i}(B) $ for every $i = 1,\dots,m$. The following lemma, for a positive random variable, assists in finding the condition for the finiteness of inverse moments. \[positive\_rv\] Let $Z$ be a positive random variable and let $g(z)$ be a differentiable function of $z$.
If there is a non-negative real number $a$ such that $\mathbb{P}(Z \geq a) = 1$, then $$\begin{aligned} \displaystyle \mathbb{E}\{g(Z)\} = g(a) + \int_{a}^{\infty} g'(z) \mathbb{P}(Z>z) dz.\end{aligned}$$ ### Inverse moments of a $(m,n,\beta)$-Laguerre matrix Now, we present the necessary and sufficient condition for $(m,n,\beta)$-Laguerre matrices to have finite inverse moments. \[thm:beta\_moment\] Let $\beta >0$ and let $S$ be a $n \times n$ $(m,n,\beta)$-Laguerre matrix. Then for integer $c > 0$, we have $$\begin{aligned} \displaystyle \mathbb{E}\{{ \rm Tr }(S^{-c})\} \text{\ is finite if and only if \ } c < (m-n+1)\beta/{2}.\end{aligned}$$ For $c > 0$, we have $$\begin{aligned} {\lambda_{1}^{-c}} \ & \ \leq \ \sum_{i=1}^{n} {\lambda_{i}^{-c}} \ \leq \ n{\lambda_{1}^{-c}}, \ \end{aligned}$$ so it follows that $$\begin{aligned} \displaystyle \hspace{2cm} \mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} \ & \ \leq \ \mathbb{E} \left\{ Tr(S^{-c}) \right\} \ \leq \ n \mathbb{E}\left\{{\lambda_{1}^{-c}} \right\}. \label{eqn:trace_ev}\end{aligned}$$ From these inequalities, we see that $\mathbb{E}\{\allowbreak Tr(S^{-c}) \} $ is finite if and only if $\mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} $ is finite. Thus, it is sufficient to find the necessary and sufficient condition for the finiteness of $\mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\}$. Fix a small $\delta >0 $. By applying Lemma \[positive\_rv\] to the positive random variable $\lambda_{1}^{-1}$ with $g(z) = z^{c}$ and $a = 0$, we get $$\begin{aligned} \displaystyle \mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} \ = \ & \ \int_{0}^{\infty} c t^{c-1} \mathbb{P}\left({\lambda_1^{-1}} > t \right) dt \ \\ \displaystyle \ = \ & \ \displaystyle c \ \int_{0}^{\infty} t^{c-1} \mathbb{P}\left(\lambda_1 < {t^{-1}} \right) dt \ \\ \displaystyle \ = \ & \ \displaystyle c \ \int_{0}^{\infty} w^{-c-1} \mathbb{P}\left(\lambda_1 < w \right) dw \text{\ \ \ \ \ \ \ \ (substituting $w = 1/t$)} \ \\ \displaystyle \ = \ & \ \displaystyle c \ \int_{0}^{\delta} w^{-c-1} \mathbb{P}\left(\lambda_1 < w \right) dw + c \ \int_{\delta}^{\infty} w^{-c-1} \mathbb{P}\left(\lambda_1 < w \right) dw.\end{aligned}$$ So, we have \[eqn:bound\] 1. $\displaystyle c \ \int_{0}^{\delta} w^{-c-1} \mathbb{P}\left(\lambda_1 < w \right) dw \ \leq \ \mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} $ and, 2. $ \displaystyle \mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} \ \leq \ c \ \int_{0}^{\delta} w^{-c-1} \mathbb{P}\left(\lambda_1 < w \right) dw \ + \ c \ \int_{\delta}^{\infty} w^{-c-1} dw.
$ Consider the inequality (i). By Lemma \[lem:gap\_prob\] we have $$\begin{aligned} \displaystyle \mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} \ \geq \ & \ c \int_{0}^{\delta} w^{-c-1} \mathbb{P}\left(\lambda_1 < {w} \right) \ dw \ \\ \displaystyle \approx \ & \ c \ C^{\beta}_{m,n}\int_{0}^{\delta} w^{-c-1} w^{\alpha} \ dw \\ \displaystyle = \ & \ \infty \ \ \ \text { whenever \ } (\alpha - c) \leq 0.\end{aligned}$$ Using Lemma \[lem:gap\_prob\] in the inequality (ii), $$\begin{aligned} \displaystyle \mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} \ \leq \ & \ c \ \int_{0}^{\delta} w^{-c-1} \mathbb{P}\left(\lambda_1 < w \right) \ dw \ + \ c \ \int_{\delta}^{\infty} w^{-c-1} \ dw \ \\ \displaystyle \approx \ & \ c \ C^{\beta}_{m,n} \ \int_{0}^{\delta} w^{-c-1} w^{\alpha} dw \ + \ c \ \int_{\delta}^{\infty} w^{-c-1} dw \ \\ \displaystyle \ < \ & \ \infty \ \ \ \text { whenever \ } {(\alpha - c )} > 0.\end{aligned}$$ This implies that $\displaystyle \hspace{.1cm} \mathbb{E} \left\{ {\lambda_{1}^{-c}} \right\} $ is finite if and only if $c < \alpha$. Thus, $\mathbb{E}\left\{ Tr(S^{-c}) \right\}$ is finite if and only if $c < (m-n+1)\beta/2$. ### Inverse moments of a compound Wishart matrix {#subsec:Inverse moments of a compound Wishart matrix} A simple but interesting consequence of Theorem \[thm:beta\_moment\] is the following necessary and sufficient condition for the finiteness of inverse moments of compound Wishart matrices. \[thm:compound\_moment\] Let $Q$ be a $n \times n$ non-degenerate complex compound Wishart matrix. For $c > 0$, $$\begin{aligned} \mathbb{E}\{{ \rm Tr}(Q^{-c})\} \text{\ is finite if and only if \ } c < (m-n+1).\end{aligned}$$ Since $Q$ is a $n \times n$ complex compound Wishart matrix, it follows from Definition \[def:compound\_wishart\] that $Q$ has the same distribution as $ A^*DA$, where $A$ is a complex random matrix with i.i.d. entries from a standard complex Gaussian distribution. We may regard $ P_2 = A^{*}A $ as a $(m,n,\beta)$-Laguerre matrix $S$ with $\beta =2$. It follows from Lemma \[lem:partila\_order\_loewner\] that $\xi_1 S \preceq Q \preceq \xi_m S$ because $\xi_1 I \preceq D \preceq \xi_mI$, where $I$ is the $m \times m$ identity matrix. Let $0 < \mu_1\leq \mu_2\leq \dots\leq \mu_n$ be the ordered eigenvalues of $Q$. It then follows from part $(ii)$ of Lemma \[lem:partila\_order\_loewner\] that $$\begin{aligned} \xi_1 \lambda_1 \ \leq \ \ \mu_1 \leq \ & \ \xi_{m} \lambda_1.\end{aligned}$$ This implies that for $c > 0$, we have $$\xi_m^{-c} \, \mathbb{E}\{\lambda_1^{-c}\} \leq \mathbb{E}\{\mu_1^{-c}\} \leq \ \ \xi_1^{-c} \, \mathbb{E}\{\lambda_1^{-c} \}.$$ It follows from the proof of Theorem \[thm:beta\_moment\] with $\beta =2$ that $\mathbb{E}\{\mu_1^{-c}\}$ is finite if and only if $c < (m-n+1)$. Revisiting the initial part of the proof of Theorem \[thm:beta\_moment\], the inverse moments of a random matrix with positive eigenvalues are finite if and only if the inverse moments of its smallest eigenvalue are finite. So, $\mathbb{E}\{{ \rm Tr}(Q^{-c})\}$ is finite if and only if $c < (m-n+1)$. We have a similar result for real compound Wishart matrices, stated as the following remark. The $c$-th inverse moment of a real compound Wishart matrix, defined analogously to the complex compound Wishart matrix in Definition \[def:compound\_wishart\], is finite if and only if $c < (m-n+1)/2$.
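The gap-probability estimate and the finiteness condition above are easy to probe numerically. The following Python sketch is not part of the thesis: it relies on the standard realisation, for $\beta = 1$, of a $(m,n,\beta)$-Laguerre matrix as $A^{T}A$ with $A$ an $m \times n$ matrix of i.i.d. standard Gaussian entries (consistent with the eigenvalue density used above), and all variable names and parameter choices are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, beta = 5, 4, 1                  # beta = 1: real Wishart realisation
alpha = (m - n + 1) * beta / 2        # here alpha = 1
trials = 100_000

# Sample S = A^T A with A an (m x n) matrix of iid N(0,1) entries.
A = rng.standard_normal((trials, m, n))
S = np.swapaxes(A, 1, 2) @ A
lam_min = np.linalg.eigvalsh(S)[:, 0]          # smallest eigenvalue of each sample

# Lemma [lem:gap_prob]: P(lambda_1 < a) ~ C a^alpha as a -> 0,
# so the ratio P(lambda_1 < a) / a^alpha should stabilise for small a.
for a in (0.4, 0.2, 0.1, 0.05):
    p = (lam_min < a).mean()
    print(f"a = {a:4.2f}   P(lam_1 < a) = {p:.5f}   ratio = {p / a**alpha:.3f}")

# Theorem [thm:beta_moment]: E[Tr(S^{-c})] < infinity iff c < alpha.
# A crude symptom: for c >= alpha the sample average of lam_1^{-c} is dominated
# by a few tiny eigenvalues and keeps growing as more samples are drawn.
for c in (0.5, 1.0, 1.5):
    print(f"c = {c}:   sample mean of lam_1^(-c) = {np.mean(lam_min ** (-c)):.2f}")
```

With the parameters assumed above, $\alpha = 1$, so only the case $c = 0.5 < \alpha$ corresponds to a finite inverse moment.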
Some remarks {#sec:Some remarks} ------------ Lemma \[lem:gap\_prob\] estimates the probability that the smallest eigenvalue of a $(m,n,\beta)$-Laguerre matrix lies in a neighborhood of zero. Our main results, Theorem \[thm:beta\_moment\] and Theorem \[thm:compound\_moment\], give a necessary and sufficient condition for the finiteness of the inverse moments of the smallest eigenvalue of a $(m,n,\beta)$-Laguerre matrix and of a compound Wishart matrix, respectively. These results may find use wherever the inverse moments of the smallest eigenvalue are relevant. The existence of inverse moments of a $(m,n,\beta)$-Laguerre matrix is equivalent to the existence of inverse moments of its smallest eigenvalue. We summarize our result for a $(m,n,\beta)$-Laguerre matrix $S$ as: *$\mathbb{E}\{Tr(S^{-c})\} \allowbreak < \infty $ if and only if $c < (m-n+1)\beta /2 $; that is, the orders $c$ of the finite integer inverse moments of a $(m,n,\beta)$-Laguerre matrix are exactly those in the interval $ \left(0, (m-n+1)\beta/2 \right)$.* We studied the compound Wishart matrix for the values $\beta =1$ and $2$. Since compound Wishart matrices generalize Wishart matrices, the finiteness condition for Wishart matrices follows as a special case. The $c$-th inverse moment of a compound Wishart matrix exists if and only if $c < (m-n+1)\beta /2 $, and thus all the finite integer inverse moments have orders in $\displaystyle \left(0, (m-n+1)\beta/2 \right)$. Recently, the inverse moments of $(m,n,\beta)$-Laguerre matrices have been studied in [@Mezzadri_Reynolds_Winn_2017]. Our results are consistent with, and complement, the other results on the inverse moments of $(m,n,\beta)$-Laguerre matrices and compound Wishart matrices in [@Collins_Matsumoto_Saad_2014; @Letac_Massam_2004; @Matsumoto_2012; @Mezzadri_Reynolds_Winn_2017]. We expect that our results can be extended to more general matrix models involving Wishart matrices and leave this for future work. In the general $\beta$ case, there is no well-defined notion of a compound Wishart matrix, which explains why we restricted attention to the compound Wishart case with $\beta = 1$ and $2$. Dimension of a Metric {#chap:Dimension of a metric} ===================== In this chapter, we discuss various properties of the dimension of a metric space in the sense of Nagata and Preiss. We also give a detailed proof of the necessity part of Preiss' result. Sigma-finite metric dimension {#sec:Sigma-finite metric dimension} ----------------------------- David Preiss introduced the notion of sigma-finite metric dimension in his article [@Preiss_1983] in order to describe the metric spaces having the strong Lebesgue-Besicovitch differentiation property. Throughout the remaining sections of this thesis, $\beta \geq 1$ denotes an integer and $(\Omega,\rho)$ denotes a metric space. \[sigma\_finite\] A subset $Q$ of $\Omega$ has *metric dimension* $\beta$ on a scale $s \in(0,+\infty) $ in $\Omega$ if for every finite set $F = \{ x_1,\ldots,x_m \} \subseteq Q$ with $m > \beta$ and all radii $r_1,\ldots,r_m \in (0,s)$ satisfying, for $i \neq j $, $x_i \notin \bar{B}(x_j,r_j)$ and $x_j \notin \bar{B}(x_i,r_i)$, every $x \in \Omega$ satisfies $$\begin{aligned} \sum_{x_i \in F} \chi_{_{\bar{B}(x_i,r_i)}}(x) \leq \beta.\end{aligned}$$ More generally, we consider families of closed balls satisfying the property stated in the following definition.
Let $I$ be an index set and $\mathcal{F}= \{ \bar{B}(x_i,r_i): i \in I\}$ be any family of closed balls. Then, $\mathcal{F}$ is called an *unconnected* family if for every $i\neq j$ in $I$, $x_i$ does not belong to $\bar{B}(x_j,r_j)$ and vice-versa. That is, the distance between $x_i$ and $x_j$ is strictly greater than both $r_i$ and $r_j$. (Figure \[fig:finite\_dimension\] appears here: families of closed balls, one of them centred on the real line.) In other words, $Q$ has metric dimension $\beta$ on scale $s$ if every element of $\Omega$ can belong to at most $\beta$ closed balls in an unconnected family of closed balls having centers in $Q$ and radii bounded by the scale $s$. The value $\beta$ is the smallest possible integer and $s$ is the largest possible positive real number satisfying the above property. A metric space is called metrically finite dimensional if there is a pair $\beta < \infty$ and $0< s < \infty$ satisfying Definition \[sigma\_finite\]; we then simply say that the space has metric dimension $\beta$ on scale $s$. The real line has metric dimension 2, as illustrated in figure \[fig:finite\_dimension\]. The space $(\Omega,\rho)$ is [*metrically sigma-finite dimensional*]{} if there is a sequence of subsets $Q_1,Q_2,\ldots$ of $\Omega$ such that $\Omega = \bigcup_{i=1}^{\infty} Q_{i}$ and each $Q_i$ has finite metric dimension in $\Omega$. Note that in the above definition the sets $Q_i$ may have different metric dimensions. The definition of metric dimension trivially implies that any metric space with a finite number of elements has metric dimension equal to its cardinality on any scale. Therefore, every metric space with countably many elements is metrically sigma-finite dimensional. Similarly, a metric space with the 0-1 metric (refer to \[app:discrete\_metric\]) has metric dimension 1 on any scale $s \in (0,1]$. Moreover, a finite union of metric spaces having finite metric dimension also has finite metric dimension. \[lem:finite\_union\_fd\] Suppose that $Q_1, Q_2 \subseteq \Omega$ have metric dimension $\beta_1$ and $\beta_2$ on scale $s_1$ and $s_2$ in $\Omega$, respectively. Then $Q_1 \cup Q_2$ has metric dimension $ \beta_1 + \beta_2 $ on scale $s = \min\{s_1,s_2\}$ in $\Omega$. Consider a finite set $F = F_1 \cup F_2 \subseteq Q_1 \cup Q_2$ of cardinality $ > \beta_1 + \beta_2$, where $F_1 \subseteq Q_1$ and $F_2 \subseteq Q_2$. This implies that either $F_1$ has cardinality $> \beta_1$ or $F_2$ has cardinality $> \beta_2$. Since $s$ is the minimum of $s_1,s_2$ and $Q_1,Q_2$ are metrically finite dimensional, it follows from Definition \[sigma\_finite\] that $Q_1 \cup Q_2$ has metric dimension $\beta_1 + \beta_2$ on scale $s$. However, a countable union of metrically finite dimensional spaces may not have finite metric dimension, which is why we need the concept of sigma-finite metric dimension. The following is an example of a metric space which does not have finite metric dimension but is metrically sigma-finite dimensional.
Let $\Omega = \{ x_1,x_2,x_3,\ldots \}$ and define a function on $\Omega$, $$\begin{aligned} \rho(x_n,x_m) \ = \ \begin{cases} 1/n +1/m \ & \text{ if } n \neq m \ \\ 0 \ & \text{ otherwise } \end{cases}\end{aligned}$$ Then $\rho$ is a metric, as $\rho(x_n,x_m) = 0$ if and only if $n = m$. Let $Q_l = \{x_1,\ldots,x_l\}$, so that $\Omega$ is the union of all $Q_l, l\in \mathbb{N}$. For a fixed $l$, $Q_l$ is a finite set and so has metric dimension $l$ on any scale $s'>0$ in $\Omega$. Let $\beta \geq 1$ be an integer and $s$ be a positive real number. Choose $l \in \mathbb{N}$ large enough that $s > 2/l $ and pick $\beta + 1$ elements, $F = \{ x_{l+1},\ldots,x_{l+\beta+1} \}$, from $\Omega$. For $l+1\leq i \leq {l+\beta+1}$, set $r_i = \rho(x_i,x_{l+\beta+2})$; then $r_i < 2/l < s$ and we have $\rho(x_i,x_{j}) > \max\{r_i,r_j\}$. This means that no closed ball in $\mathcal{F} = \{ \bar{B}(x_i,r_i): x_i \in F \}$ contains the center of any other ball in $\mathcal{F}$, hence $\mathcal{F}$ is an unconnected family. However, $x_{l+\beta+2}$ belongs to every ball in $\mathcal{F}$ and so has multiplicity $\beta+1$ in $\mathcal{F}$. Therefore, $\Omega$ does not have finite metric dimension, but it is the countable union of the sets $Q_l$, each of finite metric dimension; hence $\Omega$ is metrically sigma-finite dimensional. We make the following important remark about subsets inheriting the metric dimension. \[rem:subset\_fd\] Let $(\Omega,\rho)$ be a metric space and suppose $Q_1\subseteq Q_2 \subseteq \Omega$. If $Q_2$ is metrically finite dimensional in $\Omega$, then $Q_1$ is also metrically finite dimensional in $\Omega$. Moreover, given a family of subsets $\{Q_i:i \in I\}$ of $\Omega$ such that at least one of the $Q_i$ is metrically finite dimensional in $\Omega$, the intersection $\cap_{i \in I} Q_i$ is also metrically finite dimensional in $\Omega$. Furthermore, we show that a set has finite metric dimension if and only if its closure has finite metric dimension. \[lem:fd\_closed\] If $Q \subseteq \Omega$ has metric dimension $\beta$ on scale $s$ in $\Omega$, then $\bar{Q}$ has metric dimension $\beta$ on scale $s$ in $\Omega$. Let $F = \{x_1,\ldots,x_m\} \subseteq \bar{Q}$ with $m>\beta$ and let $\{r_1,\ldots,r_m\} \subseteq (0,s)$ be such that for all distinct $1\leq i,j \leq m$, $\rho(x_i,x_j)>\max\{r_i,r_j\}$. Then $\mathcal{F} = \{\bar{B}(x_i,r_i): r_i <s, 1 \leq i \leq m\}$ is an unconnected family of closed balls. It is sufficient to show that the multiplicity of $\mathcal{F}$ is at most $\beta$. Let $\rho(x_1,x_j) = k_j > \max\{r_1, r_j\}$. Choose $\varepsilon_1 > 0$ such that for all $j \neq 1$, $k_j - 2\varepsilon_1 > \max(r_1, r_j)$ and $r_1 + \varepsilon_1<s$. Let $y_1 \in Q$ be such that $\rho(y_1, x_1) < \varepsilon_1$. It is clear that $\rho(x_j, \bar{B}(x_1,\varepsilon_1)) \geq k_j - \varepsilon_1$ and so $\rho(x_j, y_1) \geq k_j - \varepsilon_1 > r_1 + \varepsilon_1$ for all $j \neq 1$. Thus, we have the ball $\bar{B}(y_1, r_1 + \varepsilon_1)$, which contains $\bar{B}(x_1,r_1)$ but not any other $x_j$, $j \neq 1$. Moreover, note that $y_1 \notin \bar{B}(x_j,r_j)$ for all $j \neq 1$. Replace $x_1$ and $r_1$ by $y_1$ and $r_1 + \varepsilon_1$ in the original set $F$ to form a new set $F'$, and apply the same technique to $x_2$ but for $F'$. Then, we have $\bar{B}(y_2, r_2 + \varepsilon_2)$, where $y_2 \in Q$ and $r_2 + \varepsilon_2 < s$ with $\varepsilon_2 > 0$, such that $\bar{B}(x_2,r_2)$ is contained in it and none of the points in $F \setminus \{x_2\}$ belongs to the ball.
Proceeding in the same way for the remaining $m-2$ points $\{x_3, x_4, \ldots, x_m\}$, we obtain a new family of closed balls $\mathcal{F'} = \{\bar{B}(y_i,r_i + \varepsilon_i): 1\leq i \leq m\}$, where $y_i \in Q$ and $r_i + \varepsilon_i < s$ with $\varepsilon_i > 0$ for all $1\leq i \leq m$, such that $y_j \notin \bar{B}(y_i,r_i +\varepsilon_i)$ whenever $i \neq j$ and $\bar{B}(x_i, r_i) \subseteq \bar{B}(y_i,r_i + \varepsilon_i)$ for all $1\leq i \leq m$. Since $Q$ has metric dimension $\beta$ on scale $s$ and $\mathcal{F'}$ is an unconnected family of closed balls with centers in $Q$ and radii less than $s$, every $x \in \Omega$ belongs to at most $\beta$ balls in $\mathcal{F'}$; since $\bar{B}(x_i, r_i) \subseteq \bar{B}(y_i,r_i + \varepsilon_i)$, the multiplicity of $\mathcal{F}$ is also at most $\beta$. It can also happen that a set $Q$ is metrically finite dimensional in itself while failing to be metrically finite dimensional in a super-set $\Omega$ containing $Q$, as the following construction shows. Let $Q$ be an infinite set and let $(Q \cup \{a^*\},\rho')$ be a metric space, where $a^*$ is not an element of $Q$, such that the distance $\rho'(a,b)$ is 1 if $a,b$ are distinct elements of $Q$, $1/2$ if exactly one of $a,b$ is $a^*$, and 0 otherwise. Now, we consider disjoint copies of $Q \cup \{a^*\}$. For $n \in \mathbb{N}$, set $Q_n = (Q \cup \{a^*\},n)$ and let the metric on $Q_n$ be $\rho_n$. The distance between any two elements $a_n = (a,n), b_n = (b,n)$ of $Q_n$ is $ \rho'(a,b)/ n$, where $a,b \in Q \cup \{a^*\}$. Since $\rho'$ is a metric, so is $\rho_n$ for each $n$. Let $\Omega = \cup_{n \in \mathbb{N}} Q_n$ and define a function $\rho$ on $\Omega$ by setting, for $a_n,b_m \in \Omega$, $\rho(a_n,b_m) = \rho_n(a,b)$ if $n = m$, and 1 otherwise. We have $\rho(a_n,b_m) = 0 \Leftrightarrow n = m \text{ and } \rho'(a,b) = 0 \Leftrightarrow a_n = b_m$, and for $a_n,b_m,c_{p} \in \Omega$, $\rho(a_n,c_p) + \rho(c_p,b_m) \geq \rho(a_n,b_m)$. Therefore, $\rho$ is a metric. It is evident that $Q$ has metric dimension $1$ on any scale $s \in (0,1]$ in $Q$ with respect to $\rho'$. *$Q$ is not metrically finite dimensional in $\Omega$*: Let $\beta$ be any positive integer and suppose that $s \in (0,1]$. We choose $m \in \mathbb{N}$ large enough so that $1/m < s$, and consider a family of closed balls in $Q_m$, $\mathcal{F} = \{ \bar{B}(a^i_{m},1/2m) : a^i_m = (a^i,m) \in Q_m, a^i \in Q, 1 \leq i \leq \beta +1\}$. For $i \neq j$, $\rho(a^i_m,a^j_m) = \rho'(a^i, a^j)/m = 1/m$, so no ball in $\mathcal{F}$ contains the center of another ball in $\mathcal{F}$, and $\mathcal{F}$ forms an unconnected family. The closed balls are taken in $\Omega$: $\bar{B}(a^i_m,1/2m) = \{ b_n \in \Omega : \rho(a^i_m,b_n) \leq 1/2m \}$, and $1/2m < 1/m < s \leq 1$. As the radius of every ball in $\mathcal{F}$ is at most $1/2m < 1$, any such $b_n$ lies in $Q_m$, and $\bar{B}(a^i_m,1/2m) = \{ a^i_m,a^{*}_m\}$ for $1 \leq i \leq \beta+1$. This means that the multiplicity of $a^{*}_m$ in $\mathcal{F}$ is $\beta + 1 > \beta$, and because the centers of the balls in $\mathcal{F}$ lie in (the copy of) $Q$, $Q$ is not metrically finite dimensional in $\Omega$. For the other case, when $s > 1$, we choose $m > 1$ large enough such that $1/m < s$; the rest of the argument is the same as for $s \leq 1$. The following characterization of a metrically sigma-finite dimensional space is important, so we state it as a remark. \[rem:fd\_closed\] Suppose $\Omega = \cup_{i=1}^{\infty} A_i$, where each $A_i$ has finite metric dimension $\beta_i$ on scale $s_i$. Let $Q_l = \cup_{i=1}^{l} A_{i}$, so that the sets $\{Q_l\}_{l \in \mathbb{N}}$ form an increasing chain.
Since, for a fixed $l$, $Q_{l}$ is a finite union of metrically finite dimensional spaces, by Lemma \[lem:finite\_union\_fd\] $Q_l$ has metric dimension $\beta_1 + \ldots + \beta_l$ on scale $s = \min\{s_i : 1 \leq i \leq l\}$ in $\Omega$. Without loss of generality, we can assume each $Q_l$ is closed (by Lemma \[lem:fd\_closed\]). Therefore, $\Omega = \cup_{l=1}^{\infty} Q_l$, where each $Q_l$ is closed and has finite metric dimension. Now, we present an important result about complete metric spaces: every metrically sigma-finite dimensional complete metric space contains a non-empty open set which is metrically finite dimensional. \[prop:subset\_open\] Suppose $(\Omega,\rho)$ is a complete metric space and is metrically sigma-finite dimensional. Then there is a non-empty open set which is metrically finite dimensional in $\Omega$. There is a sequence of metrically finite dimensional subsets $(Q_i)_{i \in \mathbb{N}}$ such that $\Omega = \cup_{i \in \mathbb{N}} Q_i$. By Lemma \[lem:fd\_closed\], we can assume each $Q_i $ is closed. It follows from the Baire Category Theorem [@Engelking_1989] that at least one of the $Q_i$ has non-empty interior. So there is a non-empty open ball contained in $Q_i$, which is metrically finite dimensional in $\Omega$ (by Remark \[rem:subset\_fd\]). Assouad and Gromard [@Assouad_Gromard_2006] have studied the dimension of a metric in a more general setting; for example, they considered families of balls (open or closed) in a semimetric space [^1]. A related yet somewhat different notion, called Nagata dimension, is presented in the following section. (Figure \[fig:nagata\_dimension\] appears here: points $x_1$, $a$, $x_2$, $x_3$ marked on the real line.) Nagata dimension {#sec:Nagata dimension} ---------------- This section is based on an important paper by Assouad and Gromard [@Assouad_Gromard_2006]. The essence of their paper is the generalization of many dimension-related concepts to general metric spaces. Since the paper is written in French, its results are less accessible to researchers who do not read French; here, we present in English some of the dimension-related concepts from their article. In general, Nagata dimension is defined somewhat differently in dimension theory, but we follow the same conventions as the paper by Assouad and Gromard. We also note that the Nagata dimension was first introduced by Nagata and was later modified by Preiss, who introduced the scale $s$ to define the metric dimension. In this thesis, our major focus is finite metric dimension in the sense of Preiss, as it will become clear after this section that a metric space with finite Nagata dimension has finite metric dimension for all scales, but the converse statement is not true. \[def:nagata\_dim\] A subset $Q$ of $\Omega$ has [*Nagata dimension*]{} $\beta-1 \geq 0$ in $\Omega$ if for every set of $\beta +1$ elements $x_1,\ldots,x_{\beta+1}$ in $Q$ and every $a \in \Omega$, there exist distinct $i,j$ in $\{1,\ldots,\beta+1\}$ such that $\rho(x_i,x_j) \leq \max\{ \rho(a,x_i),\rho(a,x_j) \}$. A space with the 0-1 metric has Nagata dimension zero, whereas it has metric dimension 1.
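As a quick computational illustration (not part of the thesis; the brute-force check and all names below are ours), the following Python sketch probes Definition \[sigma\_finite\] for the real line with its usual metric: it samples random unconnected families of closed balls (intervals) and records the largest multiplicity found, which should never exceed $2$, in line with the earlier statement that the real line has metric dimension 2.

```python
import random

def unconnected(centers, radii):
    """Unconnected family: no center lies in another closed ball."""
    return all(abs(centers[i] - centers[j]) > max(radii[i], radii[j])
               for i in range(len(centers)) for j in range(i + 1, len(centers)))

def multiplicity(point, centers, radii):
    """Number of closed balls [c - r, c + r] containing `point`."""
    return sum(abs(point - c) <= r for c, r in zip(centers, radii))

random.seed(0)
worst = 0
for _ in range(20_000):
    k = random.randint(2, 6)
    centers = [random.uniform(0.0, 10.0) for _ in range(k)]
    radii = [random.uniform(0.0, 3.0) for _ in range(k)]
    if not unconnected(centers, radii):
        continue
    # Probe multiplicities at centers, ball endpoints and a few random points.
    probes = centers + [c + s * r for c, r in zip(centers, radii) for s in (-1, 1)]
    probes += [random.uniform(-5.0, 15.0) for _ in range(20)]
    worst = max(worst, max(multiplicity(p, centers, radii) for p in probes))

print("largest multiplicity observed:", worst)   # prints 2, never 3 or more
```

The same brute-force style of check can be adapted to the Nagata condition of Definition \[def:nagata\_dim\] by dropping the bound on the radii.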
The real line with the usual metric has Nagata dimension one (see figure \[fig:nagata\_dimension\]). The following proposition follows immediately from the definition of Nagata dimension. \[cor:archi\_Nagata\] An ultrametric space has Nagata dimension zero. Let $(\Omega,\rho)$ be an ultrametric space and let $Q \subseteq \Omega$. Then, by the strong triangle inequality for $\rho$, for any $x_1,x_2 \in Q$ and $a \in \Omega$, we have $\rho(x_1,x_2) \leq \max\{ \rho(a,x_1),\rho(a,x_2) \}$. (Figure \[fig:ball-covering\_dimension\] appears here: a family of closed balls together with an extracted subfamily.) We say $\Omega$ has *sigma-finite Nagata dimension* if $\Omega$ can be written as a countable union of subsets having finite Nagata dimension in $\Omega$. Another interesting notion for a family of balls is the ball-covering dimension. A subset $Q$ of $\Omega$ is said to have [*ball-covering dimension*]{} $\beta$ in $\Omega$ if for every countable family $\mathcal{F}$ of closed balls with centers in $Q$, there exists a subfamily $\mathcal{F}' \subseteq \mathcal{F}$ such that the center of every ball in $\mathcal{F}$ belongs to some ball in $\mathcal{F}'$ and the multiplicity of $\mathcal{F}'$ is at most $\beta$ in $\Omega$. The ball-covering dimension of a family of balls is illustrated in figure \[fig:ball-covering\_dimension\]. Assouad and Gromard proved that the Nagata dimension plus one equals the ball-covering dimension of the space. \[lem:wcp\_nd\] The following are equivalent: (i) $Q$ has Nagata dimension $\beta-1$ in $\Omega$. (ii) An unconnected family of closed balls with centers in $Q$ has multiplicity at most $\beta$ in $\Omega$. (iii) $Q$ has ball-covering dimension $\beta$ in $\Omega$. $(i) \Rightarrow (ii)$: Let $I$ be an index set and consider an unconnected family of balls, $\mathcal{F} = \{ \bar{B}(a_i,r_i):a_i \in Q, i \in I\}$, such that for $i \neq j$, $a_i \notin \bar{B}(a_j,r_j)$ and $a_j \notin \bar{B}(a_i,r_i)$. Let $a \in \Omega$ and suppose that every ball in $\{ \bar{B}(a_i,r_i):a_i \in Q, i \in J \subseteq I\}$ contains $a$. Since the balls come from an unconnected family, $\max\{\rho(a_i,a),\rho(a_j,a) \}< \rho(a_i,a_j)$, but $Q$ has Nagata dimension $\beta-1$, which implies that $\sharp J \leq \beta$. Therefore, $\mathcal{F}$ has multiplicity at most $\beta$ in $\Omega$. $(ii) \Rightarrow (iii)$: Consider a family of closed balls $\mathcal{F} = \{ \bar{B}(a_i,r_i):a_i \in Q, i \in \mathbb{N}\}$. Choose $I_1$ as a maximal set of indices in $\mathbb{N}$ such that $\rho(a_i,a_j) > \max\{r_i,r_j\}$ for all $i \neq j$ in $I_1$, and let $\mathcal{F}_1=\{\bar{B}(a_i,r_i):i\in I_1\}$. Again choose $I_2$ to be a maximal set of indices $i \in \mathbb{N} \setminus I_1$ such that $a_i$ does not belong to any ball of $\mathcal{F}_1$ and, for all $i \neq j$ in $I_2$, $\rho(a_i,a_j) > \max\{r_i,r_j\}$, and let $\mathcal{F}_2=\{\bar{B}(a_i,r_i):i\in I_2\}$. Continuing these steps until we exhaust $\mathcal{F}$ gives $I = \cup_{p=1}^{\infty}I_{p}$ and $\mathcal{F}'= \{ \bar{B}(a_i,r_i) : i \in I \}$ such that the center of every ball in $\mathcal{F}$ belongs to some ball in $\mathcal{F}'$.
Since $\mathcal{F}'$ is an unconnected family, by $(ii)$ any $x \in \Omega$ belongs to at most $\beta$ balls in $\mathcal{F}'$. $(iii) \Rightarrow (i)$: Suppose $Q$ does not have Nagata dimension $\beta -1$ in $\Omega$. Then, there exists a set of $\beta+1$ elements, $x_1,\ldots,x_{\beta+1} \in Q$, and $a \in \Omega$ such that for every $i \neq j$, $\rho(x_i,x_j) > \max\{\rho(a,x_i),\rho(a,x_j)\}$. This implies that $\mathcal{F}=\{\bar{B}(x_i,\rho(x_i,a)):1\leq i \leq \beta+1\}$ is an unconnected family of closed balls and hence no proper sub-family of $\mathcal{F}$ contains every center $x_i$. Also, $a$ belongs to all $\beta+1$ balls, so $\mathcal{F}$ has multiplicity more than $\beta$. Hence, $Q$ cannot have ball-covering dimension $\beta$. Lemma \[lem:wcp\_nd\] suggests that the metric dimension in the sense of Preiss and the Nagata dimension are closely related. To determine the metric dimension we consider unconnected families of closed balls whose radii are bounded by the scale $s$, whereas for the ball-covering dimension or the Nagata dimension there is no restriction on the radii. Because of the scale $s$, there is a subtle difference between the metric dimension and the Nagata dimension. Let us look at an example. \[ex:not\_nagata\_fd\] Let $\mathbb{R}$ be equipped with the metric $$\begin{aligned} \rho(x,y) \ = \ \begin{cases} 0 \ & \ \text{if \ \ } x = y \ \\ 1/2 \ & \ \text{if \ \ } x \neq y \ \& \ xy = 0\ \\ 1 \ & \ \text{otherwise} \ \end{cases}\end{aligned}$$ Let $Q = (0,1)$. Then (i) *$Q$ has infinite Nagata dimension in $\mathbb{R}$.* Let $a = 0$ and $x_i = 3^{-i}$ for $i \in \mathbb{N}$. For all $i \neq j$, $\rho(x_i,x_j) = 1 > \max\{\rho(0,x_i), \rho(0,x_j)\} = 1/2$. Hence no finite $\beta-1$ can satisfy Definition \[def:nagata\_dim\], and so $Q$ has infinite Nagata dimension. (ii) *$Q$ has finite metric dimension in $\mathbb{R}$.* Take $\beta = 1$ and any $s \in (0,1/2]$. For every finite set $F \subseteq Q$ with cardinality $>1$ and radii $r_{x} \in (0,s)$, each closed ball satisfies $\bar{B}(x,r_{x})= \{x\}$, since all distances from $x \in Q$ to other points are at least $1/2$; hence the multiplicity of any real number is at most $1$. It is easy to see that if a set $Q$ has Nagata dimension $\beta-1$ in $\Omega$, then it has metric dimension $\beta$ for all scales $s$, in the sense of Preiss. However, Example \[ex:not\_nagata\_fd\] shows that a set can be metrically finite dimensional on one scale and not on another; in particular, not every metrically finite dimensional space has finite Nagata dimension. We can tweak the notion of Nagata dimension to define a new concept, called ‘weak Nagata dimension’, which is equivalent to the metric dimension. \[def:weak\_nagata\] A subset $Q$ of $\Omega$ has *weak Nagata dimension* $\beta-1$ on scale $s>0$ in $\Omega$ if for every $a \in \Omega$ and every open ball $B(a,r)$, $r < s$, containing at least $\beta + 1$ elements $x_1,\ldots,x_{\beta+1}$ of $Q$, there exist distinct $i,j \in \{1,\ldots,\beta+1\}$ such that $\rho(x_i,x_j) \leq \max\{\rho(a,x_i),\rho(a,x_j) \}$. In a similar way, we can define the notion of weak ball-covering dimension from the ball-covering dimension, with respect to finite families of closed balls having radius $<s$. Now, we prove the equivalence of the metric dimension, the weak Nagata dimension and the weak ball-covering dimension. \[lem:restricted\_Nagata\_FD\] Let $Q$ be a subset of the metric space $(\Omega,\rho)$.
The following are equivalent: (i) $Q$ has weak Nagata dimension $\beta-1$ on scale $s$ in $\Omega$. (ii) $Q$ has metric dimension $\beta$ on scale $s$ in $\Omega$. In other words, any unconnected finite family of closed balls with centers in $Q$ and radii strictly less than $s$ has multiplicity at most $\beta$ in $\Omega$. (iii) $Q$ has weak ball-covering dimension $\beta$ on scale $s$ in $\Omega$. $(i) \Rightarrow (ii)$: Suppose $Q$ has weak Nagata dimension $\beta-1$ on the scale $s$ in $\Omega$. Let $a \in \Omega$ and consider an unconnected finite family of $m$ closed balls, all containing $a$, $\mathcal{F}=\{\bar{B}(x_i,r_i): x_i \in Q, r_i < s, x_i \notin \bar{B}(x_j,r_j), 1 \leq i \neq j \leq m\}$. Choose $\varepsilon > 0$ such that $ r = \max\{r_i: 1 \leq i \leq m\} + \varepsilon < s$. Then the open ball $B(a,r)$ contains every $x_i$. Since every closed ball $\bar{B}(x_i,r_i)$ contains $a$, for all distinct $i,j \in \{1,\ldots,m\}$ we have $\rho(x_i,x_j)> \max\{\rho(a,x_i),\rho(a,x_j)\}$. By our assumption, $m \leq \beta$. So an element $x \in \Omega$ can belong to at most $\beta$ balls in $\mathcal{F}$. $(ii) \Rightarrow (iii)$: Consider any finite family of closed balls with centers in $Q$ and radii less than $s$; we can extract an unconnected sub-family which contains every center of the original family and has multiplicity at most $\beta$. The rest of the argument is exactly the same as in the proof of $(ii) \Rightarrow (iii)$ in Lemma \[lem:wcp\_nd\], applied to finite families of closed balls. $(iii) \Rightarrow (i)$: Suppose the weak Nagata dimension of $Q$ is not $\beta-1$ on scale $s$. Then, there exist $a$ in $\Omega$ and $x_1,\ldots,x_{\beta+1} \in Q$ such that the open ball $B(a,r)$, for some $0<r<s$, contains every $x_i$ and for all $i \neq j$, $\rho(x_i,x_j) > \max\{\rho(a,x_i), \rho(a,x_j)\}$. Let $\mathcal{F} = \{ \bar{B}(x_i,\rho(a,x_i)): x_i \in Q, 1 \leq i \leq \beta+1 \} $; then for any $ 1 \leq i \neq j \leq \beta+1$, $\rho(a,x_i)<r<s$ and $\bar{B}(x_i,\rho(a,x_i))$ contains the element $a$ but does not contain $x_j$. So, $a$ belongs to all $\beta+1$ balls in $\mathcal{F}$; since $\mathcal{F}$ is unconnected, no proper sub-family contains every center, and the multiplicity of $\mathcal{F}$ exceeds $\beta$. Hence $Q$ cannot have weak ball-covering dimension $\beta$ on scale $s$. A result of Preiss says that the complete separable metric spaces satisfying the strong Lebesgue-Besicovitch differentiation property are precisely the spaces having sigma-finite metric dimension. In the following section, we present the differentiation property of a metric space. The differentiation property {#sec:The Lebesgue-Besicovitch differentiation property} ---------------------------- Let $f$ be a real-valued function on $\mathbb{R}$ that is integrable with respect to Lebesgue measure. Henri Lebesgue proved that the derivative of the integral of $f$, taken with respect to Lebesgue measure, equals $f(x)$ for almost every $x$; this result is known as the Lebesgue differentiation theorem. Later, in 1945, Abram Besicovitch extended this result to arbitrary locally finite Borel measures on Euclidean space. This extension is called the Lebesgue-Besicovitch differentiation theorem. Depending on the type of convergence, we have two notions of the differentiation property. \[def:Lebesgue-Besicovitch diff\_thm\] We say the *strong differentiation property* holds for a locally finite Borel measure $\nu$ on a metric space $(\Omega,\rho)$, if for any $f \in L_{\nu}^{1}(\Omega)$ $$\begin{aligned} \label{eqn:strong_LBDT} \lim_{r \rightarrow 0} \frac{1}{\nu(\bar{B}(x,r))} \int_{\bar{B}(x,r)} f(y) d\nu(y) & = f(x), \ \ \text{ for $\nu$ a.e. $x $}, \end{aligned}$$ where $L_{\nu}^{1}(\Omega)$ is the space of all $\nu$-integrable functions on $\Omega$.
If we replace almost everywhere convergence with convergence in measure in the equation then it is called as *weak differentiation property* for $\nu$. If $f = \chi_{ _M}$ for any measurable $M \subseteq \Omega$ , then we get a special case of differentiation property. \[def:Lebesgue-Besicovitch density\_thm\] The *strong density property* is said to hold for a locally finite Borel measure $\nu$, if for any measurable set $M \subseteq \Omega$ such that $\nu(M) < \infty$, we have $$\begin{aligned} \lim_{r \rightarrow 0 } \frac{\nu( \bar{B}(x,r) \cap M)}{\nu(\bar{B}(x,r))} = \chi_{ _M}(x) \ \ \text{ for $\nu$ a.e. $x $,} \end{aligned}$$ where $\chi_{ _M} $ is a characteristic function of $M$. In the same manner, the *weak density property* is defined with convergence in measure instead of almost everywhere convergence. To avoid ambiguity, we state the following remark regarding notations. If the differentiation or density property holds for all locally finite Borel measures on $\Omega$, then we say $\Omega$ satisfy the Lebesgue-Besicovitch differentiation or density property. [[ 9999 ]{}]{} A result by Preiss {#sec:A result by Preiss} ------------------ This section explains the necessary part of the following theorem given by Preiss. \[thm:Priess\_theorem\] Let $(\Omega, \rho)$ be a complete separable metric space. The strong Lebesgue-Besicovitch differentiation property holds for $\Omega$ if and only if $\Omega$ is metrically sigma-finite dimensional. In his short paper, Preiss did not give the proof of the above theorem as such, instead he just outlined the basic ideas of the proof in a few sentences. To work out a complete proof of sufficiency, Assouad and Gromard have written a 61-page long paper [@Assouad_Gromard_2006]. The necessity condition was never given a full proof. It is the first time that we give a detailed proof of necessity of $\Omega$ to be metrically sigma-finite dimensional in the theorem. To prove the necessary part of Theorem \[thm:Priess\_theorem\], we require several results. Firstly, we prove the following lemma about the existence of a non-empty open subset of a metrically not sigma-finite dimensional space, which does not contain any non-empty open metrically finite dimensional set. \[lem:p\_result\_1\] Let $(\Omega,\rho)$ be a complete separable metric space. Suppose $(\Omega,\rho)$ is not metrically sigma-finite dimensional, then there is a non-empty open subset $W$ of $\Omega$ such that $W$ is not metrically sigma-finite dimensional in itself and does not contain any non-empty open metrically finite dimensional subset. Suppose the collection $\mathcal{F} = \{O \subseteq \Omega: O \text{ is non-empty, open and has} \allowbreak \text{ finite metric dimension in } \Omega\}$ is non-empty (otherwise, there is nothing to prove). Let $U$ denote the union of $\mathcal{F}$. Since $U$ is both metrizable and separable, from the two lemmas \[app:para\_space\] and \[app:lindelof\], there is a countable open locally finite refinement of $\mathcal{F}$, say $\{V_j: j \in \mathbb{N}\}$ such that each $V_j$ is a subset of some $O$ in $\mathcal{F}$. So $V_j$ is metrically finite dimensional in $\Omega$ and therefore $U = \cup_{j=1}^{\infty} V_{j}$ is metrically sigma-finite dimensional in $\Omega$. It follows from the Lemma \[app:countable\_closed\] that $\bar{U} = \cup_{j=1}^{\infty} \bar{V}_{j}$. Further the Lemma \[lem:fd\_closed\] implies that each $\bar{V}_j$ is metrically finite dimensional in $\Omega$. 
Thus $\bar{U}$ is metrically sigma-finite dimensional and hence a proper subset of $\Omega$, for otherwise $\Omega$ would be metrically sigma-finite dimensional. The set $W = \Omega \setminus \bar{U}$ is a non-empty open set. Let us prove that $W$ contains no non-empty open subsets which are metrically finite dimensional in $W$. Suppose there is a non-empty open subset $Y $ of $W$ which has metric dimension $\beta$ on some scale $s>0$ in $W$. Notice that $Y$ is open in $W$ and $W$ is open in $\Omega$, so $Y$ is open in $\Omega$. Let $x \in Y$ and choose $r>0$ such that $B^{\Omega}(x,r)$ is contained in $Y$ and is metrically finite dimensional in $W$. Let $t = \min\{s,r/2\}$. We show that indeed $B^{\Omega}(x,t)$ is metrically finite dimensional on scale $t$ in $\Omega$ and get the contradiction. Let $\mathcal{G} = \{ \bar{B}^{\Omega}(x_i,r_i): x_i \in B^{\Omega}(x,t), 0 <r_i < t\}$ be any finite family of closed balls in $\Omega$ with centers in $B^{\Omega}(x,t)$ and radius bounded above by $t$. As $Y$ has finite metric dimension, there exists a subfamily $\mathcal{G}' $ which contain all $x_i$ and has multiplicity at most $\beta$ in $W$. If $a \in \bar{B}^{\Omega}(x_i,r_i)$, then by triangle’s inequality $\rho(a,x) \leq \rho(a,x_i) + \rho(x_i,x) < r$. So every $\bar{B}^{\Omega}(x_i,r_i)$ is a subset of $B^{\Omega}(x,r)$ and hence a subset of $Y$. This implies that no element of $U$ can belong to any ball in $\mathcal{G}'$ and therefore $\mathcal{G}'$ has multiplicity at most $\beta$ in $\Omega$. Thus $B^{\Omega}(x,t)$ is a non-empty open set which has metric dimension $\beta$ on scale $t$ in $\Omega$ and by the definition of $U$, $B^{\Omega}(x,t)$ must be a subset of $U$ which is not possible. As $W$ is homeomorphic to a complete metric space, from the Proposition \[prop:subset\_open\] we conclude that $W$ is not metrically sigma-finite dimensional in itself. Suppose $(\Omega,\rho)$ is a complete separable metric space. Let $\mathcal{P}_{\Omega}$ be the set of all probability measures on ${\Omega}$. By the Portmanteau theorem [@Billingsley_1999], the convergence of measures in weak topology is equivalent to the convergence of measures in the metric space $(\mathcal{P}_{\Omega}, \pi)$, where for $\mu,\nu \in \mathcal{P}_{\Omega}$, $\pi(\mu,\nu)$ is the infimum of the set $$\begin{aligned} \{\delta> 0 : \mu(A) \leq \nu(A^{\delta}) + \delta, \nu(A) \leq \mu(A^{\delta}) + \delta \ \ \forall A \in \mathcal{B}(\Omega) \}. \end{aligned}$$ Here $A^{\delta}= \cup_{x \in A} B(x,\delta)$. This metric is known as Lévy-Prokhorov metric. Note that if $\mu(A) \leq \nu(A^{\delta}) + \delta$ holds for all Borel subsets $A$ of $\Omega$ then $\pi(\mu,\nu) < \delta$ [@Billingsley_1999]. For each $n \in \mathbb{N}$, define $M_n \subseteq \mathcal{P}_{\Omega}$ as the collection of measures $\mu$ such that for each $\mu$, there exists a measure $\nu \in \mathcal{P}_{\Omega}$ such that $$\begin{aligned} \mu \{ x : \nu(\bar{B}(x,r)) \leq n \mu(\bar{B}(x,r)) \text{ for every } r < 1/n \} < 1/n.\end{aligned}$$ Let $M_n^{\circ}$ denote the interior of $M_n$. We denote by $\delta_{x}$ the Dirac measure supported at $x$, that is, $\delta_{x}(A)$ is equal to 1 if $x \in A$ and is 0 otherwise. 
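To make the Lévy-Prokhorov metric just defined more concrete, here is a small brute-force sketch in Python (not part of the proof; all function names and the example measures are ours). For probability measures supported on a common finite set, it suffices to check the two defining inequalities over the subsets of that set, and an admissible $\delta$ can then be located by bisection.

```python
from itertools import chain, combinations

def prokhorov(points, mu, nu, dist, tol=1e-6):
    """Brute-force Levy-Prokhorov distance between two finitely supported
    probability measures mu, nu (dicts point -> mass) whose supports lie in
    the finite set `points`, with metric `dist`.  Exponential in len(points);
    for illustration of the definition only."""
    def fatten(A, delta):              # A^delta = union of open balls of radius delta
        return {y for y in points if any(dist(x, y) < delta for x in A)}

    def ok(delta):                     # do both inequalities hold for every subset A?
        subsets = chain.from_iterable(combinations(points, k)
                                      for k in range(1, len(points) + 1))
        for A in subsets:
            Ad = fatten(A, delta)
            mA, nA = sum(mu.get(x, 0.0) for x in A), sum(nu.get(x, 0.0) for x in A)
            mAd = sum(mu.get(x, 0.0) for x in Ad)
            nAd = sum(nu.get(x, 0.0) for x in Ad)
            if mA > nAd + delta or nA > mAd + delta:
                return False
        return True

    lo, hi = 0.0, 1.0 + max(dist(x, y) for x in points for y in points)
    while hi - lo > tol:               # ok(delta) is monotone in delta, so bisect
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if ok(mid) else (mid, hi)
    return hi

# Example: two measures on {0, 1, 2} with the usual metric of the real line.
pts = [0.0, 1.0, 2.0]
mu = {0.0: 0.5, 1.0: 0.5}
nu = {1.0: 0.5, 2.0: 0.5}
print(prokhorov(pts, mu, nu, dist=lambda x, y: abs(x - y)))   # roughly 0.5
```

Such an exhaustive check is exponential in the size of the support and is meant only to illustrate the definition.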
Our goal is to prove that $M_n^{\circ}$ is dense in $\mathcal{P}_{\Omega}$, for which it is enough to show that the closure of $M_n^{\circ}$ contains the set $\mathcal{N}$ of all finitely supported measures: $$\begin{aligned} \mathcal{N} = \bigg\{ \sum_{i=1}^{k} \beta_i \delta_{d_i}: k \in \mathbb{N}, d_i \in S, \beta_i \in [0,1], \sum_{i=1}^{k} \beta_i = 1\bigg\}.\end{aligned}$$ Here, $S$ is the countable dense subset of $\Omega$. It is known that $\mathcal{N}$ is a dense subset of $\mathcal{P}_{\Omega}$ [@Billingsley_1999]. For each $n \in \mathbb{N}$, let $\mathcal{C}_n$ denote the collection of all probability measures of the form $$\begin{aligned} \bigg\{ \sum_{i=1}^{m} \alpha_{i} \delta_{a_i} \bigg\}, \end{aligned}$$ satisfying the following properties: (i) $m > n$, each $0 < \alpha_i < 1/n$ and $\sum_{i=1}^{m} \alpha_i =1$, (ii) corresponding to the set $\{a_1,a_2,\ldots, a_m\}$, there exist $0 < r_1,\ldots,r_m < 1/n$ such that $a_j \notin \bar{B}(a_i,r_i)$ for $1 \leq i \neq j \leq m$ and there is an element $y \in \Omega$ which belongs to every ball $\bar{B}(a_i,r_i)$. Let $\mu_{i} \in \mathcal{C}_{n}$. Let the support of $\mu_{i}$ be denoted by $F_i = \{ a_{i1},\ldots,a_{i m_i}\}, m_i >n$, corresponding to which the set of radii and the common element are denoted by $\{r_{i1},\ldots, r_{i m_i}\}$ and $y_i$ respectively. Also, let $\{\alpha_{i1},\ldots, \alpha_{i m_i}\}$ denote the set of coefficients. Let $\mathcal{A}_n$ be the collection of all probability measures of the form $$\begin{aligned} \bigg\{ \sum_{i=1}^{l} \lambda_i \mu_i \colon \mu_{i} \in \mathcal{C}_{n}, \lambda_i \in [0,1], \sum_{i=1}^{l} \lambda_i =1 \bigg\}, \end{aligned}$$ such that the set $\{F_i\}$, where $F_i$ is the support of $\mu_i$, produces an unconnected family of balls, that is, no closed ball at $a_{ik} \in F_i$ of radius $r_{ik} < 1/n$ intersects $F_j$ for all $1\leq i \neq j \leq l$ and $1\leq k\leq m_i$. Every element of $\mathcal{A}_n$ is an element of $M_n$. It is easy to check that every $\mu \in \mathcal{A}_n$ belongs to $M_n$. Indeed, let $\nu = \sum_{i=1}^{l} \lambda_i \delta_{y_i}$. Let $a_{ij}\in F = \cup_{i=1}^{l}F_i$. By the assumption on the family $\{F_i: 1 \leq i \leq l \}$, $\bar{B}(a_{ij},r_{ij}) \cap F = \{a_{ij}\}$. Therefore, $\mu(\bar{B}(a_{ij},r_{ij})) = \lambda_i \alpha_{ij}$. Also, $y_i \in \bar{B}(a_{ij},r_{ij})$, which implies $\nu(\bar{B}(a_{ij},r_{ij})) \geq \lambda_i$. Since $\mu_i \in \mathcal{C}_n$ for all $1\leq i\leq l$, $$\begin{aligned} \frac{\nu(\bar{B}(a_{ij},r_{ij}))}{\mu(\bar{B}(a_{ij},r_{ij}))} \geq \frac{1}{\alpha_{ij}} >n.\end{aligned}$$ Since $\mu$ is supported on $F$, $$\begin{aligned} \mu \bigg\{x : \nu(\bar{B}(x,r)) \leq n \mu(\bar{B}(x,r))~\text{for all}~r < 1/n \bigg\} = 0. \end{aligned}$$ Hence, $\mu \in M_n$. Furthermore, $\mu \in M_n^{\circ}$, which we show in the following lemma. \[lem:m\_n\_interior\] Let $\mu \in \mathcal{A}_n$. Then $\mu$ is an element of $M_{n}^{\circ}$. Let $\omega$ be a probability measure such that $\pi(\mu,\omega) < \varepsilon$, where $\varepsilon >0$. Then from the definition of the metric $\pi$, there exists $\delta > 0$ such that $\pi(\mu,\omega) \leq \delta \leq \varepsilon$ and $$\begin{aligned} \omega(B) \leq \mu(B^{\delta}) + \delta~\text{for all}~B \in \mathcal{B}(\Omega).\end{aligned}$$ We will find the conditions on $\delta$, and hence on $\varepsilon$, such that whenever $\pi(\mu,\omega) < \varepsilon$, the measure $\omega $ belongs to $M_n$. This will imply that the measure $\mu$ belongs to $M_{n}^{\circ}$. 
Let $\nu = \sum_{i=1}^{l} \lambda_i \delta_{y_i}$. Let $D = D_{\omega}$ denote the set of all $x$ in $\Omega$ such that $\nu(\bar{B}(x,r)) \leq n \omega(\bar{B}(x,r)) $ for all $r < 1/n$. It is evident that if $\omega(D) < 1/n$ then $\omega \in M_n$. Set $F = \cup_{i=1}^{l}F_i$, which is the support of $\mu$. Let $A$ be the set of all $x \in \Omega$ such that $\bar{B}(x,\delta) \cap F $ is empty, then $A$ is a subset of $F^{c}$. We write $D$ as the disjoint union of intersection of $D$ with three sets $F, A$ and $F^c \setminus A$. Therefore, $$\begin{aligned} \label{eqn:total_bound} \omega(D) = \omega(D \cap F) + \omega( D \cap (F^{c} \setminus A)) + \omega(D \cap A).\end{aligned}$$ The set $A^{\delta}$ does not intersect $F$, so $\mu(A^{\delta}) = 0$. Therefore, we have $$\begin{aligned} \omega(D \cap A) \leq \omega(A) \leq \mu(A^{\delta}) + \delta = \delta.\end{aligned}$$ We now show that the sets $D \cap F$ and $D \cap (F^{c} \setminus A)$ can be made empty by choosing an appropriate $\delta >0$, denoted by $\delta_0$. Note that the choice of such $\delta$ could be made beforehand. It then follows that for $\varepsilon < \min\{1/n, \delta_0\}$, the open ball centered at $\mu$ of radius $\varepsilon$ with respect to the metric $\pi$ lies in $M_n$, and hence, $\mu$ is an element of $M_n^{\circ}$. - Suppose $z \in D \cap F$, then $z$ is equal to some $a_{ij} \in F_i$. This implies $\omega(\bar{B}(a_{ij},r_{ij})) \leq \mu(\bar{B}(a_{ij},r_{ij} + \delta)) + \delta \leq \lambda_i \alpha_{ij} + \delta$, whenever $$\begin{aligned} r_{ij} + \delta < \min\{\rho(a_{ij},a): a \in F, a \neq a_{ij}\}. \end{aligned}$$ Let $p_{ij}=\min\{\rho(a_{ij},a): a \in F, a \neq a_{ij}\}$. Note that there always exist such $\delta$ satisfying the above inequality since by the assumption on the family $\{F_i\}$, $r_{ij} < p_{ij}$. By choosing $\delta > 0$ such that $$\begin{aligned} \frac{\lambda_i}{\lambda_i \alpha_{ij} + \delta} > n, \end{aligned}$$ which is always possible since $\alpha_{ij} < 1/n$, the fraction $\nu(\bar{B}(a_{ij},r_{ij}))/\allowbreak \omega(\bar{B}(a_{ij},r_{ij})) \geq (\lambda_i) / (\lambda_i \alpha_{ij} + \delta)$ is strictly greater than $n$, which implies $a_{ij}$ does not belong to $D \cap F$. Thus, by taking $\delta < \min\{t_1, t_2\}$, where $$\begin{aligned} t_1 = \min_{ij}\{p_{ij} - r_{ij}\} ~\text{and}~ t_2= \min_{ij}\left\{\lambda_i\left(\frac{1}{n}-\alpha_{ij}\right)\right\}, \end{aligned}$$ we conclude that the set $D \cap F$ is empty. - Now, let $z$ be an element of $D \cap (F^c \setminus A)$ then there is an element $a_{ij}$ in $\bar{B}(z,\delta)$. Let $t_{ij} = \rho(\bar{B}(a_{ij},\delta),y_i)$. If we choose $\delta$ such that $$\begin{aligned} 4 \delta + t_{ij} < p_{ij},\end{aligned}$$ which is always possible since $t_{ij} < p_{ij}$, $\bar{B}(z,2\delta + t_{ij})$ will contain the common element $y_i$. It follows from the triangle’s inequality, $\rho(z,y_i) \leq \rho(z,a_{ij}) + \rho(a_{ij},y_i) \leq 2\delta + t_{ij} $. Note that the ball $\bar{B}(z,2\delta + t_{ij})$ may also contain some $y_j$, $j\neq i$. On the other hand, $\bar{B}(z, 3\delta + t_{ij})$ will not contain any element except $a_{ij}$ from $F$. To see this, suppose $a \in F$ such that $a \neq a_{ij}$ is in the closed ball $\bar{B}(z, 3\delta + t_{ij})$. Then $\rho(a, a_{ij}) \leq \rho(a, z) + \rho(z, a_{ij}) < 4\delta + t_{ij} < p_{ij}$, which is a contradiction. 
This implies that $\omega(\bar{B}(z,2\delta + t_{ij})) \leq \lambda_i\alpha_{ij} + \delta$ and $$\begin{aligned} \frac{\nu(\bar{B}(z,2\delta + t_i))}{\omega(\bar{B}(z,2\delta + t_i))} \geq \frac{\lambda_i}{\lambda_i \alpha_{ij} + \delta}.\end{aligned}$$ The left hand side of the above inequality is strictly greater than $n$ since we will choose $\delta < \min\{t_1, t_2\}$. In addition, if we take $\delta < t_3$, where $$\begin{aligned} t_3 = \min_{ij}\{p_{ij} - t_{ij}\},\end{aligned}$$ the set $D \cap (F^c \setminus A)$ will be empty. Hence, whenever $\delta_0 < \min\{t_1, t_2, t_3\}$, we will have $\omega(D) \leq \delta_0 < 1/n$, implying $\omega \in M_n$. From the Lemma \[lem:m\_n\_interior\], it is sufficient to prove that $\mathcal{A}_n$ is dense in $\mathcal{N}$ to prove the denseness of $M_n^{\circ}$ in $\mathcal{P}_{\Omega}$, which is shown under the assumption that no nonempty open set of $\Omega$ is metrically finite dimensional. \[lem:m\_n\_dense\] Suppose that no nonempty open subset of $\Omega$ is metrically finite dimensional in $\Omega$. Then for each $n$, the set $M_{n}^{\circ}$ is dense in $\mathcal{P}_{\Omega}$. Let $\omega = \sum_{i=1}^{l}\lambda_i \delta_{d_i}$ be an element of $\mathcal{N}$, where $d_i \in S$. Let $t = \min\{\rho(d_i,d_j): 1 \leq i \neq j \leq l\}$ and let $0 <\varepsilon < t$. Note that $B(d_i,\varepsilon ) \cap B(d_j,\varepsilon) = \emptyset$, for all $1 \leq i \neq j \leq l$. By the assumption, $B(d_i,\varepsilon /2), 1 \leq i \leq l$ is not metrically finite dimensional on scale $\varepsilon/2$. Therefore, we have a set of measures $\{\mu_i: 1 \leq i \leq l\}$ in $\mathcal{C}_n$, where each $\mu_i$ has support $F_i \subseteq B(d_i,\varepsilon/2)$. This implies that for $a \in F_i$, the closed ball $\bar{B}(a, r), r< \varepsilon/2$ is contained in $B(d_i,\varepsilon)$ and does not contain any other element of $F_j$. So, $F = \cup_{i=1}^{l} F_i$ will form an unconnected family of closed balls and hence, from the Lemma \[lem:m\_n\_interior\], $ \mu = \sum_{i=1}^{l} \lambda_i \mu_i$ is in $M_n^{\circ}$. Let $A$ be any Borel measurable subset of $\Omega$. We will show that $\omega(A)\leq \mu (A^{\varepsilon/2}) + \varepsilon/2$, which then completes the proof. It is trivial if $A$ does not contain any $d_i$. Suppose $d_i$ belongs to $A$. Then $F_i \subseteq A^{\varepsilon/2}$ since $F_i$ is contained in $ B(d_i,\varepsilon/2)$, and hence, $\mu(A^{\varepsilon/2}) \geq \lambda_i = \omega(A)$, if no $d_j$, $j \neq i$ is contained in $A$. Therefore, $$\begin{aligned} \omega(A)\leq \mu (A^{\varepsilon/2}).\end{aligned}$$ Now we are ready to give the proof of the necessity condition in the Theorem \[thm:Priess\_theorem\]. We state it as a separate lemma as follows. We want to prove the following: Let $(\Omega, \rho)$ be a complete separable metric space. Suppose the strong Lebesgue-Besicovitch differentiation property for $\Omega$. Then $\Omega$ is metrically sigma-finite dimensional. Suppose $\Omega$ is not metrically sigma-finite dimensional. From the Lemma \[lem:p\_result\_1\], without loss of generality we can assume that there is no non-empty open subset of $\Omega$ which is metrically finite dimensional in $\Omega$. Let $S$ be a dense countable subset of $\Omega$. From the Lemma \[lem:m\_n\_dense\], we have that each $M_n^{\circ}$ is a dense open subset of $\mathcal{P}_{\Omega}$. It follows from the Baire Category Theorem that $\cap_{n \in \mathbb{N}} M_n^{\circ}$ is dense and hence $\cap_{n \in \mathbb{N}} M_n$ is non-empty. 
Let $\mu \in \cap_{n \in \mathbb{N}} M_{n} $. Then for each $n$ there is a probability measure $\nu_{n}$ such that $$\begin{aligned} \mu \bigg( A_{n} = \{ x \in \Omega: \nu_n(\bar{B}(x,r)) > n \mu(\bar{B}(x,r)) \text{ for some } r < 1/n \} \bigg) \geq 1 - \frac{1}{n}. \end{aligned}$$ Since $\limsup_{n} \mu(A_n) \leq \mu(\limsup_{n} A_n)$ (see Theorem 4.1 of [@Billingsley_2012]) and $\mu$ is a probability measure, we have $$\begin{aligned} \mu\left(\limsup_{n} A_n\right) = 1.\end{aligned}$$ Let $\nu = \sum_{n=1}^{\infty} \alpha_n \nu_n$, where $\alpha_{n} = \frac{1}{n(n+1)}$; note that $\sum_{n=1}^{\infty} \alpha_n = \sum_{n=1}^{\infty}\left(\frac{1}{n}-\frac{1}{n+1}\right) = 1$, so $\nu$ is again a probability measure. Define the set $$\begin{aligned} A = \bigg\{x: \limsup_{r \rightarrow 0} \frac{\nu(\bar{B}(x,r))}{\mu(\bar{B}(x,r))} = \infty \bigg\}.\end{aligned}$$ We now show that $\limsup_{n}A_n \subseteq A$. Given any $x \in \limsup_{n} A_n$, $x$ belongs to $A_n$ for infinitely many $n$, and so there is an increasing sequence of indices $m_1,m_2,\ldots$ such that $x \in A_{m_t}$ for every $t \in \mathbb{N}$. For each $m_t$, we have a $0< r_t < \frac{1}{m_t}$ such that $$\begin{aligned} \frac{\alpha_{m_t} \nu_{m_t}(\bar{B}(x, r_t))}{\mu(\bar{B}(x,r_t))} > \alpha_{m_t} m_t.\end{aligned}$$ We can assume that $(r_t)$ is a decreasing sequence converging to zero. Fix $t \geq 2$. For all $1 \leq j \leq t-1$, we have $$\begin{aligned} \frac{\alpha_{m_j} \nu_{m_j}(\bar{B}(x, r_t))}{\mu(\bar{B}(x,r_t))} > \alpha_{m_j} m_j \frac{\mu(\bar{B}(x, r_j))}{\mu(\bar{B}(x, r_t))} \geq \alpha_{m_j} m_j,\end{aligned}$$ while for all $t+1 \leq j$, $$\begin{aligned} \frac{\alpha_{m_j} \nu_{m_j}(\bar{B}(x, r_t))}{\mu(\bar{B}(x,r_t))} > \alpha_{m_j} m_j \frac{\nu_{m_j}(\bar{B}(x, r_t))}{\nu_{m_j}(\bar{B}(x, r_j))}.\end{aligned}$$ Let $s_t = \sum_{j=1}^{t} \alpha_{m_j} m_j$. Then for all $t \geq 1$, $$\begin{aligned} \frac{\nu(\bar{B}(x,r_t))}{\mu(\bar{B}(x,r_t))} = \sum_{j=1}^{\infty}\frac{ \alpha_{m_j} \nu_{m_j}(\bar{B}(x,r_t))}{\mu(\bar{B}(x,r_t))} > s_t+ \sum_{j=t+1}^{\infty} \alpha_{m_j} m_j \frac{\nu_{m_j}(\bar{B}(x, r_t))}{\nu_{m_j}(\bar{B}(x, r_j))} > s_t.\end{aligned}$$ Since $s_t$ tends to infinity as $t$ tends to infinity, this implies $x \in A$. Hence, $\limsup_{n}A_n \subseteq A$. Since $\mu\left(\limsup_{n} A_n\right) = 1$, we then have $\mu(A)=1$. In other words, $$\begin{aligned} \limsup_{r \rightarrow 0} \frac{\nu(\bar{B}(x,r))}{\mu(\bar{B}(x,r))} = \infty \text{ for } \mu\text{-a.e. } x. \end{aligned}$$ This means that $$\begin{aligned} \label{contradiction} \liminf_{r \rightarrow 0} \frac{\mu(\bar{B}(x,r))}{(\mu+ \nu)(\bar{B}(x,r))} = 0 \text{ for } \mu\text{-a.e. } x.\end{aligned}$$ Since $\mu$ is absolutely continuous with respect to $\mu + \nu$, by the Radon-Nikodym theorem there is a measurable function $f: \Omega\rightarrow [0,\infty)$ such that for any measurable $A$, we have $\mu(A) = \int_{A} f(y) (\mu+ \nu)(dy)$. Then we have $$\begin{aligned} \liminf_{r \rightarrow 0} \frac{\mu(\bar{B}(x,r))}{(\mu+ \nu)(\bar{B}(x,r))} & = \liminf_{r \rightarrow 0} \frac{1}{(\mu+ \nu)(\bar{B}(x,r))} \int_{\bar{B}(x,r)} f(y) (\mu+ \nu) (dy).\end{aligned}$$ Suppose that $\mu+ \nu$ satisfies the strong differentiation property. Then the limit on the right-hand side exists and equals $f(x)$ for $(\mu+\nu)$-almost every $x$, and hence also for $\mu$-almost every $x$, while the left-hand side equals 0 for $\mu$-almost every $x$. This gives $f(x) = 0$ for $\mu$-almost every $x$ and therefore contradicts the fact that $\mu$ is a probability measure. This completes the proof.
Statistical Machine Learning {#chap:Statistical machine learning} ============================ In this chapter, we introduce the fundamentals of statistical machine learning. We start with the binary classification problem and then discuss learning rules, the error of a learning rule and consistency. Binary classification problem {#sec:Binary Classification problem} ----------------------------- A classification problem consists in categorizing a set of data, for example sorting clothes based on their colors such as blue, white, red, yellow, etc. These categories are referred to as labels for the data. In general, any classification problem can be understood as a binary classification problem, that is, one with two labels. Almost every machine learning concept can be modeled mathematically. Let $\Omega$ be a non-empty set and let $\{0,1\}$ be the set of labels. A *labeled sample* $\sigma_n$ of size $n$ is an element of $(\Omega \times \{0,1\})^n$, $$\begin{aligned} \sigma_n = (x_1,y_1), \ldots,(x_n,y_n),\end{aligned}$$ where $(x_i,y_i) \in \Omega \times \{0,1\}$ and each data point $x_i$ has label $y_i$. Then, the binary classification problem (see figure \[fig:classification\]) is defined as follows. Given $\sigma_n$, a binary classification problem is to construct a Borel measurable function $g : \Omega \rightarrow \{0,1\}$ such that $g(x_i) = y_i$ for every $1 \leq i \leq n$ and such that $g$ assigns a label $0$ or $1$ to every element $x$ of $\Omega$. The function $g$ is called a *classifier*.

[Figure \[fig:classification\]: a labeled sample with two classes of points and a new, unlabeled query point to be classified.]

Let’s see an example. We want to classify the emails in our email account as spam and non-spam emails. We would like to construct a machine to do this work. We take a set of emails, called training data, and based on it we set a hypothesis: if the subject of an email contains ‘credit’ or ‘win’, then it is spam. Now, the machine has to classify the new emails based on this hypothesis. This is a binary classification problem with labels ‘spam’ and ‘non-spam’. Suppose we have a way to classify emails; is it what we want? A human can pick spam emails without error by reading the email content, but a machine cannot. There is always some uncertainty in classifying emails, for instance emails having no subject, and hence there is a possibility of error. In principle, we prefer those machines which make fewer errors and hence are more accurate. Due to such uncertainties, a probabilistic setting is the natural one. Let $\mu$ be a probability measure on $\Omega \times \{0,1\}$ and let $(X,Y)$ be a $\Omega \times \{0,1\}$-valued random variable having distribution $\mu$. Here, $X$ is a random element having label $Y$.
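Before moving on, the toy email example above can be written out concretely. The following sketch is hypothetical Python written only to illustrate the notions of a labeled sample and a classifier; the subjects, labels and keywords are invented for the illustration.

```python
# A toy labeled sample: each data point is an email subject, labels are 1 (spam), 0 (non-spam).
labeled_sample = [
    ("win a free credit card now", 1),
    ("meeting rescheduled to monday", 0),
    ("you win big credit rewards", 1),
    ("lecture notes for chapter 3", 0),
]

def g(subject: str) -> int:
    """The keyword hypothesis: label an email as spam (1) if its subject contains 'credit' or 'win'."""
    return 1 if ("credit" in subject or "win" in subject) else 0

# Check how many points of the labeled sample the classifier g reproduces correctly.
correct = sum(1 for x, y in labeled_sample if g(x) == y)
print(f"{correct} out of {len(labeled_sample)} sample points classified correctly")
```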
\[misclass\_error\] The *misclassification error* for a classifier $g$ is the measure of the set of all labeled data points whose predicted label and actual label are different, $$\begin{aligned} \ell_{\mu}(g) \ & = \ \mathbb{P}(g(X) \neq Y) \ \\ \ & = \ \mu \{(x,y) \in \Omega \times \{0,1\} : g(x) \neq y \}. \end{aligned}$$ The prime aim of a classifier is to predict the label of a new data point. The misclassification error gives the probability that we will predict a wrong label. As in our example of emails, the machine can classify an email from a friend as spam based on the hypothesis, whereas the actual label is ‘non-spam’. The Definition \[misclass\_error\] gives the probability of such cases. It is evident that a classifier with low misclassification error will be preferred. Learning rule and consistency {#sec:Learning rule and consistency} ----------------------------- Here, we discuss the Bayes error and the construction of good classifiers, based on labeled samples, that attain the minimum possible error. Given a probability measure $\mu$ on $\Omega \times \{0,1\}$, it is possible to define the minimum possible misclassification error for $\mu$. The *Bayes error* is the infimum of the misclassification error for $\mu$, $$\begin{aligned} {\ell}^{*}_{\mu} & = \inf\{\ell_{\mu}(g): g\text{ is a classifier on } \Omega\}. \end{aligned}$$ The set of classifiers is non-empty, as we can always take the constant function $g : \Omega \rightarrow \{1 \}$, and the misclassification error is bounded between $0$ and $1$. This implies that the infimum always exists and indeed is attained by the Bayes classifier (defined later). So, the Bayes classifier can be a solution to the classification problem, but it depends on $\mu$, which is unknown. All we have are labeled samples. We know that $(X,Y)$ is distributed according to $\mu$. We define two measures $\nu$ and $\nu_1$ on $\Omega$. For any measurable $A \subseteq \Omega$, let $$\begin{aligned} \nu (A) = \mu(A \times \{0\}) +\mu(A\times\{1\}), \ \nu_1 (A) = \mu(A\times\{1\}).\end{aligned}$$ As $\nu_1 \leq \nu$, $\nu_1$ is absolutely continuous with respect to $\nu$. By the Radon-Nikodym theorem [@Billingsley_2012], there exists a measurable function $\eta$ such that $$\begin{aligned} \nu_1(A)& =\int_{A} \eta(x) \nu(dx).\end{aligned}$$ Here, $\eta$ is the Radon-Nikodym derivative of $\nu_1$ with respect to $\nu$. Probabilistically, $\eta$ is equal to the conditional probability of getting label $1$, given $X=x$, $$\begin{aligned} \eta(x)& = \mathbb{P}(Y=1|X=x). \end{aligned}$$ In statistics, $\eta$ is called the *regression function*. Note that $\eta$ is a function on $\Omega$ and takes values in $[0,1]$. We show below that the distribution of $(X,Y)$ can be completely described by the pair $(\nu,\eta)$, where $\nu$ is a probability measure and $\eta$ a regression function on $\Omega$, both obtained from the underlying probability measure $\mu$ on $\Omega \times \{0,1\}$. We can write any measurable set $A \subseteq \Omega \times \{0,1\}$ as $$\begin{aligned} A & = ( A_0 \times \{0\} ) \cup (A_1 \times \{1\} ), \end{aligned}$$ where $A_0 = \{x \in \Omega : (x,0) \in A\}$ and $A_1 = \{x \in \Omega : (x,1) \in A\}$. Therefore, we have $$\begin{aligned} \mathbb{P}((X,Y) \in A) & = \mathbb{P}(X \in A_0, Y = 0) + \mathbb{P}(X \in A_1, Y = 1) \ \\ & =\int_{A_0} (1-\eta(x)) d\nu(x) + \int_{A_1} \eta(x) d\nu(x).\end{aligned}$$ In the above equation, the left-hand side is in terms of $\mu$, while the right-hand side is defined by $\nu$ and $\eta$. So, the distribution of $(X,Y)$ is completely determined by $\nu$ and $\eta$.
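To make the pair $(\nu,\eta)$ concrete, the following sketch is a hypothetical Python simulation in which $\nu$, $\eta$ and the classifier $g$ are chosen purely for illustration: it draws $X \sim \nu$, draws $Y$ as a Bernoulli variable with success probability $\eta(X)$, and estimates the misclassification error $\ell_{\mu}(g) = \mathbb{P}(g(X) \neq Y)$ by Monte Carlo.

```python
import random

# Illustrative (assumed) choices: nu = uniform on [0, 1], eta(x) = x.
def eta(x: float) -> float:
    return x                      # P(Y = 1 | X = x)

def g(x: float) -> int:
    return 1 if x >= 0.5 else 0   # the classifier whose error we estimate

def draw_pair():
    x = random.random()                        # X ~ nu
    y = 1 if random.random() < eta(x) else 0   # Y | X = x  ~ Bernoulli(eta(x))
    return x, y

random.seed(0)
n = 100_000
errors = 0
for _ in range(n):
    x, y = draw_pair()
    if g(x) != y:
        errors += 1
# For eta(x) = x on [0, 1] the Bayes error equals E[min(eta(X), 1 - eta(X))] = 1/4,
# and the threshold rule g above attains it, so the estimate should be close to 0.25.
print("estimated misclassification error:", errors / n)
```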
Saying that the distribution of $(X,Y)$ is described by $(\nu,\eta)$ means that the random element $X$ is distributed according to $\nu$ and its random label $Y$ follows a Bernoulli distribution with probability of success $\eta(x) = \mathbb{P}(Y=1|X=x)$. We will interchangeably describe the distribution of $(X,Y)$ by $\mu$ or $(\nu,\eta)$. In the case of $(\nu,\eta)$, there is always an underlying probability measure $\mu$ on $\Omega \times \{0,1\}$. With the help of the regression function, we can also define the important notion of the Bayes classifier. The *Bayes classifier* is defined as: $$\begin{aligned} g^*(x) = & \begin{cases} 1 & \text{if \ } \eta(x) \geq \frac{1}{2}, \ \\ 0 & \text{otherwise} \end{cases}\end{aligned}$$ The above definition is well defined, and the error of the Bayes classifier is $ \mathbb{P}(g^{*}(X) \neq Y)$. We can infer from the following theorem that the Bayes error is indeed attained by the Bayes classifier. \[thm:optimal\_bayes\] Let $g$ be a classifier on $\Omega$, then we have $$\begin{aligned} \mathbb{P}(g(X) \neq Y) - \mathbb{P}(g^{*}(X) \neq Y) \geq 0.\end{aligned}$$ We first find the probability of no error for $g$ given an element $x$, $$\begin{aligned} &\mathbb{P}(g(X)=Y|X=x) \ \\ &=\mathbb{P}(g(X) =1,Y=1|X=x)+\mathbb{P}(g(X)=0,Y=0|X=x) \\ & = \mathbb{P}(Y =1 | X =x ) \mathbb{I}_{\{g(x) =1\}} + \mathbb{P}(Y=0 | X =x ) \mathbb{I}_{\{g(x) =0\}} \\ & = \eta(x) \mathbb{I}_{\{g(x) =1\}} + (1 - \eta(x))\mathbb{I}_{\{g(x) =0\}}.\end{aligned}$$ Similarly, we have the probability of no error for $g^*$, $$\begin{aligned} \mathbb{P}(g^*(X) = Y | X =x ) & = \eta(x) \mathbb{I}_{\{g^*(x) =1\}} + (1 - \eta(x))\mathbb{I}_{\{g^*(x) =0\}}.\end{aligned}$$ Then the difference of the error probabilities of $g$ and $g^*$ is $$\begin{aligned} & \mathbb{P}(g(X) \neq Y | X =x ) - \mathbb{P}(g^*(X) \neq Y | X =x ) \ \\ & = \mathbb{P}(g^*(X) = Y | X =x ) - \mathbb{P}(g(X) = Y | X =x ) \ \\ & = \eta(x)( \mathbb{I}_{\{g^*(x) =1\}} - \mathbb{I}_{\{g(x) =1\}} ) + (1 - \eta(x))(\mathbb{I}_{\{g^*(x) =0\}} - \mathbb{I}_{\{g(x) =0\}}) \ \\ & = 2\bigg|\eta(x) - \frac{1}{2}\bigg|\mathbb{I}_{\{g(x)\neq g^*(x)\}},\end{aligned}$$ which is equal to $0$ if $g(x) = g^*(x)$, to $2(\eta(x)-1/2)$ if $g^*(x) =1,g(x)=0$, and to $2(1/2 - \eta(x))$ if $g^*(x) =0,g(x)=1$. By the definition of $g^*$, $2\eta(x)-1$ is non-negative if and only if $g^*(x) = 1$. So, $$\begin{aligned} \mathbb{P}(g(X) \neq Y | X =x ) - \mathbb{P}(g^*(X) \neq Y | X =x ) & \geq 0,\end{aligned}$$ and taking the expectation over all $x \in \Omega$, $$\begin{aligned} &\mathbb{P}(g(X) \neq Y ) - \mathbb{P}(g^*(X) \neq Y) \ \\ & = \mathbb{E}\{ \mathbb{P}(g(X) \neq Y | X =x ) \} - \mathbb{E}\{\mathbb{P}(g^*(X) \neq Y | X =x )\} \geq 0.\end{aligned}$$ From the above theorem we have $\ell^{*}_{\mu} = \mathbb{P}(g^{*}(X) \neq Y)$. The Bayes classifier depends on the underlying distribution $\mu$ of $(X,Y)$, which is unknown, and thus $g^{*}$ is unknown. We assume the existence of $\mu$ to make the theoretical study possible. What we do have in hand are labeled samples, so we try to construct a classifier based on them. We cannot in general expect the misclassification error of a classifier to be zero, but we can strive for its error to be close to the minimum possible error, that is, the Bayes error. To achieve this, we construct a family of classifiers based on labeled samples, known as a learning rule.
\[def:learning rule\] A *learning rule* of size $n$ is a mapping defined on all labeled samples of size $n$ which assigns a label to a data point given a labeled sample, $$\begin{aligned} g_n : ({\Omega} \times {\{0,1\}} )^{n} \times \Omega \rightarrow \{0,1\} \end{aligned}$$ In simpler words, a learning rule take a labeled sample $\sigma_n$ and assigns a classifier $g_n(\sigma_n)$ to it. This classifier $g_n(\sigma_n)$ then finds the label $g_{n}(\sigma_n)(x) = g(x,\sigma_n)$ for $x \in \Omega$. A *learning rule* is a sequence of maps $(g_n),n \in \mathbb{N}$ for labeled samples of all sizes. We sometimes write $g_n(x) = g_{n}(x,\sigma_n)$, with an understanding that a learning rule is also a function of labeled samples. A learning rule is entirely a deterministic function, but further analysis to measure the error of a learning rule require randomness and probabilistic settings. Let $D^{\infty}$ denote the infinite sequence $ (X_1,Y_1), (X_2,Y_2), \ldots $ of independently and identically distributed random variables according to a probability measure $\mu$. Then, $D^{\infty}$ is called a *random sample path* and the product measure $\mu^{\infty} = \prod_{i=1}^{\infty} \mu$ is the distribution of $D^{\infty}$. The first $n$ pairs from $D^{\infty}$, denoted by $D_n = (X_1,Y_1),\ldots, \allowbreak (X_n,Y_n)$ is called a *random labeled sample* of size $n$ and follows distribution $\mu^n = \prod_{i=1}^{n} \mu$. Let $\sigma_n, \sigma_{\infty}$ denote a realization of $D_n$ and $D^{\infty}$, respectively. We have an underlying assumption that each $(X_i,Y_i)$ from the random sample path is distributed according to $\mu$ and is independent of $(X,Y)$. A random sample of size $n$, $W = (W_1,W_2,\allowbreak \ldots,W_n)$, is a vector of i.i.d. random variables, while a sample viewed as an instance is one possible realization of the sample $W$. In other words, if $W_i(p) = w_i$ for $p\in \Omega$ then, $w = (w_1,w_2,\dots,w_n)$ is one realization of $W = (W_1,W_2,\dots,W_n)$, and $w$ is considered as an instance of the random sample $W$. For example, let $X_1,X_2$ is the result of throw of two dices respectively. Then, $(X_1,X_2)$ is a random sample of size $2$ and $(1,4)$ is one instance of this sample. Now, we define the error of a learning rule. \[def:error\_prob\_rule\] The *error probability* of $(g_n)$ is the conditional probability, $$\begin{aligned} \label{eqn:error_rule} {\ell}_{\mu}(g_n) = \mathbb{P} ( g_n(X) \neq Y | D_n ), \end{aligned}$$ and the *expected error probability* is given by, $$\begin{aligned} \label{eqn:expected_error_rule} \mathbb{E}\{{\ell}_{\mu}(g_n)\} = \mathbb{P}\left(g_n(X) \neq Y\right),\end{aligned}$$ where the average is over all labeled samples of size $n$. Note that, the equation is a function of random data and hence ${\ell}_{\mu}(g_n)$ is a random variable. In simpler words, $\ell_{\mu}(g_n)$ is a function from set of all labeled $n$-samples $(\Omega \times \{0,1\})^{n}$ to $[0,1]$ such that $\ell_{\mu}(g_n)(\sigma_n) = \mathbb{P} ( g_n(X) \neq Y | \sigma_n ) = \mu\{ (x,y): g_{n}(x)(\sigma_n) \neq y\}$. While the expectation in the equation is with respect to $\mu^{n}$ and hence the value is a real number. The accuracy of a learning rule is measured by the convergence of its error probability to Bayes error. 
\[def:consistent\_rule\] A learning rule $(g_n)$ is called *weakly consistent* for a probability measure $\mu$ if the error probability converges to the Bayes error in probability or, equivalently, if $$\begin{aligned} \mathbb{E}\{{\ell}_{\mu}(g_n)\} \rightarrow {\ell}^*_{\mu}, \text{ \ as } n \rightarrow \infty, \end{aligned}$$ while $(g_n)$ is said to be *strongly consistent* if $$\begin{aligned} {\ell}_{\mu}(g_n) \rightarrow {\ell}^*_{\mu} \ \text{ almost surely, as } n \rightarrow \infty. \end{aligned}$$

[Figure: the error probability of a consistent learning rule decreasing toward the Bayes error $\ell^*$ as the sample size grows.]

Note that the almost sure convergence in the definition of strong consistency is with respect to the random sample path $D^{\infty}$. That is, the set of infinite labeled samples $\sigma_{\infty}$ for which the error probability $\ell_{\mu}(g_n)(\sigma_n)$ converges to the Bayes error has measure one with respect to $\mu^{\infty} $; in other words, $$\mu^{\infty}\{ \sigma_{\infty}: \lim_{n \rightarrow \infty} \ell_{\mu}(g_n)(\sigma_n) = \ell^{*}_{\mu} \} = 1.$$ The notion of weak consistency says that if we increase the data size, then the error probability, averaged over all labeled samples of size $n$, approaches the Bayes error. For a strongly consistent rule, the error probability converges to the Bayes error along almost every infinite labeled sample. In addition, a learning rule may be consistent for a particular distribution and may not be consistent for another distribution. It is preferable to construct a learning rule which is consistent for every distribution, without needing to know the unknown distribution. \[def:universal\_consistency\] A learning rule $(g_n)$ is said to be [*universally weakly consistent*]{} if it is weakly consistent for every probability measure $\mu$ on $\Omega \times \{0,1\}$. Similarly, a *universally strongly consistent* rule is strongly consistent for every probability measure on $\Omega \times \{0,1\}$. The $k$-nearest neighbor rule is an example of a universally consistent learning rule. In fact, the $k$-nearest neighbor rule is also universally strongly consistent in Euclidean spaces. We will explore the consistency of the $k$-nearest neighbor rule further in Chapter \[chap:The $k$-nearest neighbor rule\] and Chapter \[chap:Consistency in a metrically finite dimensional spaces\]. From a theoretical perspective, we prefer universally consistent rules, but learning rules such as the random forests rule, which is not universally consistent, are also employed in practical applications due to their high accuracy [@Biau_Devroye_Lugosi_2008]. A learning rule whose expected error decreases monotonically with increasing $n$ is called a *smart learning rule*. It has been conjectured that a universally consistent rule is not a smart rule (see Problem 6.16 of [@Devroye_Gyorfi_Lugosi_1996]). Recently, a mutual notion of consistency has been introduced [@Zakai_Ritov_2009], which measures the closeness between two learning rules.
\[def:mutual\_consistent\_rule\] Two learning rules $(g_n)$ and $(h_n)$ are called *mutually weakly consistent* if for every distribution $\mu$ on $\Omega \times \{0,1\}$, $$\begin{aligned} \mathbb{E}_{\mu}\{ |g_n(X) - h_n(X)|\} \rightarrow 0 \text{ \ as \ } n \rightarrow \infty.\end{aligned}$$ A learning rule is universally weakly consistent if and only if it is mutually weakly consistent with the Bayes rule. The notion of mutual strong consistency can be defined similarly. How to construct a learning rule? {#sec:How to construct a learning rule?} --------------------------------- We know that a possible way to find a good classifier with low misclassification error is to construct a sequence of learning rules whose error can be made as small as possible. However, the real question is what form such a learning rule should take when the only thing available is the labeled sample. A formal way to construct a learning rule is elucidated in [@Devroye_Gyorfi_Lugosi_1996]. The basic idea is to devise a function $\eta_n$ from the labeled sample that tries to approximate the regression function $\eta$. The most common way is to assign weights to the points of the labeled sample. Given a labeled sample $\sigma_n = (x_1,y_1), \ldots,(x_n,y_n)$, let us define a function $$\begin{aligned} \label{eqn:eta_n_app} {\eta}_{n}(x) & =\sum_{i=1}^{n} y_i W_{i}^{n}(x)\end{aligned}$$ where $W_{i}^{n} = \ W_{i}^{n}(x,\sigma_n)$ are non-negative weights and $\sum_{i=1}^{n}W_{i}^{n}(x) = 1$. To be precise, $\eta_{n}(x)$ in the equation \[eqn:eta\_n\_app\] is actually $\eta_{n}(x,\sigma_n)$; it is a function of both the labeled sample and $x$. Then a learning rule is defined as $$\begin{aligned} \label{eqn:induce_classifier} g_n(x) = & \begin{cases} 1 & \text{if \ } {\eta}_{n}(x) \geq \frac{1}{2}, \ \\ 0 & \text{otherwise} \end{cases}\end{aligned}$$

[Figure \[fig:bayes\_rule\]: illustration of a plug-in rule, which thresholds an estimate of the regression function at $1/2$ to obtain the decision regions.]

The learning rule $(g_n)$ defined above is also called a *plug-in rule* [@Devroye_Gyorfi_Lugosi_1996] (see figure \[fig:bayes\_rule\]). We do not state it explicitly every time, but it is important to understand that $g_{n}(x) = g_{n}(x,\sigma_n)$ always. The error probability of $(g_n)$ is stated in Definition \[def:error\_prob\_rule\]. The following theorem conveys that the expected error probability of $(g_n)$ is at least the error probability of the Bayes rule, but exceeds it by at most twice the average difference between $\eta$ and its approximation $\eta_n$. The proof of the following theorem has been adapted from [@Devroye_Gyorfi_Lugosi_1996].
\[lem:diff\_err\] Let $(\Omega,\rho)$ be a separable metric space and let $g_n$ be a learning rule as in the equation \[eqn:induce\_classifier\]; then $$\begin{aligned} & \mathbb{P}(g_{n}(X) \neq Y ) - \mathbb{P}(g^*(X) \neq Y) \leq 2 \mathbb{E}\{ | \eta(X) - \eta_n(X) | \},\end{aligned}$$ and $$\begin{aligned} & \mathbb{P}(g_{n}(X) \neq Y ) - \mathbb{P}(g^*(X) \neq Y) \leq 2 \sqrt{\mathbb{E}\{ ( \eta(X) - \eta_n(X) )^2 \}}.\end{aligned}$$ In the proof of the Theorem \[thm:optimal\_bayes\], we have deduced the following difference between error probabilities, $$\begin{aligned} \mathbb{P}(g_{n}(X) \neq Y | X =x ) - \mathbb{P}(g^*(X) \neq Y | X =x ) = 2\bigg|\eta(x) - \frac{1}{2}\bigg|\mathbb{I}_{\{g_{n}(x)\neq g^*(x)\}}.\end{aligned}$$ We see that if $g_{n}(x)=0, g^*(x)=1$, then $\eta_n(x) < 1/2, \eta(x) \geq 1/2 $. In the other case, if $g_{n}(x)=1, g^*(x)=0$, then $\eta_n(x) \geq 1/2, \eta(x) < 1/2 $. So, whenever $g_n(x) \neq g^*(x)$ we have $|\eta(x) - 1/2| \leq |\eta(x) - \eta_{n}(x)|$. Now we take the average of the difference between the conditional error probabilities over all $x \in \Omega$, $$\begin{aligned} & \mathbb{P}(g_{n}(X) \neq Y ) - \mathbb{P}(g^*(X) \neq Y) \ \\ & = \mathbb{E}\{ \mathbb{P}(g_{n}(X) \neq Y |X=x) - \mathbb{P}(g^*(X) \neq Y | X =x ) \} \ \\ & = 2 \mathbb{E}\bigg\{ \bigg|\eta(X) - \frac{1}{2}\bigg|\mathbb{I}_{\{g_{n}(X)\neq g^*(X)\}} \bigg\} \\ & \leq 2\mathbb{E}\{ |\eta(X) - \eta_{n}(X)|\} \end{aligned}$$ Applying the Cauchy-Schwarz inequality to the above bound, we get $$\begin{aligned} \mathbb{P}(g_{n}(X) \neq Y ) - \mathbb{P}(g^*(X) \neq Y) \leq 2\sqrt{\mathbb{E}\{ (\eta(X) - \eta_{n}(X))^2\}}.\end{aligned}$$ The Theorem \[lem:diff\_err\] is important because it gives a sufficient condition for weak consistency, that is, if $\eta_n$ is asymptotically close to $\eta$ then the average error converges to the Bayes error. Indeed, we can even deduce a sufficient condition from the Theorem \[lem:diff\_err\] to have strong consistency. A simple corollary to the Theorem \[lem:diff\_err\] is as follows. The difference between the error probability of the learning rule $(g_{n})$ and the Bayes rule $g^*$ is bounded by $$\begin{aligned} \ell_{\mu}(g_n) - \ell^*_{\mu} \leq 2 \mathbb{E}\{ | \eta(X) - \eta_n(X)| | D_{n} \}, \end{aligned}$$ where $D_n$ is a random labeled sample. The $k$-Nearest Neighbor Rule {#chap:The $k$-nearest neighbor rule} ============================= Here, we introduce the simplest learning rule, called the $k$-nearest neighbor rule. We discuss some of the important results such as Stone’s lemma, Stone’s theorem and the Cover-Hart lemma; together they establish the universal weak consistency of the $k$-nearest neighbor rule in finite dimensional normed spaces. We also prove the inconsistency of the $k$-nearest neighbor rule on Davies’ example. The $k$-nearest neighbor rule {#sec:What is the $k$-nearest neighbor rule?} ----------------------------- The origin of the $k$-nearest neighbor rule dates back to the work of Fix and Hodges [@Fix_Hodges_1951] in 1951. Since then, the $k$-nearest neighbor rule has become a cornerstone of statistical machine learning. The $k$-nearest neighbor rule is very simple. The ‘$k$’ in the $k$-nearest neighbor rule is a positive integer and is less than or equal to $n$, the number of data points in a sample. We explain in the following the major steps of applying the $k$-nearest neighbor rule algorithmically: - Suppose we have a set of $n$ data points, $\{x_1,\ldots,x_n\}$ with their labels $\{y_1,\ldots,y_n\}$.
Let $x$ be a new data point and our task is to predict the label of $x$ given the labeled sample. - Let $\rho$ be a distance function, not necessarily a metric. Arrange the distances from $x$ to the points $x_i$ in increasing order, $$\begin{aligned} \rho(x_{(1)},x) \leq \rho( x_{(2)},x) \leq \ldots \leq \rho(x_{(n)},x). \end{aligned}$$ - Then the first $k$ data points, $\{x_{(1)},\ldots,x_{(k)}\}$ are called the $k$-nearest neighbors of $x$ with corresponding labels $\{y_{(1)},\ldots,y_{(k)}\}$. - The label of $x$ is the most frequent label among $\{y_{(1)},\ldots,y_{(k)}\}$. In other words, we take a majority vote among $\{y_{(1)},\ldots,y_{(k)}\}$ and assign this as the label of $x$.

[Figure \[fig:knn\_classification\]: the nearest neighbors of a query point for $k=1$, $k=2$ and $k=3$ in a sample with two classes of points.]

There are two major issues that hinder the implementation of the $k$-nearest neighbor rule (see figure \[fig:knn\_classification\]): voting ties and distance ties. A voting tie is the difficulty in finding the majority vote among the picked ‘$k$’ labels. If $k$ is an even integer and exactly $k/2$ of $\{y_{(1)},\ldots,y_{(k)}\}$ are 0 and the rest are 1, then there is no clear majority vote. Voting ties are usually avoided by taking $k$ to be odd. We pick label 1 as the majority vote in case of a voting tie, as stated in the formal definition of the $k$-nearest neighbor rule later in this section. Distance ties occur when two or more data points are at the same distance from $x$, that is, $\rho(x_{i},x) = \rho(x_{j},x)$. This is a problem because there may be many data points at the same distance from $x$, and hence it is difficult to choose exactly $k$ nearest neighbors for $x$. Handling distance ties is more involved, and consistency is often derived under the assumption of no distance ties. To obtain universal consistency, we need a tie-breaker to overcome the problems due to distance ties. There are several methods of breaking distance ties; however, in this thesis we discuss only two of them. The simplest one is the index-based tie-breaking method or, in simpler words, breaking distance ties by comparing indices. Given an ordered sample of $n+1$ data points $(x,x_1,\ldots,x_n)$, the $k$-nearest neighbors of $x_i$ are picked from the sample $(x_1,\ldots,x_{i-1},x,x_{i+1},\ldots,x_n)$. Suppose there is a distance tie between $x_j$ and $x$ for $x_i$, that is, $\rho(x_{i},x) = \rho(x_{i},x_{j})$; then we declare $x_{j}$ to be closer to $x_i$ if $x_j \in \{x_1,\ldots,x_{i-1}\}$, and otherwise we declare $x$ to be closer. In Euclidean spaces, tie-breaking by comparing indices is sufficient to avoid any bad situation, but the same is not true for general metric spaces.
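The algorithmic steps above translate directly into a few lines of code. The following sketch is hypothetical Python written only to illustrate the rule; the distance function, the sample and the value of $k$ are placeholders, distance ties are broken by comparing indices, and a voting tie is resolved in favor of label 1, as in the text.

```python
from collections import Counter

def knn_label(x, sample, rho, k):
    """k-nearest neighbor rule: majority vote among the k nearest labeled points.

    sample: list of (x_i, y_i) pairs with labels in {0, 1}.
    rho:    a distance function (not necessarily a metric).
    """
    # Sort by (distance, index); the index acts as the tie-breaker for distance ties.
    order = sorted(range(len(sample)), key=lambda i: (rho(sample[i][0], x), i))
    neighbor_labels = [sample[i][1] for i in order[:k]]
    votes = Counter(neighbor_labels)
    # Voting ties are resolved in favor of label 1.
    return 1 if votes[1] >= votes[0] else 0

# Illustrative usage on the real line with the usual distance (placeholder data).
sample = [(0.1, 0), (0.4, 0), (0.35, 1), (0.8, 1), (0.9, 1)]
rho = lambda a, b: abs(a - b)
print(knn_label(0.5, sample, rho, k=3))  # majority vote among the 3 nearest points
```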
Some issues related to distance ties for metric spaces with finite Nagata dimension are discussed in section \[sec:Consistency with distance ties\]. The second method is to break distance ties randomly and uniformly. A distance tie basically appears on the sphere, so in case of ties, a point is chosen uniformly on the sphere. Suppose $\rho(X_{i},X) = \rho(X_{i},X_{j})$, then $X$ and $X_{j}$ are chosen with equal probability, that is, $1/(\sharp\{S(X_{i},\rho(X_{i},X))\})$. Now, we present a formal and mathematical definition of the $k$-nearest neighbor rule. According to [@Devroye_Gyorfi_Lugosi_1996], the $k$-nearest neighbor classification rule belong to the family of plug-in rules, which are defined in the equation . Intuitively, it is clear that data points which are closer to $x$ will have more influence on $x$ rather than the data points lying far from $x$. This is the fundamental idea of the $k$-nearest neighbor rule, so it is convincing to assign high weights to the data points closer to $x$. Given a labeled sample, $\sigma_n = (x_1,y_1), \ldots,(x_n,y_n)$, let $\mathcal{N}_{k}(x)$ denote the set of $k$-nearest neighbors of $x$. Note that, $\sharp \mathcal{N}_{k}(x) = k$. Each data point in $\mathcal{N}_{k}(x)$ is assigned equal and non-zero weight, that is $1/k$. The $k$-nearest neighbor approximation for $\eta$ is, $$\begin{aligned} \label{eqn:knn_estimate} \eta_{n}(x) = \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{x_i \in \mathcal{N}_{k}(x)\}} y_i.\end{aligned}$$ Then, the $k$-nearest neighbor rule is defined as : $$\begin{aligned} \label{eq:kNN} g_n(x) = \begin{cases} 1 & \text{if } \ \eta_{n}(x) \geq 1/2, \\ 0 & \text{otherwise} \end{cases}\end{aligned}$$ In the equation , the $k$-nearest neighbor rule $g_n$ assigns label $1$ to a data point $x$, if the average weights of the $k$-nearest neighbors of $x$ having label 1 is greater than the average weights of the $k$-nearest neighbors with label 0. The $k$-nearest neighbor rule is the earliest example of a universally weakly consistent rule. There are two known methods to prove the universal weak consistency of the $k$-nearest neighbor rule: Stone’s theorem [@Stone_1977] and using the weak Lebesgue-Besicovitch differentiation property [@Cerou_Guyader_2006; @Devroye_1981]. We discuss the Stone’s theorem in detail in the following section as the Stone’s argument is our center of focus. Universal consistency {#sec:Universal consistency} --------------------- In this section, we make some mathematical preparations for the proof of Stone’s theorem. We start by proving some results that holds in any separable metric space such as Cover-Hart lemma and then using the argument of cones we prove the Stone’s lemma and Stone’s theorem in Euclidean spaces. ### Cover-Hart lemma and other results for general metric spaces {#subsec:Cover-Hart lemma and other results for general metric spaces} A separable metric space has some nice properties such as the support of a probability measure in a separable metric space has full measure. A set has full measure if its complement has zero measure. Let $S_{\nu}$ denotes the support of probability measure $\nu$. \[lem:full\_support\] Let $(\Omega,\rho)$ be a separable metric space and let $X$ be distributed according to a probability measure $\nu$ on $\Omega$. Then $\mathbb{P}(X \in S_{\nu}) = \nu(S_{\nu}) = 1$. Let $D$ be a countable dense subset of $\Omega$. For each $x$ in $S_{\nu}^{c} $, there exists $r >0$ such that $\nu(B(x,r)) =0$. 
Due to the denseness of $D$, there is an element $a$ in $D$ such that $\rho(x,a)< r/3$. We show that every element $z \in B(a,r/2)$ belongs to $B(x,r)$. By triangle’s inequality $\rho(x,z) \leq \rho(x,a) + \rho(a,z) < r/3 + r/2 = 5r/6 <r $. So, $B(a,r/2) \subseteq B(x,r)$, and as $\nu(B(x,r))$ is zero so $\nu(B(a,r/2)) =0$. Observe that $\rho(x,a) <r/3 < r/2$, so $x \in B(a,r/2)$. So, for every $x \in S_{\nu}^c$, there is an element $a$ such that $x$ belongs to the open ball $ B(a,r/2)$. We can cover $S_{\nu}^c$ by the countable union of $B(a,r/2),a \in D$ which have measure zero. The countable sub-additivity of $\nu$ implies $S_{\nu}^c$ has zero measure. In a separable metric space, the distance of a data point to its $k$-th nearest neighbor can be made small under appropriate values of $k,n$. The Lemma \[lem:cover\_hart\] stated below was originally proved by Cover and Hart for the $k$-nearest neighbor rule in a separable metric space with fixed $k$ (See pages 23, 26 of [@Cover_Hart_1967]), moreover, the result is true even if $k$ increases with $n$ but slower than $n$ such that $k/n$ converges to zero (See Lemma 5.1 in [@Devroye_Gyorfi_Lugosi_1996] for Euclidean spaces). However, the proof remains same for any separable metric space. The proof of Cover-Hart lemma for any separable metric space presented here has been adapted from Hatko’s masters thesis (see Lemma 2.3.4 of [@Hatko_2015]), where the proof has been done in separable C-inframetric space [^2]. \[lem:cover\_hart\] Let $(\Omega,\rho)$ be a separable metric space. Let $X,X_1,\ldots,X_n$ be an i.i.d. random sample distributed according to $\nu$. Let $X_{(k)}(X)$ denote the $k$-th nearest neighbor of $X$ among a sample of $n$ points. If $(k_n)$ is a sequence of values such that $ \lim_{n \rightarrow \infty}k_n/n \rightarrow 0$, then $$\begin{aligned} \mathbb{P}\bigg(\lim_{n \rightarrow \infty} \rho(X_{(k_n)}(X),X) & = 0 \bigg) = 1. \end{aligned}$$ If $x$ is in the support of the measure $\nu$, then for all $\varepsilon >0$, $\nu(B(x,\varepsilon)) > 0$. We note that the distance $\rho(X_{(k_n)}(x),x) > \varepsilon$ if and only if $ \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in B(x,\varepsilon)\}} \allowbreak < k_n$, which is equivalent to $$\begin{aligned} \label{eqn:k_n_zero} & \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in B(x,\varepsilon)\}} < \frac{k_n}{n}.\end{aligned}$$ We see that the right side of the equation goes to 0 as $k_n/n \rightarrow 0$, whereas the left side of the equation converges to $\nu(B(x,\varepsilon))$ almost surely by the strong law of large numbers. But $\nu(B(x,\varepsilon))$ is strictly positive as $x$ is in the support of $\nu$, therefore, $\rho(X_{(k_n)}(x),x)$ converges to 0 almost surely whenever $x \in S_{\nu}$ and $k_n/n \rightarrow 0$. If $(k_n)$ is a constant sequence, then $\rho(X_{(k_n)}(x),x)$ is a monotone non-increasing sequence in $n$. We will show that the sequence $\rho(X_{(k_n)}(X),X)$ converges in probability to 0, and hence will converge almost surely. Let $\varepsilon >0$. 
From the Lemma \[lem:full\_support\], we have $\mathbb{P}(X \in S_{\nu}) =1$, then $$\begin{aligned} \mathbb{P}( \rho(X_{(k_n)}(X),X) > \varepsilon) & = \mathbb{P}( X \in S_{\nu}) \mathbb{P}( \rho(X_{(k_n)}(X),X) > \varepsilon | X \in S_{\nu}) + \\ & \ \ \ \ \mathbb{P}( X \notin S_{\nu}) \mathbb{P}( \rho(X_{(k_n)}(X),X) > \varepsilon | X \notin S_{\nu}) \ \\ & = \mathbb{P}(\rho(X_{(k_n)}(X),X) > \varepsilon | X \in S_{\nu}) \ \\ & = \mathbb{E}\{ \mathbb{I}_{\{ \rho(X_{(k_n)}(X),X) > \varepsilon \}} | X \in S_{\nu}\},\end{aligned}$$ which converges to 0, by the Monotone Convergence Theorem. So, $\allowbreak \rho(X_{(k_n)}(X), \allowbreak X)$ converges to 0 in probability and therefore, converges to 0 almost surely. Now, suppose $(k_n)$ is a sequence increasing with $n$ but $k_n/n \rightarrow 0$. Let $X_{(k_n,n)}(X)$ denote the $k$-th nearest neighbor of $X$ among the sample $X_1,\ldots, \allowbreak X_n$. Since, $\sup_{m \geq n} \rho(X_{(k_m,m)}(x),x) \rightarrow 0$ almost surely whenever $x$ is in $S_{\nu}$ (as proved above), as a consequence we have $\sup_{m \geq n} \rho(X_{(k_m,m)}(x),x) \rightarrow 0$ almost surely as $n \rightarrow \infty$. Let $\varepsilon > 0$. As, $\rho(X_{(k_n,n)}(X),X) \leq \sup_{m \geq n} \rho(X_{(k_m,m)}(X),X) $, we have $$\begin{aligned} \mathbb{P}( \rho(X_{(k_n,n)}(X),X) > \varepsilon) & \leq \mathbb{P}\bigg(\sup_{m \geq n} \rho(X_{(k_m,m)}(X),X) > \varepsilon \bigg).\end{aligned}$$ So, it is enough to show that the sequence $\sup_{m \geq n} \rho(X_{(k_m,m)}(X),X)$ converges to 0 almost surely. We follow the similar argument as above by showing that $\sup_{m \geq n} \rho(X_{(k_m,m)}(X),X)$ converges to 0 in probability. As the sequence $\sup_{m \geq n} \rho(X_{(k_m,m)}(x),x)$ is monotonically non-increasing, this implies that $\sup_{m \geq n} \rho(X_{(k_m,m)}(X),X)$ converges to 0 almost surely as $n \rightarrow \infty$. We know that $\mathbb{P}( X \in S_{\nu}) =1$ from the Lemma \[lem:full\_support\], as a result $$\begin{aligned} \mathbb{P}\bigg( \sup_{m \geq n} \rho(X_{(k_m,m)}(X),X) > \varepsilon \bigg) & = \mathbb{P}\bigg(\sup_{m \geq n} \rho(X_{(k_m,m)}(X),X) > \varepsilon | X \in S_{\nu} \bigg) \ \\ & = \mathbb{E}\bigg\{ \mathbb{I}_{\{ \sup_{m \geq n} \rho(X_{(k_m,m)}(X),X) > \varepsilon \}} | X \in S_{\nu}\bigg\},\end{aligned}$$ The expectation of indicator functions of events that is, $\mathbb{E}\{ \mathbb{I}_{\{\sup_{m \geq n} \rho(X_{k_m},X) > \varepsilon \}} \allowbreak | X \in \text{supp}(\nu)\}$ goes to 0 by the Monotone Convergence Theorem. It has been shown in Theorem 5.2 of [@Devroye_Gyorfi_Lugosi_1996] that if $k$ is fixed but $n \rightarrow \infty$, then the expected error of the $k$-nearest neighbor rule converges to some constant which is greater than the Bayes error. So, for finite values of $k$, the $k$-nearest neighbor fails to be universally weakly consistent. Also, two of the conditions of Stone’s theorem are satisfied whenever $n,k \rightarrow \infty$ and $k/n \rightarrow 0$ in finite dimensional normed spaces. Therefore, to obtain universal consistency we always consider the limit that $k$ increases with $n$ but slowly, that is, $k/n \rightarrow 0$ as $k,n \rightarrow \infty$. The proof of the following result is based on [@Devroye_Gyorfi_Lugosi_1996], where it was proven in Euclidean settings, but the same proof works for every separable metric space. 
This result shows that the expected difference between the $k$-nearest neighbor approximation $\eta_n$ in the equation and another approximation $\tilde{\eta}_n$ in the equation decreases for large values of $k$. \[lem:third\_prop\_all\_metric\_sp\] Let $(\Omega,\rho)$ be a separable metric space and let $\nu$ be a probability measure on $\Omega$. Given a labeled sample $\sigma_n$, which takes values in $(\Omega \times \{0,1\})^n$, define a function $\tilde{\eta}_{n}$, $$\begin{aligned} \label{eqn:sec_app_knn} \tilde{\eta}_{n}(x) = \frac{1}{k}\sum_{i=1}^{n} \mathbb{I}_{\{x_i \in \mathcal{N}_{k}(x)\}}\eta(x_i).\end{aligned}$$ If $k \rightarrow \infty$, then $\mathbb{E}\{ (\eta_{n}(X) - \tilde{\eta}_n(X) )^2 \} \rightarrow 0$. By the definition of $\eta_n$ and $\tilde{\eta}_n$, $$\begin{aligned} & (\eta_{n}(X) - \tilde{\eta}_n(X))^2 \\ & = \bigg(\frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} Y_i - \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \eta(X_i) \bigg)^2 \nonumber \\ & = \bigg( \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (Y_i - \eta(X_i) ) \bigg)^2 \nonumber \\ & = \frac{1}{k^2} \sum_{i =1}^{n} \sum_{j=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \mathbb{I}_{\{X_j \in \mathcal{N}_{k}(X)\}} (Y_i - \eta(X_i) )(Y_j - \eta(X_j) ) \label{eqn:ind_expec}\end{aligned}$$ We know that $\mathbb{E}\{ \eta(X_i)\} = \mathbb{E}\{ \mathbb{E}\{Y_i| X = X_i\}\} = \mathbb{E}\{Y_i\}$. And if $i \neq j$, then $(X_i,Y_i)$ and $(X_j,Y_j)$ are independent of each other. So, $$\begin{aligned} & \mathbb{E}\bigg\{ \frac{1}{k^2} \sum_{i,j =1, i \neq j}^{n}\mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \mathbb{I}_{\{X_j \in \mathcal{N}_{k}(X)\}} (Y_i - \eta(X_i) )(Y_j - \eta(X_j) )\bigg\}= 0.\end{aligned}$$ Since $(Y_i - \eta(X_i))^2$ is bounded above by one, we have $$\begin{aligned} \mathbb{E}\{(\eta_{n}(X) - \tilde{\eta}_n(X))^2\} & = \mathbb{E}\bigg\{\frac{1}{k^2} \sum_{i =1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (Y_i - \eta(X_i) )^2 \bigg\} \ \\ & \leq \mathbb{E}\bigg\{ \frac{1}{k^2} \sum_{i =1}^{n}\mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \bigg\} \ \\ & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \frac{1}{k} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \bigg\} \ \\ & = 1/k,\end{aligned}$$ where the last equality is true because $\mathcal{N}_{k}(X)$ contains exactly $k$ data points from the sample. In case of distance ties, we break ties and choose $k$ data points for $\mathcal{N}_k$. In fact, we prove in the Lemma \[lem:eta\_tilde\_reg\] that the expected difference between $\tilde{\eta}_n$ and $\eta$ can be bounded above by the expected difference of $\eta$ and an uniformly continuous function with the help of Luzin’s theorem (see Theorem \[thm:Luzin\_theorem\]). A initial part of the following proof is based on [@Devroye_Gyorfi_Lugosi_1996], where it was drafted for Euclidean spaces. \[lem:eta\_tilde\_reg\] Let $\nu$ be a probability measure on $\Omega$, where $(\Omega,\rho)$ is a separable metric space. Let $\tilde{\eta}_n$ be same as defined in the equation of Lemma \[lem:third\_prop\_all\_metric\_sp\]. 
Let $\varepsilon > 0$; then there exist a set $K \subseteq \Omega$ and a uniformly continuous function $\eta^{*}$ on $\Omega$ such that $\mathbb{E}\{( \tilde{\eta}_n(X) - \eta(X))^2\}$ is less than or equal to $$\begin{aligned} \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_k(X)\}} ( \eta^*(X_i) - \eta(X_i) )^{2} \bigg| X\in K, X_i \in U \bigg\} + 12\varepsilon, \end{aligned}$$ where $U = \Omega \setminus K$. By Jensen’s inequality we have $$\begin{aligned} ( \tilde{\eta}_{n}(X) - \eta(X) )^2 & = \bigg( \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}\eta(X_i) - \eta(X) \bigg)^2 \ \\ & = \bigg( \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}( \eta(X_i) - \eta(X) ) \bigg)^2 \ \\ & \leq \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta(X_i) - \eta(X) )^2 .\end{aligned}$$ Then, we have $$\begin{aligned} \mathbb{E}\bigg\{ ( \tilde{\eta}_{n}(X) - \eta(X) )^2 \bigg\} & \leq \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta(X_i) - \eta(X) )^2 \bigg\}.\end{aligned}$$ Given $\varepsilon >0$, Luzin’s theorem (see chapter 7 of [@Folland_1999] or Theorem \[thm:Luzin\_theorem\]) implies that there exists a compact set $K \subseteq \Omega$ such that $\eta|_{K}$ is continuous and $\nu(\Omega \setminus K) < \varepsilon$. Let $U = \Omega \setminus K$. Due to Lemma \[lem:uni\_cts\_lip\_cts\] and Lemma \[lem:lip\_cts\_lip\_cts\], we can extend $\eta|_{K}$ to a uniformly continuous function $\eta^{*} : \Omega \rightarrow [0,1]$ such that $\eta^*(X) = \eta(X) $ for $X \in K$ and $(\eta^*(X) - \eta(X))^2 \leq 1 $ whenever $X \notin K$. Using the inequality $(a+b+c)^2 \leq 3 (a^2 + b^2 + c^2)$, where $a,b,c$ are real numbers, we have $$\begin{aligned} \label{eqn:three_bounds} & \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta(X_i) - \eta(X) )^2 \bigg\} \nonumber \\ & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}(\eta(X_i)-\eta^*(X_i) + \eta^*(X_i)-\eta^*(X) + \eta^*(X)-\eta(X) )^2 \bigg\} \nonumber \\ & \leq 3 \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}(\eta(X_i) - \eta^*(X_i) )^2 \bigg\} + 3 \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \nonumber \\ & \ \ \ \ (\eta^*(X_i) - \allowbreak \eta^*(X))^2 \bigg\} + 3 \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta^*(X) - \eta(X))^2 \bigg\}.\end{aligned}$$ We bound the three expressions on the right-hand side of the above inequality in the following way, - Third term of the equation \[eqn:three\_bounds\]: As $\eta^*$ and $\eta$ are equal on $K$ and $\frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} = 1$ after breaking the distance ties, we have $$\begin{aligned} & \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta^*(X) - \eta(X))^2 \bigg\} \\ & \leq \mathbb{E}\{(\eta^*(X) - \eta(X))^2 | X \in K\} + \mathbb{E}\{(\eta^*(X) - \eta(X))^2| X \in U \} \\ & \leq \nu(U) < \varepsilon.\end{aligned}$$ - Second term of the equation \[eqn:three\_bounds\]: We first divide the expectation into two disjoint cases: $\rho(X_{i},X) > \delta$ and $\rho(X_{i},X)\leq \delta$. As $\eta^{*}$ is a uniformly continuous function, given $\varepsilon >0$ there exists $\delta >0$ such that $ (\eta^*(X_i) - \eta^*(X))^2 \leq \varepsilon$ whenever $\rho(X,X_i) \leq \delta$.
For the second case, we use the Cover-Hart lemma (Lemma \[lem:cover\_hart\]). As $k/n$ goes to zero, by the Cover-Hart lemma the distance between $X$ and its $k$-th nearest neighbor $X_{(k)}$ tends to zero almost surely. In particular, $\mathbb{P}(\rho(X_{(k)},X)> \delta) \rightarrow 0$, so this probability is smaller than $\varepsilon$ for all $n$ large enough. Note that all of the first $k-1$ nearest neighbors are closer to $X$ than $X_{(k)}$ is. This implies that $\mathbb{E}\{(1/k)\sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}\mathbb{I}_{\{\rho(X_{i},X) > \delta\}} \} \leq \mathbb{P}(\rho(X_{(k)},X) > \delta) < \varepsilon$. So, we have $$\begin{aligned} & \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta^*(X_i) - \eta^*(X))^2 \bigg\} \ \\ & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}\mathbb{I}_{\{\rho(X,X_i) > \delta \}} (\eta^*(X_i) - \eta^*(X))^2 \bigg\} + \ \\ & \ \ \ \ \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \allowbreak \mathbb{I}_{\{\rho(X,X_i) \leq \delta \}}(\eta^*(X_i) - \eta^*(X))^2 \bigg\} \ \\ & \leq 2 \varepsilon.\end{aligned}$$ Now, we will analyze the first term of the equation \[eqn:three\_bounds\]. Let $Z_i$ denote the expression $ \mathbb{I}_{\{X_i \in \mathcal{N}_k(X)\}} ( \eta^*(X_i) - \eta(X_i) )^{2}$. We use $Z_i$ here for easy calculations. Then, the first term in the equation \[eqn:three\_bounds\] looks like $$\begin{aligned} \label{eqn:ab2} \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta(X_i) - \eta^*(X_i) )^2 \bigg\} & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_{i} \bigg\}. \end{aligned}$$ We further divide the right-hand side of the equation \[eqn:ab2\] into two cases, $$\begin{aligned} \label{eqn:ab1} \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg\} & \leq \ \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg| X \in U \bigg\} + \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg| X \in K \bigg\}.\end{aligned}$$ The value of $(1/k)\sum_{i =1}^{n} Z_i$ is at most one, so the first term on the right-hand side of the equation \[eqn:ab1\] is bounded above by $\nu(U) < \varepsilon$. For the second term of the equation \[eqn:ab1\], we again consider two disjoint cases, $$\begin{aligned} \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg| X \in K \bigg\} & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg| X,X_i \in K \bigg\} + \\ & \ \ \ \ \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg| X \in K, X_i \in U \bigg\}. \ \end{aligned}$$ If $X_i \in K$, then $\eta(X_i) = \eta^*(X_i)$ and so we have $$\begin{aligned} \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg| X,X_i \in K \bigg\} = 0. \end{aligned}$$ Now, summing all the bounds calculated above, we obtain $$\begin{aligned} & \mathbb{E}\{( \tilde{\eta}_n(X) - \eta(X))^2\} \\ & \leq \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} Z_i \bigg| X \in K, X_i \in U \bigg\} + 3(\varepsilon + 2\varepsilon + \varepsilon) \ \\ & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_k(X)\}} ( \eta^*(X_i) - \eta(X_i) )^{2} \bigg| X \in K, X_i \in U \bigg\} + 12\varepsilon.\end{aligned}$$ It is important to recall again that all the results in Subsection \[subsec:Cover-Hart lemma and other results for general metric spaces\] hold for any separable metric space. ### Stone’s theorem {#sec:Consistency using Stone's theorem} Charles Stone proved that the $k$-nearest neighbor rule is universally consistent in a Euclidean space. The result can be extended to finite dimensional normed spaces without much difficulty; see for example Duan’s thesis [@Duan_2014].
Here, we discuss the proof of Stone’s theorem in Euclidean spaces using the cones argument adapted from section 5.3 of [@Devroye_Gyorfi_Lugosi_1996]. Let $\theta \in (0,\pi/2)$. A *cone* $C(x,\theta)$ around an element $x \in \mathbb{R}^d$, of angle $\theta$, is the set of all $y$ from $\mathbb{R}^d$ such that the angle between $x$ and $y$ is less than or equal to $\theta$, that is, $$C(x,\theta) = \bigg\{y \in \mathbb{R}^{d}:\frac{\langle x,y \rangle}{||x||\ ||y||} \geq \cos(\theta)\bigg\},$$ where $\langle x,y \rangle = x^t.y$ is the dot product of $x$ and $y$. A cone of angle $\pi/6$ is shown in figure \[fig:cones\]. If $\theta \in (0,\pi/6]$, then the cone $C(x,\theta)$ has the following geometrical property: for $x_1,x_2 \in C(x,\theta)$, $$\begin{aligned} ||x_1|| < ||x_2|| \Rightarrow ||x_1-x_2|| < ||x_2||.\end{aligned}$$ If $x_1,x_2$ are in $C(x,\theta)$, then the angles of $x_1$ and $x_2$ with $x$ are each at most $\theta$. From the figure \[fig:cones\], we see that the angle between $x_1$ and $x_2$ is at most $2\theta$, $$\begin{aligned} \cos(2\theta) & = 2 \cos^2(\theta) -1 \ \\ & \leq 2 \frac{\langle x,x_1 \rangle}{||x||\ ||x_1||} \frac{\langle x,x_2 \rangle}{||x||\ ||x_2||} -1 \ \\ & = \frac{\langle x,x_1 \rangle \ \langle x,x_2 \rangle}{||x||^2\ ||x_1||\ ||x_2||} + \frac{\langle x,x_1 \rangle \ \langle x,x_2 \rangle}{||x||^2\ ||x_1||\ ||x_2||} -1 \ \\ & \leq \frac{\langle x_1,x_2 \rangle}{||x_1||\ ||x_2||},\end{aligned}$$ where the last inequality is due to the Cauchy-Schwarz inequality. We see that if $||x_1|| < ||x_2||$, then $\frac{||x_1||}{||x_2||} < 1$, which gives $\frac{||x_1||^2}{||x_2||^2} +1 < \frac{||x_1||}{||x_2||} +1$. If $\theta \leq \pi/6$, then $\cos(2\theta) \geq 1/2$, so we have $$\begin{aligned} ||x_1 -x_2||^2 & = \langle x_1-x_2,x_1-x_2\rangle \ \\ & = ||x_1||^2 + ||x_2||^2 - 2 \langle x_1, x_2\rangle \ \\ & = ||x_1||^2 + ||x_2||^2 - 2 \frac{\langle x_1,x_2\rangle}{||x_1||\ ||x_2||} ||x_1||\ ||x_2|| \\ & \leq ||x_1||^2 + ||x_2||^2 - 2 \cos(2 \theta) ||x_1||\ ||x_2|| \ \\ & \leq ||x_1||^2 + ||x_2||^2 - ||x_1||\ ||x_2|| \ \\ & = ||x_2||^2 \bigg( \frac{||x_1||^2}{||x_2||^2} + 1 - \frac{||x_1||}{||x_2||} \bigg) \ \\ & < ||x_2||^2.\end{aligned}$$

[Figure \[fig:cones\]: a cone of angle $\pi/6$ around an element $x$, together with two elements $x_1$ and $x_2$ lying inside it.]

The following covering lemma (see pp. 67-68, Lemma 5.5 of [@Devroye_Gyorfi_Lugosi_1996]) for $\mathbb{R}^d$ is true for any fixed positive value of $\theta < \pi/2$. \[lem:covering\_r\] Let $(\mathbb{R}^{d},||.||)$ be a Euclidean space. Let $\theta \in (0,\pi/2)$; then there exists a constant $\beta_d$, depending only on the dimension $d$ and the norm, such that there is a finite subset $\{z_1,\ldots,z_{\beta_{d}} \}$ of $\mathbb{R}^{d}$ for which the finite union of cones $C(z_i,\theta)$ covers $\mathbb{R}^d$. The constant $\beta_d$ is less than or equal to $ \bigg(1 + \frac{1}{\sin(\theta/2)}\bigg)^d -1$. Now we present the proof of the important geometric Stone’s lemma for Euclidean spaces using the beautiful argument of cones, as given in [@Devroye_Gyorfi_Lugosi_1996]. \[stone\_lemma\] Let $x,x_1,\ldots,x_n$ be a sample of $n+1$ points in a Euclidean space $(\mathbb{R}^d,||.||)$. Suppose that $x_i \neq x_j$ for $i \neq j$.
Then, $x$ can be among the $k$ nearest neighbors of at most $k \beta_d$ of the data points $x_i$, $$\begin{aligned} \sum_{i=1}^{n} \mathbb{I}_{\{ x \in \mathcal{N}_{k}(x_i)\}} \leq \ k \beta_{d}, \end{aligned}$$ where $\beta_{d}$ is the constant given in Lemma \[lem:covering\_r\]. By the Lemma \[lem:covering\_r\], we can cover $\mathbb{R}^{d}$ by $\beta_{d}$ cones $C(z_i,\theta)$ of angle $\theta \leq \pi/6$. Let $x + C(z_i,\theta)$ be the translate of $C(z_i,\theta)$; the translates still cover $\mathbb{R}^d$ by the translation invariance of the norm. So, $\mathbb{R}^d =\cup_{i=1}^{\beta_d} (x + C(z_i,\theta))$. See figure \[fig:stone\_lemma\]. Each data point $x_i$ lies in some set $(x +C(z_i,\theta))$. In each set $(x + C(z_i,\theta))$, we mark the $k$ data points that are closest to $x$ within that set. If there are fewer than $k$ points in a particular set, then we mark all the points in that set. From the figure \[fig:stone\_lemma\], we can see that the marked points form an insulating belt around $x$, separating the unmarked points from $x$. If a data point $x_j$ is unmarked, then its cone contains at least $k$ marked points $x_i$ and, after breaking distance ties by comparing indices, we can assume that $||x-x_i|| < ||x-x_j||$ for every marked point $x_i$ in that particular cone. By the geometrical property of cones, we have $||x_i-x_j|| < ||x-x_j||$, which means that $x_i$ is closer to $x_j$ than $x$ is. Thus there are at least $k$ data points in that cone which are closer to $x_j$ than $x$, and therefore, if $x_j$ is not marked, $x$ cannot be among the $k$ nearest neighbors of $x_j$. Hence, it suffices to count the marked points. There are $\beta_d$ sets and in each set there are at most $k$ marked points, so there are at most $k\beta_d$ marked points. $$\begin{aligned} \sum_{i=1}^{n} \mathbb{I}_{\{x \in \mathcal{N}_{k}(x_i)\}} & \leq \sharp\{x_i: x_i \text{ is marked}\} \ \\ & \leq k \beta_d.\end{aligned}$$

[Figure \[fig:stone\_lemma\]: the space around $x$ partitioned into cones; in each cone, the marked points closest to $x$ form a belt separating $x$ from the unmarked points.]
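Stone’s lemma lends itself to a quick numerical sanity check. The sketch below is hypothetical Python, not part of the proof: the dimension, the sample size, the value of $k$ and the uniform data are arbitrary choices made only to illustrate the quantity $\sum_{i=1}^{n} \mathbb{I}_{\{x \in \mathcal{N}_{k}(x_i)\}}$ that the lemma bounds by $k\beta_d$.

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def times_x_is_a_neighbor(x, points, k):
    """Count the sample points x_i that have x among their k nearest neighbors,
    where the neighbors of x_i are searched among the other sample points together with x."""
    count = 0
    for i, xi in enumerate(points):
        candidates = [x] + [p for j, p in enumerate(points) if j != i]
        candidates.sort(key=lambda p: dist(xi, p))
        if any(p is x for p in candidates[:k]):
            count += 1
    return count

random.seed(1)
d, n, k = 2, 500, 5
points = [tuple(random.random() for _ in range(d)) for _ in range(n)]
x = tuple(random.random() for _ in range(d))
count = times_x_is_a_neighbor(x, points, k)
print(f"x is among the {k} nearest neighbors of {count} of the {n} sample points;")
print("Stone's lemma bounds this count by k * beta_d, a constant independent of n.")
```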
We are now ready to present the classical Stone’s theorem. As a result of the geometric Stone’s lemma, the conditions of Stone’s theorem are satisfied, which establishes the universal weak consistency of the $k$-nearest neighbor rule in Euclidean spaces. \[thm:stone\_euclidean\] Let $g_n$ be the $k$-nearest neighbor rule on a Euclidean space $(\mathbb{R}^d,||.||)$. If $k/n \rightarrow 0$ as $n,k \rightarrow \infty$, then the expected error probability of $g_n$ converges to the Bayes error. In other words, the $k$-nearest neighbor rule is universally weakly consistent. From the Theorem \[lem:diff\_err\], it is sufficient to show that $$\begin{aligned} \mathbb{E}\{(\eta(X) - \eta_n(X))^2\} \rightarrow 0.\end{aligned}$$ Using the inequality $(a+b)^2 \leq 2a^2 + 2 b^2$, where $a,b$ are real numbers, $$\begin{aligned} \mathbb{E}\{(\eta_{n}(X) - \eta(X))^2\} & = \mathbb{E}\{(\eta_{n}(X) - \tilde{\eta}_n(X) + \tilde{\eta}_n(X) - \eta(X))^2\} \ \\ & \leq 2 \mathbb{E}\{(\eta_{n}(X) - \tilde{\eta}_n(X))^2\} + 2\mathbb{E}\{(\tilde{\eta}_n(X) - \eta(X))^2\}\end{aligned}$$ The Lemma \[lem:third\_prop\_all\_metric\_sp\] implies that the first term in the above inequality goes to zero when $k \rightarrow \infty$. From the Lemma \[lem:eta\_tilde\_reg\], we have an upper bound on the second term. Then we exchange $X$ and $X_i$, as $X,X_i$ are i.i.d., and use Stone’s lemma (Lemma \[stone\_lemma\]). We also use the fact that $( \eta^*(X) - \eta(X) )^{2}$ is bounded above by one, where $\eta^*$ is a uniformly continuous function as stated in Lemma \[lem:eta\_tilde\_reg\]. We have (by Lemma \[lem:eta\_tilde\_reg\]), $$\begin{aligned} & \mathbb{E}\{( \tilde{\eta}_n(X) - \eta(X))^2\} \\ & \leq \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_k(X)\}} ( \eta^*(X_i) - \eta(X_i) )^{2} \bigg| X\in K, X_i \in U \bigg\} + 12\varepsilon \ \\ & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \mathbb{I}_{\{X \in \mathcal{N}_k(X_i)\}} ( \eta^*(X) - \eta(X) )^{2} \bigg| X_i\in K, X \in U \bigg\} + 12\varepsilon \ \\ & \leq \beta_d \mathbb{E}\{( \eta^*(X) - \eta(X) )^{2} | X \in U \} + 12\varepsilon \ \\ & \leq \beta_d \nu(U) + 12\varepsilon \ \\ & \leq \beta_d \varepsilon + 12\varepsilon.\end{aligned}$$ We observe that Stone’s lemma (Lemma \[stone\_lemma\]) is the heart of Stone’s theorem. If Stone’s lemma holds in a general metric space, then Stone’s theorem holds and hence we obtain universal weak consistency in that space. However, the argument of cones which has been used to prove Stone’s lemma is restricted to finite dimensional Euclidean spaces. More generally, Stone’s lemma is known to be true for any finite dimensional normed space [@Duan_2014], but its proof remains limited to finite dimensional normed spaces. In the next chapter, we make an attempt to generalize Stone’s lemma to spaces with finite Nagata dimension. There is another method, worked out by C[é]{}rou and Guyader [@Cerou_Guyader_2006], to prove the universal weak consistency in more general metric spaces. They showed that the weak Lebesgue-Besicovitch differentiation property of a metric space is sufficient to guarantee the universal weak consistency. \[bdt\_knn\] Let $(\Omega,\rho)$ be a separable metric space.
Suppose that $\Omega$ satisfies the weak Lebesgue-Besicovitch differentiation property. Then, the $k$-nearest neighbor rule is universally weakly consistent.

The above theorem by C[é]{}rou and Guyader, together with the result of Preiss (Theorem \[thm:Priess\_theorem\]), implies that the $k$-nearest neighbor rule is universally weakly consistent in every complete separable metric space having sigma-finite metric dimension. The main aim of this thesis is to reprove this result directly, by the means of statistical learning theory, while imitating the proof by Stone as much as possible. We investigate to what extent the geometric Stone's lemma can be adapted to such metric spaces, and make a number of interesting observations.

An example of inconsistency {#sec:An example of inconsistency}
---------------------------

In this section, we first discuss the example by Davies in detail and then prove the inconsistency of the $k$-nearest neighbor rule on this example.

### Davies' example {#subsec:Davies' example}

Roy Davies, in his article [@Davies_1971], constructed an interesting example of a compact metric space and two different Borel measures, say $\mu_a, \mu_b$, whose values on all closed balls of radius strictly less than 1 are equal to each other, so that the Radon-Nikodym derivative $d\mu_a/d(\mu_a + \mu_b)$ fails the differentiation property. According to C[é]{}rou and Guyader [@Cerou_Guyader_2006], the universal weak consistency is unachievable if the differentiation property fails. It would be nice to give a complete proof of the differentiation property using the consistency argument; this is mentioned as future work in Chapter \[chap:Future Prospects\]. We now present the construction of Davies' example.

Let $(p_n)$ be a sequence of natural numbers, which will be chosen recursively later. For each $n$, define a set $M_n = \{(i_1,i_2) : 1 \leq i_1 \leq p_n, 0 \leq i_2 \leq p_n \}$ consisting of $p_n^2+ p_n$ pairs. An element of the form $(i_1,i_2)$ with $i_2 >0$ is called a peripheral element corresponding to its central element $(i_1,0)$. Let $G_n = (M_n,E_n)$ be a graph with $M_n$ and $E_n$ being the sets of vertices and edges, respectively. The edges are defined as follows: every central element is joined to every other central element, that is, there is an edge between $(i_1,0)$ and $(i_2,0)$ for $1 \leq i_1 \neq i_2 \leq p_n$, and every peripheral element $(i_1,i_2)$, $i_2>0$, is joined to its corresponding central element $(i_1,0)$. Based on these edges, we define the distance between any two elements. Let $\Omega = \prod_{n \in \mathbb{N}} M_n = \{ (x_n) = (x_1,x_2,\ldots): x_n = (i_1,i_2) \in M_n\}$. Two elements $(x_n), (y_n)$ of $\Omega$ are equal if and only if $x_n = y_n $ for every $n$; otherwise, let $m$ be the smallest index such that $x_m \neq y_m$. Define the distance between $x = (x_n)$ and $y = (y_n)$ as $$\begin{aligned} \rho( x,y) = \begin{cases} 0 & \text{ if } x_n = y_n, \forall n, \\ (1/2)^{m} & \text{if $x_m \neq y_m$ and there is an edge between } x_m \text{ and }y_m, \\ (1/2)^{m-1} & \text{otherwise.} \end{cases} \end{aligned}$$ The function $\rho$ is similar to the metric defined in Lemma \[eg:non\_archi\_metric\], and following the same argument we obtain that $\rho$ is a metric. In fact, $(\Omega,\rho)$ is a compact metric space of diameter equal to $1$.
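To make the construction concrete, here is a small Python sketch (illustrative only; the truncation to finitely many coordinates and the function names are our assumptions) of the edge relation of $G_n$ and of the distance $\rho$ on truncated sequences.

```python
def is_edge(u, v):
    """Edge relation of G_n on M_n: u, v are pairs (i1, i2).

    Two distinct central elements (i1, 0), (j1, 0) are joined; a peripheral
    element (i1, i2), i2 > 0, is joined only to its central element (i1, 0).
    """
    if u == v:
        return False
    if u[1] == 0 and v[1] == 0:
        return True                      # central -- central
    if u[1] == 0 and v[0] == u[0]:
        return True                      # central -- its peripheral
    if v[1] == 0 and u[0] == v[0]:
        return True                      # peripheral -- its central
    return False

def rho(x, y):
    """Davies' distance between two (truncated) sequences x, y with x_n in M_n."""
    for m, (xm, ym) in enumerate(zip(x, y), start=1):
        if xm != ym:
            return 0.5 ** m if is_edge(xm, ym) else 0.5 ** (m - 1)
    return 0.0

# two points that first differ in coordinate 2, at adjacent vertices:
x = [(1, 0), (1, 0), (3, 2)]
y = [(1, 0), (1, 5), (2, 0)]
print(rho(x, y))   # 0.25, since (1, 0) and (1, 5) are joined by an edge
```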
Let $\alpha_0 = 2/3$ and $\beta_0 = 1/3$ and now we will define the values of $p_n,\alpha_n,\beta_n$, recursively. Given $\alpha_0 > \beta_0 $, choose $p_1 > (\alpha_0/\beta_0)$ large enough that for some positive real numbers $\alpha_1 > \beta_1$, we have $ p_1^2 \alpha_1 + p_1 \beta_1 = 2/3$ and $ p_1^2 \beta_1 + p_1 \alpha_1 = 1/3$. Given $\alpha_{n-1} > \beta_{n-1}$, choose $p_n > (\alpha_{n-1}/\beta_{n-1})$ so large that there are positive real numbers that $$\begin{aligned} \label{eqn:alpha_n_n-1} p_n^2 \alpha_{n} + p_n \beta_{n} = \alpha_{n-1}, \ p_n^2 \beta_{n} + p_n \alpha_{n} = \beta_{n-1}.\end{aligned}$$ Let $\Omega[x_1,\ldots,x_n] = \{x_1\} \times \ldots \times \{x_n\} \times M_{n+1} \times M_{n+2} \times \ldots $ and $\Omega[\phi] = \Omega$. We define two functions based on the number of central elements in the set $\{x_1,\ldots,x_n\}$. If there are even number of central elements in $\{x_1,\ldots,x_n\}$, then $\mu_a$ assigns value $\beta_n$ and $\mu_b$ assigns value $\alpha_n$ and similarly, if there are odd number of central elements then the values are flipped. That is, $$\begin{aligned} \mu_a ( \Omega[x_1,\ldots,x_n] ) & = \begin{cases} \label{eqn:mu_a} \beta_n \ \ \text{if } \sharp \{i: 1 \leq i \leq n, x_i \text{ is central} \} = \text{ even}, \ \\ \alpha_n \ \ \text{otherwise} \end{cases} \end{aligned}$$ In the similar way, the function $\mu_b$ is defined, $$\begin{aligned} \mu_b (\Omega[x_1,\ldots,x_n] ) & = \begin{cases} \alpha_n \ \ \text{if } \sharp \{i: 1 \leq i \leq n, x_i \text{ is central} \} = \text{ even}, \ \\ \beta_n \ \ \text{otherwise} \end{cases} \label{eqn:mu_b} \end{aligned}$$ It follows from the above definitions that $\mu_a(\Omega) = \beta_0 = 1/3$ and $\mu_b(\Omega) = \alpha_0 = 2/3$. By the Carath[è]{}odary’s extension theorem and Dynkin’s $\pi-\lambda$ theorem, such measures exist and are unique if they are finitely additive. \[lem:countable\_add\] Let $l = \{a,b\}$, we have $$\begin{aligned} \mu_l (\Omega[x_1,\ldots,x_{n-1}] ) = \sum_{x_n \in M_n} \mu_l (\Omega[x_1,\ldots,x_n] )\end{aligned}$$ A set $\Omega[x_1,\ldots,x_{n-1}]$ is the finite union of disjoint sets $\Omega[x_1,\ldots,\allowbreak x_n]$ over all $x_n \in M_n$. Suppose there are odd number of central elements in the set $\{x_1,\ldots,x_{n-1}\}$, then we have $ \mu_a (\Omega[x_1,\ldots,x_{n-1}] ) = \alpha_{n-1}$. Also, $ \sum_{x_n \in M_n} \mu_l (\Omega[x_1,\ldots,x_n] )$ can be divided into two sums, when $x_n$ is a central element and when $x_n$ is a peripheral element. Observe that there are $p_n$ central and $p_n^2$ peripheral elements in $M_n$. So, we have $ \sum_{x_n \in M_n} \mu_l (\Omega[x_1,\ldots,x_n] ) \allowbreak = p_n \beta_{n} + p_n^2 \alpha_{n}$, which is equal to $\alpha_{n-1}$ by the equation . In the same way, by the equation we have that $\mu_b (\Omega[x_1,\ldots,x_{n-1}] ) = \beta_{n-1} = p_n^2 \beta_{n} + p_n \alpha_{n} = \sum_{x_n \in M_n} \mu_b (\Omega[x_1,\ldots,x_n] )$. The finite additivity of both functions, in the other case when there are even number of central elements in $\{x_1,\ldots,x_{n-1}\}$ follows likewise. We show in the following lemma that both the measures agree on all closed balls of radius strictly less than 1. The values of the measures $\mu_a$ and $\mu_b$ are equal on each closed ball in $\Omega$ of radius strictly less than 1. If $r =1$, then any closed ball in $\Omega$ of radius one is equal to the whole space $\Omega$, and we know that $\mu_a(\Omega) = 1/3 \neq \mu_b(\Omega)$. Let $x= (x_n) \in \Omega$. 
Since all the distances between the points of $\Omega$ are of the form $(1/2)^n$. Suppose $r = (1/2)^t < 1$ for a fixed integer $t \geq 1$. The closed ball $\bar{B}(x,r)$ will contain all those $y = (y_n)$ which are at distance at most $(1/2)^t$ to $x$. So, $\bar{B}(x,r)$ contain two types of elements: - All $y \in \Omega$ such that $\rho(x,y) = 2^{-t}$ belong to $\bar{B}(x,r)$. That is, $x_i = y_i $ for $1 \leq i \leq t-1$, $x_t \neq y_t$ and there is an edge between $x_t$ and $y_t$. - All $y \in \Omega$ such that $\rho(x,y) = 2^{-m} < 2^{-t}$ are also in $\bar{B}(x,r)$. This means that for every $m = t+1,t+2,\ldots$, we have $x_i = y_i $ for $1 \leq i \leq m-1$, $x_m \neq y_m$. In simpler words, $\bar{B}(x,r)$ contains all those $(y_n)$ for which, either there is an edge between $x_t$ and $y_t$, or $x_t = y_t$. We consider the following cases to compute the measure of a closed ball, (i) Let $x_t$ be a central element of $M_t$, say $x_t = (i_1,0)$. Then, the possible values of $y_t$ are denoted by $E = \{ (j,0), (i_1,j): 1 \leq j \leq p_t\}$, there are $p_t$ central and $p_t$ peripheral elements. Therefore, the closed ball can be written as, $\bar{B}(x,r) = \cup_{y_t \in E} \Omega[x_1,\ldots,x_{t-1},y_t]$. To evaluate the measure of $\bar{B}(x,r)$, we further have two cases, either number of central elements in $\{x_1,\ldots,x_{t-1}\}$ is odd, or even. We treat only the case having odd number of central elements (the other case follows similarly). If the number of central elements in $\{x_1,\ldots,x_{t-1}\}$ is odd, then by the Lemma \[lem:countable\_add\] and definition of $\mu_a,\mu_b$, we have $\mu_a(\bar{B}(x,r)) = p_t \alpha_t + p_t \beta_t = \mu_b(\bar{B}(x,r))$, because there are equal number of central and peripheral elements in $E$. (ii) Let $x_t$ be a peripheral element of $M_t$, say $x_t = (i,j)$. Then the possible values for $y_t$ is $\{(i,j), (i,0)\}$. This implies that if the number of central elements in $\{x_1,\ldots,x_{t-1}\}$ is odd or even, then $\mu_a(\bar{B}(x,r)) = \alpha_t + \beta_t = \mu_b(\bar{B}(x,r))$. In any case, the values of both measures are equal on every closed ball of radius $<1$. Now, we will show that the differentiation property fails. The differentiation property does not holds for $\mu_a + \mu_b$. As, $\mu_a$ is absolutely continuous with respect to $\mu_a + \mu_b$, by the Radon-Nikodym theorem, there is a measurable function $f_a: \Omega \rightarrow [0,\infty)$ such that for any measurable set $A \subseteq \Omega$, $$\begin{aligned} \mu_a(A) = \int_{A} f_a(x) (\mu_a+\mu_b)(dx). $$ Suppose that the differentiation theorem holds for $\mu_a+\mu_b$, then we have $$\begin{aligned} \lim_{r \rightarrow 0} \frac{1}{ (\mu_a + \mu_b) (\bar{B}(x,r) ) } \int_{\bar{B}(x,r)} f_a(y) (\mu_a + \mu_b)(dy) & = f_a(x) \ \ \text{for $(\mu_a+\mu_b)$ a.e. } x \end{aligned}$$ As, $ \mu_a(\bar{B}(x,r)) = \mu_b(\bar{B}(x,r)), r<1$, then $f_a(x) = 1/2$ for $(\mu_a+\mu_b)$-almost everywhere. Let $\theta = \{x \in \Omega: f_a(x) = 1/2 \}$, so $(\mu_a + \mu_b)(\theta^c) = 0$. Therefore, we have $$\begin{aligned} \mu_a(\Omega) & = \int_{\theta} f_a(x) (\mu_a+\mu_b)(dx) \ \\ & = \frac{1}{2}, \end{aligned}$$ which is a contradiction. Davies extended $\Omega$ to $\hat{\Omega}$ and also $\mu_a$ and $\mu_b$ to become probability measures on $\hat{\Omega}$, to conclude that there are two distinct probability measures $\mu_a^{'}$ and $\mu_b^{'}$ such that they have equal values on every closed ball in $\hat{\Omega}$ of radius $<1$. 
Although, the original space $\Omega$ and measures $\mu_a$ and $\mu_b$ are enough to show the inconsistency of the $k$-nearest neighbor rule. We explain in brief the further argument by Davies in the following paragraph. We can define $\hat{\Omega}$ as the union of disjoint copies of $\Omega$, such as $\hat{\Omega} = \Omega \times \{a\} \cup \Omega \times \{b\}$. The Borel measures $\mu_a^{'},\mu_b^{'}$ are defined as, for $\hat{A} \subseteq \hat{\Omega}$, $$\begin{aligned} \mu_a^{'}(\hat{A}) = \mu_a(A_1) + \mu_b(A_2), \ \mu_b^{'}(\hat{A}) = \mu_b(A_1) + \mu_a(A_2), \end{aligned}$$ where $\hat{A} = A_{1} \times \{a\} \cup A_2 \times \{b\}$. This implies that $\mu_a^{'}$ and $\mu_b^{'}$ are two distinct probability measures on $\hat{\Omega}$. The distance $\hat{\rho}$ between two elements, where each element is from $\Omega \times \{a\}$ and $\Omega \times \{b\}$, respectively, is equal 1. If both the points are from same space, say $\Omega \times \{a\}$, then the distance is given by the original metric $\rho$. The metric properties of $\rho$ implies that $\hat{\rho}$ is a metric and thus, $\hat{\Omega}$ is of diameter 1. It follows from the properties of $\mu_a, \mu_b$ that the values of $\mu_a^{'}$ and $\mu_b^{'}$ are equal on all closed balls of radius strictly less than 1. ### Inconsistency of the $k$-nearest neighbor rule on the Davies’ example {#subsec:inconsistent_Davies} Here, we show that the $k$-nearest neighbor is not weakly consistent without using the differentiation argument. It also give a hope that the consistency can be studied directly without involving any differentiation argument. The following lemma suggest that if the values of two measures are equal on every closed ball, then the values of both measures will be equal on every open ball and every sphere. \[lem:closed\_open\] For every $x \in \Omega$, we have $$\begin{aligned} \mu_a(B(x,r)) = \mu_b(B(x,r)) \text{, } \mu_a(S(x,r)) = \mu_b(S(x,r)),\end{aligned}$$ where $r \leq 1$ for open balls and $r <1$ for spheres. Let $(r_n)$ be an increasing sequence converging to $r $ such that $r_1 \leq r_{2} \leq \ldots < r \leq 1$ and $\cup_{n=1}^{\infty}\bar{B}(x,r_n) = B(x,r)$. Then, by the $\sigma$-additivity of $\mu_a$ and $\mu_b$, we have $$\begin{aligned} \mu_a(B(x,r)) & = \mu_a(\cup_{n=1}^{\infty}\bar{B}(x,r_n)) \ \\ & = \lim_{n \rightarrow \infty} \mu_{a}(\bar{B}(x,r_n)) \ \\ & = \lim_{n \rightarrow \infty} \mu_{b}(\bar{B}(x,r_n)) \ \\ & = \mu_b(\cup_{n=1}^{\infty}\bar{B}(x,r_n)) \ \\ & = \mu_b(B(x,r)).\end{aligned}$$ As, the values of measures are equal on every closed ball of radius $<1$ and on every open ball with radius $\leq 1$, it follows that, for $r <1$ $$\begin{aligned} \mu_a(S(x,r)) & = \mu_a(\bar{B}(x,r)) - \mu_a(B(x,r)) \ \\ & = \mu_b(\bar{B}(x,r)) - \mu_b(B(x,r)) \ \\ & = \mu_b(S(x,r)).\end{aligned}$$ Let us define two measures $\mu_0$ and $\mu_1$ such that, $$\begin{aligned} \label{eqn:mu_0_mu_1} \mu_0 = \frac{6}{5} \mu_a, \ \ \mu_1 = \frac{9}{10} \mu_b.\end{aligned}$$ Then, $\mu_0 (\Omega ) = (6/5)(1/3) = 0.4 $ and $\mu_1(\Omega) = 0.6$. For $r <1$, we know that $\mu_{a}(\bar{B}(x,r)) = \mu_{b}(\bar{B}(x,r))$ and so, by the equation $\eqref{eqn:mu_0_mu_1}$ we have, $\mu_0 (\bar{B}(x,r)) \allowbreak = (4/3) \mu_{1}(\bar{B}(x,r))$. Similarly, from the Lemma \[lem:closed\_open\] and the equation $\eqref{eqn:mu_0_mu_1}$ it follows that $\mu_0 (B(x,r)) = (4/3) \mu_1(B(x,r))$ and $\mu_0 (S(x,r)) = (4/3) \mu_1(S(x,r))$, whenever $r <1$. 
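The recursion defining $(\alpha_n, \beta_n)$ can be solved explicitly: adding and subtracting the two defining equations gives $\alpha_n + \beta_n = (\alpha_{n-1}+\beta_{n-1})/(p_n^2+p_n)$ and $\alpha_n - \beta_n = (\alpha_{n-1}-\beta_{n-1})/(p_n^2-p_n)$. The Python sketch below is a numerical sanity check only (the particular choice of $p_n$ is an assumption, not the one in the text): it computes $\alpha_n,\beta_n$ this way, verifies $\alpha_n > \beta_n > 0$ and the defining equations for a choice $p_n > \alpha_{n-1}/\beta_{n-1}$, and recovers the totals $\mu_0(\Omega) = 0.4$ and $\mu_1(\Omega) = 0.6$ used in the inconsistency argument below.

```python
from fractions import Fraction as F

def next_level(alpha, beta, p):
    """Solve p^2*a + p*b = alpha, p^2*b + p*a = beta for (a, b)."""
    s = (alpha + beta) / (p * p + p)     # a + b
    d = (alpha - beta) / (p * p - p)     # a - b
    return (s + d) / 2, (s - d) / 2

alpha, beta = F(2, 3), F(1, 3)
for n in range(1, 6):
    p = int(alpha / beta) + 2            # any p_n > alpha_{n-1} / beta_{n-1} works
    a, b = next_level(alpha, beta, p)
    assert a > b > 0
    assert p * p * a + p * b == alpha and p * p * b + p * a == beta
    alpha, beta = a, b

# the rescaled measures used in the inconsistency argument:
mu_a_total, mu_b_total = F(1, 3), F(2, 3)
mu0_total = F(6, 5) * mu_a_total         # = 2/5 = 0.4
mu1_total = F(9, 10) * mu_b_total        # = 3/5 = 0.6
print(mu0_total, mu1_total)
```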
Let $\mu = \mu_0 + \mu_1$; then $\mu(\Omega) =1$, so $\mu$ is a probability measure on $\Omega$. Since $\mu_1$ is absolutely continuous with respect to $\mu$, by the Radon-Nikodym theorem there exists a function $\eta$, the Radon-Nikodym derivative, such that for every measurable $A \subseteq \Omega$, $$\begin{aligned} \label{eqn:RND_mu_1} \mu_1(A) = \int_{A} \eta(x) \mu(dx). \end{aligned}$$ The distribution of the pair of random variables $(X,Y)$ can be described by $\mu$ and $\eta$: we take $\mu_1$ to be the distribution of points having label 1, that is, $\mu_1(A) = \mathbb{P}( X \in A, Y =1)$ for measurable $A \subseteq \Omega$. We claim that there exists a measurable set $M_1 \subseteq \Omega$ with $\mu(M_1) >0$ such that $\eta(x) \geq 0.6$ for all $x \in M_1$. Indeed, if there were no such set, then $\eta$ would be strictly less than 0.6 $\mu$-almost everywhere, contradicting $\mu_1(\Omega) = 0.6$ in view of the above relation between $\eta$ and $\mu_1$. Let $M$ be a measurable subset of $\Omega$ such that $M_1 \subseteq M$ and $\eta(x) \geq 0.5$ for every $x \in M$. The Bayes rule $g^{*}$ assigns label 1 to a data point $x$ if $\eta(x) \geq 0.5$ and label 0 otherwise; thus $g^{*}(x)$ equals 1 if $x \in M$ and 0 if $x \notin M$. The Bayes error is given by $$\begin{aligned} \ell^{*}_{\mu} & = \mathbb{P}(g^{*}(X) = 1, Y=0) + \mathbb{P}(g^{*}(X) = 0, Y=1) \\ & = \mathbb{P}(X \in M, Y=0) + \mathbb{P}(X \in M^{c}, Y=1) \\ & = \mu_0(M) + \mu_1(M^{c}).\end{aligned}$$ For $x \in M^{c}$, we have $\eta(x) < 0.5$. By equation \eqref{eqn:RND_mu_1}, $\mu_1(M^c) = \int_{M^{c}} \eta(x) \mu(dx) \leq 0.5\mu(M^c) = 0.5 ( \mu_0(M^c) + \mu_1(M^c) )$. So $\mu_1(M^c) \leq \mu_0(M^c)$, which implies $\ell^{*}_{\mu} \leq \mu_0(M) + \mu_0(M^c) = 0.4$.

We will show that the expected error of the $k$-nearest neighbor rule is at least 0.6 in the limit, and therefore strictly greater than the Bayes error. Let $D_n = ( (X_1,Y_1),\ldots,(X_n,Y_n) )$ be a random labeled sample of independently and identically distributed random pairs. Let $x \in \Omega$, $r<1$, and let $k', k''$ be two natural numbers such that $k' \leq k \leq k'' $. Let $T$ denote the event $$\begin{aligned} T = \bigg\{ \varepsilon_{kNN}(x) = r <1, \sharp\{ \bar{B}(x,r)\} = k'', \sharp\{ B(x,r)\} = k' \bigg\}, \end{aligned}$$ where $\varepsilon_{kNN}(x)$ is the $k$-nearest neighbor radius of $x$, i.e., the distance from $x$ to its $k$-th nearest sample point. Let $Y_1,\ldots, Y_k$ denote the labels of the $k$-nearest neighbors of $x$, denoted by $X_1,\ldots,X_k$. We claim that the conditional expectation of the average of the labels of the $k$-nearest neighbors of $x$, given the event $T$, is equal to 3/7. This can be seen by considering the following two cases:

(I) If $\mu(S(x,r)) =0$, then almost surely there is exactly one data point, $X_k$, on the sphere $S(x,r)$. In this case, $k'' = k$ and $k'=k-1$. For $i =1,\ldots,k$, given the event $T$, the points $X_i$ lie in the closed ball $\bar{B}(x,r)$, and thus $$\begin{aligned} \mathbb{E}\{ Y_{i}| T \} & = \mathbb{P}(Y_i=1 | T) \\ & = \frac{\mathbb{P}(X_i \in \bar{B}(x,r),Y_i =1)}{\mathbb{P}(X_i \in \bar{B}(x,r))} \\ & = \frac{\mu_1( \bar{B}(x,r) )}{\mu_0( \bar{B}(x,r) ) + \mu_1(\bar{B}(x,r))}\\ & = \frac{\mu_1( \bar{B}(x,r) )}{\frac{4}{3}\mu_1( \bar{B}(x,r) ) + \mu_1(\bar{B}(x,r))} \\ & = \frac{3}{7},\end{aligned}$$ where we used the equality $\mu_0( \bar{B}(x,r) ) = (4/3) \mu_1( \bar{B}(x,r) )$.
The expectation of the average of the labels of the $k$-nearest neighbors of $x$ is therefore $$\begin{aligned} \mathbb{E}\bigg\{ \frac{Y_{1} + \ldots + Y_{k}}{k} \bigg| T \bigg\} & = \frac{1}{k} \sum_{i=1}^{k} \mathbb{E}\{ Y_{i}| T \} \\ & = \frac{3}{7}.\end{aligned}$$

(II) If $\mu(S(x,r)) >0$, then there may be more than one data point on the sphere. Given the event $T$, out of the $k$ nearest neighbors of $x$, the $k'$ nearest neighbors $X_1,\ldots,X_{k'}$ belong to the open ball $B(x,r)$, and the remaining $k-k'$ nearest neighbors $X_{k'+1},\ldots,X_{k}$ come from the sphere, with the distance ties broken uniformly on the sphere. Therefore, for $i =1 ,\ldots, k'$, $$\begin{aligned} \mathbb{E}\{ Y_{i}| T \} & = \mathbb{P}(Y_i=1 | T) \\ & = \frac{\mathbb{P}(X_i \in B(x,r),Y_i =1)}{\mathbb{P}(X_i \in B(x,r))} \\ & = \frac{\mu_1( B(x,r) )}{\mu_0( B(x,r) ) + \mu_1(B(x,r))}\\ & = \frac{3}{7}.\end{aligned}$$ Since the distance ties are broken uniformly on the sphere, for $i= (k'+1), \ldots, k$, $$\begin{aligned} \mathbb{E}\{Y_{i}| T \} & = \mathbb{P}(Y_i=1 | T) \\ & = \frac{\mathbb{P}(X_i \in S(x,r),Y_i =1)}{\mathbb{P}(X_i \in S(x,r))} \\ & = \frac{\mu_1( S(x,r) )}{\mu_0( S(x,r) ) + \mu_1(S(x,r))}\\ & = \frac{3}{7}.\end{aligned}$$ We can write the expectation of the average of the $k$-nearest neighbor labels as $$\begin{aligned} \mathbb{E}\bigg\{ \frac{Y_{1} + \ldots + Y_{k}}{k} \bigg| T \bigg\} & = \frac{k'}{k} \mathbb{E}\bigg\{ \frac{Y_{1} + \ldots + Y_{k'}}{k'} \bigg| T \bigg\} + \frac{k - k'}{k} \mathbb{E}\bigg\{ \frac{Y_{k'+1} + \ldots + Y_{k}}{k- k'} \bigg| T \bigg\} \\ & = \frac{k'}{k} \frac{3}{7} + \frac{k -k'}{k} \frac{3}{7} \\ & = \frac{3}{7}. \end{aligned}$$ Therefore, in either case, given the event $T$ the conditional expectation of the average of the labels of the $k$-nearest neighbors of $x$ is $ \frac{3}{7} $. According to the Cover-Hart lemma, $\varepsilon_{kNN} \rightarrow 0$ almost surely whenever $n,k \rightarrow \infty$ and $k/n \rightarrow 0$. Since the average of the labels is bounded above by one, conditioning on $T$ and its complement gives $$\begin{aligned} \mathbb{E}\bigg\{ \frac{Y_{1} + \ldots + Y_{k}}{k} \bigg\} & = \mathbb{E}\bigg\{ \frac{Y_{1} + \ldots + Y_{k}}{k} \bigg| T \bigg\} \mathbb{P}(T) + \mathbb{E}\bigg\{ \frac{Y_{1} + \ldots + Y_{k}}{k} \bigg| T^c \bigg\} \mathbb{P}(T^c) \\ & \leq \frac{3}{7} + \mathbb{P}(T^c),\end{aligned}$$ where $T^{c}$ is the event $\{ \varepsilon_{kNN} = 1 \}$, whose probability converges to 0 in the limit. Hence, in the limit, the expected fraction of the $k$-nearest neighbors of $x$ carrying label 1 is strictly less than one half, and the $k$-nearest neighbor rule predicts label 0 as $n,k \rightarrow \infty$ and $k/n \rightarrow 0$. The misclassified points are then exactly the points with label 1, that is, $$\begin{aligned} \lim_{n,k \rightarrow \infty, k/n \rightarrow 0 } \mathbb{E}\{\ell_{\mu}(g_n) \} & = \mathbb{P}(X \in \Omega, Y =1) \\ & = \mu_1(\Omega) \\ & = 0.6 > \ell^{*}_{\mu}.\end{aligned}$$ Hence, the $k$-nearest neighbor rule is not consistent.
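For completeness, here is a small Python sketch of the tie-breaking convention used in the proof above and throughout the next chapter: the neighbors strictly inside the $k$-nearest neighbor radius are always taken, and the remaining slots are filled uniformly at random from the boundary sphere. This is only an illustration of the convention; the function and variable names are ours.

```python
import random

def knn_with_uniform_ties(x, sample, k, dist, rng=random):
    """Return indices of the k nearest neighbours of x in `sample`.

    Points strictly closer than the k-NN radius are always selected;
    distance ties on the boundary sphere are broken uniformly at random.
    """
    d = [dist(x, xi) for xi in sample]
    r = sorted(d)[k - 1]                  # k-NN radius eps_kNN(x)
    inside = [i for i in range(len(sample)) if d[i] < r]
    sphere = [i for i in range(len(sample)) if d[i] == r]
    chosen = rng.sample(sphere, k - len(inside))
    return inside + chosen

# toy check: all points at distance 1 from x, so every neighbour comes
# from the sphere and each point is equally likely to be picked.
sample = list(range(10))
dist = lambda x, y: 0.0 if x == y else 1.0
print(sorted(knn_with_uniform_ties("x", sample, 3, dist)))
```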
Consistency and Metric Dimension {#chap:Consistency in a metrically finite dimensional spaces}
================================

Here, we present our analysis of the consistency of the $k$-nearest neighbor rule in various metric spaces with finite Nagata dimension and finite metric dimension. We divide the chapter into two main sections, consistency with zero distance ties and consistency with distance ties, to illustrate how the solutions differ in the two cases. Starting with a no-distance-ties assumption, we prove a generalized version of the geometric Stone's lemma for spaces with finite Nagata dimension. Further, we present some counter-examples to understand the difficulty of generalizing Stone's lemma in the presence of distance ties. We then prove a different lemma to handle distance ties and, finally, we reprove the universal weak consistency of the $k$-nearest neighbor rule in a metrically sigma-finite dimensional space. We also establish the strong consistency in metrically finite dimensional spaces under the assumption of no distance ties.

Consistency without distance ties {#sec:Consistency without distance ties}
---------------------------------

The simplest case is the distance-tie-free setting. Assuming that there are no distance ties means that the probability of a data point $x_j$ belonging to a sphere $S(x,\rho(x,x_i))$, $i \neq j$, is zero; that is, the $\nu$-measure of the sphere $S(x,\rho(x,x_i))$ is zero. As $x_i$ can be any data point of the space, we assume, in effect, that the measure of every sphere is zero. We also sometimes say that $\nu$ has zero probability of ties. Given a sample of $n$ data points $\Sigma_n = \{x_1,\ldots,x_n\}$, define a function $\varepsilon_{kNN}:\Omega \rightarrow \mathbb{R}$ by $$\begin{aligned} \label{eqn:varknn_ball} \varepsilon_{kNN}(x) = \inf\{r >0: \sharp\{ \bar{B}(x,r) \}\geq k+1\},\end{aligned}$$ where the ball $\bar{B}(x,r)$ is treated as a ball in the finite set $\{x,x_1,\ldots,x_n\}$. The value $\varepsilon_{kNN}(x)$ is the minimum radius such that $\bar{B}(x,\varepsilon_{kNN}(x))$ contains at least $k+1$ sample points including the center $x$. Note that the open ball $B(x,\varepsilon_{kNN}(x))$ contains at most $k$ points and $B(x,\varepsilon_{kNN}(x)) \subseteq \mathcal{N}_{k}(x) \cup \{x\}$. The problematic case of distance ties occurs on the sphere $S(x,\varepsilon_{kNN}(x))$; we defer the discussion of tie-breaking to the next section. The assumption of zero distance ties implies that $\bar{B}(x,\varepsilon_{kNN}(x)) = \mathcal{N}_{k}(x) \cup \{x\}$, consisting of $x$ and the $k$-nearest neighbors of $x$ from $\Sigma_n$. We prove a generalized version of Stone's lemma for metric spaces with finite Nagata dimension in the following lemma.

\[lem:gsl\_sigma\] Let $(Q,\rho)$ be a separable metric space with Nagata dimension $\beta-1$ in $\Omega$. Let $\Sigma_n = \{x_1,x_2,\ldots,x_n \}$ be a finite sample in $\Omega$ and assume there are no distance ties. For $x \in \Omega$, we have $$\begin{aligned} \sum_{x_i \in \Sigma_n \cap Q} \mathbb{I}_{\{ x \in \mathcal{N}_{k}(x_i)\}} \leq (k+1)\beta.\end{aligned}$$

Define a function $F : \Omega \rightarrow \mathbb{R}$ by $$\begin{aligned} F(x)=\sum_{x_i \in \Sigma_n \cap Q} \mathbb{I}_{\{x \in \mathcal{N}_{k}(x_i)\}}.\end{aligned}$$ Let $x_0 \in \Omega$ and suppose there are $m$ points from the sample $\Sigma_n \cap Q$ which have $x_0$ as one of their $k$-nearest neighbors. Let this set be $\tilde{\Sigma} = \{x_1,x_2,\ldots,x_m\}$.
So, $F(x_0)=m$ and it is sufficient to show that $m \leq (k+1) \beta$. Consider a family of closed balls $\mathcal{F} = \{\bar{B}(x_i,\varepsilon_{kNN}(x_i)):x_i \in \tilde{\Sigma} \}$, then every closed ball contains $x_0$. Note that, the ball $\bar{B}(x_i,\varepsilon_{kNN}(x_i))$ in $\mathcal{F}$ is considered as a ball in $\tilde{\Sigma}$. By the Lemma \[lem:wcp\_nd\], $Q$ has ball-covering dimension $\beta$. There exists a subset $\mathcal{F}'$ of $\mathcal{F}$ such that the center of every ball in $\mathcal{F}$ belongs to some ball in $\mathcal{F}'$ and every $x$ in $\Omega$ belong to at most $\beta$ number of balls in $\mathcal{F}'$, $$\begin{aligned} \sum_{\bar{B} \in \mathcal{F}'} \mathbb{I}_{\{x \in \bar{B}\}} \leq \beta.\end{aligned}$$ We know that every closed ball in $\mathcal{F}$ has at most $k+1$ data points out of $m$ sample points in $\tilde{\Sigma}$. Extracting $\mathcal{F}'$ means dividing $m$ points in $p$ number of boxes such that every box has at most $k+1$ points. The minimum number of such boxes would be $m/(k+1)$, so the cardinality of $\mathcal{F}'$ is at least $m/(k+1)$. As, $x_0$ belongs to every ball in $\mathcal{F}'$, we have $$\begin{aligned} m/(k+1) \leq \sum_{\bar{B} \in \mathcal{F}'} \mathbb{I}_{\{ x_0 \in \bar{B}\}} = \sharp \mathcal{F}' \leq \beta.\end{aligned}$$ Therefore, $m \leq (k+1) \beta$. As a consequence of the Lemma \[lem:gsl\_sigma\], the $k$-nearest neighbor rule is weakly consistent for probability measures with zero probability of distance ties. \[thm:nagata\_sc\] Let $(Q,\rho)$ be a separable metric space having Nagata dimension $\beta-1$ in $\Omega$. Let $\nu$ be a probability measure on $Q$ and assume that $\nu$ has zero probability of distance ties. Then, the expected error probability of the $k$-nearest neighbor rule converges to Bayes error with respect to $\nu$. The proof of this theorem is similar to the proof of the Theorem \[thm:stone\_euclidean\], except that we use the Lemma \[lem:gsl\_sigma\] instead of the classical Stone’s lemma. Let $\varepsilon > 0$ be any real number, by Luzin’s theorem there is a set $K \subseteq Q$ such that $\nu(U = Q \setminus K) < \varepsilon$. By the Theorem \[lem:diff\_err\], Lemma \[lem:third\_prop\_all\_metric\_sp\] and Lemma \[lem:eta\_tilde\_reg\], everything boils down to showing that $$\mathbb{E}\bigg\{ \frac{1}{k}\sum_{i =1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_k(X)\}} ( \eta^*(X_i) - \eta(X_i) )^{2} \bigg| X\in K, X_i \in U \bigg\},$$ is bounded above by some constant (which is independent of $n$ and $k$) times $\varepsilon$. We first exchange $X$ and $X_i$ such that $$\begin{aligned} & \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_k(X)\}} ( \eta^*(X_i) - \eta(X_i) )^{2} \bigg| X\in K, X_i \in U \bigg\} \\ & = \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \mathbb{I}_{\{X \in \mathcal{N}_k(X_i)\}} ( \eta^*(X) - \eta(X) )^{2} \bigg| X_i\in K, X \in U \bigg\} \ \end{aligned}$$ Then, we apply the generalized Stone’s lemma \[lem:gsl\_sigma\] to bound the number of points having $X$ as their $k$-nearest neighbor. So, we have $$\begin{aligned} & \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i =1}^{n} \mathbb{I}_{\{X \in \mathcal{N}_k(X_i)\}} ( \eta^*(X) - \eta(X) )^{2} \bigg| X_i\in K, X \in U \bigg\} \ \\ & \leq \frac{k+1}{k}\beta \mathbb{E}\{( \eta^*(X) - \eta(X) )^{2} | X \in U \} \ \\ & \leq 2\beta \nu(U) < 2\beta \varepsilon,\end{aligned}$$ where we use the fact that $( \eta^*(X) - \eta(X) )^{2}$ is bounded above by one. 
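In the tie-free setting, the rule whose consistency was just established is particularly simple to state in code. The sketch below is an illustration under the no-distance-ties assumption (the names and the tie-at-$1/2$ convention, mirroring the Bayes rule, are ours): it computes the regression estimate $\eta_n(x) = \frac{1}{k}\sum_{X_i \in \mathcal{N}_k(x)} Y_i$ and the resulting majority-vote classifier for an arbitrary metric `dist`.

```python
def eta_n(x, data, k, dist):
    """k-NN regression estimate: average label of the k nearest neighbours.

    `data` is a list of (point, label) pairs with labels in {0, 1}; under
    the no-distance-ties assumption the k nearest neighbours are unique.
    """
    neighbours = sorted(data, key=lambda pl: dist(x, pl[0]))[:k]
    return sum(label for _, label in neighbours) / k

def g_n(x, data, k, dist):
    """k-NN classification rule: majority vote among the k nearest labels."""
    return 1 if eta_n(x, data, k, dist) >= 0.5 else 0

# toy example on the real line with the usual metric
data = [(0.1, 0), (0.2, 0), (0.9, 1), (1.1, 1), (1.3, 1)]
dist = lambda a, b: abs(a - b)
print(g_n(1.0, data, k=3, dist=dist))   # 1: the three closest points are labelled 1
```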
In the above theorem, we proved the weak consistency for spaces with finite Nagata dimension. Indeed, the result is true for metric spaces having sigma-finite Nagata dimension. Let $(\Omega,\rho)$ be a separable metric space which has sigma-finite Nagata dimension. Let $\nu$ be a probability measure on $\Omega$ and assume that $\nu$ has zero probability of distance ties. Then, the $k$-nearest neighbor rule is weakly consistent with respect to $\nu$. If $\Omega$ has sigma-finite Nagata dimension, then $\Omega$ can be written as union of increasing chain of $Q_i$ which have finite Nagata dimension $\beta_i -1$. For given $\varepsilon$, we choose $Q_l$ such that $\nu(Q_l) > 1 -\varepsilon/2 $ (this is possible because $Q_i$ is an increasing chain and $\nu$ is $\sigma$-additive). By the Luzin’s theorem, there is a set $K \subseteq Q_l$ such that $\nu(Q_l \setminus K) < \varepsilon/2$. Let $U = \Omega \setminus K$ and so $\nu(U) < \varepsilon$. As, the samples $X_i$ take values in $K \subseteq Q_l$, so we can apply the generalized Stone’s lemma \[lem:gsl\_sigma\]. The rest of the argument is exactly same as in the proof of the Theorem \[thm:nagata\_sc\]. Although we have shown the consistency in metric spaces with sigma-finite Nagata dimension but the assumption of zero distance ties is not an ideal assumption. To obtain the $k$-nearest neighbors set and prove the universal consistency using the Stone’s lemma, we need an appropriate tie-breaking method. The index-based tie-breaking method is a popular and simplest method to obtain the set $\mathcal{N}_k(x)$ for a data point $x$. Consistency with distance ties {#sec:Consistency with distance ties} ------------------------------ In the previous section, we established the consistency under the assumption of no distance ties but proving the consistency becomes much more complicated when the distance ties are considered. This is why in the literature, the consistency is first proved under the assumption of no ties and then the solutions are extended in the presence of distance ties. It is worth to examine the cases of distance ties and no distance ties separately. In this section, we start by showing that Stone’s lemma fails in the presence of distance ties and so we prove a different geometric lemma to handle distance ties which will help in establishing the universal weak consistency in metrically finite dimensional spaces. ### Stone’s lemma fails with distance ties! {#subsec:Stone's lemma fails with distance ties!} Now, we will present few examples in order to conclude two important things that the Stone’s lemma fails in the presence of distance ties and that the distance ties are unavoidable even in metric spaces with finite Nagata dimension. Let $(\Omega,\rho)$ be a separable metric space, where $\rho$ is the 0-1 metric. Suppose $n > k+1$ and let $\Sigma_n = \{x_1,x_2,\dots,x_n\}$ be a sample of $n$ data points in $\Omega$. Then, (i) *$\Sigma_n$ has Nagata dimension $0$*. For $x_1,x_2 \in \Sigma_n$ and $a \in \Omega$, if $x_1 =x_2$ then $\rho(x_1,x_2)$ is 0 and $\max\{\rho(a,x_1), \rho(a,x_2)\}$ is either 0 or 1. In the case $x_1 \neq x_2$, the distance $\rho(x_1,x_2)$ is equal to 1 and $\max\{\rho(a,x_1), \rho(a,x_2)\}$ is 1. This is true for an two points from $\Sigma_n$. So, $\Sigma_n$ has Nagata dimension $0$ in $\Omega$. (ii) *Stone’s lemma fails.* Let $x \in \Omega$ such $x \neq x_i$ for all $x_i \in \Sigma_n$. Then $d(x,x_i) = 1$ for every $x_i \in \Sigma_n$, which means $x$ is the $1$-nearest neighbor of every $x_i$ in $\Sigma_n$. 
$$\begin{aligned} \sum_{i=1}^{n} \mathbb{I}_{\{x \in \bar{B}(x_i,\varepsilon_{1NN}(x_i))\}} = n \nleq k+1. \end{aligned}$$

The above example depicts the need for a tie-breaking method. Suppose $\Sigma_n$ is an ordered set of $n$ data points in the above example. The $1$-nearest neighbor of $x_i$ is chosen from the ordered set $\{x_1,\ldots,x_{i-1},x,x_{i+1},\ldots,x_n\}$. With the index-based tie-breaker, $x_1$ is the only data point having $x$ as its $1$-nearest neighbor after breaking distance ties, so Stone's lemma holds in this particular case. In Euclidean spaces, breaking distance ties by comparing indices is sufficient, but this is not true for general metric spaces. In the following example, we show that tie-breaking by comparing indices is not the right method for obtaining a version of Stone's lemma. Indeed, the generalized Stone's lemma fails even if the distance ties are broken randomly and uniformly, which is more stable than the index-based tie-breaking method.

\[lem:nagata\_fails\_stone\_lemma\] Let $\alpha >0$ be any real number and let $x_1$ be a data point. Then, there is a finite sample $\Sigma_n = \{x_1,\ldots,x_n\}$ of size $n$ (depending on $\alpha$) with Nagata dimension 0, such that under the random uniform tie-breaking method, $$\mathbb{E}\bigg\{ \sum_{i=2}^{n} \mathbb{I}_{\{ x_1 \in \mathcal{N}_1(x_i)\}} \bigg\} > \alpha.$$

Choose a positive integer $n$ large enough that $\sum_{i=1}^{n-1} 1/i > \alpha$. We construct the sample $\Sigma_n$ recursively. Let $\Sigma_1 = \{ x_1\}$ and add $x_2$ to $\Sigma_1$ to form $\Sigma_2 = \Sigma_1 \cup \{x_2 \}$ such that $\rho(x_2,x_1) = 2$. Add $x_3$ to $\Sigma_2$ at distance $4$ from both $x_1$ and $x_2$, and set $\Sigma_3 = \Sigma_2 \cup \{ x_3\}$. At the $(n-1)$-th step, the set $\Sigma_{n-1}$ has already been defined, and we add $x_n$ to $\Sigma_{n-1}$ to obtain $\Sigma_n= \{x_1,\ldots,x_n\}$ such that $\rho(x_i,x_n) = 2^{n-1}$ for $1 \leq i \leq n-1$.

*(Figure: the recursive construction of $\Sigma_n$; each new point $x_n$ is placed at distance $2^{n-1}$ from all previously constructed points.)*

We now show that $\Sigma_n$ has Nagata dimension 0 for every $n \in \mathbb{N}$, by induction. For $n=1$, the set $\Sigma_1 = \{x_1\}$ is a singleton and hence has Nagata dimension 0.
Suppose $\Sigma_{n-1}$ has Nagata dimension 0, and let $\mathcal{F}$ be a family of closed balls whose centers are in $\Sigma_{n}$. We distinguish two cases.

(i) Suppose the point $x_{n}$ belongs to some ball $\bar{B}$ in $\mathcal{F}$. If $x_{n}$ is the center of $\bar{B}(x_n,r_n)$ and the radius satisfies $r_n \geq 2^{n-1}$, then the ball $\bar{B}(x_n,r_n)$ contains every point $x_i$ of $\Sigma_{n}$. Therefore, the subfamily $\mathcal{F}' \subseteq \mathcal{F}$ consisting of this single ball covers the center of every ball in $\mathcal{F}$, which is equivalent to saying that the set $\Sigma_{n}$ has Nagata dimension 0. In the other case, when the radius of $\bar{B}(x_n,r_n)$ is $< 2^{n-1}$, the ball $\bar{B}(x_n,r_n)$ contains only $\{x_n\}$. If there is a data point $x_i$ such that $x_n \in \bar{B}(x_i,r_i)$, then $r_i$ must be greater than or equal to $2^{n-1}$, so $\bar{B}(x_i,r_i)$ contains all of $\Sigma_n$ and the Nagata dimension is 0. Otherwise, let $\mathcal{G} = \{ \bar{B}: \bar{B} \in \mathcal{F}, x_{n} \notin \bar{B} \}$, which is a family of balls in $\Sigma_{n-1}$. By the induction hypothesis, there is a subfamily $\mathcal{G}'$ of $\mathcal{G}$ with multiplicity one. The family $\mathcal{F}' = \mathcal{G}' \cup \{\bar{B}(x_n,r_n)\}$, $r_n < 2^{n-1}$, then covers the center of every ball in $\mathcal{F}$ and has multiplicity one, so $\Sigma_{n}$ has Nagata dimension 0. The same argument covers the case when $x_n$ belongs to some $\bar{B}$ but is not its center.

(ii) Suppose $x_{n}$ does not belong to any ball in $\mathcal{F}$. This means that the radius of every ball in $\mathcal{F}$ is strictly less than $2^{n-1}$ and $\mathcal{F}$ is a family of balls in $\Sigma_{n-1}$. By the induction hypothesis, the Nagata dimension of $\Sigma_{n}$ is 0.

We are interested in the expected number of data points from $\Sigma_n \setminus \{x_1\}$ having $x_1$ as their nearest neighbor, where distance ties are broken uniformly, as follows. The data point $x_1$ is the only nearest neighbor of $x_2$, so $x_1$ is chosen as the nearest neighbor of $x_2$ with probability 1. For $x_3$, the two data points $x_1$ and $x_2$ are its closest points and are at the same distance from $x_3$, so the probability of $x_1$ being chosen as the nearest neighbor of $x_3$ is 1/2. Similarly, for $x_n$ there are $n-1$ equidistant points $\{x_1,\ldots,x_{n-1}\}$ which are candidates for the nearest neighbor of $x_n$, and the probability of $x_1$ being chosen as the nearest neighbor of $x_n$ is $1/(n-1)$. $$\begin{aligned} \mathbb{E}\bigg\{ \sum_{i=2}^{n} \mathbb{I}_{\{ x_1 \in \mathcal{N}_{1}(x_i) \} } \bigg\} = & \ \sum_{i=2}^{n} \mathbb{E}\bigg\{ \mathbb{I}_{\{ x_1 \in \mathcal{N}_{1}(x_i)\}} \bigg\} \\ = & \ \sum_{i=1}^{n-1} \frac{1}{i} \\ > & \ \alpha.\end{aligned}$$

Lemma \[lem:nagata\_fails\_stone\_lemma\] shows that Stone's lemma fails for spaces with finite Nagata dimension even if the distance ties are broken uniformly at random. So there seems to be no hope of generalizing Stone's lemma in the presence of distance ties. Indeed, it is impossible to avoid distance ties: the following example demonstrates that a metric space with finite Nagata dimension can have many essential ties with high probability. A distance tie becomes an essential tie if it occurs at non-zero distance.

\[eg:ties\_high\_prob\] Let $0<\delta<1$ be any real number.
There is a compact metric space with Nagata dimension zero (a Cantor set with a suitable metric) and a sequence $(n_k)$, $n_{k} \rightarrow \infty$, $k/n_{k} \rightarrow 0$, such that for each $k$, with probability $\geq 1- \delta$, a randomly chosen $n_{k+1}$-sample $X_1,X_2,\ldots,X_{n_{k+1}}$ has the property that $X_1$ has essential ties for its $k$-nearest neighbors among $X_2,X_3,\ldots,X_{n_k}$. In simpler words, with probability at least $1- \delta$, the point $X_1$ is at the same distance from all of its $n_k-1$ nearest neighbors $\{X_{2},\ldots, X_{n_k} \}$.

*Construction*: Let $(\delta_i)_{i=1}^{\infty}$, $\delta_i > 0$, be a sequence of positive reals such that $2 \sum_{i=1}^{\infty} \delta_{i} < \delta$. We construct two sequences $(N_k)$ and $(n_k)$ of natural numbers recursively. Let $\mu_{k}$ be the uniform measure on $[N_k]$, where $[N_k]$ denotes the set $\{ 1,2,\ldots,N_k\}$.

Step 1: Let $n_1 > 1$ be any natural number. Choose $N_1$ so large that if we take a random $n_1$-sequence whose elements are chosen independently and uniformly from $[N_1]$, then with probability $ > 1-\delta_1$ all elements of the $n_1$-sequence are pairwise different.

Step 2: Choose $n_2$ so large that if we take a random $n_2$-sequence with elements independent and uniform in $[N_1]$, then with probability $> 1-\delta_1$ every element of $[N_1]$ is chosen at least $n_1$ times.

Step 3: Next, choose $N_2$ so large that if we take a random $n_2$-sequence whose elements are chosen independently and uniformly in $[N_2]$, then with probability $ > 1-\delta_2$ all the elements of the $n_2$-sequence are pairwise different.

Continuing these steps gives us $(N_k)$ and $(n_k)$ with $n_k, N_k \uparrow \infty$. Let us calculate the probability that both desired properties hold for every $k$. Let $A_k$ denote the event that a random $n_k$-sequence, with elements coming independently and uniformly from $[N_k]$, has pairwise different elements. Let $B_{k}$ denote the event that in a random $n_{k+1}$-sequence whose elements come independently and uniformly from $[N_k]$, every element of $[N_k]$ repeats at least $n_k$ times. By the construction, $\mathbb{P}(A_k) > 1 - \delta_k$ and $\mathbb{P}(B_k) > 1 - \delta_k$. Let $D_k = A_{k} \cap B_k$ be the event that both properties hold at the $k$-th recursive step, so $\mathbb{P}(D_k) > 1 - 2 \delta_k$. We need $\mathbb{P}( \cap_{k=1}^{\infty} D_k)$, the probability that both properties hold simultaneously for every $k$. Using the union bound, $$\begin{aligned} \mathbb{P}( \cap_{k=1}^{\infty} D_k) = & \ 1 - \mathbb{P}( \cup_{k=1}^{\infty} D_k^c) \\ \geq & \ 1 - \sum_{k=1}^{\infty} \mathbb{P}( D_k^c) \\ \geq & \ 1 - 2 \sum_{k=1}^{\infty} \delta_k \\ > & \ 1 - \delta.\end{aligned}$$ Set $\Omega = \prod_{k=1}^{\infty} [N_k]$ and define a metric $\rho$ on $\Omega$ by, for any $\sigma,\tau \in \Omega$, $$\begin{aligned} \rho(\sigma ,\tau) = \ \begin{cases} 0 & \text{if $\sigma = \tau$,} \\ 2^{- \min\{i : \ \sigma_i \neq \tau_i \} } & \text{otherwise.} \end{cases}\end{aligned}$$ By Lemma \[eg:non\_archi\_metric\], $\rho$ is a non-Archimedean metric, and hence the metric space $(\Omega,\rho)$ has Nagata dimension zero (from Proposition \[cor:archi\_Nagata\]). Note that the topology on $\Omega$ is the product topology, so $\Omega$ is a Cantor space.
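The recursive choices of $N_k$ and $n_k$ can be made completely explicit. The Python sketch below uses a crude sufficient condition of our own (a union bound for pairwise distinctness and a Chernoff-plus-union bound for coverage), not the choices made in the construction itself; it merely illustrates that admissible values exist and how quickly they grow.

```python
import math

def N_for_distinct(n, delta):
    """Smallest N (from a union bound) so that n uniform draws from [N]
    are pairwise distinct with probability > 1 - delta."""
    return math.floor(n * (n - 1) / (2 * delta)) + 1

def n_for_coverage(N, n_prev, delta):
    """A sufficient n (Chernoff + union bound) so that among n uniform draws
    from [N], every symbol appears at least n_prev times w.p. > 1 - delta."""
    return max(2 * N * n_prev, math.ceil(8 * N * math.log(N / delta)) + 1)

delta_1, delta_2 = 0.01, 0.005           # any summable choice with 2 * sum < delta
n_1 = 10
N_1 = N_for_distinct(n_1, delta_1)
n_2 = n_for_coverage(N_1, n_1, delta_1)
N_2 = N_for_distinct(n_2, delta_2)
print(N_1, n_2, N_2)                     # the sequences grow extremely fast
```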
Let $\mu$ be the product of the uniform measures $\mu_k$ on $[N_k]$, so that for any measurable set $S \subseteq \Omega$ of the form $S = \prod_{i=1}^{\infty}S_i$, the measure is $\mu(S) = \prod_{i=1}^{\infty} \mu_{i}(S_i)$. The measure $\mu$ is non-atomic, and hence every distance tie will be essential. Let $k$ be any natural number. We take a random $n_{k+1}$-sample $X_1,\ldots,X_{n_k},\ldots,X_{n_{k +1}}$ from the distribution $\mu$ on $\Omega$. The number $n_{k+1}$ was chosen so large that if we form a word of length $n_{k+1}$ whose letters come from $[N_k]$, then with high probability every element of $[N_k]$ occurs at least $n_k$ times. Write $X_i = (x_{i1},x_{i2},\ldots)$ for $1 \leq i \leq n_{k+1}$. Rearranging the sample points, we may assume that the $k$-th coordinates $x_{1k},\ldots,x_{n_{k}k}$ are all equal, with probability $> 1-\delta_k$. By the construction, $[N_1] \subseteq [N_2] \subseteq \ldots \subseteq [N_{k}]$, and therefore at least $n_k$ sample points agree in the $i$-th coordinate for each $1 \leq i \leq k$. On the other hand, $N_{k+1}$ was chosen so large that if we choose letters from $[N_{k+1}]$ uniformly at random to make a word of length $n_{k+1}$, then all $n_{k+1}$ letters are different. Hence the probability that the first $k$ coordinates of the $n_{k}$ points $X_1,\ldots,X_{n_k}$ agree while their $(k+1)$-th coordinates are all different is $> (1-\delta_k)^2 > 1- \delta$. On this event, the distance from $X_1$ to each of the other $n_{k} -1$ points is $\rho(X_1,X_i) = 2^{- (k+1)}$, so there are $n_{k}-1$ distance ties with probability $> 1-\delta$.

### Consistency in metrically sigma-finite dimensional spaces {#subsec:Consistency in metrically sigma-finite dimensional space}

We can infer from Lemma \[lem:nagata\_fails\_stone\_lemma\] and Example \[eg:ties\_high\_prob\] that the tie-breaking methods used in Euclidean spaces may not yield similar results in general metric spaces with finite Nagata dimension, and that it is impossible to find an analogue of Stone's lemma in such spaces in the presence of distance ties. Here, we present a key lemma that provides a way to deal with distance ties. Note that we prove the results in this subsection for metrically finite dimensional spaces, but the results also hold for metric spaces with finite Nagata dimension.

\[lem:assouad\_type\_sigma\] Let $(\Omega,\rho)$ be a metric space and let $Q \subseteq \Omega$ have metric dimension $\beta$ on scale $s$ in $\Omega$. Let $\Sigma_n = \{ x_1,\ldots,x_n\}$ be a finite sample in $\Omega$ and let $\tilde{\Sigma}$ be any sub-sample of $\Sigma_n$ with cardinality $m$. For $\alpha \in (0,1)$, let $T$ be the set of all $x_i$ in $\Sigma_n$ belonging to $Q$ whose $k$-nearest neighbor radius $\varepsilon_{kNN}(x_i)$ is strictly less than $s$ and for which the fraction of points of $\bar{B}(x_i,\varepsilon_{kNN}(x_i))$ coming from $\tilde{\Sigma}$ is strictly greater than $\alpha$, $$\begin{aligned} T = \bigg\{ x_i \in \Sigma_n \cap Q: \varepsilon_{kNN}(x_i) < s, \frac{\sharp \{ \bar{B}(x_i,\varepsilon_{kNN}(x_i)) \cap \tilde{\Sigma} \}}{\sharp \bar{B}(x_i,\varepsilon_{kNN}(x_i))} > \alpha \bigg\}.\end{aligned}$$ Then, the cardinality of $T$ is at most $\beta m/ \alpha$.
Let $\mathcal{F}$ be a family of closed balls with centers in $T$, $$\mathcal{F} = \{ \bar{B}_i = \bar{B}(x_i,\varepsilon_{kNN}(x_i)) : x_i \in T \}.$$ As, $Q$ has finite metric dimension, there exists a subfamily $\mathcal{F}'$ such that every $x_i$ in $T$ belongs to some ball in $\mathcal{F}'$ and any $x \in \Omega$ has multiplicity $\beta$ in $\mathcal{F}'$. By the definition of $T$, for every $x_i$ in $T$ we have, $$\begin{aligned} \sharp \bar{B}(x_i,\varepsilon_{kNN}(x_i)) & \leq \frac{1}{\alpha} \sharp \{\bar{B}(x_i,\varepsilon_{kNN}(x_i))\cap \tilde{\Sigma}\} \\end{aligned}$$ Every point of $\tilde{\Sigma}$ can belong to at most $\beta$ balls in $\mathcal{F}'$ and so the total number of points from $\tilde{\Sigma}$ in $\mathcal{F}'$ can not be more than $\beta$ times the cardinality of $\tilde{\Sigma}$. Therefore, we have $$\begin{aligned} \sharp T & \ \leq \sum_{\bar{B}_i \in \mathcal{F}'}\sharp \bar{B}(x_i,\varepsilon_{kNN}(x_i)) \ \\ & \ \leq \frac{1}{\alpha} \sum_{\bar{B}_i \in \mathcal{F}'} \sharp \{\bar{B}(x_i,\varepsilon_{kNN}(x_i))\cap \tilde{\Sigma} \} \ \\ & \ \leq \frac{1}{\alpha} \beta m.\end{aligned}$$ \[rem:stone\_ties\_open\] As is seen from the proof, the Lemma \[lem:assouad\_type\_sigma\] holds under more general assumptions: (i) The result holds for closed balls of any radius strictly less than $s$, not necessarily only $\varepsilon_{kNN}$. (ii) The proof does not use the property of the balls being closed, so the result do hold for families of open balls with radius $< s$. [[ 9999 ]{}]{} We will need the following result to derive a stronger result from the Lemma \[lem:assouad\_type\_sigma\] for the $k$-nearest neighbors sets. \[lem:alpha\_1\] Let $\alpha_1,\alpha_2,\alpha$ be non-negative real numbers. Let $t_1,t_2,t_3 \geq 0$ be such that $t_3 \leq t_2$, $t_1 + t_2 = 1$ and $\alpha_1 t_1 + \alpha_2 t_2 \leq \alpha$. Assume that $\alpha_1 \leq \alpha$, then $$\begin{aligned} \frac{\alpha_1 t_1 + \alpha_2 t_3}{t_1 + t_3} \leq \alpha.\end{aligned}$$ If $\alpha_2 \leq \alpha$, then it is trivial. If $\alpha_2 > \alpha$, $$\begin{aligned} \alpha_1 t_1 + \alpha_2 t_3 & \leq \alpha - \alpha_2 t_2 + \alpha_2 t_3 \ \\ & = \alpha - (1 - t_1) \alpha_2 + \alpha_2 t_3 \ \\ & \leq (t_1 + t_3) \alpha.\end{aligned}$$ The following lemma shows that, if the fraction of points coming from a sub-sample in a closed as well as open ball at $x$ of radius $\varepsilon_{kNN}(x)$ is bounded above by some constant then, the fraction of $k$-nearest neighbors chosen from the sub-sample is also bounded by the same constant. \[lem:ball\_to\_NN\] Let $\Sigma_n = \{x_1,\ldots,x_n\}$ be a sample of $n$ points in any metric space $(\Omega,\rho)$. Let $\tilde{\Sigma}$ be a subset of $\Sigma_n$. Let $\alpha >0$ and let $x \in \Omega$. Suppose that the fraction of points from $\tilde{\Sigma}$, both in the closed ball $\bar{B}(x,\varepsilon_{kNN}(x))$ and in the open ball $B(x,\varepsilon_{kNN}(x))$ is at most $\alpha$, $$\begin{aligned} \frac{\sharp \{ \bar{B}(x,\varepsilon_{kNN}(x)) \cap \tilde{\Sigma} \}}{\sharp \bar{B}(x,\varepsilon_{kNN}(x))}, \frac{\sharp \{ B(x,\varepsilon_{kNN}(x)) \cap \tilde{\Sigma} \}}{\sharp B(x,\varepsilon_{kNN}(x))} \leq \alpha.\end{aligned}$$ The distance ties between the $k$-nearest neighbors of $x$ are broken randomly and uniformly. 
Then, the fraction of points from $\tilde{\Sigma}$ in the $k$-nearest neighbor set of $x$ is at most $\alpha$, that is, $$\begin{aligned} \frac{ \sharp\{\mathcal{N}_{k}(x) \cap \tilde{\Sigma} \}}{\sharp\{\mathcal{N}_k(x)\}} \leq \alpha.\end{aligned}$$

In Lemma \[lem:alpha\_1\], let $\alpha_1,\alpha_2$ be the fractions of points from $\tilde{\Sigma}$ in the open ball and in the sphere at $x$, respectively, and let $t_1$ be the fraction of points of the closed ball at $x$ lying in the open ball at $x$, that is, $$\begin{aligned} & \alpha_1 = \frac{\sharp \{ B(x,\varepsilon_{kNN}(x)) \cap \tilde{\Sigma} \}}{\sharp B(x,\varepsilon_{kNN}(x))}, \quad \alpha_2 = \frac{\sharp \{ S(x,\varepsilon_{kNN}(x)) \cap \tilde{\Sigma} \}}{\sharp S(x,\varepsilon_{kNN}(x))}, \\ & \ t_1 = \frac{\sharp \{ B(x,\varepsilon_{kNN}(x))\}}{\sharp \bar{B}(x,\varepsilon_{kNN}(x))}. \end{aligned}$$ Accordingly, $t_2$ is the fraction of points of the closed ball $\bar{B}(x, \varepsilon_{kNN}(x))$ lying on the sphere at $x$. By our assumption, $\alpha_1 \leq \alpha$ and $\alpha_1 t_1 + \alpha_2 t_2$, which equals the fraction of points from $\tilde{\Sigma}$ in the closed ball at $x$, is less than or equal to $\alpha$. Since $B(x,\varepsilon_{kNN}(x))$ contains at most $k$ points including $x$, the remaining $k$-nearest neighbors of $x$ are chosen uniformly from the sphere $ S(x,\varepsilon_{kNN}(x))$. Let $\nu_{S_{x}}$ be the uniform measure on $S_x = S(x,\varepsilon_{kNN}(x))$; then for any $A \subseteq \Sigma_n$, $$\begin{aligned} \nu_{S_{x}}(A) = \frac{ \sharp\{ S(x,\varepsilon_{kNN}(x)) \cap A\}}{ \sharp\{S(x,\varepsilon_{kNN}(x))\}}.\end{aligned}$$ As the event of choosing the $k$-nearest neighbors of $x$ is independent of $\tilde{\Sigma}$, we have $$\begin{aligned} \nu_{S_{x}}(\mathcal{N}_{k}(x) \cap \tilde{\Sigma}) = \nu_{S_{x}}(\mathcal{N}_{k}(x)) \nu_{S_{x}}(\tilde{\Sigma}).\end{aligned}$$ Note that $\alpha_2$ is the measure $\nu_{S_{x}}(\tilde{\Sigma})$, which is equal to $$\begin{aligned} \nu_{S_{x}}(\tilde{\Sigma}) & = \frac{\nu_{S_{x}}(\mathcal{N}_{k}(x) \cap \tilde{\Sigma})}{\nu_{S_{x}}(\mathcal{N}_{k}(x))} \\ & = \frac{\sharp\{ S(x,\varepsilon_{kNN}(x)) \cap \mathcal{N}_{k}(x) \cap \tilde{\Sigma}\}}{\sharp\{S(x,\varepsilon_{kNN}(x)) \cap \mathcal{N}_{k}(x) \}}.\end{aligned}$$ Let $t_3$ be the fraction of points of the closed ball that are $k$-nearest neighbors of $x$ chosen from the sphere at $x$, $$\begin{aligned} t_3 = \frac{\sharp\{ S(x,\varepsilon_{kNN}(x)) \cap \mathcal{N}_{k}(x) \}}{\sharp\{\bar{B}(x,\varepsilon_{kNN}(x)) \}}.\end{aligned}$$ Substituting these values and using the above expression for $\alpha_2$, we have $$\begin{aligned} \frac{t_1 \alpha_1 + t_3 \alpha_2}{t_1 + t_3} & = \frac{\sharp \{ \mathcal{N}_{k}(x) \cap \tilde{\Sigma}\} }{\sharp\{\mathcal{N}_{k}(x)\}} \\ & \leq \alpha \ \ \ \ \text{ (by Lemma \ref{lem:alpha_1})}.\end{aligned}$$

Now, we present our main result on the universal weak consistency of the $k$-nearest neighbor rule in a metrically sigma-finite dimensional space where the distance ties are broken randomly and uniformly.

\[stone\_sigma\_scales\_ties\] Under the random and uniform tie-breaking method, the $k$-nearest neighbor rule is universally weakly consistent on a separable metrically sigma-finite dimensional space.

Let $(\Omega,\rho)$ be a separable metrically sigma-finite dimensional space. It follows from Remark \[rem:fd\_closed\] that $\Omega$ is an increasing union of closed, metrically finite dimensional sets $\{Q_i\}_{i=1}^{\infty}$.
Each $Q_i$ is measurable because it is closed. Let $\nu$ and $\eta$ be a probability measure and a regression function on $\Omega$, respectively. The $\sigma$-additivity of $\nu$ implies that $\nu(Q_i)$ approaches $1$ as $i \rightarrow \infty$. Let $\varepsilon > 0$, then there exists $l \in \mathbb{N}$ sufficiently large such that $$\begin{aligned} \nu(Q_{l}) > 1 - \varepsilon /2 .\end{aligned}$$ Given $\varepsilon > 0$, the Luzin’s theorem implies that there exists a compact subset $K \subseteq Q_l$ such that $\nu(K) > 1 - \varepsilon/2 $ and $\eta|_{K}$ is uniformly continuous. As, $K$ is a subset of $Q_l$ so $K$ has metric dimension $\beta_l$ on the scale $s_l$ in $\Omega$ (by the Remark \[rem:subset\_fd\]). Let $U = \Omega \setminus K$ and hence $\nu(U) < \varepsilon$. From the Theorem \[lem:diff\_err\], we know that the universal weak consistency follows if $\mathbb{E}\{(\eta_n(X) - \eta(X))^2\} \rightarrow 0$ whenever $n,k \rightarrow \infty$ and $k/n \rightarrow 0$. We use the inequality $(a+b)^2 \leq 2a^2 + 2 b^2$, where $a,b$ are real numbers, to obtain the following $$\begin{aligned} \mathbb{E}\{(\eta_{n}(X) - \eta(X))^2\} & = \mathbb{E}\{(\eta_{n}(X) - \tilde{\eta}(X) + \tilde{\eta}(X) - \eta(X))^2\} \ \\ & \leq 2 \mathbb{E}\{(\eta_{n}(X) - \tilde{\eta}(X))^2\} + 2\mathbb{E}\{(\tilde{\eta}(X) - \eta(X))^2\}\end{aligned}$$ The first term in the above equation goes to zero as $k$ increases to $\infty$ (by the Lemma \[lem:third\_prop\_all\_metric\_sp\]). Now, we would show that the second term in the above equation also decreases to zero in the limit of $n$ and $k$. We see that from the Lemma \[lem:eta\_tilde\_reg\], we have the following bound on the second term, $$\begin{aligned} \label{eqn:eta_star_bound} & \mathbb{E}\{(\tilde{\eta}(X) - \eta(X))^2\} \nonumber \\ &\leq \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta(X_i) - \eta^*(X_i) )^2 \bigg| X \in K, X_i \in U \bigg\} + 12\varepsilon.\end{aligned}$$ Our aim is to bound from above the first term of right-hand side of the equation by some constant (which is independent of $n$ and $k$) times $\varepsilon$. Given a random sample $(X_{0}, X_1, \allowbreak \ldots,X_n)$, let $R_{n+1}$ be the set of $X_j, 0 \leq j \leq n$ which belongs to $Q_l$ and have strictly greater than $k \sqrt{\varepsilon}$ of their $k$-nearest neighbors from $U$. That is, $R_{n+1}$ is the set of $X_j \in Q_l$ for which $\sharp\{i:X_i \in \mathcal{N}_{k}(X_j)\cap U \} > k \sqrt{\varepsilon}$. We first symmetrize the below expression using the normalized counting measure $\nu^{\sharp}$, defined on $\{0,1,\ldots,n\}$ and then divide into two cases: $X_j$ having $> k \sqrt{\varepsilon}$ of its $k$-nearest neighbors from $U$ and $X_j$ containing at most $k \sqrt{\varepsilon}$ of its $k$-nearest neighbors from $U$. Note that, $X_j$ take values in $K$ (which is a subset of $Q_l$) in the following expressions. 
So, we have $$\begin{aligned} & \mathbb{E}\bigg\{ \frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} (\eta(X_i) - \eta^*(X_i) )^2 | X \in K, X_i \in U \bigg\} \nonumber \\ & = \mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\bigg\{ \frac{1}{k} \sum_{i=0, i \neq j}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X_j)\}} (\eta(X_i) - \eta^*(X_i) )^2 | X_j \in K, X_i \in U \bigg\} \bigg\} \nonumber \end{aligned}$$ $$\begin{aligned} & = \mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\bigg\{ \frac{1}{k} \sum_{\substack{i=0 \\ i\neq j}}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X_j)\}} (\eta(X_i) - \eta^*(X_i) )^2 | X_j \in K \cap R_{n+1}, X_i \in U \bigg\} \bigg\} \label{eqn:cases_d} \\ & + \mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\bigg\{ \frac{1}{k} \sum_{\substack{i=0 \\ i\neq j}}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X_j)\}} (\eta(X_i) - \eta^*(X_i) )^2 | X_j \in K\setminus R_{n+1}, X_i \in U \bigg\} \bigg\} \label{eqn:cases_d_not}\end{aligned}$$ Equation : : Let $T_{n+1}$ denote the set of $X_j \in Q_l$ which contain $> \sqrt{\varepsilon}$ fraction of points from $U$ in its open ball $B(X_j,\allowbreak \varepsilon_{kNN}(X_j))$, and let $\tilde{T}_{n+1}$ denote the set of $X_j \in Q_l$ which contains $> \sqrt{\varepsilon}$ fraction of points from $U$ in its closed ball $\bar{B}(X_j,\varepsilon_{kNN}(X_j))$. If there is a distance tie, then the $k$-nearest neighbors of $X_j$ is chosen randomly and uniformly from the sphere $S(X_j,\varepsilon_{kNN}(X_j))$, so $\allowbreak \sharp\{\mathcal{N}_{k}(X_j)\}\allowbreak = k$. It follows from the Lemma \[lem:ball\_to\_NN\] that for $X_j$, if the fraction of $k$-nearest neighbors of $X_j$ from $U$ is strictly greater than $\sqrt{\varepsilon}$, then either, the fraction of points from $U$ in the closed ball $\bar{B}(X_j,\varepsilon_{kNN}(X_j))$ is strictly greater than $\sqrt{\varepsilon}$ or, the fraction of points from $U$ in the open ball $B(X_j,\varepsilon_{kNN}(X_j))$ is strictly greater than $\sqrt{\varepsilon}$. 
Numerically, if $\mathcal{N}_{k}(X_j) \cap U > k \sqrt{\varepsilon} = \mathcal{N}_{k}(X_j) \sqrt{\varepsilon}$, then either $$\begin{aligned} \frac{\sharp\{\bar{B}(X_j,\varepsilon_{kNN}(X_j)) \cap U \}}{ \sharp\{\bar{B}(X_j,\varepsilon_{kNN}(X_j))\}} > \sqrt{\varepsilon} \text{ or, } \frac{\sharp\{B(X_j,\varepsilon_{kNN}(X_j)) \cap U \}}{ \sharp\{B(X_j,\varepsilon_{kNN}(X_j))\}} > \sqrt{\varepsilon}.\end{aligned}$$ So, the equation can be bounded as, $$\begin{aligned} & \mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\bigg\{ \frac{1}{k} \sum_{i=0, i \neq j}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X_j) \cap U\}} (\eta(X_i) - \eta^*(X_i) )^2 | X_j \in K \cap R_{n+1} \bigg\} \bigg\} \ \\ & \leq \mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\bigg\{ \frac{1}{k} \sum_{i=0, i \neq j}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X_j)\}} (\eta(X_i) - \eta^*(X_i) )^2 | X_j \in K \cap T_{n+1}\bigg\} \bigg\} \ \\ & + \mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\bigg\{ \frac{1}{k} \sum_{i=0, i \neq j}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X_j)\}} (\eta(X_i) - \eta^*(X_i) )^2 | X_j \in K\cap \tilde{T}_{n+1}\bigg\} \bigg\} \ \\ & \leq \mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\{ \mathbb{I}_{\{X_j \in T_{n+1}\}} | X_j \in K\} \bigg\}+\mathbb{E}\bigg\{ \mathbb{E}_{j \sim \nu^{\sharp}}\{ \mathbb{I}_{\{X_j \in \tilde{T}_{n+1}\}} | X_j \in K \} \bigg\} \ \\ & = \mathbb{E}\bigg\{ \frac{\sharp T_{n+1}}{n+1} \bigg\} +\mathbb{E}\bigg\{ \frac{\sharp \tilde{T}_{n+1}}{n+1} \bigg\}.\end{aligned}$$ The Lemma \[lem:assouad\_type\_sigma\] together with Remark \[rem:stone\_ties\_open\] implies that, $$\begin{aligned} \mathbb{E}\bigg\{ \frac{\sharp T_{n+1}}{n+1} \bigg\} + \mathbb{E}\bigg\{ \frac{\sharp \tilde{T}_{n+1}}{n+1} \bigg\} & \leq 2 \frac{\beta_l}{\sqrt{\varepsilon}(n+1)}\mathbb{E} \bigg\{ \sum_{i =0}^{n} \mathbb{I}_{\{ X_i \in U \}} \bigg\} \\ & = \frac{2\beta_l}{\sqrt{\varepsilon}} \nu(U) \\ & < 2\beta_l \sqrt{\varepsilon},\end{aligned}$$ where we used the law of large numbers. Equation : : If $X_j$ is not in $R_{n+1}$, this means the there can be at most $k \sqrt{\varepsilon}$ of $k$-nearest neighbors of $X_j$ that belongs to $U$ after breaking distance ties. So, we have $$\begin{aligned} \text{Equation \eqref{eqn:cases_d_not}} & \leq \mathbb{E} \bigg\{ \frac{1}{k} \sum_{i=0, i \neq j}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_k(X)\}}\mathbb{I}_{ \{ X_i \in U\}} \bigg| X_j \in K \setminus R_{n+1} \bigg\} \\ & \leq \frac{1}{k} k \sqrt{\varepsilon} = \sqrt{\varepsilon}.\end{aligned}$$ Now that we have established the universal weak consistency of the $k$-nearest neighbor rule, we aim for the strong consistency in such metric spaces. This is an obvious direction because as shown in [@Devroye_Gyorfi_Lugosi_1996], the weak consistency and strong consistency are equivalent in Euclidean spaces. The next section discusses the strong consistency in metrically finite dimensional spaces. Strong consistency {#sec:Strong consistency} ------------------ A learning rule is strongly consistent if for almost every infinite sample path, the conditional error probability given a finite set of first $n$ sample points from the infinite sample path, converges to Bayes error as the sample size $n$ increases. The strong consistency in Euclidean spaces was proved by Devroye et al. [@Devroye_Gyorfi_1985; @Zhao_1987] under the assumption of no distance ties. The argument was based on cones in Euclidean spaces and hence the proof is limited to Euclidean spaces. 
The strong consistency in the presence of distance ties was proved [@Devroye_Gyorfi_Krzyzak_Lugosi_1994] ten years later, as distance ties are a difficult hurdle to overcome. Therefore, in this thesis we will only examine the strong consistency under the assumption of zero probability of distance ties. In particular, we establish the strong consistency of the $k$-nearest neighbor rule in metric spaces with finite metric dimension under the assumption that the distance ties occur with zero probability. Our proof follows an argument similar to that of Theorem 11.1 on pp. 170-174 of [@Devroye_Gyorfi_Lugosi_1996], but relies on a different geometry. Let $0< \alpha \leq 1$ be a real number and define $$\begin{aligned} \label{eqn:r_alpha} r_{\alpha}(x) = \inf\{r >0 : \nu(B(x,r)) \geq \alpha \}.\end{aligned}$$ Saying that a tie occurs with zero probability means that every sphere has measure zero. We prove in the following lemma that the open ball at $x$ of radius $r_{\alpha}(x)$ has measure exactly equal to $\alpha$, if the measure of every sphere is zero. \[lem:ties\_zero\] Let $\nu$ be a probability measure with zero probability of ties. Then, $\nu(B(x,r_{\alpha}(x))) = \alpha$ for every $x$. If $t < r_{\alpha}(x)$, then $\nu(\bar{B}(x,t)) < \alpha$. We can find a chain of subsets $\bar{B}(x,t)$ that increases to $B(x,r_{\alpha}(x))$. So, $\nu(B(x,r_{\alpha}(x))) \leq \alpha$. Similarly if $t > r_{\alpha}(x)$, then $\nu(B(x,t)) \geq \alpha$. We can find a chain of open subsets $B(x,t)$ that decreases to $\bar{B}(x,r_{\alpha}(x))$. So, $\nu(\bar{B}(x,r_{\alpha}(x))) \geq \alpha$. The zero probability of distance ties means $\nu(S(x,r_{\alpha}(x))) = 0$, therefore $\nu(B(x,r_{\alpha}(x))) = \alpha$. It turns out that the function $r_{\alpha}$ is 1-Lipschitz continuous and converges pointwise to $0$ on the support of the measure as $\alpha \rightarrow 0$. \[lem:r\_alpha\] Let $r_{\alpha}$ be the real-valued function defined in \eqref{eqn:r_alpha}. Then $r_{\alpha}$ is a 1-Lipschitz continuous function. Also, $r_{\alpha}$ converges to $0$ as $\alpha \rightarrow 0$ at each point of the support of the measure. Let $\delta > 0$ be any real number. By the definition of $r_{\alpha}$, $\nu(B(x, r_{\alpha}(x) + \delta)) \geq \alpha$. Since $B(x, r_{\alpha}(x) + \delta) \subseteq B(y, \rho(x,y)+r_{\alpha}(x) + \delta)$, this implies that $\nu(B(y, \rho(x,y)+r_{\alpha}(x) + \delta)) \geq \alpha$ and so, $r_{\alpha}(y) \leq \rho(x,y)+ r_{\alpha}(x) + \delta$. As $\delta$ is arbitrary, we have $r_{\alpha}(y) \leq \rho(x,y)+ r_{\alpha}(x)$. Therefore, $r_{\alpha}$ is a $1$-Lipschitz continuous function. We will use the $(\epsilon,\delta)$-definition to show that $r_{\alpha} \rightarrow 0$ as $\alpha \rightarrow 0$ for every element in the support of $\nu$, that is, for every $\epsilon > 0$, we will find a $\delta >0$ such that $r_{\alpha}(x) \leq \epsilon$ whenever $ \alpha \leq \delta$, $x \in S_{\nu}$. Let $\epsilon > 0$. We observe that $r_{\alpha}(x) \leq \epsilon$ if and only if $\alpha \leq \nu(B(x,\epsilon))$. If $x \in S_{\nu}$, then $\nu(B(x,\epsilon)) >0$, and we can take $\delta = \nu(B(x,\epsilon))$. So, for every $x$ in the support of $\nu$, the sequence $r_{\alpha}(x) \rightarrow 0$ as $\alpha \rightarrow 0$. If $x \notin S_{\nu}$, then there exists an $\epsilon >0$ such that $\nu(B(x,\epsilon)) =0$. As $\alpha > \nu(B(x,\epsilon)) =0$, we get $r_{\alpha}(x) > \epsilon$. Thus, for $x \notin S_{\nu}$, $r_{\alpha}$ does not converge to 0 as $\alpha \rightarrow 0$. 
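For intuition, the following minimal numerical sketch (our own illustration: it assumes the uniform measure on $[0,1]$ with its usual metric, and the helper names are hypothetical) checks the statements of Lemmas \[lem:ties\_zero\] and \[lem:r\_alpha\] empirically: the ball $B(x,r_{\alpha}(x))$ has measure exactly $\alpha$, the map $x \mapsto r_{\alpha}(x)$ is $1$-Lipschitz, and $r_{\alpha}(x)$ decreases to $0$ with $\alpha$ on the support.

```python
# Illustrative sketch only: nu is the uniform (Lebesgue) measure on [0,1] and
# rho is the usual metric, so nu(B(x,r)) = min(x+r,1) - max(x-r,0); spheres are
# finite sets and hence have measure zero, i.e. ties occur with zero probability.

def nu_ball(x, r):
    """Measure of the open ball B(x, r) under the uniform measure on [0, 1]."""
    return max(0.0, min(x + r, 1.0) - max(x - r, 0.0))

def r_alpha(x, alpha, tol=1e-12):
    """r_alpha(x) = inf{ r > 0 : nu(B(x, r)) >= alpha }, computed by bisection."""
    lo, hi = 0.0, 2.0  # nu_ball(x, 2.0) = 1 >= alpha for every x in [0, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if nu_ball(x, mid) >= alpha:
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    alpha = 0.2
    xs = [i / 100 for i in range(101)]
    r = [r_alpha(x, alpha) for x in xs]

    # Lemma [lem:ties_zero]: nu(B(x, r_alpha(x))) = alpha for every x.
    assert all(abs(nu_ball(x, rx) - alpha) < 1e-9 for x, rx in zip(xs, r))

    # Lemma [lem:r_alpha], part 1: r_alpha is 1-Lipschitz in x.
    assert all(abs(r[i + 1] - r[i]) <= (xs[i + 1] - xs[i]) + 1e-9
               for i in range(len(xs) - 1))

    # Lemma [lem:r_alpha], part 2: r_alpha(x) -> 0 as alpha -> 0 on the support.
    print([round(r_alpha(0.5, a), 4) for a in (0.2, 0.02, 0.002)])  # 0.1, 0.01, 0.001
```

In this special case $r_{\alpha}$ can of course be written in closed form; the bisection is used only to mirror the definition \eqref{eqn:r_alpha} directly.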
Based on the properties of $r_{\alpha}$, we show in the following lemma that the measure of all elements from a metrically finite dimensional space containing a fixed point in its $r_{\alpha}$-ball is bounded above by the metric dimension times $\alpha$. \[lem:stone\_lemma\_strong\] Let $Q$ be a separable metric space which has metric dimension $\beta$ on scale $s$. Assume that $\nu$ is a probability measure on $Q$ with zero probability of ties. For $y \in Q$, define $$\begin{aligned} A = \{ x \in Q : y \in B(x,r_{\alpha}(x)) \}.\end{aligned}$$ Then, we have $\nu(A) \leq \beta \alpha$ for $\alpha$ small enough. Let $\varepsilon >0$ be any real number. By Luzin’s theorem, there is a compact set $K \subseteq A$ such that $\nu(A \setminus K) < \varepsilon$. So, we need to estimate only the value of $\nu(K)$. It follows from the Lemma \[lem:r\_alpha\] that $r_{\alpha}$ is 1-Lipschitz continuous and $r_{\alpha}$ converges to 0 as $\alpha$ goes to 0, $\nu$-almost everywhere. Therefore, $r_{\alpha}$ converges to 0 uniformly on $K$, whenever $\alpha$ goes to 0. This means that there exists a $\alpha_0 >0$ such that for $0 < \alpha \leq \alpha_0$, we have $r_{\alpha}(x) < s$ for all $x \in K$. Every open ball $B(x,r_{\alpha}(x))$ centered at $x \in K$ contains $y$, then we have for every $x \in K$ $$\begin{aligned} \label{eqn:subset_r_alpha} \bar{B}(x,\rho(x,y)) \subseteq B(x,r_{\alpha}(x)).\end{aligned}$$ Let $D = \{ a_n: n \in \mathbb{N}\}$ be a countable dense subset of $K$. For each $n$, we select a family of closed balls $\bar{B}(a_i,\rho(a_i,y)), 1 \leq i \leq n$. Since, $Q$ has metric dimension $\beta$ on scale $s$, there exists a set of $\beta$ centers $\{x_1^{n},\ldots,x_{\beta}^{n}\} \subseteq \{a_1,\ldots,a_n\}$ such that $\cup_{i=1}^{\beta} \bar{B}(x_i^n,\rho(x_i^n,y))$ covers $\{a_1,\ldots,a_n\}$. As, $K$ is compact so every sequence has a sub-sequence which converges in $K$. For $i=1$, there is a sub-sequence $(n_1)$ of $(n)$ such that $(x_{1}^{n_1})$ converges to $x_1$. Similarly for $i=2$, there is a sub-sequence $(n_2)$ of $(n_1)$ such that $(x_{2}^{n_2})$ converges to $x_2$. Doing recursively until $i=\beta$, we have a sequence of indices $(n_{\beta})$ such that $(x_1^{n_{\beta}}, \ldots, x_{\beta}^{n_{\beta}})$ converges to $(x_1,\ldots,x_{\beta})$ as $n_{\beta} \rightarrow \infty$. We claim that the union of $\bar{B}(x_i, \rho(x_i,y) ), 1 \leq i \leq \beta$ covers $K$. As closure of finite union is the union of closures and since the balls are closed, it is enough to show that $D=\{a_m\}_{m \in \mathbb{N}}$ is contained in the union of $\bar{B}(x_i, \rho(x_i,y) ), \allowbreak 1 \leq i \leq \beta$. For $n_{\beta} \geq m$, $a_m$ belongs to at least one of the $\beta$ balls $\bar{B}(x_{i}^{n_{\beta}},\rho(x_{i}^{n_{\beta}},y))$. Then there is an $i_0$ such that $a_m \in \bar{B}(x_{i_0}^{n_{\beta}},\rho(x_{i_0}^{n_{\beta}},y))$ for infinitely many values of $n_{\beta} \geq m$. This means there is a sub-sequence $(n')$ such that $a_m \in \bar{B}(x_{i_0}^{n'},\rho(x_{i_0}^{n'},y))$, where $x_{i_0}^{n'} \rightarrow x_{i_0}$. Now, we will show that $a_m$ is closer to $x_{i_0}$ than $y$. We have, $$\begin{aligned} \rho(a_m,x_{i_0}) & = \rho(a_m, \lim_{n \rightarrow \infty} x_{i_0}^{n'} ) \ \\ & = \lim_{n' \rightarrow \infty} \rho(a_m,x_{i_0}^{n'}) \ \\ & \leq \lim_{n' \rightarrow \infty} \rho(x_{i_0}^{n'},y) \ \\ & = \rho(x_{i_0},y).\end{aligned}$$ Therefore, $a_m$ is an element of $\bar{B}(x_{i_0},\rho(x_{i_0},y))$. 
It follows from the equation that the family $\{B(x_{i_0},r_{\alpha}(x_{i_0})): 1 \leq i_0 \leq \beta\}$ covers $K$. By our assumption of zero probability of ties, we have $\nu(B(x,r_{\alpha}(x))) = \alpha$ (from the Lemma \[lem:ties\_zero\]). Further, the sub-additivity of $\nu$ implies that $\nu(K) \leq \beta \alpha$, and so $$\begin{aligned} \nu(A) & = \nu( K) + \nu(A \setminus K) \ \\ & \leq \beta\alpha + \varepsilon,\end{aligned}$$ where $\alpha \leq \alpha_0$. As, $\varepsilon $ is arbitrary we have $\nu(A) \leq \beta \alpha$. As a consequence of Lemma \[lem:stone\_lemma\_strong\], we have exponential concentration on the probability of difference between conditional error probabilities of the $k$-nearest neighbor rule and the Bayes rule. The following theorem was proved in Euclidean spaces (Theorem 11.1 of [@Devroye_Gyorfi_Lugosi_1996]). However, the proof remains more or less same for metrically finite dimensional spaces except that we use the Lemma \[lem:stone\_lemma\_strong\] instead of lemma based on Stone’s idea with the cones. \[thm:suc\_finite\_dim\] Let $(Q,\rho)$ be a separable metric space such that $Q$ has metric dimension $\beta$ on scale $s$. Let $\mu$ be a probability measure on $Q \times \{0,1\}$ and assume that $\nu$ on $Q$ obtained using $\mu$, has zero probability of ties. Let $g_n$ be the $k$-nearest neighbor rule. For $\varepsilon > 0$, there is a $n_0$ such that for $n > n_0$, $$\begin{aligned} \mathbb{P}\bigg( \ell_{\mu}(g_n) - \ell^*_{\mu} > \varepsilon \bigg) \leq 2 e^{-\frac{n\varepsilon^2}{18 \beta^2}},\end{aligned}$$ whenever $k,n \rightarrow \infty$ and $k/n \rightarrow 0$. Let $D_n$ be a random labeled sample, then $\ell_{\mu}(g_n) = \mathbb{P}(g_{n}(X) \neq Y | D_n)$ is a function of $D_n$ and hence a random variable. From the Theorem \[lem:diff\_err\], we have that $$\begin{aligned} \ell_{\mu}(g_n) - \ell^*_{\mu} \leq 2 \mathbb{E}_{\nu}\bigg\{|\eta(X) - \eta_n(X)| \bigg| D_n \bigg\}.\end{aligned}$$ Therefore, it is sufficient to show that $$\begin{aligned} \mathbb{P}\bigg( \mathbb{E}_{\nu}\bigg\{|\eta(X) - \eta_n(X)| \bigg| D_n \bigg\} > \frac{\varepsilon}{2} \bigg) \leq 2 e^{-\frac{ n\varepsilon^2}{18 \beta^2}}, \end{aligned}$$ We shall omit writing the expectation conditional on $D_n$ to avoid unnecessary complicated notations with an understanding that the expectation of $|\eta(X) - \eta_n(X)|$ is still a random variable. Therefore, it is sufficient to show that $$\begin{aligned} \mathbb{P}\bigg( \mathbb{E}_{\nu}\{|\eta(X) - \eta_n(X)|\} > \frac{\varepsilon}{2} \bigg) \leq 2 e^{-\frac{ n\varepsilon^2}{18 \beta^2}}, \end{aligned}$$ where $\eta_{n}(X) = \frac{1}{k}\sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}Y_i$. Let ${\eta}_{n}^*$ be another approximation of $\eta$, $$\begin{aligned} \label{eqn:eta_star} {\eta}_{n}^*(X) = \frac{1}{k}\sum_{i=1}^{n} \mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}} Y_i.\end{aligned}$$ By the triangle’s inequality, we have $$\begin{aligned} |\eta(X) - \eta_n(X)| \leq |\eta(X) - {\eta}_n^*(X)| + |{\eta}_n^*(X) - \eta_n(X)|. 
\end{aligned}$$ For the second term on the right-hand side of above equation, $$\begin{aligned} |{\eta}_n^*(X) - \eta_n(X)| & = \frac{1}{k}\bigg|\sum_{i=1}^{n} \mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}} Y_i - \sum_{i=1}^{n} \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}}Y_i \bigg| \nonumber \\ & = \frac{1}{k} \sum_{i=1}^{n} \bigg| \mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}} - \mathbb{I}_{\{X_i \in \mathcal{N}_{k}(X)\}} \bigg| \nonumber \ \\ & \leq \bigg|\frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}} - 1 \bigg|, \label{eqn:eta_bound}\end{aligned}$$ where the last inequality is because $\mathcal{N}_k(X)$ contains at most $k$ points. Let $\hat{\eta}_{n}(X)$ be equal to $\frac{1}{k} \sum_{i=1}^{n} \mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}}$ and let $\hat{\eta}(X)$ be equal to 1 always. Therefore, we have $$\begin{aligned} |\eta(X) - \eta_n(X)| \leq |\eta(X) - {\eta}_n^*(X)| + |\hat{\eta}_n(X) - \hat{\eta}(X)|.\end{aligned}$$ The idea is to obtain the exponential concentration for the two terms of above equation, separately, using the McDiarmid’s inequality (see Theorem \[thm:mcdiarmid\]). So, we first show that the expected values of the integrals of the terms on the right-hand side of the above equation goes to zero. (i) From the equation and using Cauchy-Schwarz inequality, we have $$\begin{aligned} \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|{\eta}_n^*(X) - \eta_n(X)|\} \} & \leq \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)|\} \} \ \\ & \leq \mathbb{E}_{\nu}\bigg\{ \sqrt{\mathbb{E}_{\mu^n}\{|\hat{\eta}_n(X) - \hat{\eta}(X)|\}} \bigg\} \ \\ & \leq \mathbb{E}_{\nu}\bigg\{ \sqrt{ \frac{n}{k^2}Var\{\mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}} \} } \bigg\} \ \\ & \leq \mathbb{E}_{\nu}\bigg\{ \sqrt{ \frac{n}{k^2}\nu(B(X,r_{\alpha}(X))) } \bigg\} \ \\ & = \mathbb{E}_{\nu}\bigg\{ \sqrt{ \frac{n}{k^2}\alpha } \bigg\}. \ \end{aligned}$$ As $\alpha$ is small, we can take $\alpha \leq k/n$ for large enough values of $n,k$. Substituting $\alpha \leq k/n$ in the above equation we have, $$\begin{aligned} \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|{\eta}_n^*(X) - \eta_n(X)|\} \} & \leq \mathbb{E}_{\nu}\bigg\{ \sqrt{ \frac{n}{k^2} \frac{k}{n} } \bigg\} \ \\ & = \frac{1}{\sqrt{k}}, \end{aligned}$$ which goes to zero as $k \rightarrow \infty$. (ii) We proved the following result while establishing the universal weak consistency in the Theorem \[stone\_sigma\_scales\_ties\], $$\begin{aligned} \mathbb{E}_{\mu^n}\{|\eta(X) - \eta_n(X)| \} \rightarrow 0 \text{ as } n,k \rightarrow \infty, k/n \rightarrow 0.\end{aligned}$$ Using Fubini’s theorem followed by the aforementioned result and case (i) implies that, $$\begin{aligned} & \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)|\} \} \ \\ & \leq \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n(X)|\} \} + \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\eta_n(X) - {\eta}_n^*(X)|\} \} \ \\ & \leq \mathbb{E}_{\nu}\{\mathbb{E}_{\mu^n}\{|\eta(X) - {\eta}_n(X)|\} \} + \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\eta_n(X) - {\eta}_n^*(X)|\} \} \ \\ & \rightarrow 0 \text{ as } n,k \rightarrow \infty, k/n \rightarrow 0.\end{aligned}$$ So, we can choose $n,k$ so large that for a given $\varepsilon > 0$, $$\begin{aligned} \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)|\} \} + \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)|\} \} & < \frac{\varepsilon}{6}. 
\label{eqn:exp_1}\end{aligned}$$ Therefore, we have $$\begin{aligned} &\mathbb{P}\bigg( \mathbb{E}_{\nu}\{|\eta(X) - \eta_n(X)|\} > \frac{\varepsilon}{2} \bigg) \nonumber \\ & \leq \mathbb{P}\bigg( \mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)|\} + \mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)| \} > \frac{\varepsilon}{2} \bigg) \nonumber \\ & = \mathbb{P}\bigg( \mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)|\} - \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)|\} \} + \nonumber \\ & \ \ \ \ \ \ \ \ \mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)| \} - \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)| \} \} > \frac{\varepsilon}{3} \bigg) \nonumber \\ & \leq \mathbb{P}\bigg( \mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)|\} - \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)|\} \} > \frac{\varepsilon}{6} \bigg) + \nonumber \\ & \ \ \ \ \mathbb{P}\bigg(\mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)| \} - \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)| \} \} > \frac{\varepsilon}{6} \bigg), \label{eqn:bound_suc}\end{aligned}$$ where the second equation in the above set of equations is obtained using the inequality . Let $\theta$ be a function defined on labeled samples, $\theta: (Q \times \{0,1\})^n \rightarrow [0,\infty)$ as, $$\begin{aligned} \theta(\sigma_n) = \mathbb{E}_{\nu}\{|{\eta}(X) - \eta_n^*(X)| \}\end{aligned}$$ Let a new sample $\sigma_n^{'}$ is formed by replacing $(x_i,y_i)$ by $(\hat{x}_i,\hat{y}_i)$. Let ${\eta}_{ni}^*(X)$ denote the changed value of $\eta_n^*$ as defined in , with respect to the new sample $\sigma_n^{'}$. Then, we have $$\begin{aligned} | \theta(\sigma_n) - \theta(\sigma_n^{'} ) | & = \bigg|\mathbb{E}_{\nu}\{|{\eta}(X) - \eta_n^*(X)| \} - \mathbb{E}_{\nu}\{|\eta(X) - {\eta}_{ni}^*(X)| \} \bigg| \\ & \leq \mathbb{E}_{\nu}\{|{\eta}_n^*(X) - {\eta}_{ni}^*(X)| \}.\end{aligned}$$ Now, we calculate the value of $$\begin{aligned} |{\eta}_n^*(X) - {\eta}_{ni}^*(X)| & = \frac{1}{k} \bigg| \mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}} Y_i - \mathbb{I}_{\{\rho(\hat{X}_i,X) < r_{\alpha}(X)\}} \hat{Y}_i \bigg| \ \\ & \leq \frac{1}{k} \mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}}\end{aligned}$$ So, we have $$\begin{aligned} |\theta(\sigma_n) - \theta(\sigma_n^{'} ) |& \leq \frac{1}{k}\mathbb{E}_{\nu}\{\mathbb{I}_{\{\rho(X_i,X) < r_{\alpha}(X)\}} \} \ \\ & = \frac{1}{k}\nu( B(x, r_{\alpha}(x))).\end{aligned}$$ It follows from the Lemma \[lem:stone\_lemma\_strong\], $$\begin{aligned} \sup_{x_1,y_1,\ldots,x_n,y_n,\hat{x}_i,\hat{y}_i}|\theta(\sigma_n) - \theta(\sigma_n^{'} ) |& \leq \frac{1}{k} \beta \alpha \ \\ & \leq \frac{\beta}{n}.\end{aligned}$$ The above expression is true for all $1 \leq i \leq n$ and for every sample $\sigma_n$ and $\sigma'_{n}$ in $(Q \times \{0,1\})^n$. By the McDiarmid’s inequality (Theorem \[thm:mcdiarmid\] in appendix), we get the following inequality $$\begin{aligned} \label{eqn:one_bound} \mathbb{P}\bigg( \mathbb{E}_{\nu}\{|\eta(X) - {\eta}_n^*(X)| - \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|{\eta}(X) - {\eta}_n^*(X)|\} \} > \frac{\varepsilon}{6} \bigg) \leq e^{-\frac{n \varepsilon^2}{18 \beta^2}}.\end{aligned}$$ As, $\hat{\eta}_n$ is defined like $\eta^{*}_n$, we can define a new function $\tilde{\theta}(\sigma_n) = \mathbb{E}_{\nu}\{|{\hat{\eta}}_n(X) - \hat{\eta}(X)| \}$ and in a similar manner as presented above, we obtain that $|\tilde{\theta}(\sigma_n) - \tilde{\theta}(\sigma_n^{'} ) | \leq \beta /n $. 
Therefore, we have the following exponential concentration (by McDiarmid’s inequality), $$\begin{aligned} \label{eqn:second_bound} \mathbb{P}\bigg(\mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)| \} - \mathbb{E}_{\mu^n}\{\mathbb{E}_{\nu}\{|\hat{\eta}_n(X) - \hat{\eta}(X)|\} \} > \frac{\varepsilon}{6} \bigg) \leq e^{-\frac{n \varepsilon^2}{18 \beta^2}}.\end{aligned}$$ Substituting equations \eqref{eqn:one_bound} and \eqref{eqn:second_bound} into equation \eqref{eqn:bound_suc}, we obtain $$\begin{aligned} \mathbb{P}\bigg( \mathbb{E}_{\nu}\{|\eta(X) - \eta_n(X)|\} > \frac{\varepsilon}{2} \bigg) \leq 2 e^{-\frac{ n\varepsilon^2}{18 \beta^2}}. \end{aligned}$$ From Theorem \[thm:suc\_finite\_dim\], it follows that the $k$-nearest neighbor rule is strongly consistent in any separable metrically finite dimensional space. \[cor:sc\_fd\] Under the assumption of zero probability of ties, the $k$-nearest neighbor rule is strongly consistent on a metrically finite dimensional separable space. Let $(Q,\rho)$ be a separable metric space and suppose $Q$ has finite metric dimension $\beta$ on scale $s$. Let $\varepsilon >0$ be any real number. Let $F_n$ denote the event $\{ \ell(g_n) - \ell^* > \varepsilon\}$. From Theorem \[thm:suc\_finite\_dim\], we have $$\begin{aligned} \mathbb{P}( F_{n} ) \leq 2 e^{-\frac{n\varepsilon^2}{18 \beta^2}}.\end{aligned}$$ Summing over $n$ on both sides, we get $$\begin{aligned} \sum_{n=1}^{\infty} \mathbb{P}( F_n ) & \leq 2 \sum_{n=1}^{\infty} e^{-\frac{n\varepsilon^2}{18 \beta^2}} \ \\ & < + \infty.\end{aligned}$$ By the Borel-Cantelli lemma, we have $$\begin{aligned} \label{eqn:error_strong} \mathbb{P}\bigg(\limsup_{n \rightarrow \infty} F_n \bigg)= 0.\end{aligned}$$ This means that, almost surely, for any infinite sample path the difference of the error probabilities of the $k$-nearest neighbor rule and the Bayes rule converges to zero. That is, $$\begin{aligned} \mu^{\infty}\bigg\{ \sigma^{\infty} \in (Q \times \{0,1\})^{\infty}: \limsup_{n \rightarrow \infty} \ell_{\mu}(g_n| \sigma_n) - \ell^*_{\mu} = 0 \bigg\} = 1.\end{aligned}$$ Future Prospects {#chap:Future Prospects} ================ We examine the following flow diagram. Its nodes are: 1. finite metric dimension; 2. sigma-finite metric dimension; 3. strong LB-differentiation property; 4. weak LB-differentiation property; 5. universal strong consistency; 6. universal weak consistency. The arrows record the known implications, labelled with their authors (Preiss; Assouad & Gromard; C[é]{}rou & Guyader) or with the assumption “no ties”, while question marks indicate the implications that are still open. 
universal weak consistency]{}; (in5.west) – ++(-0.5cm,0) – ++ (0,-3.8cm) node\[pos=0.5,sloped,anchor = center,above\] [[no ties]{}]{} – ++(0.5cm,0) (in4); (in5) -&gt; (start); (in1) -&gt; node\[pos=0.5,sloped,anchor=center,above\][[Preiss]{}   ]{}(start); (start) -&gt; node\[pos=0.5,sloped,anchor=center,below\][[Assouad & Gromard]{}]{} (in1); (in1) -&gt; (in3); (in3.200) -&gt; node\[anchor=north\][[?]{} ]{}(in1.340); (in4.south) – ++(0,-0.4cm) – ++(-7cm,0)node\[pos=0.5,sloped,anchor = center,above\] [[?]{}]{} – ++(0,0.4cm) (in2.south); (in2) -&gt; (in4); (in4) – node\[anchor=east\] [[?]{}]{}(in3); (in2) – node\[anchor=east\] [[?]{}]{} (in1); (in3.east) – ++(0.5cm,0) – ++(0,-2.3cm) node\[pos=0.5,sloped,anchor = center,below\] [[C[é]{}rou & Guyader]{}]{}– ++(-0.5cm,0) (in4); (start.east) – ++(0.8cm,0) – ++(0,-5.4cm) – ++(-1.9cm,0) – ++(0,0.4cm) (in4); Our main aim is to prove as many as implications as possible in the above flow diagram. The double lines in the above diagram represent our results. In this dissertation, we have accomplished the following implications: $2 \Rightarrow 6$, $3 \Rightarrow 2$ and partially $1 \Rightarrow 5$ under the assumption of no ties. The implications $2 \Rightarrow 6$ can also be obtained by $2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 6$, but we gave a direct proof without using any other implications. Apart from these, we have some other interesting results such as Lemma \[lem:nagata\_fails\_stone\_lemma\] and Example \[eg:ties\_high\_prob\] which show that the solution for distance ties in Euclidean spaces does not extend to metric spaces with finite Nagata dimension. We also showed the inconsistency of the $k$-nearest neighbor rule on Davies’s example in Subsection \[subsec:inconsistent\_Davies\]. We outline a possible number of research directions (some are represented by question mark in the flow diagram) based on this thesis: (I) $1 \Rightarrow 5$, $2 \Rightarrow 5$: We proved $1 \Rightarrow 5$ under the additional assumption of zero probability of ties. We would like to extend this result to a metrically sigma-finite dimensional space, under the assumption of zero probability of ties. The next step would be to forgo this assumption on distance ties and prove the universal strong consistency in metrically finite and sigma-finite dimensional spaces. (II) $5 \Rightarrow 3$, $6 \Rightarrow 4$: We would like to prove these two implications which seem parallel to each other. The proof of $6 \Rightarrow 4$ would be a converse of C[é]{}rou and Guyader’s result on universal weak consistency and hence proves the equivalence between weak Lebesgue-Besicovitch differentiation property and universal weak consistency in a metrically sigma-finite dimensional space. In [@Cerou_Guyader_2006], a partial argument has been done for $6 \Rightarrow 3$. We would like to give a complete proof of this implication $6 \Rightarrow 3$, which will prove the implication $5 \Rightarrow 6 \Rightarrow 3 \Rightarrow 4$. There could be a possibility of proving $3 \Rightarrow 5$ directly, which is similar to $4 \Rightarrow 6$ (a result of C[é]{}rou and Guyader [@Cerou_Guyader_2006]) but with stronger form of convergence. (III) $6 \Rightarrow 5$: In Euclidean spaces, the strong and weak consistency of the $k$-nearest neighbor rule are equivalent. It would be interesting to find an example of a metric space such that the $k$-nearest neighbor rule is weakly consistent but not strongly consistent. 
The equivalence of universal weak and strong consistency in a metrically sigma-finite (or even finite) dimensional space is an advance question because most of the mathematical tools available now are limited to Euclidean spaces. We state $6 \Rightarrow 5$ as an open question. If $6 \Rightarrow 5$ and $5 \Rightarrow 3$ are true then, it answers the open question by Preiss ($4 \Rightarrow 2$) in affirmative, that is, the two notions of strong and weak Lebesgue-Besicovitch differentiation property are equivalent in a metrically sigma-finite dimensional metric space. In general metric spaces, these are not equivalent [@Mattila_1971]. (IV) Davies [@Davies_1971] constructed an interesting example of an infinite dimensional compact metric space (homeomorphic to a Cantor space) and two Borel measures which are equal on every closed balls, that fails the strong Lebesgue-Besicovitch differentiation property. Later in 1981, Preiss [@Preiss_1981] constructed an example of a Gaussian measure in a Banach space which fails the strong Lebesgue-Besicovitch density property. In our knowledge, these are the only known explicit examples of infinite dimensional spaces where the differentiation property fails. The intuition fail drastically in infinite dimensional spaces. So, we would like to construct a much simpler example of an infinite dimensional metric space which fails the Lebesgue-Besicovitch density property and thus the $k$-nearest neighbor rule fails to be consistent. In particular, we believe that Hilbert cube may be a candidate for such an example. A Hilbert cube $W$ is the set of sequences $\{x = (x_1,x_2,\ldots): 0 \leq x_i \leq 1/i, i \in \mathbb{N} \}$. As $W$ is subspace of $\ell^2$ and so it inherits the metric, $$\begin{aligned} \rho(x,y) & = \sqrt{ \sum_{i=1}^{\infty} |x_i - y_i|^2 } \ \ \ \ \text{ for all } x,y \in W.\end{aligned}$$ Let $\lambda$ be the Lebesgue measure on $\mathbb{R}$ and let $\{ ([0,1/i],\mathcal{B}_i,i\lambda) \}_{i \in \mathbb{N}}$ be a family of measure spaces such that $i\lambda$ is a probability measure and $\mathcal{B}_i$ is a Borel $\sigma$-algebra on $[0,1/i]$. Then consider the product of measurable spaces $(W,\mathcal{B}) = \ ( \prod_{i\in \mathbb{N}} [0,1/i] , \prod_{i \in \mathbb{N}} \mathcal{B}_i)$ equipped with the product measure $\mu$ : $$\begin{aligned} \mu(E) & = \prod_{i \in \mathbb{N} }{\mu_i(E_i)}, \end{aligned}$$ where $E = \prod_{i \in \mathbb{N}} E_i $ with $E_i \in \mathcal{B}_{i}$. We would like to find a subset $M$ of $W$ such that $\mu(M)<1$ and $$\begin{aligned} \lim_{r \rightarrow 0} \frac{\mu(M \cap B(x,r))}{ \mu(B(x,r))} = 1, \end{aligned}$$ for $\mu$-almost every $x \in W$. Auxiliary notions and results ----------------------------- \[lem:uni\_cts\_lip\_cts\] Let $(\Omega,\rho)$ be a metric space and $Q \subseteq \Omega$. Let $f : Q \rightarrow [0,1]$ be a uniformly continuous function. Then there exists a uniformly equivalent metric $\rho'$ defined on $\Omega$ such that $f$ is a $1$-Lipschitz continuous function on $Q$ with respect to $\rho'$. We want to define $\rho'$ such that for any $\varepsilon >0$ and for every $x,y \in Q$, if $\rho'(x,y) < \varepsilon$ then $|f(x) - f(y)| < \varepsilon $. As, $f$ is uniformly continuous, for any $\varepsilon > 0$, there exists $\delta_{\varepsilon}$ such that for any $x,y \in Q$ if $\rho(x,y) < \delta_{\varepsilon}$ then $|f(x) - f(y)| < \varepsilon$. 
Define a function $\mathcal{E} : [0,\infty) \rightarrow [0,1]$ such that for $\delta \leq 1$, $$\mathcal{E}(\delta) := \sup_{x,y \in Q}\bigg\{ |f(x) - f(y)| : \rho(x,y) \leq \delta \bigg\},$$ and $\mathcal{E}(\delta) = 1$ for $\delta > 1$. The function $\mathcal{E}$ is the maximum oscillation of $f$ on any subsets of $Q$ of diameter at most $\delta < 1$, otherwise 1. The function $\mathcal{E}$ is well defined and a monotonically non-decreasing function. From the definition, we have $\mathcal{E}(0) = 0$. Suppose we define $\rho'(x,y) = \mathcal{E}( d(x,y))$, then in order to prove the triangle inequality for $\rho'$ we need the following inequality $$\mathcal{E}(a + b) \leq \mathcal{E}(a) + \mathcal{E}(b) \ \ \text{for any $a,b \in \Omega$}.$$ But the above inequality is not true: let $\Omega = \{0,1/2,1\}$ and define the distance $\rho(0,1/2) = 1/2, \rho(1/2,1) = 3/2$ and $\rho(0,1) = 1$. Let $f(x) = x^2/2$ be the function on $\Omega$. Then $f$ is a uniformly continuous function. Take $a = b = 1/2$. Then $\mathcal{E}(1/2) = \sup\{|f(x) - f(y)| : \rho(x,y) \leq 1/2\} = 1/8$ but $\mathcal{E}(1) = 1/2 \geq \mathcal{E}(1/2) + \mathcal{E}(1/2) $. So, we try to construct a function $\mathcal{E}'$ such that $\mathcal{E}'\geq \mathcal{E}$ and $\mathcal{E}'(a +b) \leq \mathcal{E}'(a) + \mathcal{E}'(b)$. We take the concave majorant of $\mathcal{E}$, $$\begin{aligned} \mathcal{E}'(\delta) = \sup\bigg\{ t \mathcal{E}(a) + (1-t) \mathcal{E}(b): a,b,t \in [0,1] , \delta = t a + (1-t)b \bigg\}.\end{aligned}$$ So, we have the following properties: (i) $\mathcal{E}'(0)= 0$.\ We can write $0= ta + (1-t)b$ which is true whenever $a =0 = b$ or $t=0=b$ or $t=1,a=0$. In all these cases, $t \mathcal{E}(a) + (1-t)\mathcal{E}(b) = 0$ because $\mathcal{E}(0) =0$ and hence $\mathcal{E}'(0) =0$. (ii) $\mathcal{E}' \geq \mathcal{E}$.  \ For any $\delta \in [0,\infty)$, $\mathcal{E}'(\delta) = \sup\{ t \mathcal{E}(a) + (1-t) \mathcal{E}(b) : a,b,t \in [0,1], \delta = t a + (1-t)b\}$. Take $a = \delta,t =1$, then $\mathcal{E}'(\delta) \geq \mathcal{E}(\delta)$. (iii) $\mathcal{E}'(\delta) \downarrow 0$ whenever $\delta \downarrow 0$.\ Suppose $\delta \downarrow 0$ but $\mathcal{E}'(\delta)$ does not decrease to $0$. Then there exist sequences $a_n,t_n,b_n$ such that $ t_{n} a_{n} + (1- t_{n}) b_{n} \downarrow 0$ but $t_{n} \mathcal{E}(a_{n}) + (1- t_{n}) \mathcal{E}(b_{n}) \geq C $ for some constant $C > 0$. This means that either $t_{n},b_{n}\downarrow 0$ or $a_{n},b_n \downarrow 0 $ or $ a_n\downarrow 0, t \uparrow 1$. If $a_{n} \downarrow 0$, then $ t_{n} \mathcal{E}(a_{n}) \downarrow 0$ but $ (1-t_{n}) \mathcal{E}(b_{n}) \geq C $ which contradicts $ b_n \downarrow 0 $. We get similar contradiction for other cases also. Hence, $\mathcal{E}'(\delta) \downarrow 0$. (iv) *Claim*: Let $\delta =\sum_{i = 1}^{n} t_i a_i $ such that $\sum_{i=1}^{n}t_i = 1$, then there exist $c \leq d$ and $t$ from $[0,1]$ such that $\delta = \sum_{i = 1}^{n} t_i a_i = tc + (1-t)d$ and $\sum_{i=1}^{n} t_i \mathcal{E}(a_i) \leq t \mathcal{E}(c) + (1-t) \mathcal{E}(d)$. Let $c_1,\ldots ,c_n \leq \delta $ and $d_1,\ldots ,d_n \geq \delta$. For every pair $(c_i,d_j)$ set $$t = \frac{\delta - c_i}{d_j - c_i}, $$ then $\delta = t c_i +(1-t)d_j$. Choose $(l,k) $ such that $t \mathcal{E}(c_l) + (1-t)\mathcal{E}(d_k)$ is the maximum. The point $(\delta, \sum_{i=1}^{n} t_i \mathcal{E}(a_i))$ belongs to the convex combination of the points $(a_i,\mathcal{E}(a_i))$. This is a convex polygon. 
The point $(\delta, \sum_{i=1}^{n} t_i \mathcal{E}(a_i))$ is on the edge joining $(c_i,\mathcal{E}(c_i))$ and $(d_j,\mathcal{E}(d_j))$, then $$\begin{aligned} \delta & = t c_i + (1-t) d_j,\end{aligned}$$ and $$\begin{aligned} \sum_{i=1}^{n} \mathcal{E}(a_i) & = t \mathcal{E}(c_i) + (1-t) \mathcal{E}(d_j) \ \\ & \leq t \mathcal{E}(c_l) + (1-t) \mathcal{E}(d_k).\end{aligned}$$ (v) $\mathcal{E}'$ is a concave function, that is, for any $\delta_1,\delta_2 \in [0,\infty)$ and $\alpha \in [0,1]$ $$\mathcal{E}'(\alpha \delta_1 + (1-\alpha)\delta_2) \geq \alpha \mathcal{E}'(\delta_1) + (1-\alpha) \mathcal{E}'(\delta_2).$$ Let $\gamma > 0$. For $\delta_1$, there exist $a_1,b_1,t_1 \in [0,1]$ such that $\delta_1 = t_1 a_1 + (1-t_1)b_1$ and $ t_1 \mathcal{E}(a_1) + (1-t_1) \mathcal{E}(b_1) > \mathcal{E}'(\delta_1) - \gamma$. Similarly for $\delta_2 > 0$ there exist $a_2,b_2,t_2 \in [0,1]$ such that $\delta_2 = t_2 a_2 + (1-t_2)b_2$ and $ t_2 \mathcal{E}(a_2) + (1-t_2) \mathcal{E}(b_2) > \mathcal{E}'(\delta_2) - \gamma$. We have, $$\begin{aligned} \alpha \mathcal{E}'(\delta_1) + (1 - \alpha) \mathcal{E}'(\delta_2) & \leq \alpha (t_1 \mathcal{E}(a_1) + (1-t_1) \mathcal{E}(b_1) ) + (1-\alpha)( t_2\mathcal{E}(a_2) \\ & \ \ + (1-t_2) \mathcal{E}(b_2)) + \gamma \ \\ & < \alpha t_1 \mathcal{E}(a_1) + \alpha (1-t_1)\mathcal{E}'(b_2) + (1-\alpha)t_2 \mathcal{E}(a_2) \\ & \ \ + (1-\alpha)(1-t_2)\mathcal{E}(b_2) + \gamma. \ \end{aligned}$$ From property (iv), there exist $c,d$ and $t$ such that $\alpha \delta_1 + (1-\alpha) \delta_2 = tc + (1-t )d$ and $$\begin{aligned} \alpha \mathcal{E}'(\delta_1) + (1 - \alpha) \mathcal{E}'(\delta_2) & \leq \alpha t_1 \mathcal{E}(a_1) + \alpha (1-t_1)\mathcal{E}(b_2) + (1-\alpha)t_2 \mathcal{E}(a_2) \\ & \ \ +(1-\alpha)(1-t_2)\mathcal{E}(b_2) + \gamma \ \\ & \leq t \mathcal{E}(c) + (1-t) \mathcal{E}(d) + \gamma \ \\ & \leq \mathcal{E}'(t c + (1-t)d) + \gamma \ \\ & = \mathcal{E}'(\alpha \delta_1+ (1-\alpha)\delta_2) + \gamma. \end{aligned}$$ As $\gamma$ is arbitrary, we have $ \mathcal{E}'(\alpha \delta_1 + (1-\alpha) \delta_2) \geq \alpha \mathcal{E}'(\delta_1) + (1-\alpha) \mathcal{E}'(\delta_2)$. (vi) $\mathcal{E}'(\delta_1 + \delta_2) \leq \mathcal{E}'(\delta_1) + \mathcal{E}'(\delta_2)$.\ Let $\alpha \in [0,1]$. As $\mathcal{E}'$ is a concave function, $$\begin{aligned} \mathcal{E}'(\alpha \delta) & = \mathcal{E}'(\alpha \delta + (1-\alpha)0) \ \\ & \geq \alpha \mathcal{E}'(\delta) +(1-\alpha) \mathcal{E}'(0) \ \\ & = \alpha \mathcal{E}'(\delta).\end{aligned}$$ Then, $$\begin{aligned} \mathcal{E}'(\delta_1) + \mathcal{E}'(\delta_2) & = \mathcal{E}'\bigg((\delta_1 + \delta_2) \frac{\delta_1}{\delta_1 + \delta_2} \bigg) +\mathcal{E}'\bigg((\delta_1 + \delta_2) \frac{\delta_2}{\delta_1 + \delta_2} \bigg) \ \\ & \geq \frac{\delta_1}{\delta_1 + \delta_2}\mathcal{E}'(\delta_1 + \delta_2) + \frac{\delta_2}{\delta_1 + \delta_2}\mathcal{E}'(\delta_1 + \delta_2) \\ & = \mathcal{E}'(\delta_1 + \delta_2).\end{aligned}$$ (vii) $\mathcal{E}'$ is a monotonically non-decreasing function.\ Let $\gamma >0$ and assume $\delta_1 < \delta_2$. So, there exists $a \leq b$ such that $\delta_1 = t a + (1-t)b < \delta _2$ and $ t \mathcal{E}(a) +(1-t)\mathcal{E}(b) > \mathcal{E}'(\delta_1) - \gamma$. Then, there is a $b_2 \geq b$ such that $\delta_2 = t a + (1-t)b_2$. 
We have, $$\begin{aligned} \mathcal{E}'(\delta_2) & \geq t \mathcal{E}(a) + (1-t) \mathcal{E}(b_2) \ \\ & \geq t \mathcal{E}(a) + (1-t) \mathcal{E}(b) \ \\ & > \mathcal{E}('\delta_1) - \gamma.\end{aligned}$$ As $\gamma$ is arbitrary, $\mathcal{E}'(\delta_2) \geq \mathcal{E}'(\delta_1)$. Now, we define the function $\rho'(x,y) = \mathcal{E}'(\rho(x,y)) + \rho(x,y) $ for any $x \neq y \in \Omega$. From the property (vi) of $\mathcal{E}'$, it follows that $\rho'$ is a metric. Let $\alpha > 0$ and $\gamma_1 = \alpha,\gamma_2 = \alpha/2$. For all $x,y \in \Omega$, if $\rho'(x,y) < \gamma_1 = \alpha$, then $\rho(x,y) < \alpha$. And if $\rho(x,y) < \gamma_2$, then by the property (iii) of $\mathcal{E}'$ we have $\mathcal{E}'(\rho(x,y)) < \alpha/2$. This gives $\rho'(x,y) = \mathcal{E}'(\rho(x,y)) + \rho(x,y) < \gamma_2 + \alpha/2 = \alpha$. Therefore, $\rho$ and $\rho'$ are uniformly equivalent metrics. Let $x,y \in Q$ and suppose $\rho'(x,y) < \varepsilon $, then $\mathcal{E}'(\rho(x,y)) < \varepsilon$. This implies $\mathcal{E}(\rho(x,y)) < \varepsilon $ and so, $ |f(x) - f(y)| < \varepsilon $ which means $f$ is a $1$-Lipschitz continuous function. \[lem:lip\_cts\_lip\_cts\] Every $1$-Lipschitz continuous function $f : Q \rightarrow [0,1]$ can be extended to a $1$-Lipschitz continuous function $\bar{f} : \Omega \rightarrow [0,1]$ in the following way, $$\bar{f}(x) := \min\bigg\{ 1, \inf_{y\in Q}\{f(y) + \rho(x,y) \} \bigg\}.$$ Let $\gamma>0$ and $x_1,x_2 \in \Omega$. There are mainly three cases: (i) If $\bar{f}(x_1) = 1 = \bar{f}(x_2)$, then it is trivial. (ii) If $\bar{f}(x_1) =1$ and $\bar{f}(x_2) = \inf_{y\in Q}\{f(y) + \rho(x_2,y) \}$, then there exists $y_2 \in Q$ such that $ 1 \geq \bar{f}(x_2) > f(y_2) + \rho(x_2,y_2) - \gamma $. So, $$\begin{aligned} |\bar{f}(x_1)- \bar{f}(x_2)| & = 1 - f(y_2) - \rho(x_2,y_2) +\gamma \ \\ & \leq f(y_2) + \rho(x_1,y_2)-f(y_2) - \rho(x_2,y_2) + \gamma\ \\ & = \rho(x_1,y_2) - \rho(x_2,y_2) + \gamma \ \\ & \leq \rho(x_1,x_2) + \gamma.\end{aligned}$$ (iii) If $\bar{f}(x_1) = \inf_{y\in Q}\{f(y) + \rho(x_1,y) \}$ and $\bar{f}(x_2) = \inf_{y\in Q}\{f(y) + \rho(x_2,y) \}$, then there exist $y_1, y_2 \in Q$ such that $\bar{f}(x_1) \leq f(y_1) + \rho(x_1,y_1) $ and $\bar{f}(x_2) > f(y_2) + \rho(x_2,y_2) \} - \gamma$. Therefore, $$\begin{aligned} |\bar{f}(x_1)- \bar{f}(x_2)| & = |f(y_1) + \rho(x_1,y_1) - f(y_2) - \rho(x_2,y_2) + \gamma | \ \\ & \leq | f(y_2) + \rho(x_1,y_2) -f(y_2) - \rho(x_2,y_2) + \gamma|\ \\ & = \rho(x_1,y_2) - \rho(x_2,y_2) + \gamma \ \\ & \leq \rho(x_1,x_2) + \gamma.\end{aligned}$$ As $\gamma$ is arbitrary, $\bar{f}$ is a 1-Lipschitz continuous function. We state the important Luzin’s theorem in our settings for better understanding. \[thm:Luzin\_theorem\] Let $\eta: Q \rightarrow [0,1]$ be measurable function and let $\nu$ be a probability measure on $\Omega$, where $(\Omega,\rho)$ is a separable metric space and $Q \subseteq \Omega$. Given $\varepsilon >0$, there exists a compact set $K \subseteq Q$ such that $\nu(Q \setminus K) < \varepsilon$ and $\eta|_{K}$ is a uniformly continuous function. A topological space $\Omega$ is said to be paracompact if every open cover of $\Omega$ has a locally finite open refinement. 
That is, if $\Omega \subseteq \cup_{i \in I} O_i$, where each $O_i$ is an open set, then there is a collection of open sets $\{V_j : V_j \text{ is open }, j \in J\}$ such that (i) $\cup_{j \in J} V_j$ is an open cover for $\Omega$, (ii) each $V_j$ is a subset of $O_i$ for some $i$ in $I$ and, (iii) every element $x$ of $\Omega$ has a neighborhood around $x$ which intersects finitely many $V_j, j \in J$. A cover of $\Omega$ is locally finite if it satisfies the above stated property (iii). \[app:para\_space\] Every metric space is paracompact. \[app:countable\_closed\] Let $\{V_j: V_j \text{ is open}, j \in \mathbb{N}\}$ be a locally finite countable family. Then, closure of union of $V_j$ is the union of $\bar{V}_{j}$. A topological space $\Omega$ is called a Lindel[ö]{}f space if any open cover of a subset of $\Omega$ has a countable subcover. \[app:lindelof\] A metric space is a Lindel[ö]{}f space if and only if it is separable. \[lem:inter\_dense\] Let $(\Omega,\rho)$ be a complete metric space and $\{D_n\}_{n \in \mathbb{N}}$ be a sequence of dense open sets. Then $\cap_{n \in \mathbb{N}} D_n$ is dense. \[eg:non\_archi\_metric\] Let $\Omega = \{ \sigma: \sigma = (\sigma_1,\sigma_2,\ldots) \}$ and let $\rho$ be defined as, for any $\sigma,\tau \in \Omega$, $$\begin{aligned} \rho(\sigma ,\tau) = \ \begin{cases} 0 \ \ \ \ \hspace{2.1cm} \text{if $\sigma = \tau$ } \ \\ 2^{- \min\{i : \ \sigma_i \neq \tau_i \} } \ \ \ \text{otherwise } \end{cases}\end{aligned}$$ Then, $\rho$ is a non-Archimedean metric and $(\Omega, \rho)$ is called a non-Archimedean metric space. By the definition, the function $\rho$ is symmetric and non-negative. Also, $\rho(\sigma,\tau) = 0 $ iff $\sigma = \tau$. We will show that for any $\sigma,\tau,\gamma \in \Omega$, $$\rho(\sigma,\tau) \leq \max \{ \rho(\sigma,\gamma) ,\rho(\gamma,\tau)\}.$$ If $\gamma = \sigma$ or $\gamma = \tau$, then the above inequality follows easily. Suppose $\gamma \neq \sigma \neq \tau$ and $ \max \{ \rho(\sigma,\gamma) ,\rho(\gamma,\tau) \} = \rho(\sigma,\gamma)$. Let $\rho(\sigma,\gamma) = 2^{-i}$. Then, $\sigma_j = \gamma_j $ for $j < i$ and $\sigma_i \neq \gamma_i$. Since $\rho(\sigma,\gamma) \geq \rho(\tau,\gamma)$, this means $\tau_j = \gamma_j$ for $j < i$. So, $\sigma_j = \tau_j $ for $j < i$. So, $\rho(\sigma,\tau) \leq 2^{-i} = \max \{ \rho(\sigma,\gamma) ,\rho(\gamma,\tau)\}$. Similarly, the strong triangle inequality holds when $ \max \{ \rho(\sigma,\gamma) ,\rho(\gamma,\tau)\} = \rho(\tau,\gamma)$. Hence, $(\Omega,\rho)$ is a non-Archimedean metric space. \[app:discrete\_metric\] Let $\rho$ on $\Omega$ be defined as: for $x,y \in \Omega$, $\rho(x,y) = 1$ if and only if $x \neq y$. Then $\rho$ is called a 0-1 metric. \[thm:mcdiarmid\] Let $(X_1,Y_1),\ldots, (X_n,Y_n)$ be independent pair of random variables taking values in $\Omega\times \{0,1\}$. 
Let $f$ be a real-valued function defined on $(\Omega \times \{0,1\})^n$ such that for every $1\leq i \leq n$, and for all $(x_1,y_1), \ldots,(x_n,y_n), (\hat{x}_i,\hat{y}_i) \in \Omega \times \{0,1\}$, $$\begin{aligned} & \bigg| f\bigg((x_1,y_1),\ldots,(x_i,y_i),\ldots,(x_n,y_n)\bigg)- f\bigg((x_1,y_1),\ldots,(\hat{x}_i,\hat{y}_i),\ldots, (x_n,y_n)\bigg) \bigg| \\ & \leq \alpha_i. \end{aligned}$$ Then, for $\varepsilon >0$, $$\begin{aligned} \mathbb{P}\bigg( f((X_1,Y_1),\ldots,(X_n,Y_n)) - \mathbb{E}\{f((X_1,Y_1),\ldots,(X_n,Y_n))\} \geq \varepsilon \bigg) \leq e^{\frac{-2 \varepsilon^2}{\sum_{i=1}^{n} \alpha_{i}^2}}.\end{aligned}$$ [^1]: A semimetric is a distance function which satisfies every axiom of a metric but not necessarily the triangle inequality. [^2]: An inframetric space is a semimetric space satisfying the weak triangle inequality $\rho(x,y) \leq C \max\{ \rho(x,z),\rho(z,y)\}$.
--- abstract: 'Starting from a supercompact cardinal and a measurable above it, we construct a model of $\operatorname{ZFC}$ in which the definable tree property holds at all uncountable regular cardinals. This answers a question from [@daghighi-pourmahdian].' author: - Mohammad Golshani title: Definable tree property can hold at all uncountable regular cardinals --- [^1] Introduction ============ In this paper we study the definable version of Magidor’s question on the tree property. Recall that the tree property at $\kappa$ is the assertion “there are no $\kappa$-Aronszajn trees”. A question of Magidor asks if it is consistent that the tree property holds at all regular cardinals greater than $\aleph_1$, and despite the many results obtained around it, it seems we are very far from solving it. In this paper we consider the definable version of this question, and give a complete solution to it. For a regular cardinal $\kappa,$ let the definable tree property at $\kappa$, denoted $\operatorname{DTP}(\kappa)$, be the assertion “any $\kappa$-tree definable in the structure $(H(\kappa), \in)$ has a cofinal branch”. In [@leshem], it is proved that if $\kappa$ is regular and $\lambda > \kappa$ is a $\Pi^1_1$-reflecting cardinal, then in the generic extension by the Levy collapse $\operatorname{Col}(\kappa, < \lambda)$, $\operatorname{DTP}(\lambda)$ holds. In [@daghighi-pourmahdian], this result is extended to get the definable tree property at successors of all regular cardinals. We continue [@daghighi-pourmahdian] and [@leshem] and prove a global consistency result, by producing a model of $\operatorname{ZFC}$ in which the definable tree property holds at all uncountable regular cardinals. Our result solves a question of [@daghighi-pourmahdian] and provides an answer to the definable version of Magidor’s question. \[main theorem\] Assume $\kappa$ is a supercompact cardinal and $\lambda > \kappa$ is measurable. Then there is a generic extension $W$ of the universe in which the following hold: - $\kappa$ remains inaccessible. - The definable tree property holds at all uncountable regular cardinals less than $\kappa$. In particular, the rank initial segment $W_\kappa$ of $W$ is a model of $\operatorname{ZFC}$ in which the definable tree property holds at all uncountable regular cardinals. We can weaken the large cardinal strength of $\lambda$ to a $\Pi^1_1$-reflecting cardinal. In Section \[some preliminaries\], we present some preliminaries and results that will be used in the proof of Theorem \[main theorem\]. Then in Section \[proof of main theorem\], we complete the proof of Theorem \[main theorem\]. We assume familiarity with the papers [@gitik-merimovich] and [@merimovich]. Some preliminaries {#some preliminaries} ================== We begin with some simple observations and facts that will be used throughout the paper. Assume $M$ is a transitive class. Then $\operatorname{HMD}$ is the class of all sets which are hereditarily definable using parameters from $M$. In particular, $\operatorname{HOD}$ is just $\operatorname{HMD}$, where $M$ is the class of all ordinals. We will use the following well-known fact. Assume $\MPB$ is a homogeneous ordinal definable forcing notion and let $G$ be $\MPB$-generic over $V$. Then $\operatorname{HVD}^{V[G]} \subseteq V.$ In the next lemma we present a simple proof of Leshem’s result stated above, using a measurable cardinal. \[leshem result\] Assume $\kappa$ is regular and $\lambda>\kappa$ is a measurable cardinal. 
Then $\Vdash_{\operatorname{Col}(\kappa, < \lambda)}$“$\lambda=\kappa^+ + \operatorname{DTP}(\lambda)$”. Let $\MPB=\operatorname{Col}(\kappa, < \lambda)$ and let $G$ be $\MPB$-generic over $V$. Also let $U$ be a normal measure on $\lambda$ and let $j: V \to M \simeq \operatorname{Ult}(V, U)$ be the corresponding ultrapower embedding. By standard arguments, we can lift $j$ to some $j: V[G] \to M[G \times H]$, where $H$ is $\operatorname{Col}(\kappa, [\lambda, j(\lambda)))$-generic over $V[G]$ which is defined in $V[G \times H].$ Now suppose that $T \in V[G]$ is a $\lambda$-tree which is definable in $H(\lambda)^{V[G]}=H(\lambda)[G]$. Then in $M[G \times H], T$ has a branch which is defined in a natural way: take some $y \in j(T)_\lambda$ and let $$b=\{x \in j(T) \mid x <_{j(T)} y \}=\{x \in T \mid x <_{j(T)} y \}.$$ But as the tree is definable and the forcing is homogeneous, we can easily see that $b$ is in fact in $V[G]$: let the formula $\phi$ and $z \in H(\lambda)^{V[G]}$ be such that $$\alpha <_T \beta \iff H(\lambda)^{V[G]} \models \phi(z, \alpha, \beta).$$ By the chain condition of forcing, $z$ has a name $\dot{z} \in H(\lambda)^V,$ and so $j(\dot{z})=\dot{z}$. As $j$ is an elementary embedding, we have $$\alpha <_{j(T)} \beta \iff H(j(\lambda))^{M[G \times H]} \models \phi(z, \alpha, \beta).$$ It follows that $b \in \operatorname{H(M[G])D}^{M[G \times H]}$, and by the homogeneity of the forcing, $b\in V[G]$. The next lemma is proved in [@daghighi-pourmahdian]. \[preservation of DTP\] Assume $\operatorname{DTP}(\kappa^+)$ holds and let $\MPB$ be a homogeneous forcing notion which preserves $H(\kappa^+).$ Then $\operatorname{DTP}(\kappa^+)$ holds in the generic extension by $\MPB.$ Assume $G$ is $\MPB$-generic over $V$ and let $T \in V[G]$ be a $\kappa^+$-tree definable in $H(\kappa^+)^{V[G]}$. As $H(\kappa^+)^{V[G]}=H(\kappa^+)^{V}$, so $T$ is defined by parameters from $V$ and since the forcing is homogeneous, $T \in V.$ By our assumption, $T$ has a branch in $V$ and hence in $V[G]$. Proof of main theorem {#proof of main theorem} ===================== In this section we complete the proof of Theorem \[main theorem\]. Thus assume $\kappa$ is a supercompact cardinal and let $\lambda$ be the least measurable cardinal above $\kappa$. Let $\bar{E}= \langle E_\xi \mid \xi < \lambda \rangle$ be a Mitchell increasing sequence of extenders such that for each $\xi < \lambda, \operatorname{crit}(j_\xi)=\kappa, ^{<\lambda}M \subseteq M$ and $M_\xi \supseteq V_{\kappa+2}$, where $j_\xi: V \to M_\xi \simeq \operatorname{Ult}(V, E_\xi)$ is the corresponding elementary embedding. Let $\MPB_{\bar{E}}$ be the supercompact extender based Radin forcing using $\bar{E}$ as defined in [@merimovich] and let $G$ be $\MPB_{\bar{E}}$-generic over $V$. Let us recall the basic properties of $\MPB_{\bar{E}}$. \[basic properties\] ([@merimovich]) The following hold in $V[G]$: - There exists a club $C= \langle \kappa_\xi \mid \xi < \kappa \rangle$ of $\kappa$ consisting of $V$-measurable cardinals. - For each limit ordinal $\xi < \kappa$ let $\lambda_\xi$ be the least measurable cardinal above $\kappa_\xi.$ Then $\lambda_\xi = \kappa_\xi^+$. - $\kappa$ remains inaccessible. In $V[G],$ definable tree property holds at all $\lambda_\xi,$ where $\xi < \kappa$ is a limit ordinal. Assume $\xi < \kappa$ is a limit ordinal. Let $p \in G$ be of the form $p= p_0 ^{\frown} p_1,$ where $p_0 \in \MPB^*_{\bar{e}}$ with $\kappa(\bar{e})=\kappa_\xi$ and $p_1 \in \MPB^*_{\bar{E}}$. 
So we can factor $\MPB_{\bar{E}}/ p$ as $\MPB_{\bar{E}}/p = \MPB_{\bar{e}}/ p_0 \times \MPB_{\bar{E}}/ p_1,$ where $ \MPB_{\bar{e}}/ p_0$ is essentially the forcing up to level $\kappa_\xi$ below $p_0$ and $\MPB_{\bar{E}}/ p_1$ does not add any new subsets to $\lambda_\xi^+$. Thus it suffices to show that $\operatorname{DTP}(\lambda_\xi)$ holds in the generic extension by $ \MPB_{\bar{e}}/ p_0 $. This follows by essentially the same ideas as in the proof of Main Theorem 3 from [@daghighi-pourmahdian] and the homogeneity of the forcing notion $ \MPB_{\bar{e}}/ p_0 $ as proved in [@gitik-merimovich]. The forcing $ \MPB_{\bar{e}}/ p_0$ is just homogeneous modulo Radin forcing, as we describe below, hence more work is needed to get the result, and so we sketch the proof for completeness. Given $p= (f^p, T^p) \in \MPB^*_{\bar{E}}$, set $s(p)=(f^p \upharpoonright \{\kappa\}, T^p \upharpoonright \{ \kappa\})$, and by recursion, define the projection of an arbitrary condition $p= p_0 ^{\frown} \dots ^{\frown} p_n \in \MPB_{\bar{E}}$ by $s(p)= s(p_0 ^{\frown} \dots p_{n-1})^{\frown} s(p_n).$ Set $\MPB^\pi_{\bar{E}}=s``(\MPB_{\bar{E}})$. Then $\MPB^\pi_{\bar{E}}$ is the ordinary Radin forcing using the measure sequence $u= \langle E_\xi(\kappa) \mid \xi < \lambda \rangle.$ As shown in [@gitik-merimovich], the forcing $\MPB_{\bar{E}} / H$ is homogeneous, where $H=s``(G)$ is $\MPB^\pi_{\bar{E}}$-generic over $V$. In $V$, $\lambda_\xi$ is a measurable cardinal, so let $k: V \to N$ witness this. It follows that $k(s)$ is a projection from $k(\MPB_{\bar{E}})$ onto $k(\MPB^\pi_{\bar{E}})$. Now let $T \in V[G]$ be a $\lambda_\xi$-tree, which is definable in $H(\lambda_\xi)^{V[G]}=H(\lambda_\xi)^{N[G]}$. We have a natural projection $$\sigma: k(\MPB_{\bar{E}}) \to \MPB_{\bar{E}}$$ which induces a projection $$\sigma^\pi: k(\MPB^\pi_{\bar{E}}) \to \MPB^\pi_{\bar{E}}$$ so that the following diagram is commutative $$\begin{array}{ccc} k(\MPB_{\bar{E}}) & \stackrel{\sigma}{\longrightarrow} & \MPB_{\bar{E}} \\ k(s) \downarrow && \downarrow s \\ k(\MPB^\pi_{\bar{E}}) & \stackrel{\sigma^\pi}{\longrightarrow} & \MPB^\pi_{\bar{E}} \end{array}$$ i.e., $\sigma^\pi \circ k(s) = s \circ \sigma.$ Let $K$ be $k(\MPB_{\bar{E}})$-generic over $V$ such that $\sigma``(K)=G.$ Then $L= k(s)``(K)$ is $k(\MPB^\pi_{\bar{E}}) $-generic over $V$ and $\sigma^\pi``(L)=H.$ It is easily seen that we can lift $k$ to some elementary embedding $k: V[G] \to N[K]$, which is defined in $V[K].$ As in the proof of Lemma \[leshem result\], $T$ has a branch $b \in N[K]$ of the form $$b= \{ x \in k(T) \mid x <_{k(T)} y \}=\{ x \in T \mid x <_{k(T)} y \},$$ where $y \in k(T)_{\lambda_\xi}$ is a node on the $\lambda_\xi$-th level of $k(T)$. Assume $\phi$ defines $T$, so that for some $z \in H(\lambda_\xi)^{V[G]}$ $$\alpha <_T \beta \iff H(\lambda_\xi)^{V[G]} \models \phi(z, \alpha, \beta).$$ But $H(\lambda_\xi)^{V[G]}=H(\lambda_\xi)[G],$ so for some $\dot{z} \in H(\lambda_\xi), z=\dot{z}[G]$. We can assume from the start that each element of $H(\lambda_\xi)$ is definable in $H(\lambda_\xi)$ using ordinal parameters less than $\lambda_\xi$, so without loss of generality, assume $z$ is an ordinal less than $\lambda_\xi$. 
It follows that $k(z)=z$, and $$b=\{ x \in T \mid H(k(\lambda_\xi))^{N[K]} \models \phi(z, x, y) \} \in \operatorname{HOD}^{N[K]}.$$ Note that $k$ fixes elements of $H(\lambda_\xi)$ and $\MPB^\pi_{\bar{E}}$ is essentially the Radin forcing at $\kappa_\xi < \lambda_\xi$ using the measure sequence $u$, so we can easily show that $k(\MPB^\pi_{\bar{E}})/H$ is homogeneous. On the other hand, using the elementarity of $k$ and by [@gitik-merimovich], $k(\MPB_{\bar{E}}) / L$ is also homogeneous. It easily follows that $k(\MPB_{\bar{E}}) / H$ is indeed homogeneous, so $\operatorname{HOD}^{N[K]} \subseteq N[H]$ and hence $$\operatorname{HOD}^{N[K]} \subseteq N[H] \subseteq V[G].$$ It follows that $b \in V[G],$ as required. From now on, assume that $\kappa_0=\aleph_0$ and that each limit point of $C$ is a singular cardinal in $V[G]$. Now work in $V[G]$, and let $$\MQB = \langle \langle \MQB_\xi \mid \xi \leq \kappa \rangle, \langle \dot{\MRB}_\xi \mid \xi < \kappa \rangle\rangle$$ be the reverse Easton iteration of forcing notions where for each $\xi < \kappa,$ $\Vdash_{\MQB_\xi}$“$\dot{\MRB}_\xi=\dot{\operatorname{Col}}(\kappa^*_\xi, < \kappa_{\xi+1})$”, where $\kappa^*_\xi=\kappa_\xi^+$ if $\xi>0$ is a limit ordinal and $\kappa^*_\xi=\kappa_\xi$ otherwise. Let $H$ be $\MQB$-generic over $V[G].$ By Lemmas \[leshem result\] and \[preservation of DTP\], $V[G][H] \models$“$\operatorname{DTP}(\nu^+)$ holds for all regular cardinals $\nu < \kappa$”. Note that in $V[G][H]$, there are no inaccessible cardinals below $\kappa$ (by our assumption on $C$) and limit cardinals of $V[G][H]$ below $\kappa$ are of the form $\kappa_\xi,$ for some limit ordinal $\xi < \kappa$. By \[basic properties\], $(\kappa_\xi^+)^{V[G][H]}=\lambda_\xi$, so the following lemma completes the proof. $V[G][H] \models$“$\operatorname{DTP}(\lambda_\xi)$ holds for all limit ordinals $\xi < \kappa$”. Let $\MQB=\MQB_\xi * \dot{\MQB}_\infty$, where $\MQB_\xi$ is the iteration up to level $\xi$ and $\Vdash_{\MQB_\xi}$“$\dot{\MQB}_\infty$ is $\lambda_\xi$-closed and homogeneous”. So by \[preservation of DTP\], it suffices to show that $\operatorname{DTP}(\lambda_\xi)$ holds in the extension by $\MPB_{\bar{E}}* \dot{\MQB}_\xi$. As before, let $p \in G$ be of the form $p= p_0 ^{\frown} p_1,$ where $p_0 \in \MPB^*_{\bar{e}}$ with $\kappa(\bar{e})=\kappa_\xi$ and $p_1 \in \MPB^*_{\bar{E}}$, and factor $\MPB_{\bar{E}}/ p$ as $\MPB_{\bar{E}}/p = \MPB_{\bar{e}}/ p_0 \times \MPB_{\bar{E}}/ p_1.$ As $\MPB_{\bar{E}}/ p_1$ does not add new subsets to $\lambda_\xi^+$, the forcing notion $\MQB_\xi$ is computed in both models $V^{\MPB_{\bar{E}}/p}$ and $V^{\MPB_{\bar{e}}/p_0}$ in the same way, so it suffices to prove the following: $$V^{\MPB_{\bar{e}}/p_0 * \dot{\MQB}_\xi} \models \text{~``~} \operatorname{DTP}(\lambda_\xi) \text{~holds''~}.$$ The proof is essentially the same as before, using the homogeneity of the corresponding forcing notions. [99]{} Daghighi, Ali Sadegh; Pourmahdian, Massoud, The definable tree property for successors of cardinals. Arch. Math. Logic 55 (2016), no. 5–6, 785–798. Gitik, Moti; Merimovich, Carmi, Some applications of Supercompact Extender Based Forcings to $\text{HOD}$. Preprint. Leshem, Amir, On the consistency of the definable tree property on $\aleph_1$. J. Symbolic Logic 65 (2000), no. 3, 1204–1214. Merimovich, Carmi, Supercompact Extender Based Magidor-Radin Forcing. Preprint. School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran-Iran. 
E-mail address: [email protected] [^1]: The author’s research has been supported by a grant from IPM (No. 91030417).
--- abstract: 'In industrial data analytics, one of the fundamental problems is to utilize the temporal correlation of the industrial data to make timely predictions in the production process, such as fault prediction and yield prediction. However, the traditional prediction models are fixed while the conditions of the machines change over time, thus making the errors of predictions increase with the lapse of time. In this paper, we propose a general data renewal model to deal with this problem. Combined with the similarity function and the loss function, it estimates the time of updating the existing prediction model, then updates it according to the evaluation function iteratively and adaptively. We have applied the data renewal model to two prediction algorithms. The experiments demonstrate that the data renewal model can effectively identify changes in the data, and update and optimize the prediction model so as to improve the accuracy of prediction.' author: - 'Hongzhi Wang,  Yijie Yang, and Yang Song[^1]' bibliography: - 'sample-sigconf.bib' nocite: - '[@huang2017remaining]' - '[@wang2016residual]' - '[@si2012remaining]' - '[@zhuang2016fault]' - '[@wan2017manufacturing]' - '[@hasani2017automated]' - '[@zhang2017opportunistic]' - '[@batzel2009prognostic]' - '[@kharoufeh2013reliability]' - '[@balcan2015efficient]' - '[@verstaevel2017lifelong]' - '[@silver1996parallel]' - '[@singh1992transfer]' - '[@silver2002task]' title: A General Data Renewal Model for Prediction Algorithms in Industrial Data Analytics --- Machine Learning, Data Mining, Time-series, Data Streams Introduction ============ In industrial processes, though strict regulations and stable operating conditions are required, even the most sophisticated machines cannot avoid runtime exceptions [@mckee2014review]. Due to the large number of human interactions in the industrial manufacturing process, human errors may cause abnormalities in the overall process. According to statistics, the maintenance cost of various industrial enterprises accounts for about 15%-70% of the total production cost [@bevilacqua2000analytic]. Therefore, how to conduct malfunction analysis, dynamic yield prediction and timely fault detection for industrial processes, so as to ensure the effectiveness and efficiency of the production process, has received much attention in both academia and industry. Because of the numerous sensors with high sampling frequency in industrial processes, the devices will accumulate large amounts of data in a short time interval. As time goes on, some parameters related to production prediction and fault detection inevitably change due to equipment aging, abrasion and so on. However, the currently known prediction algorithms in industry, mainly including artificial intelligence and data-driven statistical methods [@pecht2009prognostics] [@sikorska2011prognostic], are all constrained by time, up to a point. In other words, these prediction models can only accurately reflect the state of the industrial equipment in a certain period, whereas the inaccuracy increases over time. Additionally, in the problems of malfunction diagnoses and predictions based on transfer learning [@Long2015Learning][@Ran2017Transfer], though there are various kinds of faults in industrial processes, their similarities can be utilized to conduct the transfer learning of the malfunctions, so as to predict the faults efficiently and effectively. 
We will give an outline of a transfer-learning-based fault prediction algorithm in Section 3. However, in our experiments, we find that the transferability of two different types of equipment in the same technological process considerably decreased if the data used in the process are at different periods. It shows that the crucial precondition for the application of transfer learning to industrial time-series data is that the data of the two different types of industrial equipment in the same production process should also be in the same period. Therefore, the prediction model is required to recognize the changes of industrial data stream and update the parameters automatically with the lapse of time. This paper focuses on the automatic update and replacement of the model based on industrial time-series data, which are featured with periodicity and complex correlation. To address these problems, this paper proposes a general data renewal model based on lifelong machine learning[@chen2016lifelong][@liu2017lifelong][@silver2013lifelong]. It can be applied to some prediction algorithms to improve their accuracy. The main idea of the model is to assess the freshness of the industrial time-series data according to the existing prediction model and the new data stream. Then it can decide whether to invalid the old model and retrain a new one according to the similarity function and the loss function. In our previous work, we attempted to establish a time-series forecasting model system which could solve the problems of both discrete and continuous variable prediction. We have proposed a time-series yield prediction algorithm and a transfer-learning-based fault prediction algorithm, which will be roughly described in Section 3. However, their practical effects were limited due to the reasons mentioned above. Therefore, in our experiments, we will apply the data renewal model to them and testify its effectiveness by making a comparison between the learning models with and without a data renewal model. This paper makes following contributions. - We propose a general data renewal model combined with the similarity function and the loss function. It can be applied to some industrial prediction algorithms to find the regulations of renewing the prediction model based on industrial time-series, so as to update the prediction model opportunely and iteratively. - Through self-learning and automatic updating, the prediction algorithms applied with the data renewal model can be improved over time, thus reducing human interventions and perfecting the algorithm performance in industrial processes. - We evaluate the proposed model on three datasets and two prediction algorithms in industry. The results demonstrate that the model can be updated effectively according to the industrial data stream, and the accuracy of the predictions can be increased by at least 33%, which is a significant improvement. The rest of the paper is organized as follows. In Section 2, firstly, we define the problem and our target, then describe the model and approach in detail. And we apply our data renewal model to two prediction algorithms. The tuning process, brief introduction of the two prediction algorithms and the experimental results are presented in Section 3. Section 4 makes a summary of the paper; meanwhile, it explains the future work. 
Prediction Algorithms Combined with the Data Renewal Model ====================================================== Task Definition and Overview ---------------------------- In a prediction algorithm, we build a renewal model based on the data, then continuously update the prediction model according to the data similarity and the loss function. In practical problems, the inputs are often time series. Given a sequence of data points measured at a fixed time interval, $X_{t} = [x_{t1} \quad x_{t2} \quad \cdots x_{tm} ]\in R^{t\times m} $, and the existing model $M$, the output is the updated prediction model $M'$. The problem is formalized as follows. Given the data of a piece of industrial equipment $E$ acquired from time $0$ to time $t$, for model $M$, there exists a function $f(\cdot )$ such that $$M(X_t,f(\cdot))= Y_t$$ In the meantime, $E$ keeps on running. Given the data of $E$ from time $t$ to time $t+n(n>0)$, $X_{t+n} = [x_1 \quad x_2 \cdots x_m ]\in R^{n\times m} $, for model $ M'$, there exists a function $f'(\cdot )$ such that $$M'(X_{t+n},f'(\cdot))= Y_{t+n}$$ If $f(\cdot)=f'(\cdot)$, the model does not need to be updated. If $f(\cdot)\not=f'(\cdot)$, further analysis is needed to determine whether the model should be updated; if so, we let $M =M'$. In practical industrial problems, $X$ consists of time series, so the model has to be chosen according to their features. Since the underlying mechanism is often complicated and the correlations may be nonlinear, an analytical solution for $f$ is difficult to find. To achieve better performance, we often use neural network algorithms, such as the BP Neural Network[@Liu2017A], the Convolutional Neural Network[@Shin2016Deep] and the Recurrent Neural Network (LSTM)[@Ma2015Long]. In most cases, the loss function alone is enough to determine whether a model needs to be updated. However, since industrial data are often discrete and continuously collected and transferred by sensors, the traditional mechanism model is ineffective. As a result, the model should be based on data instead of mechanism. For accurate estimation, the similarity function and the loss function are combined to estimate automatically when the existing model should be updated, and the model is then retrained at the estimated time iteratively. Data Renewal Model ------------------ ![The Components of the Data Renewal Model []{data-label="fig:3"}](3-31.png){width="1.0\linewidth"} The primary problem of the updating algorithm is to build a data renewal model that predicts when the model ought to be updated. As shown in Figure \[fig:3\], the model achieves this by two considerations. One is the similarity between the original data and the new data: if they are similar enough, the model does not have to be updated or retrained. It is measured by the similarity computation, which will be described in Section \[sec:sim\]. The other is the applicability of the existing model: if the model loses efficacy on the new data stream, it should be updated or even retrained. It is measured by the loss function, which will be described in Section \[sec:loss\]. ### Similarity Computation {#sec:sim} The data similarity computation begins by analyzing how much the unprocessed data at the previous moment differ from the data at the next moment. Chiefly, the data are classified into two types: binary attribute data and numeric data. They will be discussed in this section, respectively. For the binary attribute data, the similarity is measured by the counting method.
That is, for time series $X_{t} = [x_{t1} \quad x_{t2} \cdots x_{tm} ]\in R^{t\times m} $ and $X_{t+n} = [x_1 \quad x_2 \cdots x_m ]\in R^{n\times m} $, assuming that $X_t^i$ and $X_{t+n}^i$, the $i$-th dimension of $X_t$ and $X_{t+n}$, are both binary attributes, let the number of data pairs that both belong to the first state be $S_1$, the number of data pairs that both belong to the second state be $S_2$, and the total amount of data be $n$; then their similarity is defined as follows. $$sim(X_t^i,X_{t+n}^i)=\frac{S_1+S_2}{n}$$ For a specific example, with $X_t^i=[1~0~0~0~1~0~1~1]$ and $X_{t+n}^i=[0~0~0~1~1~1~1~1]$, the 2nd and 3rd entries of $X_t^i$ and $X_{t+n}^i$ are both 0, so $S_1=2$; the 5th, 7th and 8th entries are both 1, so $S_2=3$, and $sim(X_t^i,X_{t+n}^i)=\frac{2+3}{8}=0.625$. In the case of numerical data, there are generally two methods to measure the degree of similarity: the similarity coefficient and the similarity measurement. The Pearson Correlation Coefficient, as shown in equation (4), can be used to avoid the error of similarity measurement resulting from the severe dispersion of industrial data. It has a value between $+1$ and $-1$, where $+1$ is total positive linear correlation, 0 is no linear correlation, and $-1$ is total negative linear correlation. $$\begin{aligned} \rho_{X,Y}=\frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}} =\frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_{X}\sigma_{Y}} \end{aligned}$$ Data from the same set but from different periods also have a certain degree of similarity. In order to measure it better, we take the absolute value and ignore whether the correlation is positive or negative. In conclusion, to achieve better performance, we adopt the similarity coefficient method here and modify the Pearson Correlation. The new coefficient formula, equation (5), is shown as follows. $$\scriptsize \begin{aligned} sim(X_t^i,X_{t+n}^i) &=\rho_{X^i_t,X^i_{t+n}}=|\frac{cov(X_t^i,X^i_{t+n})}{\sigma_{X^i_t}\sigma_{X^i_{t+n}}}| \\ &=|\frac{E[(X_t^i-\mu_{X_t^i})(X^i_{t+n}-\mu_{X^i_{t+n}})]}{\sigma_{X^i_t}\sigma_{X^i_{t+n}}}| \end{aligned}$$ For the different attribute vectors in the same data set, the overall similarity is obtained by averaging the per-attribute similarities and adjusting them with the parameter $\delta$, as shown in (6). In most cases $\delta_i=1$; if the data set does not contain attribute $i$, then $\delta_i=0$. $$\scriptsize \begin{aligned} sim(X_t,X_{t+n}) &=1-\frac{\sum^m_{i=1}\delta_i(1-sim(X_t^i,X^i_{t+n}))}{\sum^m_{i=1}\delta_i }\\ &=\frac{\sum^m_{i=1}\delta_i\cdot sim(X_t^i,X^i_{t+n})}{\sum^m_{i=1}\delta_i } \end{aligned}$$ According to (3), (5) and (6), the value of the similarity lies in the range $[0, 1]$: the higher the value, the more similar the two data sets are. When the value of the similarity function is lower than a threshold $z$, the model needs to be updated. ### Loss Function {#sec:loss} The loss function aims to determine whether to update the old model or to abandon it and train a new one. In fact, updating means adjusting the parameters of the existing model. Therefore, it is necessary to estimate the quality of the model, and generally this is measured by a loss function. $$\theta^{\ast} =\arg\min_{\theta} \frac{1}{N}\sum_{i=1}^{N}L(y_i,f(x_i;\theta))+\lambda \Phi(\theta)$$ where $L$ is the loss function and $\Phi(\theta)$ is the regularization (penalty) term. For specific problems, we use a more specific loss function for the analysis.
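To make the similarity computation concrete, the following is a minimal Python/NumPy sketch of equations (3), (5) and (6). It is not the authors' code: the function names, the assumption that the two windows being compared are length-matched (e.g., the most recent `len(X_new)` rows of the old period against the new rows), and the use of `numpy.corrcoef` are our choices for illustration.

```python
import numpy as np

def binary_similarity(a, b):
    """Eq. (3): fraction of positions where two binary attributes agree, (S1 + S2) / n."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a == b))

def numeric_similarity(a, b):
    """Eq. (5): absolute Pearson correlation between two numeric attributes."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return abs(np.corrcoef(a, b)[0, 1])

def dataset_similarity(X_old, X_new, is_binary, delta=None):
    """Eq. (6): delta-weighted mean of the per-attribute similarities.

    X_old and X_new are (n, m) arrays covering the same attributes; is_binary
    is a length-m boolean list; delta defaults to all ones.
    """
    m = X_old.shape[1]
    delta = np.ones(m) if delta is None else np.asarray(delta, float)
    sims = np.array([
        binary_similarity(X_old[:, i], X_new[:, i]) if is_binary[i]
        else numeric_similarity(X_old[:, i], X_new[:, i])
        for i in range(m)
    ])
    return float(np.sum(delta * sims) / np.sum(delta))

# Worked binary example from the text: S1 = 2, S2 = 3, similarity = 0.625
print(binary_similarity([1, 0, 0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 1, 1, 1, 1]))
```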
One of the typical problems is the updating of an industrial big data model, which mainly involves two aspects: the model for continuous data, such as yield prediction, and the model for categorical attribute data, such as fault prediction. According to the features of the different models, we should define the corresponding loss function for estimation. For continuous data prediction, we use the RMSE (Root Mean Square Error) to evaluate the loss, since it gives better estimates than the quadratic loss function in our experiments. $$RMSE(X,f(\cdot)) = \sqrt{\frac{1}{m}\sum^m_{i=1}(f(x_i)-y_i)^2}$$ For classification prediction problems, the 0-1 loss, log loss, hinge loss, and perceptron loss functions[@rosasco2004loss] [@shen2005loss] [@masnadi2009design] are generally used to estimate the quality of the model. Here, we use the perceptron loss function: $$\mathop{\arg\min}_{w,b} \ \ [-\sum^n_{i=1}y_i(w^Tx_i+b)]$$ With the loss function, two questions should be taken into account: whether to update the existing model and whether to abolish the old one. Let the loss of the existing model be $L_m$; after the new data are collected and fed into it for training, the loss of the new model is $L_n$. The change rate of the loss is then computed as follows. $$LC=|\frac{L_n-L_m}{L_m}|$$ ![The State-determining Schematic Diagram of the Loss Function Module []{data-label="fig:3-3"}](3-3.png){width="1.0\linewidth"} In the formula, $LC$ is clearly non-negative. Threshold values need to be set to determine when to update the model and when to discard it (retrain a new one). They are set as follows: if $LC>y$, the original model is discarded and a new model is built through retraining. If $y>LC>x$, the model is updated: new data are added to the model for training, so the model parameters change. If $LC<x$, the original model is retained with no update. The rules are shown in Figure \[fig:3-3\]. The thresholds are adjusted according to the experiments. Algorithm 1 describes the procedure of the data renewal model. Line 1 analyzes the data similarity of period $t_1$ and period $t_2$. Lines 2-11 decide whether to update the model according to the data similarity. Lines 3-9 calculate the value of the loss function in periods $t_1$ and $t_2$ and output the flag bit according to the state-determination rules. \[1\] **Input:** the data set $data_{T1}$ in period $t_1$; the data set $data_{T2}$ in period $t_2$. **Output:** the updating flag of the model $flag$.\ $p \gets sim(data_{T1},data_{T2})$; **if** $p < z$ **then** $l \gets loss(data_{T1},data_{T2})$; **if** $l > y$ **then** return 2; // Discard the model and retrain a new one **else if** $l > x$ **then** return 1; // Add the new data and update the model **else** return 0; // Retain the model **else** return 0; // Retain the model Algorithm Description --------------------- The data renewal model is based on the data similarity and the loss function. Furthermore, to control the updating frequency, the amount of new data should be controlled. That is, the similarity is not calculated until a certain amount of data has been accumulated. Only when the threshold on the data size is reached is the loss function computed and the corresponding model processing performed. Algorithm 2 describes the updating procedure according to Algorithm 1 and the flow discussed above. Lines 1-4 decide whether enough new data have been accumulated, and Lines 5-12 decide whether to update the model according to the data renewal model in Algorithm 1.
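Ahead of the extracted listing of Algorithm 2 that follows, a compact sketch of the flag decision just described (similarity threshold $z$, loss-change thresholds $x$ and $y$) may be helpful. This is our schematic reading, not the authors' code: the boundary handling at exactly $z$, $x$ or $y$ is our choice, and the default threshold values are the tuned settings reported later in the experiments.

```python
def renewal_flag(sim_value, loss_old, loss_new, z=0.5, x=0.3, y=0.9):
    """Flag 2 = discard and retrain, 1 = update with the new data, 0 = retain."""
    if sim_value >= z:                            # data still similar enough: keep the model
        return 0
    LC = abs((loss_new - loss_old) / loss_old)    # change rate of the loss
    if LC > y:                                    # model has lost validity
        return 2
    if LC > x:                                    # model drifted but is still usable
        return 1
    return 0                                      # negligible change
```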
\[1\] The data set $data_{T1}$ in period $t1$; The data set $data_{T2}$ in period $t2$; The minimum (threshold) amount of data $L$,the model $M$ The model $M_f$\ $n \gets shape(data_{T2})$ return $M$; $flag \gets update(data_{T1},data_{T2})$; $M_f \gets train(data_{T1},data_{T2})$; $M_f \gets train(data_{T2})$; $M_f \gets M$; return $M_f$; The complexity analysis of the algorithm is focused on the data renewal model. Let the data size in the period $t_1$ and $t_2$ be $N$ and $M$, respectively. The cost of similarity computation is $O(MN)$. The loss function add the data in period $t_2$ into the existing model and compares the new loss with the previous one. It only needs one round of calculation, so the time complexity is $O(M)$. Therefore, the overall time complexity is $O(MN+M)=O(MN)$ Let the needed space of the data in period $t_1$ and $t_2$ be $N$ and $M$, respectively. The similarity matrix costs $O(NM)$. $O(M)$ is required to store the intermediate results calculated by the data in period $t_2$ to calculate its loss function. As a result, the overall spatial complexity is $O(N+2M+MN)$ = $O(MN)$ Experiments and Evaluation ========================== Experimental Environment and Datasets ------------------------------------- The experimental environment and datasets are shown in Table 1. Among them, the industrial boiler dataset and the generator dataset are generated by real-time production system in the Third Power Plant of Harbin. The boiler dataset includes more than 400,000 pieces of data, whose dimension are up to 70, and the main attributes involve time, flow, pressure, and temperature. Moreover, the generator dataset includes more than 80,000 pieces of data, whose dimension are up to 38, and the major attributes include time, speed, power, pressure and temperature. In this section, the data renewal model is respectively applied to the time-series yield prediction algorithm and the transfer-learning-based fault prediction algorithm to verify its effectiveness in model updating and optimization. \[Tab:bookRWCal\] Machine Configuration 2.7GHz Intel Core i5 8GB 1867 MHz DDR3r -------------------------- ---------------------------------------------------------------------------------------------------- -- Experimental Environment Python 3.6.0; Tensorflow Datasets Industrial Boiler Dataset; Industrial Generator Dataset; Synthetic Industrial Generator Dataset Algorithms The Time-series Yield Prediction Algorithm; The Transfer-learning-based Fault Prediction Algorithm : The Experimental Environment and Datasets The Optimization of the Prediction Algorithm Based on Data Renewal Model ------------------------------------------------------------------------ The data renewal model has two critical parameters, the similarity and the change rate of loss. Since the thresholds may have an impact on accuracy, we are imperative to test their impacts. We first test the impact of the similarity. The goal of the similarity is to estimate the similar degree of the data at different time. The higher the value is, the more similar the data are. Therefore, the similarity threshold can neither be too low nor too high. Initially, it is set to 0.3, 0.5, and 0.7, while the loss rate threshold is set to 1/0.4. In order to observe the changes with different thresholds more intuitively, we set the original number of tuples to 10,000. The model is assessed whether to be updated according to the flag bit for every 10,000 pieces of data added. 
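As a companion to the listing and complexity discussion above, here is a schematic Python sketch of the overall updating procedure (Algorithm 2). It is our reading of the extracted listing, under stated assumptions: `similarity`, `loss` and `train` are caller-supplied stand-ins for the similarity function, the chosen loss, and the learner; we assume that "discard" means retraining on the new data only, that "update" means retraining on the old plus the new data, and we approximate $L_n$ by the existing model's loss on the new batch.

```python
import numpy as np

def maybe_update(model, data_t1, data_t2, L, similarity, loss, train,
                 z=0.5, x=0.3, y=0.9):
    """Act only once at least L new rows have accumulated, then
    retain / update / discard the model (a sketch, not the paper's code)."""
    if len(data_t2) < L:                        # not enough new data yet
        return model
    if similarity(data_t1, data_t2) >= z:       # data essentially unchanged
        return model
    L_m = loss(model, data_t1)                  # loss of the existing model
    L_n = loss(model, data_t2)                  # proxy for the loss on the new stream
    LC = abs((L_n - L_m) / L_m)                 # change rate of the loss
    if LC > y:                                  # discard: retrain on the new data only
        return train(data_t2)
    if LC > x:                                  # update: retrain on old + new data
        return train(np.concatenate([data_t1, data_t2]))
    return model                                # retain
```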
![The Impact of the Similarity for Algorithm 1[]{data-label="fig:6"}](6.png){width="0.8\linewidth"} As shown in Figure \[fig:6\], when the similarity threshold is 0.3, the model updating frequency is low, while the model updates too often when the threshold is 0.7. To ensure that the frequency is kept at an appropriate level, the similarity threshold is set to 0.5 in the next experiments . ![The Impact of the Change Rate for Algorithm 1[]{data-label="fig:7"}](7.png){width="0.8\linewidth"} Then we test the impact of the change rate of loss. We vary the rate of loss as 1/0.4, 0.9/0.4, 0.8/0.4, 1/0.3, 0.9/0.3, and 0.8/0.3. The results are shown in Figure  \[fig:7\]. When the flag is 2, the model is discarded. When the flag is 1, the model is updated, and it is retained if the flag is 0. It is found that the discarding and updating of the model shows a more balanced frequency when the threshold is 0.9/0.3. ![The Impact of the Similarity for Algorithm 2 []{data-label="fig:8"}](8.png){width="0.8\linewidth"} ![The Impact of the Change Rate for Algorithm 2 []{data-label="fig:9"}](9.png){width="0.8\linewidth"} The similarity and the loss rate are tuned in a similar process for Algorithm 2. The experimental results are shown in Figures  \[fig:8\] and  \[fig:9\]. We have observed that when the similarity is 0.5 and the loss rate is 0.9/0.3, the updating is more effective. When the thresholds are moderate, the model can execute the three actions, update, discard or retain the model in a proper and balanced frequency. From the experimental results described above, after tuning the prediction algorithm based on the data renewal model, the threshold is fixed at the similarity of 0.5, and the loss rate is 0.9/0.3. The experimental results show that the algorithm does not need to be tuned frequently and the fixed parameters can also achieve great experimental results. Results for the Time-series Yield Prediction Algorithm ------------------------------------------------------ We test the effectiveness of the proposed model renewal strategy on the time-series yield prediction algorithm. It’s an LSTM algorithm based on multi-variable tuning. The algorithm improves the traditional LSTM algorithm and converts the time-series data into supervised learning sequences utilizing their periodicity, so as to improve the prediction accuracy. ![ The Procedure of the Time-series Yield Prediction Algorithm []{data-label="fig:10"}](10.png){width="1.0\linewidth"} The LSTM algorithm based on multi-variable tuning is divided into three modules, a data transform module, an LSTM modeling module, and a tuning module. The data transform module converts the time-series data into a supervised learning sequence, and simultaneously searches for the variable sets which are most relevant and have the highest Y regression coefficient; the LSTM modeling module connects multiple LSTM perception to form an LSTM network; the tuning module adjusts the parameters according to the RMSE in each round, and returns the adjusted parameters to the data transform module for training iteratively. Through continuous iteration, the approximate optimization solution of the algorithm is obtained. The algorithm process is shown in Figure  \[fig:10\]. The following is the result of applying the data renewal model to the yield prediction algorithm based on time-series. After determining the optimal thresholds, we selected different update frequencies, 10,000, 50,000 and 100,000 pieces of data for each batch, to verify the effectiveness of the model. 
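Before turning to the results, a brief sketch of the data transform step may help. The paper's transform module converts the time series into supervised learning sequences for the LSTM; the following is one common way to do this, given here only for illustration. The window length `n_lag`, the single-layer network, and the optimizer are our placeholder choices, not values taken from the paper.

```python
import numpy as np
import tensorflow as tf

def series_to_supervised(series, n_lag=24):
    """Turn a (T, m) multivariate series into samples of n_lag past rows,
    with the next step's yield (assumed to sit in column 0) as the target."""
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for t in range(n_lag, len(series)):
        X.append(series[t - n_lag:t])
        y.append(series[t, 0])
    return np.stack(X), np.array(y)              # shapes (N, n_lag, m) and (N,)

def build_lstm(n_lag, n_features, units=64):
    """A single-layer LSTM regressor; report sqrt(MSE) as the RMSE."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(units, input_shape=(n_lag, n_features)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```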
![The Result at a Frequency of 10000 pieces/batch[]{data-label="fig:11"}](11.png){width="0.8\linewidth"} First, we set the data stream updating frequency to 10,000 pieces/batch. The experimental results are shown in Figure  \[fig:11\]. The RMSE changes with the continuously updating of the data stream. During this process, the model’s RMSE remains unchanged (the flag is 0) or decreases (the flag is 1 or 2). As time goes by and the data accumulate, the RMSE of the model constantly decreases with an irregular frequency. ![The Result at a Frequency of 50000 pieces/batch []{data-label="fig:12"}](12.png){width="0.8\linewidth"} When the data updating frequency is 50,000 pieces/batch, the experimental results are shown in Figure  \[fig:12\]. It is found that when the updating frequency of the data is reduced, the RMSE suffers different degrees of reduction, which indicates that different data update frequencies will affect prediction accuracy. Nevertheless, when the amount of data tend to be consistent, overall, the RMSE can be reduced to a specific constant with less error. We set the frequency for data updating to 100,000 pieces/batch. Let the original data be 100,000 pieces, and take the reduction degree of RMSE as its updating accuracy. As shown in Table 2, the algorithm can perform effective model updating in various frequencies. Moreover, the updating accuracy is up to 63.94%. No. Data Volume Similarity Loss Rate RMSE Accuracy ----- ------------- ------------ ------------- ------- ---------- -- 1 100000 0.34 0.592957366 30.17 - 2 100000 0.44 0.795650783 10.88 63.94% 3 100000 0.89 0.122410221 10.88 - : Experimental Parameters of the Yield Prediction Algorithm []{data-label="Tab:bookRWCal"} Results for Transfer-learning-based Fault Prediction Algorithm -------------------------------------------------------------- In the problems of malfunction diagnosis and predictions, though there are various kinds of faults, the similarities of them can be utilized to adopt the transfer learning of malfunctions, so as to predict the errors efficiently and effectively. ![The Schematic Diagram of the Transfer Learning Based Fault Prediction Algorithm []{data-label="fig:15"}](15.png){width="1.0\linewidth"} The transfer-learning-based fault prediction algorithm mainly consists of three modules, the time window module, the mapping network module, and the model transfer module. The time window module preprocesses the data and conducts similarity analysis based on the time window. The mapping network module mainly uses the step of the similar region of time-series data to transfer the data, and construct the mapping network with the converted data. The model transfer module mainly utilizes the mapping network to transfer the trained model, which is prepared with the neural network method to construct a deep learning network. Figure  \[fig:15\] shows the structure of the algorithm. In the experiment, we suppose that there are two devices, device $A$ with labeled data and device $B$ with unlabeled data. The output is the fault detection model $M$ of device $B$. The update of the algorithm is mainly composed of two parts. For device $A$, the update target is the fault prediction model. For device $B$, if the data change, it is the mapping network module that should be updated. Therefore, our experiments are also divided into two parts. For the prediction of device $A$, the original model is evaluated using the AUC (Area Under the Curve) [@fawcett2006introduction]. 
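Concretely, the AUC check on device $A$'s model can be written in a few lines. The sketch below uses scikit-learn's `roc_auc_score` (our tooling choice, not the paper's) and treats the roughly 1% loss reported next as a tolerance on the AUC drop.

```python
from sklearn.metrics import roc_auc_score

def needs_no_update(model, X_new, y_new, auc_baseline=0.97, tol=0.01):
    """True if the AUC on the new batch stays within `tol` of the baseline,
    i.e. device A's fault model can be kept after the initial modeling."""
    scores = model.predict_proba(X_new)[:, 1]   # assumes an sklearn-style classifier
    return auc_baseline - roc_auc_score(y_new, scores) <= tol
```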
When the update frequency of the data is 10,000 or 50,000 pieces/batch, the prediction accuracy AUC is about 0.97. When the frequency is 100,000 pieces/batch, the prediction accuracy AUC is about 0.958 and the loss is less than 1%, so the model need not be updated after the initial modeling. For device $B$, which receives new data, the updating process is more complicated. The mapping network module is updated first. Then the prediction model of device $A$ is transferred through the mapping network and trained. Finally, the prediction model of device $B$ is analyzed. The experimental results are similar to those of the time-series yield prediction algorithm. We set the update frequency to 10,000, 50,000 and 100,000 pieces/batch for the verification. ![The Result at a Frequency of 10000 pieces/batch []{data-label="fig:13"}](13.png){width="0.8\linewidth"} During updating, the model’s AUC remains unchanged (the flag is 0) or increases (the flag is 1 or 2). The experimental results with a frequency of 10,000 pieces/batch are shown in Figure \[fig:13\]. With the accumulation of data over time, the AUC of the model continually increases, which indicates better performance. ![The Result at a Frequency of 50000 pieces/batch []{data-label="fig:14"}](14.png){width="0.8\linewidth"} When the data update frequency is 50,000 pieces/batch, the experimental results are shown in Figure \[fig:14\]. It is observed that the degree of updating varies with different updating frequencies. When the amount of data tends to be consistent, the AUC values become stable. We set the frequency for data update to 100,000 pieces/batch, and take the AUC as its accuracy. We observe that when the data similarity is low, there is a more considerable degree of updating, as shown in Table 3. This demonstrates that the data renewal model can effectively update the existing model with the data stream at different updating frequencies. No. Data Volume Similarity Loss Rate AUC Accuracy ----- ------------- ------------ ------------- ------ ---------- -- 4 100000 0.33 0.874255365 0.68 - 5 100000 0.64 0.571217426 0.68 - 6 100000 0.23 0.427696411 0.91 33.82% : Experimental Results for the Transfer-learning-based Fault Prediction Algorithm []{data-label="Tab:bookRWCal"} Conclusions and future work =========================== In this paper, we propose a general data renewal model to assess the industrial data stream and set thresholds for updating the prediction model adaptively. It can be applied to some prediction algorithms to improve their performance. The effectiveness of the model and its significance in improving the accuracy of prediction are demonstrated by experiments on real-world industrial datasets and prediction algorithms. However, since it is a general model, the tuned parameters cannot be expected to suit every kind of problem, so self-adaptive tuning may be considered in future work. In addition, we only applied the model to two industrial prediction algorithms in this work; the model could be examined with more algorithms. [^1]: Hongzhi Wang, Yijie Yang and Yang Song are with the Department of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, 150000 China e-mail: [email protected].
--- abstract: 'We present two-dimensional inviscid hydrodynamic simulations of a protoplanetary disk with an embedded planet, emphasizing the evolution of potential vorticity (the ratio of vorticity to density) and its dependence on numerical resolutions. By analyzing the structure of spiral shocks made by the planet, we show that progressive changes of the potential vorticity caused by spiral shocks ultimately lead to the excitation of a secondary instability. We also demonstrate that very high numerical resolution is required to both follow the potential vorticity changes and identify the location where the secondary instability is first excited. Low-resolution results are shown to give the wrong location. We establish the robustness of a secondary instability and its impact on the torque onto the planet. After the saturation of the instability, the disk shows large-scale non-axisymmetry, causing the torque on the planet to oscillate with large amplitude. The impact of the oscillating torque on the protoplanet’s migration remains to be investigated.' author: - 'Hui Li, Shengtai Li, Josef Koller, Burton B. Wendroff, Richard Liska, Chris M. Orban, Edison P.T. Liang and Douglas N.C. Lin' title: Potential Vorticity Evolution of a Protoplanetary Disk with An Embedded Protoplanet --- INTRODUCTION ============ Well before extrasolar planets were discovered, Goldreich & Tremaine (1979; 1980) and Lin & Papaloizou (1986a,b) have speculated that tidal interactions between disks and embedded protoplanets would lead to planet migration. Ward (1997) suggested that two different types of migration could occur. One, called type I migration, is when the protoplanet mass is still small enough that the migration rate can be evaluated using linear analysis and is shown to be quite fast in the sense that migration timescale is shorter than the estimated buildup timescale of a giant protoplanet core. This has presented a serious problem for the formation of giant planets. The other, called type II migration, is when the protoplanet is massive enough that it opens gaps in the disk. The protoplanet is then locked into the disk’s viscous evolution timescale, which is typically long. The discovery of extrasolar planets with a large number of short-period planets very close to their parent stars seems to support a migration scenario since it is more likely that giant planet formation takes place at larger distances outside the snow line. This has generated renewed interests in the study of tidally induced planet migration (e.g., Kley 1999; Lubow, Seibert, & Artymowicz 1999; Nelson et al. 2000; D’Angelo, Kley, & Henning 2003). Both types of migration have been basically confirmed by non-linear numerical simulations though most simulations to-date are performed with a prescribed viscosity. A recent study by Balmforth & Korycansky (2001) focused on the evolution of potential vorticity in the co-orbital region of a planet and they noted the possibility of secondary instabilities in the corotation resonance region, which lead to the formation of vortices. They showed that nonlinear dynamics of the co-orbital region have an important influence on the saturation of the corotation torque. In an earlier paper [@KollerLiLin03], we also investigated the nonlinear dynamics in the co-orbital region (within $\sim 10$ Roche lobe radii of the planet) through global nonlinear 2D inviscid simulations. 
We found that the potential vorticity of the disk flow undergoes systematic changes, presumably caused by the spiral shocks generated by the planet though we did not give any detailed analysis of the shocks. We also found that the flow eventually becomes unstable due to the inflexion points in the potential vorticity profile. Vortices are formed as a consequence. This paper is a close follow up to Koller et al. Our purpose is two fold. First, we present quantitative analyses of the ways that shocks change the potential vorticity of the flow. Second, through extensive numerical resolution studies, we confirm the physical mechanism for exciting a secondary instability through inflexion points in the potential vorticity profile. However, we now find that the instability is excited at a different location from what Koller et al. presented. The phenomenon observed by Koller et al. is apparently due to limited numerical resolutions, hence artificial. Nevertheless we are still able to demonstrate the existence of such a secondary instability, and its impact on the planet’s torque. We have also performed extensive numerical tests via different resolutions to check the validity of these new findings. This paper is organized as follows. We first describe our numerical approach and how we set up our simulations in §\[sec:num\] and §\[sec:init\]. We then present the analyses on shocks and how they change the potential vorticity in §\[sec:shock\]. The excitation of a secondary instability is described in §\[sec:s\_instab\]. The influence of numerical resolution is presented in §\[sec:resol\]. Discussions are given in §\[sec:diss\]. NUMERICAL MODEL {#sec:num} =============== We assume that the protoplanetary disk is thin and can be described by the inviscid two-dimensional isothermal Euler equations in a cylindrical {$r, \phi$} plane with vertically integrated quantities. The differential equations are the same as given in Kley (1998). Simulations are carried out mostly using a split dimension hydro code (Li & Li 2005) whose basic algorithm is based on the MUSCL-Hancock scheme (MHS) (Toro 1999). The methodology of MHS is extended here for the presence of source terms with two following main modifications: the fluxes are computed using primitive instead of conserved variables, where van Leer flux limiter is applied to the slopes of the primitive variables, and an iterative approximate Riemann solver is used. Standard dimensional splitting as well as some source splitting are used. Radial sweeps with the angular flux derivatives omitted are alternated with angular sweeps with the radial flux derivatives omitted. All source terms except the planet’s gravitational force are included in the radial sweep. The planet force is applied in a separate sweep as the numerical gradient of the planet potential, so that the numerical curl of this gradient is zero, minimizing the potential vorticity contamination by the planet. We use the local co-moving angular sweep as proposed in the FARGO scheme of Masset (2000) and modified in Li et al. (2001). The idea is basically to use a semi-Lagrangian form for the transport terms in the angular sweep. A constant velocity is subtracted from the angular transport velocity. This velocity is chosen to be close to the mean angular velocity, in order to move the data an integral number of angular cells in one time step. This is then corrected by an integral shift of the data. This enables us to use a much larger time step than would otherwise be possible. 
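The orbital-advection step described above is easier to see in code. Below is a schematic sketch (not the authors' solver) of the integer-cell shift for a single radial ring on a uniform, periodic azimuthal grid; only the small residual angular velocity is left for the ordinary, CFL-limited angular transport sweep.

```python
import numpy as np

def fargo_shift(q_ring, omega_mean, dt, dphi):
    """Shift one radial ring of quantity q by the near-integer number of cells
    implied by the ring's mean angular velocity; return the shifted ring and
    the residual angular velocity handled by the normal angular sweep."""
    n_shift = int(np.round(omega_mean * dt / dphi))   # integer number of cells
    q_shifted = np.roll(q_ring, n_shift)              # periodic in phi
    omega_residual = omega_mean - n_shift * dphi / dt
    return q_shifted, omega_residual
```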
The MHS scheme requires two ghost cells in the radial sweep (the angular sweep is periodic). Holding these ghost cells at the initial steady state produced the smallest boundary reflection of all boundary conditions tried. INITIAL SETUP {#sec:init} ============= The two-dimensional disk is modeled between $0.4 \leq r \leq 2$. The planet is assumed to be on a fixed circular orbit at $r=1$. A co-rotating frame is used and the positions of the central star and the planet are fixed at $(r,\phi)=(0,0)$ and $(r_p,\phi_p)=(1,\pi)$ \[acceleration due to frame rotation is also included, see Kley (1998)\]. The mass ratio between the planet and the central star is $\mu=M_{p}/M_{\ast}$. Its Hill (Roche) radius is $r_H = r_p (\mu/3)^{1/3}$. The disk is assumed as isothermal with a constant temperature throughout the simulation region. The isothermal sound speed, scaled by the Keplerian rotation speed $v_\phi$ at $r=1$, is $c_s/v_{\phi}=H/r$ where $H$ is the disk scale height. Time is measured by the orbital period at $r=1$. We choose an initial surface density profile with $\Sigma(r) \propto r^{-3/2}$, so that the ratio of vorticity to surface density, potential vorticity (PV $\equiv \zeta$), has a flat radial profile $\zeta = (\nabla\times {\bf v})_z/\Sigma \approx \textrm{const}.\approx 0.5$. (Note that $\zeta$ has a small deviation from being precisely $0.5$ due to the finite pressure gradient which slightly modifies $v_\phi$ from its Keplerian value.) With this initial condition, we avoid the generation of inflection points due to the rearrangement of $\zeta$ distribution as it is carried by the stream lines [@bk01]. We typically run the disk without the planet for 10 orbits first so that the disk is settled into a numerical equilibrium (very close to the analytic one). Then the planet’s gravitational potential is gradually “turned-on” over a 10-orbit period, allowing the disk to respond to the planet potential gradually. Furthermore, the planet’s potential is softened by an approximate three-dimensional treatment. The basic idea is that, at any location {$r,\phi$}, the matter with surface density $\Sigma$ is distributed (divided into many cells) vertically over many scale heights (say $\pm 6H$) according to a Gaussian isothermal profile. Radial and azimuthal forces exerted by the planet at a specific {$r,\phi,z$} are calculated in this 3D fashion. To get the 2D radial and azimuthal forces for a specific {$r,\phi$}, one then integrates all the cells along $z$. Comparing this pseudo-3D treatment with the usual 2D treatment where a fixed smoothing distance is used, we find that it reduces the force close to the planet (within $\sim 2 r_H$) by about a factor of 2 but converges to the usual 2D force beyond $\sim 5 r_H$. Although there is some accumulation of gas near the planet, we do not allow any disk gas to be accreted onto it. Self-gravity of the disk is not included. Runs are made using several radial and azimuthal grids to study the influence of resolution, which ranges from $(n_{r}\times n_{\phi})= 200\times 800$ to $1200\times 4800$. Simulations typically last several hundred orbits at $r=1$. One key feature of our simulations is that they are performed at the inviscid limit, i.e., we do not explicitly include a viscosity term, though numerical viscosity is inevitable and is needed to handle shocks. In addition, our simulations tend to be of higher resolution. 
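The pseudo-3D softening of the planet force described above can be sketched as follows. This is our schematic version, not the code used in the simulations: the $\pm 6H$ vertical extent follows the text, while the number of vertical slices and the simple weighted-sum quadrature are placeholder choices.

```python
import numpy as np

def pseudo3d_planet_accel(dx, dy, GMp, H, nz=49, zmax=6.0):
    """In-plane acceleration from the planet at planar offset (dx, dy), with the
    local surface density spread vertically as a Gaussian of scale height H
    over +/- zmax*H and the planar force components summed over the slices."""
    z = np.linspace(-zmax * H, zmax * H, nz)
    w = np.exp(-0.5 * (z / H) ** 2)
    w /= w.sum()                                  # weights of the vertical slices
    d3 = (dx**2 + dy**2 + z**2) ** 1.5
    ax = np.sum(w * (-GMp * dx / d3))
    ay = np.sum(w * (-GMp * dy / d3))
    return ax, ay
```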
With a radial $n_r = 800$ (some runs go up to $n_r=1200, n_\phi=4800$) and $0.4 \leq r \leq 2$, the diameter of the Hill sphere of a planet with $\mu=10^{-4}$ is resolved by $\sim 32$ cells in each direction. SHOCK STRUCTURE and PV CHANGES {#sec:shock} ============================== We will use one run in this section to facilitate discussion. This run has $\mu=10^{-4}$, $c_s = 0.05$, $r_H = 0.0322$, and a $n_r\times n_\phi = 800\times 3200$ resolution. Figure \[fig:den\_strm\] shows the surface density (multiplied by $r^{1.5}$ so that the background surface density is unity initially) in the {$r,\phi$} plane at $t=100$ orbits. It shows the characteristic density enhancement at the planet and at the two spiral shocks. Two density “depressions” at $|\Delta r| \sim 3-4 r_H$ are developed as angular momentum is gradually deposited there. Two density bumps (enhancements) occur broadly over $|\Delta r| \sim 5-8 r_H$. Here, $\Delta r = r - r_p$. Some streamlines (in the co-rotating frame) are plotted as well. PV Change by Shocks ------------------- Figure \[fig:pv\_r\] shows the azimuthally averaged $\langle\zeta\rangle$ profile from $t= 40, 100, 160, 220,$ and $300$ orbits. (The initial value $\approx 0.5$ is subtracted.) Several features can be seen. First, there are very small changes in $\langle\zeta\rangle$ for $|\Delta r| \leq 2 r_H$. Second, at larger distances from the planet, $\langle\zeta\rangle$ profile changes progressively with time and shows both increase and decrease from its initial value. An interesting feature is that even though both “peaks” and “valleys” are growing with time, two locations around $\Delta r = \pm 4.5 r_H$ have no change in $\langle\zeta\rangle$. A natural question is what causes these features. The flow around the planet can be separated into several sub-regions depending on their streamline behavior (e.g., see Masset 2002), including the horseshoe (or librating) region with $|\Delta r| \leq r_H$, the separatrix region $r_H < |\Delta r| < \sqrt{12} r_H$, and the streaming region $|\Delta r| > \sqrt{12} r_H$. Typically, spiral shocks do not occur until the relative velocity between the disk flow and the planet becomes larger than the sound speed. This roughly translates to $|\Delta r| \approx H$. So, a critical parameter is $H/r_H \approx c_s/r_H$, which gives an indication whether the spiral shock is starting from within the planet’s Roche lobe or not (see also Korycanski & Papaloizou 1996). In the limit of small $c_s/r_H$, flow around the planet becomes quite complicated (e.g., see Tanigawa & Watanabe 2002). Another added yet potentially very important complication is that any numerical error coming out nearing the planet might be strongly amplified by these shocks, giving numerical artifacts which can be physically unreal. So, we have mostly used a relatively higher sound speed of $c_s=0.05~ (> r_H)$ in this study. Higher sound speeds mean weaker shocks, which typically also mean slower changes in the disk properties. As discussed in Koller et al. (2003), spiral shocks are capable of destroying the conservation of PV. For the parameters used in this run, we do not expect shocks within $\sim \pm 2 r_H$, which is confirmed by the fact that $\langle\zeta\rangle$ stays roughly at the initial value. 
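For reference, the diagnostic shown in Fig. \[fig:pv\_r\] can be computed from a snapshot along the lines below: $\zeta=(\nabla\times{\bf v})_z/\Sigma$ on the polar grid, followed by an azimuthal average. This is a post-processing sketch with simple centered differences, not the simulation code, and no special care is taken near the planet.

```python
import numpy as np

def potential_vorticity(r, phi, v_r, v_phi, sigma):
    """zeta = (curl v)_z / Sigma on an (n_r, n_phi) polar grid and its
    azimuthal average <zeta>(r)."""
    d_rvphi_dr = np.gradient(r[:, None] * v_phi, r, axis=0)   # (1/r) d(r v_phi)/dr * r
    d_vr_dphi = np.gradient(v_r, phi, axis=1)
    curl_z = (d_rvphi_dr - d_vr_dphi) / r[:, None]
    zeta = curl_z / sigma
    return zeta, zeta.mean(axis=1)
```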
Once in the shocked region, we can use a very useful formula for calculating the PV jump across a (curved) shock, which is given in Kevlahan (1997), after some manipulation, $$\label{eq:dzeta} \Delta \zeta = \frac{(M_\perp^2-1)^2}{M_\perp^2} \frac{\partial M_\perp}{\partial \tau}\frac{c_s}{\Sigma_2}~,$$ where $M_\perp$ is the perpendicular Mach number ($u_\perp/c_s$) which is equal to $(\Sigma_2/\Sigma_1)^{1/2}$ for isothermal shocks. $\Sigma_1$ and $\Sigma_2$ are the pre- and post-shock surface densities. The direction $\tau$ is tangential to the shock front and is defined to be pointing away from the planet. According to Eq. (\[eq:dzeta\]), the sign of $\Delta \zeta$ is determined only by the gradient of $M_\perp$ along the shock front. Figure \[fig:machperp\] shows $M_\perp$ as a function of radial distance to the planet (only one side is shown). This is obtained by first identifying the high compression region through calculating the velocity compression. After finding the flow impact angle to that surface, one can then calculate $M_\perp$ at the shock. Differentiating along that surface gives $\partial M_\perp/\partial \tau$. This figure shows the general behavior of what one might expect of a spiral shock produced by the planet. As one moves radially away from the planet, the flow goes from a non-shocked state to having shocks, which means that $M_\perp$ gradually increases from being $< 1$ to $1$ when the shock actually starts. Very far away from the planet, the pitch angle between the spiral wave and the background flow becomes small enough that $M_\perp$ is expected to become less than one again. So, as a function of radial distance from the planet, $M_\perp$ is expected to increase first, reaching its peak, then starting to decrease, it eventually drops to be smaller than one at some large distance. Consequently, using eq. (\[eq:dzeta\]), we can expect that $\Delta \zeta$ should be positive first, becomes zero when $M_\perp$ reaching its peak, then is negative when $M_\perp$ is decreasing. This matches quite well with the behavior of $\langle\zeta\rangle$ given in Fig. \[fig:pv\_r\]. Note that over the whole evolution, shocks are quite steady with very slow evolution, indicated by the small “outward” movement of the inflexion points (where $\Delta \zeta \approx 0$). All these features are quite generic and we have confirmed this by using several different planet masses. From Fig. \[fig:machperp\], we can see that $M_\perp$ peaks at $\Delta r \approx -3.8 r_H$, but the point where $\Delta \zeta = 0$ is at $-4.5 r_H$ based on Fig. \[fig:pv\_r\]. To quantitatively match the two profiles in terms of radial distances to the planet, one has to take into the account the fact that the radial location where shock occurs (and the flow’s PV is changed) is [*different*]{} from where these PV changes are eventually deposited. This is illustrated in Fig. \[fig:den\_strm\]. Correcting for this systematic shift in radial locations, the agreement between the azimuthally averaged $\langle\zeta\rangle$ profile and the $M_\perp$ profile is quite good. To summarize, the spiral shocks emanating close to the planet cause systematic changes to the PV distribution of the disk flow. As the disk rotates, the disk flow repeatedly passes through these shocks so that changes in PV then accumulate. The sign of PV change is determined by the shock structure, and the amplitude of change is determined both by the shock structure \[eq. 
(\[eq:dzeta\])\] and how many times the flow passes through the shocks in a given time \[$\sim (r^{-3/2} -1) \Delta t/2\pi$\]. We have identified the physical cause for having both positive and negative changes in PV, in addition to the existence of an inflexion point. We defer the quantitative comparison of the amplitude of PV changes between simulations and eq. (\[eq:dzeta\]) to a future study. SECONDARY INSTABILITY {#sec:s_instab} ===================== Instability and Vortices ------------------------ Having established the profile of $\langle\zeta(r,t)\rangle$, we now turn to the issue of secondary instability. This type of instability is usually associated with the existence of inflexion points (extrema) in the PV profile (see Drazin & Reid 1981) and often requires satisfying a threshold condition. As shown in Fig. \[fig:pv\_r\], the PV profile has extrema points, and the magnitude of the slope between the peak at $\Delta r \approx 3.5 r_H$ and the valley at $\Delta r \approx 4.6 r_H$ is increasing with time (similarly on the other side of the planet). It is then expected that these changes will become large enough that a threshold is reached and secondary instabilities will be excited. To demonstrate this, we have made runs out to several hundred orbits ($400-800$ orbits depending on resolution). Figure \[fig:pv\_2d\] shows the evolution of PV in the {$r,\phi$} plane for a run with $\mu = 10^{-4}$, $c_s = 0.05$, and $n_r\times n_\phi = 400\times 1600$. Indeed, a secondary instability is observed to occur in the outer “valley” of the PV profile at $t \approx 180$ orbits. (The other valley in the $\sim -5 r_H$ region also becomes unstable at a later time.) The negative PV regions correspond to the two density bumps, which are roughly axisymmetric early on. When the instability is excited, density bumps break up into $\sim 5$ anticyclonic vortices (even higher density blobs). Over the next tens of orbits, as the instability evolves and saturates, these density blobs merge, forming density enhancements that are non-axisymmetric on large scales. This is illustrated in Fig. \[fig:den\] at $t=500$ orbits. Note that density depression regions also become non-axisymmetric. Torque Evolution ---------------- As discussed in Koller et al. (2003), one primary goal of studying secondary instability is to understand its impact on the torque evolution of disk gas on the planet. Figure \[fig:torque\] shows the torque history on the planet for a run with the same parameters as in Figure \[fig:pv\_2d\]. The rapid oscillations in the torque history starting at $t \approx 180$ indicates the initiation of the instability (consistent with Fig. \[fig:pv\_2d\]). As the higher density blobs pass by the planet, they exert stronger torques. Furthermore, Figure \[fig:torque\] shows that the oscillation amplitude grows with time, which is caused by the gradual density increase in the PV “valleys” over long timescales. To better understand the origin of fast oscillations, we have calculated the torques separately from three different parts of the disk, which are inner: $\Delta r < 3 r_H$, middle: $|\Delta r| \leq 3 r_H$, and outer: $\Delta r > 3 r_H$, respectively. Figure \[fig:torq\_det\] shows their individual contributions over a time interval from $t=490 - 512$. We can see that both inner and outer regions have a periodic large amplitude variation. These are caused by the orbital motion of the non-axisymmetric disk flow at $\Delta r \sim -5 r_H$ and $\Delta r \sim 7.5 r_H$, respectively (see Fig. \[fig:den\]). 
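The region-by-region torque tally just described can be reproduced from a snapshot roughly as follows. This is a diagnostic sketch only: the softening length `eps` stands in for the pseudo-3D force treatment, `cell_area` is the per-cell area $r\,\Delta r\,\Delta\phi$ (scalar or array), and the inner region is taken as $\Delta r < -3 r_H$, which is the sign the text's split implies.

```python
import numpy as np

def torque_by_region(r, phi, sigma, r_p, phi_p, GMp, r_H, eps, cell_area):
    """z-torque on the planet from the inner, co-orbital and outer disk."""
    R, PHI = np.meshgrid(r, phi, indexing="ij")
    x, y = R * np.cos(PHI), R * np.sin(PHI)
    xp, yp = r_p * np.cos(phi_p), r_p * np.sin(phi_p)
    dx, dy = x - xp, y - yp
    d3 = (dx**2 + dy**2 + eps**2) ** 1.5
    fx = GMp * sigma * cell_area * dx / d3        # force of each cell on the planet
    fy = GMp * sigma * cell_area * dy / d3
    tz = xp * fy - yp * fx                        # torque about the star
    dr = R - r_p
    inner = tz[dr < -3 * r_H].sum()
    middle = tz[np.abs(dr) <= 3 * r_H].sum()
    outer = tz[dr > 3 * r_H].sum()
    return inner, middle, outer
```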
In fact, the torque oscillation periods from these two regions are precisely the expected orbital periods at those radii. Since these two periods are different, their sum gives an erratic appearance (Fig. \[fig:torque\]). One might speculate that even with these large amplitudes and rapid oscillations, there seems to be a mean value that is negative and has a small amplitude, consistent with the type I migration expectations. But since our simulations have the planet moving on a fixed circular orbit, it is premature to conclude that these oscillations will not change the type I migration picture. Studies allowing the planet to migrate under the influence of these oscillating torques is under way to address this important issue. Here, we have mostly concentrated on demonstrating the existence of such a secondary instability and its potential impact on the planet’s torque. We have not quantitatively analyzed what is the threshold condition in the PV profile that excites this instability. Such a threshold obviously depends on the planet mass, shock structure, sound speed of disk gas, and viscosity in the disk, etc. We have assumed the inviscid limit. Sufficient viscosity could potentially remove the buildup of the PV’s peaks and valleys made by shocks, hence never allowing the profile to reach the critical destabilization level. RESOLUTION STUDY {#sec:resol} ================ The results we presented here, especially Figs. \[fig:pv\_r\] and \[fig:pv\_2d\], are different from the findings in Koller et al. (2003, see their Fig. 3, also curve A in our Fig. \[fig:pv\_resolu\]). Results there showed additional “dips” interior (i.e., closer to the planet) to the main positive peaks in $\langle\zeta\rangle$. Furthermore, those “dips” were shown to deepen with time and eventually became unstable. Figure \[fig:pv\_resolu\] shows a comparison of runs with 4 different resolutions but the same set of physical parameters. Even though different curves of $\langle\zeta\rangle$ converge at large $\Delta r$ ($> 6 r_H$) because the shock is weak and the gradient is small there, the behavior at small $\Delta r$ (where the shock is strong) shows large differences. Comparing curves A and C, for example, we can see that when the resolution is low and shocks are not well resolved, a larger part of the disk was affected (see the region of $\Delta r \sim 0.5 - 2.5 r_H$ in curve A). We were able to reproduce the results of Koller et al. in our low-resolution runs but our high-resolution studies indicate that the “dips” observed in Koller et al. are getting narrower and shallower with higher resolutions. This means that the previous results showing the inner “dips” are numerical artifacts, not true physical effects. Note that the PV within the planet’s Roche lobe is not conserved due to both the switch-on of the planet’s mass (no matter how slowly) and the numerical error in implementing the planet’s potential. This is indicated by Fig. \[fig:pv\_two\], where PV in the {$r,\phi$} plane at $t=100$ is shown for two different resolutions. Small PV changes are visible coming out of the planet region. For the low resolution case, the PV change from the planet is “spread” over a wide region. It is then amplified by poorly resolved shocks. Such amplification eventually leads to an instability, unfortunately, in much the same spirit we have discussed here. For the high resolution run, the shock and the PV change from the planet is well separated. The erroneous amplification does not occur. 
It is seen that very high resolutions ($800\times 3200$ to $1200\times 4800$) are needed to obtain convergence in the $\langle\zeta\rangle$ profile (curves C and D). We emphasize that even though there are still some minor differences for the high resolution runs we presented here, the overall feature of having a positive peak and a negative valley is quite consistent among all resolutions. Furthermore, the instability now comes out of the valley region, which typically has a much better convergence than other parts of the $\zeta$ profile. In addition, we have also made a run using pure Lax-Wendroff scheme with $n_r \times n_\phi = 800\times 3200$ (curve E). Comparing with curve C, it gives very similar results. It also shows the instability at later times (not shown here), though the MHS scheme gives sharper and smoother shocks than hybrid schemes. CONCLUSIONS {#sec:diss} =========== We have carried out high-resolution two-dimensional hydrodynamic disk simulations with one embedded protoplanet. We find that the total torque on the planet, caused by tidal interactions between the disk and the planet, can be divided into two stages. First, it is negative, and slowly varying, consistent with the type I migration expectation. Second, it shows large amplitude and very fast oscillations, due to the excitation of an instability which first breaks up the axisymmetric density enhancement into higher density blobs, e.g., vortices. These vortices then merge, forming large-scale non-axisymmetric density enhancements. This non-axisymmetry causes the torque to continuously oscillate. In Koller et al. (2003), we were misled by a spurious feature in the PV profile, due to an inadequate numerical resolution, which eventually became unstable. Despite the inaccurate location where a secondary instability is initiated, the physical explanation proposed by Koller et al. for exciting such an instability is well founded. We now have a self-consistent picture from performing very high resolution simulations to understand the shock structure and the PV profile. Further studies allowing the planet to migrate under the influence of these torques will be necessary and very interesting. This research was performed under the auspices of the Department of Energy. It was supported by the Laboratory Directed Research and Development Program at Los Alamos and by LANL/IGPP. [14]{} , N. J. & [Korycansky]{}, D. G. 2001, , 326, 833 D’Angelo, G., Kley, W., & Henning, T. 2003, , 586, 540 , P. G. & [Reid]{}, W.H. 1981, [*Hydrodynamics Stability*]{}, Cambridge , P. & [Tremaine]{}, S. 1979, , 233, 857 — 1980, , 241, 425 Kevlahan, N. K.-R. 1997, J. Fluid Mech., 341, 371 , W. 1998, , 338, L37 , W. 1999, MNRAS, 303, 696 Koller, J., Li, H., & Lin, D.N.C. 2003, , 596, L91-94 , D. & [Papaloizou]{}, J.C.B. 1996, , 105, 181 , H., [Colgate]{}, S. A., [Wendroff]{}, B., & [Liska]{}, R. 2001, , 551, 874 Li, S., & Li, H. 2005, , to be submitted Lin, D. N. C. & Papaloizou, J. 1986a, , 307, 395 — 1986b, , 309, 846 Lubow, S.H., Seibert, M. Artymowicz, P. 1999, , 526, 1001 , F. S. 2000, , 141, 165 — 2002, , 387, 605 , R.P., [Papaloizou]{}, J.C.B., [Masset]{}, F., & [Kley]{}, W. 2000, , 318, 18 Tanigawa, T., & Watanabe, S. 2002, , 580, 506 Toro, E.F. 1999, Riemann Solvers and Numerical Methods for Fluids Dynamics, Springer, Berlin, Heidelberg, Second Edition , W. R. 1997, Icarus, 126, 261
IFT-UAM/CSIC-03-43\ October, $2003$ [^1] [**Adil Belhaj**]{}[^2] 0.4truecm 0.2cm [*Instituto de Física Teórica, C-XVI, Universidad Autónoma de Madrid\ E-28049-Madrid, Spain*]{} 0.2cm [**Abstract**]{} > In this talk, we discuss four-dimensional $N=1$ affine $ADE$ quiver gauge models using the geometric engineering method in M-theory on $G_2$ manifolds with K3 fibrations. ł $${\bigl[} \def$$[\]]{} ${\bigl(} \def$[)]{} ø Introduction ============ A well-known way to get supersymmetric quantum field theory (QFT) from superstrings, M- or F-theory, is to consider compactifications on a singular manifold $X$ with K3 fibration over a base space $B$. This method is called geometric engineering [@KV; @KLMVW; @KaV; @KKV; @BJPSV; @KMV; @BFS; @ABS]. In this way, the gauge group $G$ and matter content of the QFT are defined by the singularities of the fiber and the non-trivial geometry of the base space, respectively. The gauge coupling $g$ is proportional to the inverse of the square root of the volume of the base: $V(B)=g^{-2}$. The complete set of physical parameters of the QFT is related to the geometric moduli space of the internal manifold. For instance, several exact results for the Coulomb branch of $N=2$ four-dimensional QFT embedded in type IIA superstring theory are obtained naturally by the toric geometry realization and local mirror symmetry of Calabi-Yau threefolds. The latter are realized as a K3 fibration with $ADE$ singularity over a projective $\bf P^1$ complex curve or a collection of intersecting $\bf P^1$ curves. The corresponding $N=2$ QFT$_4$ are represented by quiver diagrams similar to Dynkin diagrams. One should distinguish three cases: ordinary, affine and indefinite Lie algebras [@KMV; @BFS; @ABS]. Quite recently, four-dimensional gauge theories preserving only four supercharges have attracted a lot of attention. Work has been done using either intersecting D-brane physics [@CSU; @HI; @FHHI; @FH; @U; @CIM], geometric transition in type II superstrings on the conifold [@V; @AMV; @CIV; @CKV; @CFIKV; @AGMV], or M-theory on $G_2$ manifolds [@H; @Be; @BDR]. The aim of this talk is to discuss $ N=1$ $ADE$ quiver gauge models from M-theory compactification on a seven-dimensional manifold with $G_2$ holonomy group. The manifold is realized explicitly as K3 fibrations over a three-dimensional base space with Dynkin geometries. First we review the geometric engineering of $N=2$ QFT$_4$ in type IIA superstring theory on Calabai-Yau threefolds. We then extend this to geometric engineering of $N=1$ QFT$_4$ in M-theory on $G_2$ manifolds. These manifolds are given by K3 fibration over three-dimensional spaces with affine $ADE$ Dynkin geometries. Relating our construction to the related D6-brane scenario described using the method of $(p,q)$ webs used in type II superstrings on Calabi-Yau threefolds, we discuss the gauge group and matter content of our $N=1$ QFT$_4$. Generalities on K3 surfaces with $ADE$ singularities ===================================================== Since the geometric engineering method is based on the compactification on manifolds with K3 fibration, we first briefly review some basic facts about $ADE$ geometries of this fibration useful for studying supersymmetric gauge theories embedded in superstrings and M-theory compactifications on $G_2$ manifolds. Roughly speaking, $ADE$ geometries are singular four-dimensional manifolds asymptotic to $ R^4/ \Gamma$, where $\Gamma $ is a discrete subgroup of $ SU(2)$. 
The resolution of the singularities of these surfaces is nicely described using Lie algebra structures. The well-known singularities of these spaces are as follows: 1. Ordinary singularities classified by the finite $ADE$ Lie algebras. 2. Affine singularities classified by the affine $ADE$ Kac-Moody algebras. More general singularities classified by indefinite Lie algebras have been studied in [@ABS]. The latter are at the basis of the derivation of new four-dimensional $N=2$ superconformal field theories. For simplicity, let us focus our attention on briefly reviewing the main lines of the finite $ADE$ geometries. Extended singularities can be found in [@KMV; @BFS; @ABS].\ In $C^3$, parametrised by the local coordinates $x,y,z$, ordinary $ADE$ geometries are described by the following complex surfaces $$\begin{array}{lcr} f(A_n): \quad xy+z^n=0\\ f(D_n): \quad x^2+y^2z+z^{n-1}=0\\ f(E_6):\quad x^2+y^3+z^4=0\\ f(E_7):\quad x^2+y^3+yz^3=0\\ f(E_8): \quad x^2+y^3+z^5=0.\\ \end{array}$$ The $C^3$ origin, $x=y=z=0$, is a singularity since $f=df=0$. These geometries can be “desingularized” by deforming the complex structure of the surface or by varying its Kähler structure. The two deformations are equivalent due to the self-mirror property of the $ADE$ geometries. This is why we shall restrict ourselves hereafter to giving only the Kähler deformation. The latter consists in blowing up the singularity by a collection of intersecting complex curves. This means that we replace the singular point $(x,y,z)=(0,0,0)$ by a set of intersecting complex curves ${\bf P^1}$ (two-cycles). The nature of the set of intersecting ${\bf P^1}$ curves depends on the type of singular surface one is considering. The smoothed $ADE$ surfaces share several features with the $ADE$ Dynkin diagrams. In particular, the intersection matrix of the complex curves used in the resolution of the $ADE$ singularities is, up to some details, minus the $ADE$ Cartan matrix $K_{ij}$. This link leads to a nice correspondence between the $ADE$ roots $\alpha_i$ and the two-cycles involved in the deformation of the $ADE$ singularities. More specifically, to each simple root $\alpha_i$, we associate a single $({\bf P^1})_i$. This nice connection between the geometry of $ADE$ surfaces and Lie algebras turns out to be at the basis of important developments in string theory. In particular, the above-mentioned link has been successfully used in:\ (i) The geometric engineering of $N=2$ supersymmetric four-dimensional quantum field theories embedded in type II superstrings on Calabi-Yau manifolds in the presence of D-branes [@KMV; @BFS; @ABS].\ (ii) Geometric engineering of $N=1$ quiver theories using conifold geometric transitions [@V; @AMV; @CIV; @CKV; @CFIKV; @AGMV]. Geometric engineering of $N=2$ QFT in four dimensions ======================================================== In this section we review the geometric engineering method for $N=2$ models embedded in type IIA superstring theory in four dimensions. We then extend this method to the engineering of $N=1$ models in the context of M-theory on $G_2$ manifolds. $SU(2)$ Yang-Mills in six dimensions ------------------------------------- The main steps in getting $N=2$ four-dimensional QFT from type IIA superstrings on Calabi-Yau threefolds are to first consider the propagation of type IIA superstrings on K3 surfaces in the presence of D2-branes wrapping two-cycles. We then compactify the resulting model down to four dimensions. To illustrate the method, suppose that K3 has an $su(2)$ singularity.
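The statement that the $C^3$ origin is singular for each of the surfaces listed above ($f=df=0$ there) can be verified with a few lines of symbolic algebra; the sketch below, using sympy, is purely illustrative and the choice $n=4$ is arbitrary.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
n = 4  # any n >= 2 for A_n and n >= 4 for D_n; chosen only for the check
surfaces = {
    "A_n": x*y + z**n,
    "D_n": x**2 + y**2*z + z**(n - 1),
    "E_6": x**2 + y**3 + z**4,
    "E_7": x**2 + y**3 + y*z**3,
    "E_8": x**2 + y**3 + z**5,
}
origin = {x: 0, y: 0, z: 0}
for name, f in surfaces.items():
    grad = [sp.diff(f, v) for v in (x, y, z)]
    # singular point: f and all first derivatives vanish simultaneously
    singular = (f.subs(origin) == 0) and all(g.subs(origin) == 0 for g in grad)
    print(name, "singular at origin:", singular)
```

The same kind of local check applies to the deformed $su(2)$ example discussed next.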
In the vicinity of the $su(2)$ singularity, the fiber K3 may be described by the following equation $$xy=z^2,$$ where $x, y$ and $z$ are complex variables. The deformation of this singularity consists in replacing the singular point $x=y=z=0$ by a $\bf P^1$ curve parameterized by a new variable $x'$ defined as $x'={x\over z}$. In the new local coordinates $(x',y,z)$, the equation of the $A_1$ singularity may be written as $$x'y=z,$$ which is not singular. The next step is to consider the propagation of type IIA superstrings in this background in the presence of a D2-brane wrapping around the blown-up $\bf P^1$ curve (real two-sphere) parametrized by $x'$. This gives two $W_\mu^{\pm}$ vector particles, one for each of the two possible orientations of the wrapping. These particles have mass proportional to the volume of the blown-up real two-sphere. $W_\mu^{\pm}$ are charged under the $U(1)$ field $Z_0^\mu$ obtained by decomposing the type IIA superstring three-form in terms of the harmonic form on the two-sphere. In the limit where the blown-up two-sphere $x'$ shrinks, we get three massless vector particles $W_\mu^{\pm}$ and $Z_0^\mu$, which form an $SU(2)$ adjoint representation. We thus obtain an $N=2$ $SU(2)$ gauge symmetry in six dimensions. $N=2$ models in four dimensions -------------------------------- A further compactification on a base $B_2$, that is on a real two-sphere, gives $N=2$ pure $SU(2)$ Yang-Mills in four dimensions. This geometric $SU(2)$ gauge theory analysis can be easily extended to all simply-laced $ADE$ gauge groups. To incorporate matter, one should consider a non-trivial geometry on the base $B_2$ of the Calabi-Yau threefolds. For example, if we have a two-dimensional locus with $SU(n)$ singularity and another locus with $SU(m)$ singularity and they meet at a point, the mixed wrapped two-cycles will now lead to $(n,m)$ $N=2$ bi-fundamental matter of the $SU(n)\times SU(m)$ gauge symmetry in four dimensions. Geometrically, this means that the base geometry of the Calabi-Yau threefold is given by two intersecting $\bf P^1$ curves, according to the finite $A_2$ Dynkin diagram, whose volumes $V_1$ and $V_2$ define the gauge coupling constants $g_1$ and $g_2$ of the $SU(n)$ and $SU(m)$ gauge symmetries, respectively. Fundamental matter is obtained by taking the limit of $V_2$ to infinity, or equivalently $g_2=0$, so that the $SU(m)$ group becomes a flavor symmetry. Geometric engineering of the $N=2$ four-dimensional QFT moreover shows that the analysis we have been describing naturally recovers some remarkable features which follow from the connection between the resolution of singularities and Lie algebras. For instance, taking $m=n$ and identifying the $SU(m)$ gauge symmetry with $SU(n)$ by equating the volumes $V_1$ and $V_2$, which in turn implies that $g_1=g_2$, the bi-fundamental matter then becomes adjoint matter. This property is more transparent in the language of representation theory. The adjoint of $SU(n+m)$ splits into $SU(n)\times SU(m)$ representations as $$(n+m)\cdot\overline{(n+m)} = n\cdot{\bar n}+m\cdot{\bar m} +{\bar n}\cdot m+n\cdot{\bar m},$$ where $n\cdot{\bar n}+m\cdot{\bar m}$ gives the gauge fields and ${\bar n}\cdot m+n\cdot{\bar m}$ defines the bi-fundamental matter. We note that $N=2$ four-dimensional QFT’s are represented by quiver diagrams in which a node is associated with each $SU$ gauge group factor and, for each pair of groups with bi-fundamental matter, the two corresponding nodes are connected by a line. These diagrams are similar to the $ADE$ Dynkin graphs.
$N=1$ $ADE$ quiver models from M-theory compactification ======================================================== In this talk we discuss a straightforward way of elevating the geometric engineering of $N=2$ gauge models in type IIA superstring theory to a similar construction of $N=1$ $ADE$ quiver gauge models in M-theory [@BDR][^3]. The method we will be using here is quite similar to the geometric engineering of $N=2$ QFT$_4$. Indeed, we start with a local description of M-theory compactification on a $G_2$ manifold with K3 fibration over a real three-dimensional base space $B_3$. The scenario of type IIA superstring theory in six dimensions appears in seven dimensions in M-theory. In this way, the type IIA D2-branes are replaced by M2-branes, and we end up with a seven-dimensional pure gauge model. To get models with only four supercharges in four dimensions, we need to compactify the seven-dimensional model on a three-dimensional base preserving $1/4$ of the remaining 16 supercharges. The internal space must have vanishing first Betti number, $b_1=0$, to meet the requirement of $G_2$ holonomy. An example of such a geometry has the three-dimensional sphere $\bf S^3$ as base. However, in this case we obtain only [*pure*]{} $N=1$ Yang-Mills theory. The incorporation of matter may be achieved by introducing a non-trivial geometry in the base of the K3 fibration. This leads us to consider a three-dimensional intersecting geometry to describe a product gauge group with bi-fundamental matter in four dimensions. Our method here is quite simple and motivated by the work on Lagrangian sub-manifolds in Calabi-Yau manifolds [@GVW]. To do so, we consider the three-dimensional base space as a two-dimensional fibration over a one-dimensional base, where the fiber and the base each preserve half of the seven-dimensional M-theory supercharges. The entire base space $B_3$ could be embedded in a complex Calabi-Yau threefold. The latter is realized explicitly as a family of (deformed) $ADE$-singular K3 surfaces over the complex plane. In this way our base geometry can be identified with Lagrangian sub-manifolds in such Calabi-Yau manifolds. It is easy to see that non-trivial three-cycles, satisfying the constraints of the $G_2$ base geometry, consist of $ADE$ intersecting two-cycles of K3 surfaces fibered over a line segment in the complex plane. For simplicity, let us consider the case of the $A_1$ singularity. This is subsequently extended to more general $ADE$ geometries. The deformed $A_1$ geometry is given by $$z_1^2+z_2^2+z_3^2=\mu,$$ where $\mu$ is a complex parameter. The $A_1$ threefold may be obtained by varying the parameter $\mu$ over the complex plane parametrized by $w$, $$z_1^2+z_2^2+z_3^2=\mu(w).$$ The base space $B_3$ can then be viewed as a finite line segment with an $\bf S^2$ fibration, where the radius $r$ of $\bf S^2$ vanishes at the two interval end points, and at the end points only [@HI]. The latter requirement ensures that no unwanted singularities are introduced. One way to realize this geometry is to let $r$ depend on a real variable $x$ parameterizing the interval $[0,\pi]$ in the $w$-plane, such that $r$ vanishes only at $x=0$ and $x=\pi$ (for instance $r\sim\sin x$). This construction may be extended to more complicated geometries where we have intersecting spheres according to affine $ADE$ Dynkin diagrams. This extension has a nice toric geometry realization, in which the ${\bf S^2}$ can be viewed as a segment $[v_1,v_2]$ with a circle on top, shrinking at the end points $v_1$ and $v_2$ [@LV].
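A short symbolic check (again only a sketch) makes explicit why the deformation removes the singularity: the gradient of $f=z_1^2+z_2^2+z_3^2-\mu$ vanishes only at the origin, which lies on the deformed surface only when $\mu=0$.

```python
import sympy as sp

z1, z2, z3, mu = sp.symbols("z1 z2 z3 mu")
f = z1**2 + z2**2 + z3**2 - mu

# a singular point would need f = 0 together with all first derivatives vanishing
grad = [sp.diff(f, v) for v in (z1, z2, z3)]
crit = sp.solve(grad, (z1, z2, z3), dict=True)
print(crit)              # [{z1: 0, z2: 0, z3: 0}]
print(f.subs(crit[0]))   # -mu  -> nonzero, hence smooth, whenever mu != 0
```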
On the other hand, $\bf S^3$ can be viewed as an interval with two circles on top, where one $\bf S^1$ shrinks at the first end point and the other $\bf S^1$ shrinks at the second. In the resolved elliptic singularity, $\bf S^1 \times S^1$ can be identified with a collection of two-cycles $B_2$ according to affine $ADE$ Dynkin diagrams. In this way, the intersection matrix of this geometry is given by the Cartan matrix of the affine $ADE$ Lie algebra. Having determined the base geometry of the $G_2$ manifold with K3 fibration, we now discuss the corresponding gauge theory of the compactified M-theory using a type IIA superstring dual description. Indeed, M-theory on $G_2$ manifolds can be related to type IIA superstrings on Calabi-Yau threefolds in the presence of D6-branes wrapping Lagrangian sub-manifolds and filling the four-dimensional Minkowski space [@CSU]. For instance, a local description of M-theory near the $A_{n-1}$ singularity of K3 surfaces is equivalent to $n$ units of D6-branes [@LV]. Indeed, on the seven-dimensional world-volume of each D6-brane we have a $U(1)$ gauge symmetry. When the $n$ D6-branes approach each other, the gauge symmetry is enhanced from $U(1)^{\otimes n}$ to $U(n)$. An extra compactification of M-theory down to four-dimensional space-time is equivalent to wrapping the D6-branes on the same geometry. In this way, the D6-brane physics can be described by the method of $(p,q)$ webs used in type II superstrings on toric Calabi-Yau manifolds [@HI; @FHHI; @FH; @U; @CIM]. Using the results of this method, we expect that the gauge model in four-dimensional Minkowski space has gauge group $$G = \bigotimes_i U(n_i).$$ The integers $n_i$ are specified by the anomaly cancellation condition. This means that they should form a null vector of the intersection matrix $I_{ij}$ of the three-cycles. In the infra-red limit the $U(1)$ factors decouple and one is left with the gauge group $G = \bigotimes_{i=1}^m SU(n_i)$. The gauge group and matter content depend on the intersecting geometry in the three-dimensional base of the $G_2$ manifold. Identifying the base with a collection of three-cycles that are $\bf S^2$ fibrations over a line segment, the intersection matrix of the three-cycles can be identified with the Cartan matrix, $K$, of the associated $ADE$ Lie algebra: $I_{ij}=-K_{ij}$. The anomaly cancellation condition is now translated into a condition on the affine Lie algebra, so the gauge group becomes $$G = \bigotimes_{i} SU(s_i n),$$ where the $s_i$ are the corresponding Dynkin numbers. The resulting models are $N=1$ four-dimensional quiver models with bi-fundamental matter. They are represented by affine $ADE$ Dynkin diagrams.\ In this talk we have discussed the geometric engineering of $N=1$ four-dimensional quiver models. In particular, we have considered models embedded in M-theory on a $G_2$ manifold with K3 fibration over a three-dimensional base space with $ADE$ geometry. This base geometry is identified with $ADE$ intersecting two-cycles over a line segment. Using the connection between M-theory and the D6-brane physics of type IIA superstrings, we have given the physics content of M-theory compactified on such a $G_2$ manifold. The corresponding gauge model has been discussed in terms of $(p,q)$ brane webs.\ [**Acknowledgments.**]{} The author thanks L.B. Drissi, J. Rasmussen and E.H. Saidi for collaborations related to this work. He is supported by Ministerio de Educación Cultura y Deportes (Spain) grant SB 2002-0036. [99]{} S. Kachru, C.
Vafa, [*Exact results for $N=2$ compactifications of heterotic strings*]{}, Nucl. Phys. [**B450**]{} (1995) 69, hep-th/9505105. A. Klemm, W. Lerche, P. Mayr, C. Vafa, N. Warner, [*Self-dual strings and $N=2$ supersymmetric field theory*]{}, Nucl. Phys. [**B477**]{} (1996) 746, hep-th/9604034. S. Katz, C. Vafa, [*Matter from geometry*]{}, Nucl. Phys. [**B497**]{} (1997) 146, hep-th/9606086. S. Katz, A. Klemm, C. Vafa, [*Geometric engineering of quantum field theories*]{}, Nucl. Phys. [**B497**]{} (1997) 173, hep-th/9609239. M. Bershadsky, A. Johansen, T. Pantev, V. Sadov, C. Vafa, [*F-theory, geometric engineering and $N=1$ dualities*]{}, Nucl. Phys. [**B505**]{} (1997) 153, hep-th/9612052. S. Katz, P. Mayr, C. Vafa, [*Mirror symmetry and exact solution of $4d$ $N=2$ gauge theories I*]{}, Adv. Theor. Math. Phys. [**1**]{} (1998) 53, hep-th/9706110. A. Belhaj, A.E. Fallah, E.H. Saidi, [*On the non-simply mirror geometries in type II strings*]{}, Class. Quant. Grav. [**17**]{} (2000) 515. M. AitBenhaddou, A. Belhaj, E.H. Saidi, [*Geometric Engineering of N=2 CFT$_{4}$s based on Indefinite Singularities: Hyperbolic Case*]{}, hep-th/0307244. [*Classification of N=2 supersymmetric CFT$_{4}$s: Indefinite Series*]{}, hep-th/0308005. M. Cvetic, G. Shiu, A.M. Uranga, [*Three-family supersymmetric standard-like models from intersecting brane worlds*]{}, Phys. Rev. Lett. [**87**]{} (2001) 201801, hep-th/0107143; [*Chiral four-dimensional $N=1$ supersymmetric type IIA orientifolds from intersecting D6-branes*]{}, Nucl. Phys. [**B615**]{} (2001) 3, hep-th/0107166; [*Chiral type II orientifold constructions as M theory on $G_2$ holonomy spaces*]{}, hep-th/0111179. A. Hanany, A. Iqbal, [*Quiver theories from D6-branes via mirror symmetry*]{}, JHEP [**0204**]{} (2002) 009, hep-th/010813. B. Feng, A. Hanany, Y.-H. He, A. Iqbal, [*Quiver theories, soliton spectra and Picard-Lefschetz transformations*]{}, hep-th/0206152. S. Franco, A. Hanany, [*Geometric dualities in 4d field theories and their 5d interpretation*]{}, hep-th/0207006; [*Toric duality, Seiberg duality and Picard-Lefschetz transformations*]{}, hep-th/0212299. A.M. Uranga, [*Chiral four-dimensional string compactifications with intersecting D-branes*]{}, hep-th/0301032. D. Cremades, L.E. Ibanez, F. Marchesano, [*Yukawa couplings in intersecting D-brane models*]{}, hep-th/0302105. C. Vafa, [*Superstrings and Topological Strings at Large N*]{}, J.Math.Phys. [**42**]{}(2001) 2798, [ hep-th/0008142]{}. M. Atiyah, J. Maldacena, C. Vafa, [*An M-theory Flop as a Large N Duality*]{} J.Math.Phys. [**42**]{} (2001) 3209, [ hep-th/0011256 ]{}. S. Sinha, C. Vafa, [*SO and Sp Chern-Simons at Large N*]{}, [ hep-th/0012136]{}. F. Cachazo, K. Intriligator, C. Vafa, [*A Large N Duality via a Geometric Transition*]{}, Nucl.Phys. [**B603**]{} (2001) 3, [ hep-th/0103067]{}. F. Cachazo, S. Katz, C. Vafa, [*Geometric Transitions and N=1 Quiver Theories*]{}, [ hep-th/0108120 ]{}. F. Cachazo, B. Fiol, K. Intriligator, S. Katz, C. Vafa, [*A Geometric Unification of Dualities*]{}, Nucl.Phys. [ **B628**]{} (2002) 3, [ hep-th/0110028]{}. M. Aganagic, M. Marino, C. Vafa, [*All Loop Topological String Amplitudes From Chern-Simons Theory*]{}, [ hep-th/0206164 ]{}. Y.-H. He, [*$G_2$ quivers*]{}, JHEP [**0302**]{} (2003) 023, hep-th/0210127. A. Belhaj, [*Comments on M-theory on $ G_2$ manifolds and (p,q) webs*]{}, hep-th/0303198. A. Belhaj, L.B. Drissi, J. Rasmussen, [*On N=1 gauge models from geometric engineering in M-theory*]{}, Class. Quantum Grav. 
[**20**]{} (2003) 4973, hep-th/0304019. C. Vafa, [*On $N=1$ Yang-Mills in four dimensions*]{}, Adv. Theor. Math. Phys. [**2**]{} (1998) 497, hep-th/9801139. B.S. Acharya, [*M theory, Joyce orbifolds and super Yang-Mills*]{}, Adv. Theor. Math. Phys. [**3**]{} (1999) 227, hep-th/9812205. S. Gukov, C. Vafa, E. Witten, [*CFT’s from Calabi-Yau four-folds*]{}, Nucl. Phys. [**B584**]{} (2000) 69, Erratum-ibid. [**B608**]{} (2001) 477, hep-th/9906070. N.C. Leung, C. Vafa, [*Branes and toric geometry*]{}, Adv. Theor. Math. Phys. [**2**]{} (1998) 91, hep-th/9711013. [^1]: [ Talk presented at the Workshop on Quantum Field Theory, Geometry and Non Perturbative Physics, Rabat, July 28th-29th, 2003.]{} [^2]: [[email protected]]{} [^3]: Alternative studies of four-dimensional $N=1$ gauge models in the framework of F- or M-theory may be found in [@Va; @ACHA].
--- abstract: 'The widespread adoption of deep learning is often attributed to its automatic feature construction with minimal inductive bias. However, in many real-world tasks, the learned function is intended to satisfy domain-specific constraints. We focus on monotonicity constraints, which are common and require that the function’s output increases with increasing values of specific input features. We develop a counterexample-guided technique to provably enforce monotonicity constraints at prediction time. Additionally, we propose a technique to use monotonicity as an inductive bias for deep learning. It works by iteratively incorporating monotonicity counterexamples in the learning process. Contrary to prior work in monotonic learning, we target general ReLU neural networks and do not further restrict the hypothesis space. We have implemented these techniques in a tool called COMET.[^1] Experiments on real-world datasets demonstrate that our approach achieves state-of-the-art results compared to existing monotonic learners, and can improve the model quality compared to those that were trained without taking monotonicity constraints into account.' author: - | Aishwarya Sivaraman\ University of California, Los Angeles\ `[email protected]`\ Golnoosh Farnadi\ Mila/ Université de Montréal\ `[email protected]`\ Todd Millstein\ University of California, Los Angeles\ `[email protected]`\ Guy Van den Broeck\ University of California, Los Angeles\ `[email protected]`\ bibliography: - 'draft.bib' title: | Counterexample-Guided Learning of\ Monotonic Neural Networks --- Introduction {#comet:intro} ============ Preliminaries: Finding Monotonicity Counterexamples {#sec:background} =================================================== Counterexample-Guided Monotonic Prediction {#sec:envelope} ========================================== Counterexample-Guided Monotonicity Enforced Training {#sec:comet} ==================================================== Related Work {#sec:relatedwork} ============ Conclusion {#sec:conclusion} ========== #### Acknowledgments The authors would like to thank Murali Ramanujam, Sai Ganesh Nagarajan, Antonio Vergari, and Ashutosh Kumar for helpful feedback on this research. This work is supported in part by NSF grants CCF-1837129, IIS-1943641, IIS-1633857, DARPA XAI grant \#N66001-17-2-4032, Sloan and UCLA Samueli Fellowships, and gifts from Intel and Facebook Research. Golnoosh Farnadi is supported by a postdoctoral scholarship from IVADO through the Canada First Research Excellence Fund (CFREF) grant. #### Broader Impact {#sec:impact} Appendix {#sec:appendix} ======== [^1]: <https://github.com/AishwaryaSivaraman/COMET>
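As a rough, self-contained illustration of monotonicity enforcement at prediction time, the sketch below takes a grid-based upper envelope of an arbitrary predictor over one feature. The toy model `f`, the feature bounds and the grid size are assumptions made for the example; a finite grid only approximates the provable, counterexample-guided construction described in the abstract.

```python
import numpy as np

def monotone_upper_envelope(f, x, feature_idx, lo, n_grid=64):
    """Make the prediction non-decreasing in one feature by taking the max of f
    over a grid of feature values <= the query value (grid-based approximation)."""
    grid = np.linspace(lo, x[feature_idx], n_grid)
    probes = np.repeat(x[None, :], n_grid, axis=0)
    probes[:, feature_idx] = grid
    return float(np.max(f(probes)))

# toy, deliberately non-monotone model and a query point (illustrative only)
f = lambda X: X[:, 0] - 0.5 * np.sin(3 * X[:, 0]) + X[:, 1]
x = np.array([1.0, 0.2])
print(monotone_upper_envelope(f, x, feature_idx=0, lo=0.0))
```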
--- author: - 'T.J.L. de Boer [^1]' - 'E. Tolstoy' - 'V. Hill' - 'A. Saha' - 'K. Olsen' - 'E. Starkenburg' - 'B. Lemasle' - 'M.J. Irwin' - 'G. Battaglia' bibliography: - 'references.bib' date: 'Received ...; accepted ...' title: 'The Star Formation & Chemical Evolution History of the Sculptor Dwarf Spheroidal Galaxy' --- Introduction ============ The Sculptor dwarf spheroidal galaxy (dSph) is a relatively faint (M$_{V}$$\approx$$-$11.2), well studied system in the Local Group. A distance of 86$\pm$5 kpc has been determined from RR Lyrae star measurements, in good agreement with other distance determinations, such as the tip of the RGB and horizontal branch level [@Pietrzynski08 and references therein]. Sculptor is located at high galactic latitude (b=$-$83$^{\circ}$) with relatively low reddening, E(B$-$V)=0.018 [@Schlegel98].\ Past studies of the Star Formation History (SFH) of Sculptor have been made using colour-magnitude diagrams (CMDs) of varying depth, and spatial coverage. Using CMDs down to the Main Sequence Turn-Off (MSTO) in a field just outside the core radius @DaCosta84 determined an age range for Sculptor of 13$\pm$2 Gyr. A small field of view ($\sim$2$^{\prime}$), deep HST image of a field well outside the centre of Sculptor confirmed the ancient age (15$\pm$2 Gyr) of the bulk of the stars in Sculptor using synthesis CMD analysis, but also showed a small tail of star formation reaching down to more recent times [@Monkiewicz99; @Dolphin02]. An effort was made to combine detailed spectroscopic abundances of 5 stars with photometric ages, which also suggested that Sculptor is predominantly old, with a small tail of intermediate age stars [@Tolstoy03].\ In a previous qualitative study of wide-field CMDs, the Horizontal Branch (HB) morphology was found to change significantly with radius [@Majewski99; @HurleyKeller99], which was later linked to a metallicity gradient [@Tolstoy04]. From deep wide-field CMDs reaching the oldest MSTOs, covering a large fraction of Sculptor, this metallicity gradient was also found to be linked to an age gradient [@deBoer2011A].\ In Sculptor, wide-field medium resolution triplet spectroscopy of a large number of RGB stars is available [@Battaglia07; @Battaglia082; @Starkenburg10], giving a well defined spectroscopic Metallicity Distribution Function (MDF) for stars with ages $\ge$1.5 Gyr old. In the central 25$^{\prime}$ diameter region of Sculptor high resolution (HR) spectroscopy [Hill et al., in prep, see @Tolstoy09] of stars on the upper RGB provides detailed abundances of $\alpha$-elements (O, Mg, Ca, Si, Ti) as well as r- and s-process elements (Y, La, Ba, Eu, Nd). Furthermore, within the central 15$^{\prime}$ diameter region stars have been observed using medium resolution spectroscopy going down to fainter magnitudes [@Kirby09; @Kirby10], giving \[Fe/H\] as well as $\alpha$-element abundances. ![Coverage of the photometric and spectroscopic observations across the Sculptor dwarf spheroidal galaxy. The solid (black) square denotes the full coverage of the CTIO 4m/MOSAIC fields. The (blue) open circles show the stars observed in the VLT/FLAMES low resolution triplet survey [@Battaglia07; @Starkenburg10]. The (red) solid dots mark the RGB stars that also have high resolution abundance measurements [Hill et al., in prep, see @Tolstoy09] and the (green) dashed ellipse is the tidal radius of Sculptor, as given by @Irwin95. 
\[Sclcov\]](sclcovphotspecbig.eps){width="50.00000%"} \ The presence of these gradients suggests that Sculptor has experienced an initial, spatially extended episode of metal-poor star formation at all radii, and stars of higher metallicity were subsequently formed more towards the centre. This picture explains the properties of the CMD, such as the different spatial distributions traced by the HB morphology and Red Giant Branch (RGB) metallicity gradient.\ Sculptor is a galaxy that can be used as a benchmark for a simple galaxy with an extended episode of star formation in the early Universe. However, in order to quantify this picture and obtain accurate timescales for different stellar populations at different radii the detailed SFH needs to be determined over a large area of the galaxy.\ The Sculptor dSph has been modelled several times in simulations using different techniques [e.g., @Salvadori08; @Revaz09; @Lanfranchi10; @Kirby11; @Revaz11]. The cosmological semi-analitycal model of @Salvadori08 follows simultaneously the evolution of the Milky Way and its dwarf galaxy satellites, and reproduces the observed MDF and total mass content of the Sculptor dSph. @Lanfranchi10 use chemical evolution modelling to reproduce the observed MDF of a variety of dwarf spheroidal galaxies, including Sculptor [see also @Lanfranchi04]. @Kirby11 use a chemical evolution model to match the observed MDF and alpha-element distributions from a large sample of spectroscopic observations in Local Group dwarf spheroidal galaxies. @Revaz11 use a chemo-dynamical Smoothed-particle Hydrodynamics (SPH) code to model the properties of Local Group dwarf spheroidal galaxies. Their model for the Sculptor dSph correctly matches the observed MDF and shows an extended SFH. Furthermore, the narrow \[Mg/Fe\] distribution in their Sculptor-like model matches well with observations from HR spectroscopy.\ In this paper the SFH of a large area of the Sculptor dSph will be determined using CMD synthesis methods [e.g., @Tosi91; @Tolstoy96; @Gallart962; @Dolphin97; @Aparicio97] to interpret the deep wide-field photometry presented in @deBoer2011A. The available spectroscopic information from triplet and HR spectroscopy will be directly used together with the photometry to provide additional constraints on the SFH. The spatial coverage of the photometric data (covering $\approx$80% of the tidal radius of Sculptor) allows us to determine the SFH at different radii and quantify the observed radial age and metallicity gradients.\ Using the detailed SFH we also determine the probability distribution function for age for stars on the upper RGB, giving age estimates for individual stars. By linking these ages to the observed spectroscopic abundances we directly obtain, for the first time, the timescale of enrichment from different types of Supernovae (SNe).\ The paper is structured as follows: in section \[data\] we present our photometric and spectroscopic observations. In section \[method\] we describe our method of obtaining the SFH using the synthetic CMD method, adapted to include the MDF information. The detailed SFH analysis of the Sculptor dSph is given in section \[results\]. The details of the age determination of individual stars are given in Section \[indivages\]. Finally, the conclusions that can be drawn from the SFH are discussed in section \[conclusions\]. Data ==== Deep optical images of the Sculptor dSph in the B, V and I bands were obtained using the CTIO 4-m MOSAIC II camera. 
The reduction and accurate calibration of this dataset is described in detail in a preceding paper [@deBoer2011A]. The total coverage of the nine photometric fields observed is shown in Figure \[Sclcov\] as the solid black square. The spatial coverage of the B band photometry is complete for radii r$_{ell}$$\le$0.5 degrees, while the V and I bands are complete for r$_{ell}$$\le$1 degree.\ By stacking together several images for each pointing the deepest photometry possible was obtained. Short exposures were also obtained, to be able to accurately photometer the bright stars that are saturated in the deep images. In order to ensure accurate photometric calibration, images were obtained with the 0.9m CTIO telescope under photometric conditions. Observations were also made of Landolt standard fields [@Landolt07; @Landolt92]. Photometry was carried out on these images using DoPHOT [@Schechter93]. The different fields were placed on the same photometric scale and combined in order to create a single, carefully calibrated photometric catalog, as described in @deBoer2011A. ![Spectroscopic \[Fe/H\] and \[$\alpha$/Fe\] measurements with their errors obtained from HR observations (Hill et al., in prep) in the central 25$^{\prime}$ diameter of the Sculptor dSph. The solid (black) line shows the range of possible values of \[$\alpha$/Fe\] assumed for each bin of \[Fe/H\] in the SFH determination. \[fealprel\]](SclAMRerrbox.eps){width="49.00000%"} \ In addition, triplet spectroscopy is available for $\approx$630 individual RGB stars that are likely members of Sculptor, from medium resolution (R$\sim$6500) VLT/FLAMES observations [@Battaglia07; @Battaglia082; @Starkenburg10]. These observations measure \[Fe/H\] for a large sample of stars for which we also have photometry, out to a radius of 1.7 degrees from the centre of the Sculptor dSph (see Figure \[Sclcov\]).\ Furthermore, HR spectroscopy (VLT/FLAMES) is available for 89 individual RGB stars within the central part of the Sculptor dSph, for r$_{ell}$$\le$0.2 degrees [Hill et al., in prep, see @Tolstoy09]. These observations provide \[Fe/H\] as well as \[$\alpha$/Fe\] measurements. For \[$\alpha$/Fe\] we assume \[$\alpha$/Fe\] =(\[Mg/Fe\] +\[Ca/Fe\] +\[Ti/Fe\])/3. The HR spectroscopy includes a range in metallicity from $-$2.5$<$\[Fe/H\]$<$$-$1.0 dex, with alpha-element abundances showing a clear correlation with \[Fe/H\] for the range $-$0.3$<$\[$\alpha$/Fe\]$<$0.5 dex (See Figure \[fealprel\]). Method ====== CMDs contain the signatures of numerous evolutionary parameters, such as age, chemical abundance, initial mass function, etc. The typical method of determining a SFH relies on comparing observed and synthetic CMDs of the individual stars that can be resolved. There have been many schemes proposed to quantify the SFH extracted from the CMD [e.g., @Tosi91; @Tolstoy96; @Gallart962; @Dolphin97; @Aparicio97; @Dolphin02].\ We have created our own routine, Talos, which uses an approach that compares observed CMDs with a grid of synthetic CMDs through the use of Hess diagrams (density plots of stars in the CMD) to determine the SFH [similar to @Dolphin97; @Dolphin02]. The synthetic CMDs include observational effects in a statistical manner, which provides the most realistic way to include them.\ We assume that a SFH can be built up from a linear combination of simple stellar populations. 
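A minimal sketch of this bookkeeping, assuming NumPy arrays of colours and magnitudes and an array of per-population Hess diagrams (the bin edges and array shapes are illustrative, not the actual Talos configuration):

```python
import numpy as np

def hess_diagram(colour, magnitude, col_edges, mag_edges):
    """Bin a CMD into a Hess diagram: star counts per (colour, magnitude) cell."""
    counts, _, _ = np.histogram2d(colour, magnitude, bins=[col_edges, mag_edges])
    return counts

def model_hess(sfr, pop_hess):
    """Model Hess diagram as a linear combination of simple-population diagrams.
    pop_hess has shape (n_populations, n_colour_bins, n_magnitude_bins)."""
    return np.tensordot(sfr, pop_hess, axes=1)
```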
The advantage of this approach is that instead of synthesising a large number of artificial CMDs, each with their own complex Star Formation Rate (SFR(t)), the simple populations only need to be generated once. Then, the combination of simple stellar populations has to be found, which best represents the observed CMD.\ Using the standard technique a number of different CMDs (e.g., V,B-V and I,V-I) can be independently used to obtain SFHs. A CMD is inherently two dimensional (colour and magnitude), whereas when photometric information is available in more than two filters, we can use 3D information to more precisely constrain the SFH.\ To fully utilize all the available photometric information to constrain the SFH, Talos fits the measurements in all the available passbands at once. In this way all the advantages of the different CMDs (such as the more precise photometry in the I,V-I CMD and the larger colour range of the V,B-I CMD) are incorporated into a single, more accurate SFH.\ In addition to the photometric data we can also add spectroscopic observations of a large number of individual RGB stars. To take into account this extra information Talos also fits the observed MDF at the same time as the photometry, which allows us to put well motivated constraints on the metallicity range of the different stellar populations.\ The SFH determination technique consists of the following steps: 1. Construct synthetic CMD and MDF models 2. Find the combination of models that best match the data. 3. Determine the resolution and uncertainties of the final SFH. These steps will be explained in more detail in the following sections. Constructing synthetic CMD models {#CMDmodels} --------------------------------- The synthetic stellar evolution library adopted in Talos is the Darthmouth Stellar Evolution Database [@DartmouthI]. The database contains isochrones distributed with a specific grid of age, metallicity and $\alpha$-element abundance, among which are also those with \[$\alpha$/Fe\]$<$0. These are of particular importance for our analysis, since Sculptor contains a substantial number of stars (33% of the total) with low $\alpha$-element abundances (See Figure \[fealprel\]).\ In order to allow greater flexibility in choosing the population domain and parameter resolution to use when modelling the SFH, we interpolate between the provided isochrones to produce a finer grid. A routine is provided by the Dartmouth group, to interpolate isochrones in \[Fe/H\]. Interpolation in age and \[$\alpha$/Fe\] is done linearly, given that the grid of age points is sufficiently fine that there will only be small changes between isochrones of different age.\ The isochrone library does not model the HB, Asymptotic Giant Branch (AGB) or Blue Straggler Star (BSS) phases, which means we cannot use these evolutionary features to constrain the SFH. The AGB is especially important, since it merges with the RGB in the CMD. The largest uncertainties in our final model may come from misidentifying the AGB using RGB models or the BSS as a young population. However, the Dartmouth isochrones have been shown to give a good simultaneous fit to all other evolutionary features within a CMD, justifying their use [e.g., @Glatt2008a; @Glatt2008b].\ Synthetic CMDs are generated by drawing stars from the isochrones according to an initial mass function (IMF) and the mass range within each isochrone. The IMF used is the Kroupa IMF [@Kroupa01]. 
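Drawing stellar masses according to such an IMF can be sketched as an inverse-transform sample of a broken power law; the break mass and slopes below follow the commonly quoted Kroupa (2001) values above 0.1 M$_{\odot}$, and should be read as illustrative rather than the exact form used in Talos.

```python
import numpy as np

# Kroupa-like IMF above 0.1 Msun: dN/dm ~ m^-1.3 for 0.1-0.5 Msun, m^-2.3 above 0.5 Msun
edges  = np.array([0.1, 0.5, 120.0])   # solar masses, matching the adopted 0.1-120 Msun range
alphas = np.array([1.3, 2.3])

def _power_law_integral(a, b, alpha):
    """Integral of m^-alpha dm from a to b (valid for alpha != 1)."""
    return (b**(1.0 - alpha) - a**(1.0 - alpha)) / (1.0 - alpha)

def sample_kroupa(n, rng=np.random.default_rng(0)):
    """Draw n stellar masses by inverse-transform sampling of the broken power law."""
    # relative weight of each segment, including the continuity factor at 0.5 Msun
    cont = np.array([1.0, 0.5 ** (2.3 - 1.3)])
    weights = cont * np.array([_power_law_integral(edges[i], edges[i + 1], alphas[i])
                               for i in range(len(alphas))])
    seg = rng.choice(len(alphas), size=n, p=weights / weights.sum())
    a, b, al = edges[seg], edges[seg + 1], alphas[seg]
    u = rng.random(n)
    # inverse CDF of m^-alpha restricted to [a, b]
    return (a**(1.0 - al) + u * (b**(1.0 - al) - a**(1.0 - al))) ** (1.0 / (1.0 - al))

masses = sample_kroupa(100_000)   # these masses then feed the isochrone lookup
```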
An initial SFR for a stellar population (within a default mass range of 0.1$-$120 M$_{\odot}$) is assumed (resulting in several times more synthetic stars than in the observed CMD), which ensures that enough stars are generated in each part of the synthetic CMD, consistent with the time spent in each evolutionary phase. These synthetic stars are then placed at the distance of the Sculptor dSph, assuming a reddening of E(B-V)=0.018.\ This gives us ideal CMDs for each stellar population that we consider in the SFH analysis. To create a synthetic CMD that can be compared directly to the observed photometry we need to include observational effects in order to simulate our observational limitations. ### Adding observational effects The crucial aspect in comparing observed and synthetic CMDs is to determine the observational biases, such as photometric errors, incompleteness, etc.\ This is done by carrying out a large number of simulations, in which a number of artificial stars (with known brightness) are placed on the observed images. These images are then re-reduced in exactly the same way as the original images, after which the artificial stars are recovered from the photometry. In this way we obtain a look-up table, which can be used to accurately model the effects of observational conditions [e.g., @Stetson88; @Gallart961]. The lookup-table is used in Talos to assign an individual artificial star (with similar colours and magnitudes) to each star in an ideal synthetic CMD and considering the manner in which this star is recovered to be representative of the effect of the observational biases. ![The recovered I,V-I CMD of the artificial stars put into the observed images of Sculptor. The colours indicate the completeness fraction of artificial stars in each bin, with a scale on the right hand side of the plot. \[SclcompHessVI\]](SclcompHessVI.eps){width="49.50000%"} \ Using individual artificial stars has the disadvantage that a huge number of stars need to be generated in order to correctly sample the effects in each CMD bin. This is because using the same artificial star in assigning offsets to several synthetic stars creates artificial clustering in the CMD. However, this approach is the only way to properly take into account the colour-dependence of the completeness level and the asymmetry of the offsets applied to model stars at faint magnitudes [e.g. @Gallart961]. ![The magnitude of the 50% completeness level with increasing elliptical radius (r$_{ell}$) from the centre of the Sculptor dSph, in B (solid blue line), V (green dashed line) and I (red dotted line) filters. The vertical (black) line indicates the elliptical radius up to which observations in the B filter are available. \[radialcomp\]](radial50completeness.eps){width="45.00000%"} \ The set of artificial stars was generated with parameters encompassing the range 5$<$Age$<$15 Gyr, $-$2.5$<$\[Fe/H\]$<$$-$0.80 dex, $-$0.2$<$\[$\alpha$/Fe\]$<$0.40 dex. The stars were distributed randomly across the nine MOSAIC pointings in Sculptor, in 3 different filters. In each image no more than 5% of the total observed stars were ever injected as artificial stars at one time, so as not to change the crowding properties in the image. For each observed pointing 200$-$400 images containing artificial stars were created, in order to obtain a sufficient total number of artificial stars. This resulted in nearly 7000 images containing a total number of 3.5 million artificial stars spread across the full area of the Sculptor dSph. 
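The look-up table logic described above can be sketched roughly as follows, for a single magnitude-colour plane; the real tests are carried out per field and per filter, so the function names and binning here are purely illustrative.

```python
import numpy as np

def build_lookup(inj_mag, inj_col, rec_mag, rec_col, recovered, mag_edges, col_edges):
    """Group artificial stars by their injected (magnitude, colour) bin."""
    table = {}
    im, ic = np.digitize(inj_mag, mag_edges), np.digitize(inj_col, col_edges)
    for k in range(len(inj_mag)):
        offs = (rec_mag[k] - inj_mag[k], rec_col[k] - inj_col[k]) if recovered[k] else None
        table.setdefault((im[k], ic[k]), []).append(offs)
    return table

def apply_observational_effects(mag, col, table, mag_edges, col_edges, rng):
    """Perturb one ideal synthetic star with the outcome of a random matched
    artificial star; returns None if that star was lost (incompleteness)."""
    key = (np.digitize(mag, mag_edges), np.digitize(col, col_edges))
    entries = table.get(key, [None])
    offs = entries[rng.integers(len(entries))]
    return None if offs is None else (mag + offs[0], col + offs[1])
```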
Each image was re-reduced using the same techniques and calibrations as the original image, after which the output catalog was matched to the input catalog (while taking care not to match artificial stars to observed real stars), to produce a lookup-table containing the recovered artificial stars (and the recovered photometric offsets) in 3 filters.\ Figure \[SclcompHessVI\] shows the result of the artificial star tests in the I,V-I CMD of Sculptor. The completeness level accounts for only those objects that DoPHOT unambiguously recognises as stars. DoPHOT actually detects more objects, but the faintest objects do not have high enough signal-to-noise to be confidently distinguished as stars as opposed to unresolved galaxies. Only the unambiguously detected stars are considered in the completeness level, since those are observed with sufficient accuracy to be useful in our analysis. Figure \[SclcompHessVI\] shows the dependence of the completeness level on both colour and magnitude. The dependence on colour in particular has to be properly taken into account when generating realistic synthetic CMDs.\ The need to carry out artificial star tests over the entire observed field of view is highlighted in Figure \[radialcomp\]. Here we show the variation of the 50% completeness level with distance from the centre of Sculptor, in the three different filters. The completeness level in the B filter is zero for r$_{ell}$$\ge$0.5 degrees, because of the lack of B band observations in the outer parts of Sculptor. Figure \[radialcomp\] also shows that the centre of Sculptor is less complete at a fixed magnitude limit than the outer regions. This is due to the increased crowding in the central region, which means it is harder to determine the shape of the PSF of stars in the centre, causing fewer stars to be unambiguously detected at a given brightness level, and placing the 50% completeness at brighter magnitude levels.\ However, the CMDs at different radii from the centre [see Figure 5 from @deBoer2011A] show that the photometry in the central part of Sculptor actually goes deeper than in the outskirts (due to longer exposure times), with a well defined shape of the MSTOs. Additionally, using the artificial star test results to simulate observational conditions makes sure that this changing completeness level is properly taken into account when determining the SFH.\ Artificial star tests have only been carried out on the long, stacked exposures. For those stars saturated in the long exposures, observational effects are modelled by giving them random offsets within the photometric error distribution of stars with similar instrumental magnitude and colour. This is justified because the recovery fraction for the brightest stars is very close to 100%, and the offsets induced by crowding are insignificant and signal-to-noise is not an issue.\ The process of including observational errors in synthetic CMDs in a statistical manner means that they can be directly compared to the observed CMDs, to obtain the best matching SFH. Constructing MDF models {#MDFmodels} ----------------------- In order to include the spectroscopic MDF at the same time as the photometry in Talos we use the equivalent of a Hess diagram for \[Fe/H\]. This is done by generating synthetic CMDs of different stellar populations and binning the stars in metallicity. 
In this way it is possible to create synthetic MDF models for stellar populations of different \[Fe/H\], age and \[$\alpha$/Fe\].\ The spectroscopic observations only come from a fraction of the RGB stars, which are in turn a small fraction of the stars present in the CMD. To make sure that the synthetic CMDs fully sample the RGB, synthetic MDF populations are generated with an artificially increased initial SFR, which is later corrected in the MDF models. To correctly reproduce observational limits we construct the synthetic MDFs using only stars within a similar magnitude range to that of the observed spectroscopic sample. From the number of stars observed both photometrically and spectroscopically in the same magnitude range we then calculate the spectroscopic completeness fraction on the RGB. This fraction is used to scale down the synthetic MDF, and thus match the spectroscopic completeness.\ The observed spectroscopic uncertainties on the metallicities are simulated by considering each individual synthetic star to have a Gaussian profile with a width determined by the average observational uncertainty on \[Fe/H\]. The metallicities of these individual stars are then combined to form a synthetic MDF which takes into account the observed uncertainties, and can be directly compared to the observed MDF. Determining the SFH ------------------- In Section \[CMDmodels\] we described how Talos can be used to create accurate synthetic CMDs, which can be compared directly to the observed CMDs. We then described in Section \[MDFmodels\] how model MDFs are created in Talos, which add additional information that can be used to restrict the SFH. Here we now put everything together to determine the SFH of a galaxy.\ The technique used to determine the best SFH minimises the difference between observed and synthetic CMDs through Hess diagrams to produce the closest match to the observed data, as described in @Dolphin02. Photometric Hess diagrams are constructed by binning CMDs in magnitude and colour space (or three-magnitude space for Hess diagrams using the B, V and I filters) and counting the number of stars in each bin. The synthetic Hess diagrams are scaled to contain a number of stars equal to the observed Hess diagrams, to make sure that no model is given preference during the SFH determination based on the total number of stars it contains.\ In Talos, the synthetic MDFs are used in the fitting procedure in a similar manner to the photometric Hess diagrams. The spectroscopic MDF is treated as an extra observed dimension, for which the difference between model and observations also needs to be taken into account. A different weight is applied to the photometric and spectroscopic components, in order to enhance the importance of the spectroscopic information, which contributes less to the overall goodness of fit than the photometry, but significantly restricts the possible solutions.\ Since the observed data follow a Poisson distribution, the goodness of fit is expressed as a difference parameter between models and observations (a Poisson equivalent of $\chi^{2}$), given by @Dolphin02: $$\chi_{Poisson}^{2} = 2\sum_{i}\left(m_{i} - n_{i} + n_{i} \ln\frac{n_{i}}{m_{i}}\right),$$ in which $m_{i}$ is the total number of stars in a synthetic Hess bin ($m_{i}=\sum_{j}^{}$SFR$_{j} \times$CMD$_{i,j}$ in the photometric part, and $m_{i}=\sum_{j}^{}$SFR$_{j} \times$MDF$_{i,j}$ in the spectroscopic part) and $n_{i}$ the total number of stars in an observed Hess bin.
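A bare-bones sketch of evaluating and minimising this statistic, with the per-population Hess diagrams (photometric and/or MDF bins stacked into one vector per population) held in a single array; the choice of optimiser, the handling of empty bins and the omission of the photometric/spectroscopic weighting are simplifications, not the actual Talos implementation.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_chi2(sfr, pop_bins, obs_bins):
    """2 * sum( m - n + n*ln(n/m) ), with m_i = sum_j SFR_j * pop_bins[j, i]."""
    m = sfr @ pop_bins + 1e-12                      # tiny floor avoids division by zero
    n = obs_bins
    term = np.where(n > 0, n * np.log(np.where(n > 0, n, 1.0) / m), 0.0)
    return 2.0 * np.sum(m - n + term)

def fit_sfh(pop_bins, obs_bins):
    """Non-negative SFR weights minimising the Poisson difference parameter."""
    x0 = np.full(pop_bins.shape[0], obs_bins.sum() / pop_bins.sum())
    res = minimize(poisson_chi2, x0, args=(pop_bins, obs_bins),
                   bounds=[(0.0, None)] * pop_bins.shape[0], method="L-BFGS-B")
    return res.x
```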
By minimising the difference parameter we determine the SFH (given by SFR$_{j}$) that best matches both the observed CMD as well as the observed spectroscopic MDF. ### SFH uncertainties {#uncertainties} Equally as important as determining the best matching SFH is determining the uncertainties on this choice. The two main sources of statistical uncertainties on the determination of the SFH are related to data and parameter sampling of the SFH solution [@Aparicio09]. ![The input and recovered SFHs of a series of short (10 Myr) bursts of star formation at different input age. The black, solid histogram shows the input SFH, given the binning adopted. The red histograms show the recovered SFH, along with the fit of a Gaussian distribution as the dashed line. The mean ($\mu$) and variance ($\sigma$) of the fitted Gaussian distribution are also listed. \[SFHresolution\]](SFHresolution.eps){width="45.00000%"} \ Data sampling is related to how representative a model is of the underlying population. In the context of CMD analysis, constructing the same model twice, with different random samplings of the same isochrones will result in two slightly different synthetic CMDs. In order to quantify how this effect may impact the inferred SFH without synthesizing an entire new set of synthetic CMDs we generate a number of different representations of the observed CMD. Synthetic CMDs are generated from the SFH obtained in the first run-through of the code and randomly swapping a certain fraction (by default 20%) in the observed CMD. In this way we obtain a new “observed" CMD which is representative of a CMD with a different sampling of the underlying population. By generating a number of these CMDs and using them with the original models to find the SFH we can quantify the errors on the SFH caused by data sampling [@Aparicio09].\ The other major source of error is parameter sampling. This means that the choice of CMD gridding, age bins and metallicity bins will affect the final SFH. In order to quantify the effect of this gridding on the recovered SFH, different bin sizes and distributions are used and then compared. In the parameter space of age and metallicity three different shifts are applied: a shift of half a bin size in age and in metallicity and also a shift in both age and metallicity simultaneously. For the photometric binning two different bin sizes are used, with the different shifts applied to each. For all these different griddings the SFH is determined, after which the results are returned to a common grid.\ The average of all the different solutions is adopted as the final SFH, with errorbars determined as the standard deviation of the distribution of solutions. This technique has been shown to adequately take into account the major uncertainties on the recovered SFH, and give realistic errorbars on the final SFH [@Aparicio09]. ![image](MDFcompCaTKirfullerr.eps){width="45.00000%"} ![image](MDFcompCaTKirbrighterr.eps){width="45.00000%"} General properties of Talos {#Scltests} --------------------------- In order to verify the basic operation of Talos, a number of tests have been performed, and the results are shown in Appendix \[tests\]. We checked our ability to correctly recover the age and metallicity of a simple synthetic stellar population. We also showed that we can accurately recover a synthetic SFH of a continuous period of star formation from the earliest times until now. Furthermore, we tested Talos on observations of the globular cluster NGC1904. 
The metallicity we determined is consistent with that obtained from spectroscopic observations, showing that Talos can accurately recover the age and metallicity of an observed simple stellar population. The main effect that makes the output solution different from the input is the depth of the photometric data, and the set of parameter griddings adopted to compute the final SFH. These result in limits to the age resolution of our final SFH.\ In order to understand the limitations of the final SFH determined by Talos, it is important to obtain an accurate estimate of the age resolution of the solution. This is determined by recovering the SFH of a set of synthetic populations at different input ages, as described by [e.g., @Hidalgo11].\ Three synthetic populations were generated at the distance of the Sculptor dSph, with single short bursts of star formation (with a duration of 10 Myr) at an age of 8.5, 10.5 and 12.5 Gyr. The metallicity of the bursts was distributed to match the observed MDF of the Sculptor dSph, as were the observational errors.\ The recovered and input SFH for these bursts are shown in Figure \[SFHresolution\]. The recovered SFH is well fit by a Gaussian distribution, which shows that the mean of the central peak ($\mu$) is recovered at the correct age, with a minor shift of $\approx$0.1$-$0.2 Gyr. The recovered SFH shows that typically $\approx$40% of the total input SFR is contained within the central bin, at all ages. The star formation is spread out over the Gaussian distribution, leading to uncertainties in the recovered SFH. However, in the case of constant star formation (see Section \[synthetictests\]) the recovered star formation rates show a more accurate recovery of the input SFH.\ From the Gaussian fits to the three bursts in Figure \[SFHresolution\] we can derive the variance $\sigma$, which determines the resolution with which the burst is recovered. The three bursts are recovered with an age resolution of $\approx$1.5 Gyr at an age of 12.5 Gyr, $\approx$1.3 Gyr at an age of 10.5 Gyr and $\approx$1.1 Gyr at an age of 8.5 Gyr, which is consistent with a value of 12% of the adopted age. ![The relative fractions of different metallicities present on the RGB at different photometric depths, with respect to an MDF going down to M$_{V}$=$-$3.2 (V=19.5) on the RGB. The relative fraction indicates by which factor a population is incorrectly sampled in the CMD. For comparison the level of the HB is V$_{HB}$=20.13 (M$_{V}$=0.43). \[lumfuncfrac\]](lumfuncfraccmd35BVabs.eps){width="47.50000%"} Simulating the Sculptor dSph {#Sclspecifics} ---------------------------- To use Talos to determine the SFH of the Sculptor dSph the general method described in the preceding sections needs to be adapted to the specifics of this galaxy. ### General setup The observed CMD of Sculptor contains both stars that belong to Sculptor as well as Milky Way foreground and unresolved background galaxies. To avoid these objects influencing the SFH, the observed CMD is “cleaned" using the colour-colour diagram presented in @deBoer2011A. 
Foreground stars with colours that coincide with those of Sculptor are assumed not to significantly alter the total number of stars in different evolutionary features, as determined by using the Besançon models to predict the number of Milky Way stars in the CMD [@Robin03].\ Additionally, the observed CMDs [shown in Figure 5 from @deBoer2011A] show that Sculptor contains a significant number of BSS stars, which we do not attempt to include in the SFH determination as a younger population, since they are likely to be genuine BSS stars [@Mapelli09]. To avoid the BSS stars influencing the SFH, we generate a mock BSS population which is used as a fixed background in the SFH determination. The mock BSS stars are constructed by using only the main sequence part of populations spanning a range in metallicity of $-$2.5$<$\[Fe/H\]$<$$-$1.0, with ages between 3$-$4 Gyr. This background Hess diagram is subsequently scaled so the number of synthetic stars in the BSS region matches the observed number. ![image](SFHall.eps){width="49.00000%"} ![image](ZFHalldex.eps){width="49.00000%"} \ The SFH is determined using the CMD without including the RGB region. The number of stars on the RGB is low compared to that in the MSTO region, and therefore does not contribute strongly to the determination of the SFH. Furthermore, the RGB feature has been shown to substantially vary from one set to the other [e.g., @Gallart05]. Conversely, there is good agreement between the theoretical MSTO phase at old ages ($\ge$10 Gyr) in different isochrone sets. Instead, information from the RGB phase is taken into account through the inclusion of the MDF fitting.\ Determining the SFH without including the spectroscopic MDF shows similar overall trends as obtained when including the MDF. However, the addition of the MDF has a result of putting better constraints on less populated region of the CMD, such as the metal-rich stars, which would otherwise be fit using a lower metallicity and higher age, due to the mismatch between the RGB and MSTO in the isochrones.\ A single, well defined distance is assumed for the Sculptor dSph in the SFH determination, as obtained by a number of reliable distance indicators [@Pietrzynski08]. The effect of adopting a different distance is shifting the main peak of star formation to younger ages for a larger distance and older ages for a smaller distance. However, the internal distribution of the presented SFHs does not change significantly when adopting a different distance, given the small observed uncertainties on the distance. Furthermore, the good agreement between different distance determination methods gives confidence to the single distance adopted in this work. ### Taking into account the spectroscopic information The spectroscopic information which is used to provide extra constraints on the SFH comes from two different sources. For r$_{ell}$$\le$0.2 degrees the metallicity information comes from HR spectroscopy on the upper RGB, while in the outer parts of Sculptor (0.2$\le$r$_{ell}$$\le$1.7 degrees) it comes only from triplet spectroscopy which also includes fainter stars. triplet spectroscopy is also available in the central region, which shows that metallicities from both samples are placed on the same scale, and consistent with each other [@Battaglia082]. 
However, because the spectroscopic sample in the central region encompasses only the brightest $\approx$1mag of the RGB, an intrinsic bias may be present in the observed central MDF due to the limited sampling of the stellar populations on the upper RGB.\ To check for the presence of such a bias in our HR spectroscopic sample, a comparison is made with a different spectroscopic sample taken in the same region, going down to fainter magnitudes [@Kirby10]. Figure \[MDFcomparison\]a shows a comparison between the normalised MDFs of both samples. The MDF of the deeper sample clearly shows a larger fraction of metal-poor stars than the brighter sample. Figure \[MDFcomparison\]b shows that the MDF of both samples look the same, when the sample of RGB stars is truncated at the same magnitude. Figure \[MDFcomparison\] shows that an MDF determined from only the upper RGB stars in Sculptor results in a lower relative fraction of metal-poor stars. This effect is due to the luminosity function bias on the upper RGB, causing an incomplete sampling of the metal-poor components for the brightest RGB stars.\ In order to investigate the effects of the bias in more detail we consider the sampling of the MDF at different photometric depths. Using a synthetic CMD of a Sculptor-like galaxy for which the metallicity is known for each star, we construct MDFs from RGB stars going down to different photometric depths. By comparing to the input parameters of the synthetic CMD we find that going down to M$_{V}$=$-$3.2 (V=19.5) fully samples the overall MDF. MDFs of different depths are compared to the overall MDF to produce the correct relative fraction of stars sampled at each metallicity. Figure \[lumfuncfrac\] shows the relative fraction of stars sampled on the RGB at different metallicities in CMDs of different photometric depth. The effect of depth on the RGB is clearly seen, with shallower MDFs under-sampling populations with \[Fe/H\]$<$$-$1.9 dex and over-sampling more metal-rich components. A similar effect is also seen when applying the same approach to the observed, deep triplet data, with a more severe under-sampling of the populations with \[Fe/H\]$<$$-$1.9 dex compared to the synthetic CMD results.\ It is important not to neglect the metal-poor component in Sculptor, which is well sampled by the MSTO region, but not on the RGB. Figure \[lumfuncfrac\] shows that no bias is present in the MDF for \[Fe/H\]$\ge$$-$1.9 dex. Therefore, we adopt this as a metallicity cut-off, below which the SFH is only constrained using photometry. In the outer parts of Sculptor (r$_{ell}$$\ge$0.2 degrees) no metallicity cut-off is adopted, since the triplet spectroscopy extends to faint enough magnitudes to avoid this bias. ![image](SFHisotest.eps){width="45.00000%"} ![image](ZFHisotestdex.eps){width="45.00000%"} ### SFH parameter space To set the limits in age and metallicity used in determining the SFH of the Sculptor dSph we consider all information currently available in the literature. We first consider the spectroscopic MDF from triplet and HR spectroscopy. The range in \[Fe/H\] is limited at the metal-poor end by the availability of the isochrones, which go down to only \[Fe/H\]$\approx$$-$2.5 dex. This is not a problem, because the spectroscopic MDF shows that the majority of the stars in Sculptor (92% of the total) have \[Fe/H\]$\ge$$-$2.5 dex [@Starkenburg10]. The metal-rich is limited by the absence of stars with \[Fe/H\]$\ge$$-$1.0 dex in the observed MDF (see Figure \[fealprel\]). 
A binsize of 0.2 dex is assumed for \[Fe/H\], similar to the average uncertainty on \[Fe/H\] in the spectroscopic sample.\ To constrain \[$\alpha$/Fe\] we use the 89 stars in the HR sample in the central 25$^{\prime}$ diameter region of Sculptor. Figure \[fealprel\] shows that a well-defined \[$\alpha$/Fe\]$-$\[Fe/H\] relation from HR spectroscopy is present in the centre of Sculptor. We assume that there is no change in this relation with radius and directly use it to determine the range in \[$\alpha$/Fe\] that corresponds to the \[Fe/H\] used in a particular stellar population (the region defined by the solid black lines in Figure \[fealprel\]). In this way we generate a set of populations that cover the full range of \[Fe/H\] and \[$\alpha$/Fe\] in the spectroscopic observations.\ To restrict our choice of possible ages we notice that the distribution of stars in the observed CMDs does not allow any stars $\le$5 Gyr old to be present in Sculptor [@deBoer2011A]. Assuming a maximum age of 14 Gyr, corresponding to the age of the Universe, the range of ages we consider is thus between 5 and 14 Gyr, with a bin size of 1 Gyr. ![Synthetic V,B-V CMDs of the MSTO region of the Sculptor dSph, colour coded by age as inferred from the SFH. \[CMDoverlay\]](SclVBVAge.eps){width="49.00000%"} Results ======= We have described our method and carried out tests to show that it works as expected using real and synthetic test data (see Section \[Scltests\]). Now we apply Talos to our photometric and spectroscopic data sets of the Sculptor dSph [@deBoer2011A; @Starkenburg10 Hill et al., in prep].\ To derive the SFH of Sculptor the available photometry (excluding the RGB) is fit simultaneously with the spectroscopically determined MDF. Within r$_{ell}$$\le$0.43 degrees photometry in three filters (B,V and I) is used to determine the SFH, while further out only the V and I filters can be used. For radii r$_{ell}$$\le$0.2 degrees the spectroscopic MDF obtained from HR spectroscopy is used. This MDF is used in the SFH fitting for \[Fe/H\]$\ge$$-$1.9 dex, to avoid the luminosity function bias described in Section \[Sclspecifics\]. Below this value only the photometry is used to constrain the MDF and SFH. For r$_{ell}$$\ge$0.2 degrees the MDF obtained from Ca II triplet data is used, and no metallicity cut is applied. Furthermore, we include a static background Hess diagram to account for the presence of a BSS population (as discussed in Section \[Sclspecifics\]). The observed HB and AGB are ignored, since the isochrone set we use does not include these evolutionary features.\ The final Star Formation History and Chemical Evolution History (CEH) of the entire Sculptor dSph (out to r$_{ell}\approx$1 degree) are presented in Figure \[overallSFH\]. The SFH and CEH display the rate of star formation as a function of age and of metallicity, respectively, in units of solar mass per year, over the range of each bin. The total mass in stars formed in a bin can be obtained by multiplying the star formation rate by the age or metallicity range of that bin.\ The overall SFH shows that the Sculptor dSph is dominated by an ancient ($>$10 Gyr old), metal-poor stellar population. A tail of more metal-rich, younger stars is seen at a lower SFR down to an age of $\approx$6-7 Gyr. The shape of the SFH shows that the star formation rate declines with age, suggesting a single episode of star formation over an extended period of time ($\approx$7 Gyr).
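To make this bookkeeping explicit, a short illustrative sketch is given below. It is not part of Talos, and the star formation rates used are purely hypothetical placeholders; only the bin structure (1 Gyr bins between 5 and 14 Gyr) is taken from the setup above.

```python
import numpy as np

# Hypothetical SFH output: star formation rate per 1 Gyr age bin (M_sun/yr)
# between 5 and 14 Gyr; the values below are illustrative only.
age_edges = np.arange(5.0, 15.0, 1.0)                                   # Gyr
sfr = 1e-4 * np.array([0.2, 0.3, 0.5, 0.9, 1.5, 2.0, 2.6, 3.0, 2.8])    # M_sun/yr

# Mass formed in each bin = star formation rate * bin width (Gyr -> yr).
mass_per_bin = sfr * np.diff(age_edges) * 1e9                           # M_sun

print(f"total stellar mass formed: {mass_per_bin.sum():.2e} M_sun")
```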
![image](SclobsHessVI.eps){width="33.00000%"} ![image](SclmodelHessVIBSS.eps){width="33.00000%"} ![image](ScldifffracHessVIBSS.eps){width="33.00000%"} \ The SFH is determined using the Dartmouth Stellar Evolution Database [@DartmouthI]. To determine the effect of using different isochrone sets, the SFH of the central part of Sculptor (r$_{ell}$$\le$0.116 deg) was obtained using three different isochrone sets [@DartmouthI; @TeramoI; @TeramoII; @YonseiYaleI; @YonseiYaleII]. The SFHs displayed in Figure \[isotest\] show that the obtained solution is very similar in all cases, due to the fact that the SFH determination is dominated by the MSTO region, for which the different isochrone sets agree at old ages. This gives confidence to adopting the Dartmouth isochrone library for the determination of the final SFH of Sculptor.\ Using the SFH it is possible to simulate CMDs of the Sculptor dSph, which can be used to show the distribution of age across the CMD in detail. Figure \[CMDoverlay\] shows the synthetic V,B-V CMD of the Sculptor MSTOs colour coded with age as inferred from the SFH. The separation of different ages is clearly visible on the MSTO, which highlights the necessity of photometry going down to the oldest MSTOs to obtain accurate ages. ![The best matching simulated MDF (green solid histogram) and observed MDF (red dashed histogram) from spectroscopic observations. \[MDFfull\]](MDFall.eps){width="49.00000%"} Reliability of the SFH ---------------------- The most basic check of the reliability of the recovered SFH is to compare the synthetic and observed CMDs. Figure \[SclobsmodHess\] shows the observed I,V-I CMD of the Sculptor dSph and the synthetic CMD corresponding to the best matching SFH. Furthermore, the difference between both CMDs is shown, expressed as a fraction of the observed star counts in each bin.\ The total number of stars in the synthetic CMD is consistent with the observed CMD to within a few percent, showing that the total mass in stars is well matched to the observations. The CMDs are in general a good match, over most of the MSTO region. The static BSS background is not reproduced with the same colour distribution as the observed stars, leading to an under-dense region at blue colours (V$-$I$\approx$0.2) in Figure \[SclobsmodHess\]c. However, the transition from the BSS into the MSTO region is matched well, despite being modelled in a simple way (see Section \[Sclspecifics\]). Figure \[SclobsmodHess\]c shows that the synthetic RGB extends to redder colours than observed in Sculptor. The RGB colours of the adopted isochrone set are slightly too red for the metallicity and age determined from the MSTO region and Ca II triplet spectroscopy, which is a known problem for theoretical isochrones [e.g., @Gallart05]. The HB and AGB are obviously not reproduced in the synthetic CMD, as they are not included in the Dartmouth isochrone set.\ On the RGB a comparison can be made between the observed spectroscopic MDF and the MDF obtained from the SFH determination, as shown in Figure \[MDFfull\] for the full extent of the Sculptor dSph. We can see that the synthetic MDF is mostly consistent with the observed MDF within the errors, as expected, given that the MDF is used as an input in the SFH determination. A population which is not so well fit is the tail of metal-poor stars (\[Fe/H\]$<$$-$2.5 dex). From the spectroscopic observations it is known that these stars are present in Sculptor, although in small numbers [$\approx$8%, see @Starkenburg10].
The models used to fit the SFH do not contain metallicities low enough to include these stars. Despite these uncertainties we feel confident that the differences between the observed and synthetic CMDs and MDFs are sufficiently low to conclude that the fitted SFH constitutes an accurate representation of the full range of ages of stellar populations present in Sculptor.\ ![image](SFH3magspecfitcompdex.eps){width="97.00000%"} ![The observed (red dashed histogram) and synthetic (green solid histogram) MDFs from the SFH analysis in the five annuli. The solid (black) line in the upper two panels shows the \[Fe/H\] above which the HR spectroscopic MDF is unbiased, and can be used in the SFH determination. \[MDFall\]](MDFcomparison.eps){width="49.00000%"} Spatial variations in the SFH {#differentregions} ----------------------------- Due to the extensive spatial coverage of both the MSTO photometry and RGB spectroscopy we are able to determine the SFH over most of the area of the Sculptor dSph. Thus, it is possible to determine the radial variation in the SFH in five annuli (see Figure \[spatialSFH\]), each containing a similar number of stars observed in the V and I filters. For comparison, the Sculptor core radius is 0.1 degrees, just within the first annulus. The V and I filters were chosen since they offer the most complete spatial coverage away from the centre. ![The radial distribution of the total stellar mass per square degree in the Sculptor dSph, as determined from the SFH modelling. This provides the total mass of stars formed over the lifetime of Sculptor, 7.8$\times$10$^{6}$ M$_{\odot}$. \[radtotal\]](radtotal3magbig14area.eps){width="49.00000%"} \ The SFH is determined independently in each of these annuli in the same way as for the full galaxy. The SFH and CEH of the five selected spatial regions are shown in Figure \[spatialSFH\], using all photometric filters as well as the available spectroscopy. In the outermost annulus the SFH is determined only using the I,V-I CMD, together with the spectroscopic MDF. Figure \[spatialSFH\] shows that the SFH changes significantly with distance from the centre of Sculptor. The metallicity and age gradients discussed by @deBoer2011A are quantified here. The younger, metal-rich populations are found mostly in the central region and drop off towards the outer parts.\ Figure \[MDFall\] shows the observed and synthetic MDFs resulting from the SFH determination for all five annuli in Sculptor. The MDF shown in the upper two panels comes from HR spectroscopy, while the MDF in the lower three panels comes from Ca II triplet spectroscopy. Figure \[MDFall\] shows that the synthetic MDFs resulting from the SFH determination are consistent with the observed MDFs within the errors. In the upper two panels the synthetic MDF lies above the observed MDF for metallicities lower than indicated by the black line and below the observed MDF for higher metallicities. This is similar to the effects of the luminosity function bias seen in Figure \[lumfuncfrac\]. ![The cumulative radial distributions of stellar populations with different ages. \[radvariations\]](radagecumnormarea.eps){width="49.00000%"} \ Using the SFH at different positions it is possible to determine the radial distribution of the total mass of stars in Sculptor. This is done by integrating the SFRs over the duration of star formation, at different radii from the centre. Figure \[radtotal\] shows the radial distribution of the total stellar mass, obtained from the SFH modelling.
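A minimal sketch of this radial bookkeeping is shown below. It is not part of the analysis pipeline; the per-annulus masses, annulus boundaries and ellipticity are illustrative placeholders (only the total of 7.8$\times$10$^{6}$ M$_{\odot}$ is taken from the text), and the elliptical-annulus area formula is our own simplifying assumption.

```python
import numpy as np

# Illustrative inputs: total stellar mass formed in each annulus (from integrating
# the SFR over age) and the elliptical radii bounding the five annuli.
mass_per_annulus = np.array([2.6e6, 2.0e6, 1.5e6, 1.1e6, 0.6e6])   # M_sun (placeholder)
r_edges = np.array([0.0, 0.116, 0.23, 0.38, 0.58, 1.0])            # deg (placeholder)
ellipticity = 0.32                                                  # assumed 1 - b/a

# Area of each elliptical annulus: pi * (1 - e) * (r_out^2 - r_in^2).
area = np.pi * (1.0 - ellipticity) * np.diff(r_edges**2)            # deg^2

surface_density = mass_per_annulus / area                           # M_sun / deg^2
cumulative = np.cumsum(mass_per_annulus) / mass_per_annulus.sum()

for r_out, sd, cf in zip(r_edges[1:], surface_density, cumulative):
    print(f"r_ell <= {r_out:5.3f} deg : {sd:9.2e} M_sun/deg^2, cumulative {cf:4.2f}")
```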
The total mass in stars formed is highest in the central parts of Sculptor (r$_{ell}$$\le$0.2 degrees) and drops off steeply towards the outskirts. The total mass in stars formed over all radii in Sculptor for our SFH is 7.8$\times$10$^{6}$ M$_{\odot}$ within an elliptical radius of 1 degree or 1.5 kpc. To determine the core radius of the total mass distribution we use a Sérsic profile to fit the distribution shown in Figure \[radtotal\]. The resulting core radius, r$_{c}$=0.11$\pm$0.04 degrees, is consistent within the errors with that derived from the observed density profile [@Battaglia081].\ It is also possible to quantify the radial distribution of populations with different ages. Figure \[radvariations\] shows the cumulative radial distributions of stellar populations with different ages, as determined from SFH modelling. Figure \[radvariations\] is another way of showing that the oldest population is the most extended component, while younger populations show an increasing central concentration, as qualitatively described in @deBoer2011A. Resolving bursty star formation {#burstystarformation} ------------------------------- ![image](SFHannul51burst.eps){width="49.00000%"} ![image](ZFHannul51burstdex.eps){width="49.00000%"} One crucial aspect of the interpretation of our SFH is: can we distinguish between bursty and continuous star formation, given our SFH resolution and uncertainties? The intrinsic uncertainties on the SFH determination (as described in Section \[uncertainties\]) result in a smoothing of the star formation rates, potentially hiding the presence of bursts of star formation.\ To test our ability to distinguish a series of distinct bursts of star formation from continuous star formation in Sculptor, we carry out a series of experiments using the final SFH of the innermost and outermost annuli of Sculptor (annul1 and annul5 in Figure \[spatialSFH\]). The metallicities and star formation rates from the final SFH are adopted, and distributed over one or more short bursts of star formation.\ For a single narrow (10 Myr) burst of star formation at an age of 13.5 Gyr and a $-$2.2$\le$\[Fe/H\]$\le$$-$2.0 dex metallicity range, the recovered and input SFH and CEH for annulus 5 are shown in Figure \[burst1\]. The recovered SFH matches well with the final adopted SFH of the outermost annulus given in Section \[differentregions\]. This suggests that the star formation in the outermost part of Sculptor can be approximated with a single, short burst of star formation. However, the SFH and CEH shown in Figure \[burst1\] indicate that the spread in age and metallicity of the burst must be more extended than assumed here.\ A similar analysis for annulus 1 of Sculptor shows that the recovered SFH using a single, short burst is clearly inconsistent with the adopted SFH of the inner annulus. A comparison of the synthetic CMD with the observed CMD of annulus 1 shows a bad fit to the data, with a value of $\chi^{2}$=5.51 (compared to $\chi^{2}$=2.94 for the adopted SFH of annul1), making it unlikely to be an accurate representation of the actual SFH in the centre of Sculptor. Additionally, it seems unphysical to assume that the large spread in metallicity observed in the centre of Sculptor (1.5 dex) can be built up during a single short episode of star formation.\ We continue by adding a second burst of star formation to see if this can improve the match to the SFH of the innermost annulus of Sculptor.
Figure \[burst2\] shows the recovered SFH of a synthetic population with two narrow 10Myr bursts of star formation at different age, with metallicities and star formation rates determined from the adopted SFH of annulus 1. In each test an old burst was assumed to take place at an age of 13.5 Gyr, with an additional burst at a younger age. Figures \[burst2\]a,b show that the input SFH is not recovered as two individual bursts if the age separation between the two bursts is less than 4 Gyr. Only when the youngest burst is placed at an age of 8.5 Gyr, that a separation can be seen between the two bursts in the SFH, as shown in Figure \[burst2\]c. ![The input and recovered SFH for a series of synthetic populations with two short (10 Myr) bursts of star formation. The oldest takes place at an age of 13.5 Gyr, with the second burst at an age of (**a**) 10.5 Gyr, (**b**) 9.5 Gyr and (**c**) 8.5 Gyr. The solid black histogram shows the input population, given the SFH binning adopted. The solid red histogram shows the recovered SFH along with statistical errorbars. For comparison, the adopted SFH of annulus 1 of Sculptor (Figure \[spatialSFH\]) is shown as the black dashed histogram. \[burst2\]](SFHburst2.eps){width="49.00000%"} \ The SFHs shown in Figure \[burst2\] are in relatively good agreement with the final adopted SFH of annulus 1, especially when the youngest burst is placed at 9.5 Gyr (Figure \[burst2\]b). The recovered SFH shows features consistent with the adopted SFH at similar strengths. However, a comparison of the synthetic CMD to the observed CMD of annulus 1 shows that the synthetic populations provide a bad fit to the observed CMD, with $\chi^{2}$ values of 4.32, 4.10 and 4.28 respectively for the populations with the young burst at 10.5, 9.5 and 8.5 Gyr. The smooth SFH, as given in Section \[differentregions\] provides a better representation of the observed CMD, which makes it unlikely that strong, short bursts of star formation dominated the SFH of the Sculptor dSph.\ A synthetic population with three narrow 10Myr bursts of star formation, at 13.5, 10.5 and 8.5 Gyr, gives a better agreement with the observed SFH. Furthermore, the synthetic CMD provides a better fit to the observed CMD of annulus 1, with a $\chi^{2}$ value of 3.96, although still not as good as the smooth SFH given in Section \[differentregions\]. Taking this approach further, adding ever more short bursts of star formation will provide a better representation of the observed CMD, to the point where we can not distinguish between bursty and continuous star formation. In the end, this is how our model builds up the SFH. The limits of the bursty nature of the Sculptor SFH cannot be tied down with CMD analysis at such old ages as these with better accuracy than shown in Figure \[burst2\]. However, the form of the CMD and the metallicities of individual RGB stars would be markedly different if star formation had truly occurred in a few short bursts, and also other evidence such as the broad MDF makes it unlikely. The timescale for chemical evolution {#indivages} ------------------------------------ ### Determining ages of individual stars Using the SFH determined for Sculptor it is possible to go back to the spectroscopic sample and determine the ages of the individual RGB stars. For each observed star with spectroscopic abundances we find all the stars in a synthetic CMD made with the SFH of Sculptor, with the same magnitude (in all filters) and metallicity within the observed uncertainties. 
These stars are considered to be representative of the age of the observed star. The mean age of the matched sample is adopted as the age of the observed star, with the corresponding standard deviation of the sample as an errorbar. In the case of Sculptor around 100 synthetic stars are typically available to compute the mean and standard deviation for each observed star, ensuring enough stars are present to determine a reasonable errorbar.\ To demonstrate the advantage of using the SFH results to obtain ages for individual RGB stars a comparison is made to the standard simple isochrone fitting technique, which uses a fine grid of \[Fe/H\], age and \[$\alpha$/Fe\] to obtain the age and uncertainty for an observed star along an isochrone. Figure \[AMR\]a shows the Age-Metallicity Relation (AMR) obtained using simple isochrone fitting, and Figure \[AMR\]b using our full SFH approach. The AMR determined using isochrone fitting has larger age uncertainties and allows younger ages for each star, which are inconsistent with the SFH (see Figure \[overallSFH\]). This is because the age for simple isochrone fitting is dependent only on the colour of the RGB and the measured \[Fe/H\], which allows a wide range in ages for each observed star. Conversely, the AMR coming from the SFH is more tightly constrained because information from the entire CMD (and all modelled evolutionary phases) is used to constrain the age of a single observed star. ### Chemical evolution of the Sculptor dSph With accurate ages assigned to each RGB star in the HR spectroscopic sample it is possible to effectively measure the evolution of particular elements and, for the first time, directly determine timescales for chemical evolution. Figure \[MgFeage\]a shows \[Mg/Fe\] for RGB stars in Sculptor measured from HR spectroscopy, with the age of the individual stars colour coded. Figure \[MgFeage\]b shows that \[Mg/Fe\] follows a uniform trend with age, such that the oldest stars are more enhanced than the younger ones. We can also determine the age at which the “knee" [@Tinsley79; @Gilmore91] occurs. This marks the time at which SNe Ia start to contribute to the \[Fe/H\] content of a galaxy [@Matteucci90; @Matteucci03]. In Sculptor, the turnover takes place between $-$2.0$<$\[Fe/H\]$<$$-$1.8 dex, from which we determine that the Fe from SNe Ia started to be produced 10.9$\pm$1 Gyr ago. Therefore, the SNe Ia started contributing noticeably to the chemical evolution of Sculptor approximately 2$\pm$1 Gyr after the beginning of star formation. This is the first direct measure of this timescale, although it has previously been inferred from SNe Ia timescales and chemical evolution models [@Mannucci06 and references therein]. Conclusions =========== We have presented the first detailed SFH of the Sculptor dSph, using a combination of deep, wide-field multi-colour photometry and spectroscopic metallicities and abundances. The method used to determine the SFH (described in Section \[method\]) directly combines, for the first time, photometry with spectroscopic abundances.\ The SFH (see Figure \[overallSFH\]) shows features similar to previous rough SFHs [e.g. @DaCosta84; @HurleyKeller99; @Dolphin02; @Tolstoy03], but resolves the stellar ages with much greater accuracy. Additionally, the SFH quantifies the age and metallicity uncertainties and provides well-motivated errors.
The SFH and CEH have been determined over a large fraction of the Sculptor dSph ($\approx$80% of the tidal radius), allowing us to quantify the radial age and metallicity gradients in Sculptor.\ The SFH shows that star formation took place in Sculptor for an extended period of time, from 14 to 7 Gyr ago (see Figure \[overallSFH\]) with a simple, steadily decreasing trend. The spatially resolved SFH over the whole galaxy (see Figure \[spatialSFH\]) shows that a radial gradient is present in age and metallicity. The MDF (right-hand panels of Figure \[spatialSFH\]) shows that the metal-poor, old populations are present at all radii while the more metal-rich, younger populations are found more toward the centre, consistent with previous qualitative results [@deBoer2011A].\ We explored the temporal resolution of our final SFH for the innermost and outermost parts of Sculptor under study. We find that the outermost annulus is roughly consistent with a single episode of star formation. In the inner parts of Sculptor it is difficult to find a bursty SFH that matches all the available data. Therefore, we find no reason to assume bursty star formation episodes here and prefer the overall picture of a single, extended episode of star formation. Additionally, the spatially resolved SFH and CEH are more consistent with an age gradient than with two separate populations.\ The SFH and CEH at different radii from the centre are consistent with the scenario where Sculptor first experienced a single sizeable burst of star formation at early times, with more metal-rich populations forming ever more concentrated towards the centre until about 6$-$7 Gyr ago, when the star formation activity ceased.\ Likewise, the chemical evolution of the Sculptor dSph seems to be straightforward, according to Figure \[MgFeage\], which shows that \[Mg/Fe\] decreases steadily with time. The simple decline in the \[Mg/Fe\] distribution suggests that the chemical enrichment occurs uniformly over the SFH of the Sculptor dSph, with a change in slope when the SNe Ia start to contribute.\ The timescale on which SNe Ia start to contribute significantly to the chemical evolution of Sculptor (approximately 2$\pm$1 Gyr after the beginning of star formation) is comparable within the errors to the timescale expected from the theory of SNe Ia timescales, although inconsistent with timescales of prompt SNe Ia explosions [see e.g. @Raiteri96; @Matteucci01]. The small number of stars and the large scatter on the knee position in Figure \[MgFeage\]a means that the turnover age is not exactly defined. A larger sample of metal-poor stars is needed to obtain better statistics.\ The formation of a well defined, narrow $\alpha$-element vs. metallicity distribution and a tight Age-Metallicity relation (see Figure \[AMR\]b) is consistent with the overall picture that the Sculptor dSph formed stars uninterrupted for an extended period of time. The temporal evolution of individual stars suggests a slow build up over several gigayears. A galaxy with a shorter period of star formation is unlikely to show such a well-defined extended AMR. If the SFH were more bursty, a greater spread in \[$\alpha$/Fe\] would also be expected, instead of a smooth trend with age and \[Fe/H\].\ As a result, Sculptor can be considered as a good benchmark for isolated star formation over an extended period of time at the earliest epochs. 
![image](AMRerriso.eps){width="49.00000%"} ![image](AMRerrSFH.eps){width="49.00000%"} ![image](MgFeHage.eps){width="49.00000%"} ![image](AgeMgFeerr.eps){width="49.00000%"} Acknowledgements ================ The authors thank ISSI (Bern) for support of the team “Defining the full life-cycle of dwarf galaxy evolution: the Local Universe as a template". T.d.B., E.T., E.S. and B.L. gratefully acknowledge the Netherlands Foundation for Scientific Research (NWO) for financial support through a VICI grant. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement number PIEF-GA-2010-274151. The authors would like to thank the anonymous referee for his/her comments, that helped to improve the paper. Tests of the method {#tests} =================== In order to test the ability of Talos to accurately recover a SFH, a number of tests were made on CMDs for which the properties are known, as well as on observed data for a well studied globular cluster. Synthetic tests {#synthetictests} --------------- First of all, a number of simple artificial stellar populations were generated. By recovering the SFH of these populations it is possible to test how accurate our method can recover an input SFH. ![image](SFHdeltanoerrfin.eps){width="49.00000%"} ![image](ZFHdeltanoerrfindex.eps){width="49.00000%"} ![image](SFHdeltaspecfitfin.eps){width="49.00000%"} ![image](ZFHdeltaspecfitfindex.eps){width="49.00000%"} \ We determine the ability of Talos to recover the age, metallicity and SFR in a series of synthetic episodes of star formation. The stellar population was generated assuming the distance modulus and reddening of Sculptor, and using the artificial star tests to obtain realistic photometric uncertainties. In this way realistic colour magnitude diagrams can be obtained, of the same quality as the observed CMD of the Sculptor dSph.\ As a first check, we apply the SFH fitting method to a single synthetic episode of star formation. The Dartmouth isochrones are used to generate a synthetic CMD with a continuous star formation between 9 and 10 Gyr and $-$1.9$<$\[Fe/H\]$<$$-$1.7 dex. First, the episode was generated without any photometric errors and the SFH determined using the exact parameter gridding of \[Fe/H\] and age as used to generate the population (green histogram). The SFH is also determined using a set of different parameter griddings (red histogram, see Figure \[1burst\]a,b) in order to test the effect of using different griddings to obtain the uncertainties on the SFH (see Section \[uncertainties\]).\ Given exactly the same parameter gridding as the input population, the SFH is recovered at the right age and metallicity with the correct strength. A limited amount of “bleeding" is observed, due to the uncertainties induced by the quality of the data. The effect of using a set of different griddings is more substantial bleeding of the star formation rate into neighbouring bins in age and metallicity (as seen in the red histograms in Figure \[1burst\]a,b). This bleeding is a direct consequence of the quality of the photometric data and the use of different parameter griddings, and determines the SFH resolution (see Section \[uncertainties\]). Since we do not have a priori knowledge of the duration and metallicity of an episode of star formation in the observed data, the choice of adopting a single population gridding will lead to biases in the results. 
Therefore, we always consider a range of different parameter griddings in obtaining the final SFH. ![image](SFHcontfinspec.eps){width="49.00000%"} ![image](ZFHcontfinspecdex.eps){width="49.00000%"} \ When including realistic photometric errors determined from artificial star tests for the model of an episode of star formation, the same bleeding effect is seen. The parameters of the recovered SFH are determined by fitting a Gaussian profile, see Figures \[1burst\]c,d. The peaks of the SFH and CEH are recovered well within the input values (\[Fe/H\]$_{mean}$=$-$1.84 dex, Age$_{mean}$=9.64 Gyr), but the star formation is distributed over more bins, spreading out the star formation episode in time. Figure \[1burst\] shows that $\approx$40% of the total star formation is typically recovered within the central peak. Due to the quality of the observed photometric data (which only just detects the oldest MSTOs) there remains some degeneracy between the burst strength and duration, which could be removed by obtaining deeper CMDs that resolve the MSTO with more accuracy.\ Next we consider a more realistic synthetic population which has experienced constant star formation (SFR=10$^{-4}$M$_{\odot}$/yr) over the metallicity range $-$2.5$<$\[Fe/H\]$<$$-$1.1 dex between 6 and 15 Gyr ago. To include the effect of constraints from spectroscopic observations, the solution was determined taking into account a synthetic MDF which samples 50% of the RGB stars. The results are given in Figure \[contburst\]. It can be seen that the input values are correctly recovered for a synthetic population with constant star formation, within the SFH uncertainties. Globular cluster NGC 1904 ------------------------- The final test we carry out is to apply our method to real observations of a globular cluster, which is, within our errors, a simple stellar population. During our observing run the Galactic globular cluster NGC 1904 was also observed in the B,V and I filters. For NGC 1904 there have been several photometric and spectroscopic studies (see Table \[N1904pars\]), making it a good test of our method. The reduction of these observations, as well as the artificial star tests, was done in exactly the same way as for Sculptor.

  Property            Value                      Reference
  ------------------- -------------------------- ---------------
  (m-M)$_{V}$         15.45$\pm$0.02             [@Ferraro92]
  E(B-V)              0.01                       [@Harris96]
  $[$Fe/H$]$          $-$1.579$\pm$0.069 dex     [@Carretta09]
  $[\alpha$/Fe$]$     0.31 dex                   [@Carretta10]

  : Adopted properties of NGC 1904 \[N1904pars\]

\ To obtain the SFH for NGC 1904, \[$\alpha$/Fe\] was chosen as a fixed value constrained by spectroscopic observations (see Table \[N1904pars\]), while the age and metallicity were left as free parameters. The best solution, using only the available photometric data, is given in Figure \[N1904SFH\]. The SFH and CEH show bleeding into neighbouring bins, due to measurement errors and the use of different parameter griddings. This is similar to the bleeding effect seen in Figure \[1burst\]. The SFH clearly indicates a very old population, in good agreement with typical globular cluster ages. A Gaussian fit to the metallicity distribution gives a mean value ($\mu$) at \[Fe/H\]=$-$1.58 dex, with a dispersion of $\sigma$=0.17 dex. This is in good agreement with the spectroscopic \[Fe/H\] given in Table \[N1904pars\].
This shows that Talos is able to recover the age and metallicity of a real data set with all the errors and uncertainties that it implies.\ These experiments show the capability of Talos to recover the age and metallicity of a stellar population, as well as the limit of our ability to unambiguously distinguish a burst of star formation from a more continuous SFR over a longer time. ![image](SFH19043mag.eps){width="49.00000%"} ![image](ZFH19043magdex.eps){width="49.00000%"} [^1]: Visiting astronomer, Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which are operated by the Association of Universities for Research in Astronomy, under contract with the National Science Foundation.
--- abstract: 'There has been growing interest in increasing the amount of radio spectrum available for unlicensed broadband wireless access. That includes “prime" spectrum at lower frequencies, which is also suitable for wide area coverage by licensed cellular providers. While additional unlicensed spectrum would allow for market expansion, it could influence competition among providers and increase congestion (interference) among consumers of wireless services. We study the value (social welfare and consumer surplus) obtained by adding unlicensed spectrum to an existing allocation of licensed spectrum among incumbent service providers. We assume a population of customers who choose a provider based on the minimum [*delivered price*]{}, given by the weighted sum of the price of the service and a congestion cost, which depends on the number of subscribers in a band. We consider models in which this weighting is uniform across the customer population and where the weighting is either high or low, reflecting different sensitivities to latency. For the models considered, we find that the social welfare depends on the amount of additional unlicensed spectrum, and can actually decrease over a significant range of unlicensed bandwidths. Furthermore, with nonuniform weighting, introducing unlicensed spectrum can also reduce consumer welfare.' author: - | Thành Nguyen[^1]\ Hang Zhou, Randall A. Berry, Michael L. Honig [^2]\ Rakesh Vohra[^3] bibliography: - 'ref.bib' title: The Cost of Free Spectrum --- Introduction ============ The Model {#sec:model} ========= Main results {#sec:result} ============ Conclusions {#sec:conclude} =========== We have studied a model for adding unlicensed spectrum to a market for wireless services in which incumbents have licensed spectrum. Our analysis has shown that there is a range of scenarios in which adding unlicensed spectrum reduces overall welfare. Specifically, this occurs when the additional unlicensed bandwidth is relatively small, so that it is readily congested. In contrast, examples have shown that this decrease does not occur when the additional bandwidth is assigned to a new or incumbent SP as licensed. We emphasize that a key property of the unlicensed spectrum in our model is that the congestion increases with the total traffic offloaded by [*all*]{} SPs, which may be expected at low frequencies (TV bands). Hence our results indicate that the success of applications in unlicensed bands at high frequencies may not translate to similar success at lower frequencies. From a policy point-of-view these results indicate that if spectrum is abundant, then adding unlicensed spectrum should not lower welfare. Hence in settings such as rural areas where demand is naturally limited, the type of congestion effects we consider may not be a significant issue. However, when spectrum is scarce, introducing relatively small amounts of white space might actually reduce welfare. Thus, in areas where demand is high, policies to limit congestion should be considered, such as establishing a market for a limited number of device permits as in [@Peha09]. These conclusions assume that the unlicensed spectrum offers services that compete with the services provided in the licensed bands. While that may create the most social welfare in some scenarios, in other scenarios the unlicensed spectrum might be used to provide other types of services (e.g., within a local area). 
Indeed proponents of unlicensed spectrum have argued that open access fosters innovation in technology and business models that may lead to unforeseen uses of this spectrum ([@MilgromLevin]). That possibility must then be weighed against the potential for innovation in licensed allocations in addition to the congestion effects considered here. Missing Proofs {#sec:appendix} ============== Numerical Examples {#sub:numerical} ================== [^1]: Contacting author, Krannert School of Management, Purdue University, [email protected] [^2]: Northwestern University [^3]: University of Pennsylvania
--- abstract: 'We extend the Heston stochastic volatility model to a Hilbert space framework. The tensor Heston stochastic variance process is defined as a tensor product of a Hilbert-valued Ornstein-Uhlenbeck process with itself. The volatility process is then defined by a Cholesky decomposition of the variance process. We define a Hilbert-valued Ornstein-Uhlenbeck process with Wiener noise perturbed by this stochastic volatility, and compute the characteristic functional and covariance operator of this process. This process is then applied to the modelling of forward curves in energy markets. Finally, we compute the dynamics of the tensor Heston volatility model when the generator is bounded, and study its projection down to the real line for comparison with the classical Heston dynamics.' address: | Fred Espen Benth and Iben Cathrine Simonsen\ Department of Mathematics\ University of Oslo\ P.O. Box 1053, Blindern\ N–0316 Oslo, Norway author: - Fred Espen Benth and Iben Cathrine Simonsen title: The Heston stochastic volatility model in Hilbert space --- Introduction ============ Ornstein-Uhlenbeck processes in Hilbert space has received some attention in the literature in recent years (see Applebaum [@Apple]), one reason being that it is a basic process for the dynamics of commodity forward prices (see Benth and Krühner [@BK-HJM]). In the modelling of financial prices, the stochastic volatility dynamics plays an important role, and in this paper we propose an infinite dimensional version of the classical Heston model (see Heston [@H]). On a separable Hilbert space $H$, an Ornstein-Uhlenbeck process $X(t)$ takes the form $$dX(t)=\mathcal C X(t)\,dt+\sigma\,dB(t),$$ where $\mathcal C$ is some densely defined linear operator and $B$ is an $H$-valued Wiener process. Usually, $\sigma$ is some non-random bounded linear operator on $H$, being a scaling of the noise which is referred to as the volatility. We propose to model $\sigma$ as a time-dependent stochastic process with values in the space of bounded linear operators. More specifically, we consider a stochastic variance process $\mathcal V(t)$ being defined as the tensor product of another Ornstein-Uhlenbeck process with itself, which will become a positive definite stochastic process in the space of Hilbert-Schmidt operators on $H$. We use its square root process as a volatility process $\sigma$ in the dynamics of $X$. Our construction is an extension of the classical Heston stochastic volatility model. If $H$ is some suitable space of real-valued functions on $\mathbb R_+$, the non-negative real numbers, and $\mathcal C=\partial/\partial x$, one can view $X(t,x)$ as the risk-neutral forward price at time $t\geq 0$ for some contract delivering a given commodity at time $t+x$. Such forward price models (with generalisations) have been extensively analysed in Benth and Krühner [@BK-HJM], being stochastic models in the so-called Heath-Jarrow-Morton framework (see Heath, Jarrow and Morton [@HJM]) with the Musiela parametrisation. The analysis relates closely to a long stream of literature on forward rate modelling in fixed-income markets (see Filipovic [@filipovic] and references therein). However, stochastic volatility models from the infinite dimensional perspective have not, to the best of our knowledge, been studied to any significant extent. 
An exception is the paper by Benth, Rüdiger and Süss [@BRS], who propose and analyse an infinite dimensional generalisation of the Barndorff-Nielsen and Shephard stochastic volatility model (see Barndorff-Nielsen and Shephard [@BNS]). As indicated, we define $\mathcal V(t)=Y(t)^{\otimes 2}$, where $Y$ is an $H$-valued Gaussian Ornstein-Uhlenbeck process. We prove several properties of the tensor Heston variance process $\mathcal V$, and show that the square-root process $\mathcal V^{1/2}$ is explicitly available. Moreover, we present a family of Cholesky-type decompositions of $\mathcal V$, which will be our choice as stochastic volatility in the dynamics of $X$. We study probabilistic properties of both $\mathcal V$ and $X$, and specialize to the situation of a commodity forward market where we provide expressions for the implied covariance structure between forward prices with different times to maturity. In the situation when the Ornstein-Uhlenbeck process $Y$ is governed by a bounded generator, we can present a stochastic dynamics of $\mathcal V$ which can be related to the Heston model in the finite dimensional case. In particular, our model is an alternative to the Wishart process of Bru [@Bru]. Notation -------- We let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\geq 0}, \mathbb{P})$ be a filtered probability space and $H$ be a separable Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and associated norm $|\,\cdot\,|$. Furthermore, we let $L(H)$ denote the space of bounded linear operators from $H$ into itself, which is a Banach space with the operator norm denoted $\|\cdot\|_{\text{op}}$. The adjoint of an operator $A\in L(H)$ is denoted $A^*$. Furthermore, $\mathcal{H}=L_{HS}(H)$ denotes the space of Hilbert-Schmidt operators in $L(H)$. $\mathcal{H}$ is also a separable Hilbert space, and we denote its inner product by $\langle\langle\cdot,\cdot\rangle\rangle$ and the associated norm by $\|\,\cdot\,\|$. The tensor Heston stochastic variance process ============================================= Let $\{W(t)\}_{t\geq 0}$ be an $\mathcal F_t$-Wiener process in $H$ with covariance operator $Q_W\in L(H)$, where $Q_W$ is a symmetric and positive definite trace class operator. Define the Ornstein-Uhlenbeck process $\{Y(t)\}_{t\geq 0}$ in $H$ by $$\label{Y} dY(t) = \mathcal{A} Y(t)\, dt + \eta\, dW(t), \mspace{40mu} Y(0)=Y_0 \in H,$$ where $\mathcal{A}$ is a densely defined operator on $H$ generating a $C_0$-semigroup $\{\mathcal{U}(t)\}_{t\geq 0}$, and $\eta\in L(H)$. From Peszat and Zabczyk [@PZ Sect. 9.4], the unique mild solution of is given by $$\label{eq:tensor-heston-Y-process-mild} Y(t) = \mathcal{U}(t) Y_0 + \int_0^t \mathcal{U}(t-s)\eta\, dW(s),$$ for $t\geq 0$. The next lemma gives the characteristic functional of $Y(t)$. \[lemma:gaussian-Y\] For $f\in H$ we have $$\mathbb{E}\Big[\exp \left(\mathrm{i}\langle Y(t),f\rangle\right)\Big] = \exp \left(\mathrm{i}\langle\mathcal{U}(t)Y_0,f\rangle -\frac{1}{2}\langle\int_0^t\mathcal U(s)\eta Q_W\eta^* \mathcal{U}^*(s)\,ds f,f\rangle\right),$$ where the integral on the right-hand side is the Bochner integral on $L(H)$. 
From the mild solution of $\{Y(t)\}_{t\geq 0}$ in , we find $$\begin{aligned} \langle Y(t),f\rangle&=\langle\mathcal U(t) Y_0,f\rangle+\langle\int_0^t\mathcal U(t-s)\eta\,dW(s),f\rangle \\ &=\langle\mathcal U(t)Y_0,f\rangle+\int_0^t\langle\eta^*\mathcal U^*(t-s) f,dW(s)\rangle.\end{aligned}$$ Hence, from the Gaussianity and independent increment property of the Wiener process, $$\begin{aligned} {\mathbb{E}}\left[\exp\left(\mathrm{i}\langle Y(t),f\rangle\right)\right]=\exp\left(\mathrm{i}\langle\mathcal U(t) Y_0,f\rangle-\frac12\int_0^t\langle \mathcal U(t-s)\eta Q_W\eta^*\mathcal U^*(t-s)f,f\rangle\,ds\right).\end{aligned}$$ As $\{\mathcal U(t)\}_{t\geq 0}$ is a $C_0$-semigroup, its operator norm satisfies an exponential growth bound in time by the Hille-Yoshida Theorem (see Engel and Nagel [@EN Prop. I.5.5]). Hence, the Bochner integral $\int_0^t\mathcal U(s)\eta Q_W\eta^*\mathcal U^*(s)\,ds$ is well-defined, and the result follows. From the lemma above we conclude that $\{Y(t)\}_{t\geq 0}$ is an $H$-valued Gaussian process with mean $\mathcal{U}(t)Y_0$ and covariance operator $$Q_{Y(t)}=\int_0^t\mathcal{U}(s)\eta\mathcal{Q}_W\eta^*\mathcal{U}(s)^*\,ds.$$ Following Applebaum [@Apple], $\{Y(t)\}_{t\geq 0}$ admits an invariant Gaussian distribution with zero mean if the $C_0$-semigroup $\{\mathcal U(t)\}_{t\geq 0}$ is exponentially stable. The covariance operator for the invariant mean zero Gaussian distribution of $\{Y(t)\}_{t\geq 0}$ then becomes $$Q_Y=\int_0^{\infty}\mathcal{U}(s)\eta\mathcal{Q}_W\eta^*\mathcal{U}(s)^*\,ds.$$ We define the *tensor Heston stochastic variance process* $\{{\mathcal{V}}(t)\}_{t\geq 0}$ by $${\mathcal{V}}(t):=Y(t)^{{\otimes}2},$$ where we recall the tensor product to be $f{\otimes}g:={\langle}f,\cdot\,{\rangle}g$ for $f,g\in H$. By the Cauchy-Schwartz inequality, it follows straightforwardly that $f{\otimes}g\in L(H)$. Hence, the tensor Heston stochastic variance process $\{\mathcal V(t)\}_{t\geq 0}$ defines an $\mathcal F_t$-adapted stochastic process in $L(H)$. The next proposition shows that $\{\mathcal V(t)\}_{t\geq 0}$ defines a family of symmetric, positive definite Hilbert-Schmidt operators. \[prop:Vsymposdef\] It holds that ${\mathcal{V}}(t)\in \mathcal{H}$ for all $t\geq 0$. Furthermore, ${\mathcal{V}}(t)$ is a symmetric and positive definite operator. Let $\{e_n\}_{n=1}^{\infty}$ be an orthonormal basis (ONB) of $H$. By Parseval’s identity applied twice, $$\begin{aligned} \|\mathcal{V}(t)\|^2&=\sum_{n=1}^{\infty}|\mathcal{V}(t)e_n|^2 =\sum_{n=1}^{\infty}|Y^{\otimes 2}(t)e_n|^2 =\sum_{n=1}^{\infty}|Y(t)|^2{\langle}Y(t),e_n{\rangle}^2 =|Y(t)|^4.\end{aligned}$$ Since $Y(t)\in H$ for every $t\geq 0$, the first conclusion of the proposition follows. We find for $f,g\in H$ that $${\langle}\mathcal{V}(t)f,g{\rangle}={\langle}{\langle}Y(t),f{\rangle}Y(t),g{\rangle}={\langle}Y(t),f{\rangle}{\langle}Y(t),g{\rangle}= {\langle}f,{\langle}Y(t),g{\rangle}Y(t){\rangle}={\langle}f,\mathcal{V}(t)g{\rangle}.$$ Moreover, with $f=g$, $${\langle}\mathcal{V}(t)f,f{\rangle}={\langle}Y(t),f{\rangle}^2\geq 0.$$ This proves the second part. The proposition shows that $\|\mathcal V(t)\|=|Y(t)|^2$ for all $t\geq 0$. 
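A finite-dimensional sketch may help to illustrate the construction; it is not part of the analysis above. Truncating $H$ to $n$ coordinates, $Y$ can be simulated with an Euler-Maruyama scheme and the identity $\|\mathcal V(t)\|=|Y(t)|^2$ checked numerically. The matrices standing in for $\mathcal A$, $\eta$ and $Q_W$ below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truncation of H to n coordinates; A, eta, Q_W are illustrative choices.
n, dt, n_steps = 5, 1e-3, 1000
A = -np.eye(n)                                   # generator of a stable semigroup
eta = np.eye(n)
Q_W = np.diag(1.0 / np.arange(1, n + 1) ** 2)    # decaying eigenvalues, mimicking a trace class operator
L = np.linalg.cholesky(Q_W)

Y = np.ones(n)                                    # Y_0
for _ in range(n_steps):
    dW = np.sqrt(dt) * (L @ rng.standard_normal(n))   # increment with covariance Q_W dt
    Y = Y + dt * (A @ Y) + eta @ dW                   # Euler-Maruyama step for dY = A Y dt + eta dW

V = np.outer(Y, Y)                                # tensor Heston variance V(t) = Y(t) (x) Y(t)

# ||V|| (Hilbert-Schmidt / Frobenius norm) equals |Y|^2, and V is symmetric positive semidefinite.
print(np.isclose(np.linalg.norm(V), np.linalg.norm(Y) ** 2))
print(np.allclose(V, V.T), np.all(np.linalg.eigvalsh(V) >= -1e-12))
```

In this truncation $\mathcal V(t)=Y(t)^{\otimes 2}$ is simply the outer product $Y(t)Y(t)^{\top}$, and the Hilbert-Schmidt norm reduces to the Frobenius norm.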
The Gaussianity of the process $\{Y(t)\}_{t\geq 0}$ implies that the real-valued stochastic process $\{\|\mathcal V(t)\|\}_{t\geq 0}$ has finite exponential moments up to a certain order: It holds that $$\mathbb E[\exp(\theta\|\mathcal V(t)\|)]\leq\frac{{\text{e}}^{2\theta|\mathcal U(t)Y_0|^2}}{\sqrt{1-4\theta k}}$$ for $0\leq\theta\leq 1/4k$ and $k=\mathbb E[|\int_0^t\mathcal U(t-s)\eta\,dW(s)|^2]<\infty$. From Prop. \[prop:Vsymposdef\], $\|\mathcal V(t)\|=|Y(t)|^2$, and then by the triangle inequality $$\|\mathcal V(t)\|\leq 2|Y(t)-\mathcal U(t)Y_0|^2+2|\mathcal U(t)Y_0|^2.$$ From the mild solution of $Y(t)$ in , $$Y(t)-\mathcal U(t)Y_0=\int_0^t\mathcal U(t-s)\eta\,dW(s),$$ which is a centered Gaussian random variable. Hence, Fernique’s Theorem (see Fernique [@Fernique] or Thm. 3.31 in Peszat and Zabczyk [@PZ]) implies that $k=\mathbb E[|\int_0^t\mathcal U(t-s)\eta\,dW(s)|^2]<\infty$ and $$\mathbb E\left[\exp\left(\theta\|\mathcal V(t)\|\right)\right]\leq {\text{e}}^{2\theta|\mathcal U(t)Y_0|^2}\mathbb E\left[\exp\left(2\theta|\int_0^t\mathcal U(t-s)\eta\,dW(s)|^2\right)\right]\leq {\text{e}}^{2\theta|\mathcal U(t)Y_0|^2}\frac1{\sqrt{1-4\theta k}}$$ for $0\leq \theta\leq 1/4k$. From this lemma we can conclude that all moments of the real-valued random variable $\|\mathcal V(t)\|$ are finite, as $\|\mathcal V(t)\|\leq\exp(s\|\mathcal V(t)\|)$ for arbitrary small $s>0$. If $f,g\in H$, then we see that $$\langle\langle\mathcal V(t),f\otimes g\rangle\rangle=\langle Y(t),f\rangle\langle Y(t),g\rangle.$$ Recalling Lemma \[lemma:gaussian-Y\], $Y(t;f):=\langle Y(t),f\rangle$ is normally distributed with mean $\mathbb E[Y(t;f)]=\langle\mathcal U(t)Y_0,f\rangle$ and variance $v(f):=\textnormal{Var}(Y(t;f))=\int_0^t |Q^{1/2}_W\eta^*\mathcal U^*(s)f|^2\,ds$. Moreover, $$\begin{aligned} c(f,g):&=\textnormal{Cov}(Y(t;f),Y(t;g)) \\ &=\mathbb E\left[\langle\int_0^t\mathcal U(t-s)\eta\,dW(s),f\rangle\langle\int_0^t\mathcal U(t-s)\eta\,dW(s),g\rangle\right] \\ &=\int_0^t\langle Q^{1/2}_W\eta^*\mathcal U^*(s)f,Q^{1/2}_W\eta^*\mathcal U^*(s) g\rangle\,ds.\end{aligned}$$ A straightforward (but tedious) calculation reveals that the characteristic functional of $\mathcal V(t)$ evaluated at $f\otimes g$ becomes, $$\mathbb E\left[{\text{e}}^{\mathrm{i}\langle\langle\mathcal V(t),f\otimes g\rangle\rangle}\right]=\left(1+v(f)v(g)-c^2(f,g)-2\mathrm{i}c(f,g)\right)^{-1/2},$$ where we have assumed $Y_0=0$ for simplicity. This characteristic functional is related to a noncentral $\chi^2$-distribution with one degree of freedom. We recall that the variance process in the classical Heston model has a noncentral $\chi^2$-distribution (see Heston [@H]). Since ${\mathcal{V}}(t)$ is symmetric and positive definite, we can define its square root ${\mathcal{V}}^{1/2}(t)$, which turns out to have an explicit expression. The square root process of $\{\mathcal V(t)\}_{t\geq 0}$ is given by $${\mathcal{V}}^{1/2}(t)=\left\{\begin{array}{cl}|Y(t)|^{-1}{\mathcal{V}}(t), & Y(t)\neq 0 \\ 0, & Y(t)=0.\end{array}\right.$$ If $Y(t)=0$, it follows that $\mathcal V(t)=0$, and thus also $\mathcal V^{1/2}(t)=0$. Assume $Y(t)\neq 0$. Let $f\in H$. Define $\mathcal{T}(t)=|Y(t)|^{-1}\mathcal{V}(t)$, which is symmetric and positive definite by Prop. \[prop:Vsymposdef\]. 
Then, $$\mathcal{T}^2(t)f=\mathcal{T}(t)(|Y(t)|^{-1}\mathcal{V}(t)f)=|Y(t)|^{-1}\mathcal{T}(t)(\mathcal{V}(t)f)=|Y(t)|^{-2}\mathcal{V}^2(t)f\,.$$ But, $$\begin{aligned} \mathcal{V}^2(t)f &= \mathcal{V}(t)(Y^{\otimes 2}(t)f) ={\langle}Y(t),f{\rangle}_H\mathcal{V}(t)Y(t) ={\langle}Y(t),f{\rangle}_H {\langle}Y(t),Y(t){\rangle}_H Y(t) =|Y(t)|^2\mathcal{V}(t)f\,.\end{aligned}$$ Hence, $\mathcal{T}^2(t)=\mathcal{V}(t)$, and the result follows. Consider for a moment the operator $\mathcal F:H\rightarrow\mathcal H$ defined as $f\mapsto\mathcal F(f):=|f|^{-1}f^{{\otimes}2}$ for $f\neq 0$ and $\mathcal F(0)=0$. \[lem:prel-F\] The operator $\mathcal F:H\rightarrow\mathcal H$ is locally Lipschitz continuous. It holds for $f\neq 0$ that $$\|\mathcal F(f)\|=|f|^{-1}\|f^{{\otimes}2}\|=|f|^{-1}|f|^2=|f|.$$ Hence, if $f\rightarrow0$ in $H$, then $\mathcal F(f)\rightarrow 0$ in $\mathcal H$, so $\mathcal F$ is continuous in zero. Next, suppose that $f,g\in H$ are both non-zero. Then, by a simple application of the triangle inequality and its reverse, $$\begin{aligned} \|\mathcal F(f)-\mathcal F(g)\|&=\| |f|^{-1}f^{{\otimes}2}-|g|^{-1}g^{{\otimes}2}\| \\ &\leq \left| |f|^{-1}-|g|^{-1}\right| \|f^{{\otimes}2}\|+|g|^{-1}\|f^{{\otimes}2}-g^{{\otimes}2}\| \\ &= |f|^{-1}|g|^{-1}\left| |f|-|g|\right| |f|^2+|g|^{-1}\|f^{{\otimes}2}-g^{{\otimes}2}\| \\ &\leq |f||g|^{-1}|f-g|+|g|^{-1}\|f^{{\otimes}2}-g^{{\otimes}2}\|.\end{aligned}$$ Again, by triangle inequality and the elementary inequality $(x+y)^2\leq 2x^2+2y^2$, we find for an ONB $\{e_n\}_{n\in\mathbb N}$ in $H$, $$\begin{aligned} \|f^{{\otimes}2}-g^{{\otimes}2}\|^2&=\sum_{n=1}|(f^{{\otimes}2}-g^{{\otimes}2})e_n|^2 \\ &=\sum_{n=1}^{\infty}|\langle f,e_n\rangle f-\langle g,e_n\rangle g|^2 \\ &\leq 2\sum_{n=1}^{\infty}\langle f,e_n\rangle^2|f-g|^2+2\sum_{n=1}^{\infty}|g|^2\langle f-g,e_n\rangle^2 \\ &=2|f-g|^2|f|^2+2|g|^2|f-g|^2.\end{aligned}$$ Therefore, $$\|\mathcal F(f)-\mathcal F(g)\|\leq\left(|f||g|^{-1}+\sqrt{2}(1+|f| |g|^{-1})^{1/2}\right)|f-g|$$ which shows locally Lipschitz continuity of $\mathcal F$. By a result of Kotelenez [@Kotelenez] (see Peszat and Zabczyk [@PZ Thm. 9.20]), there exists a continuous version of $\{Y(t)\}_{t\geq 0}$ in if the $C_0$-semigroup $\{\mathcal U(t)\}_{t\geq 0}$ is quasi-contractive, that is, if for some constant $\beta$, $\|\mathcal U(t)\|_{\text{op}}\leq {\text{e}}^{\beta t}$ for all $t\geq 0$. Thus, from Lemma \[lem:prel-F\], we can conclude that there exists a version of $\{\mathcal V^{1/2}(t)\}_{t\geq 0}$ (namely defined by the version of $\{Y(t)\}_{t\geq 0}$ with continuous paths) which has continuous paths in $\mathcal H$, when $\{\mathcal U(t)\}_{t\geq 0}$ is quasi-contractive. We remark that $|Y(t)|>0$ $a.s.$ This holds true since by Parseval’s identity $$|Y(t)|^2=\sum_{n=1}^{\infty}{\langle}Y(t),e_n{\rangle}^2\,,$$ for $\{e_n\}_{n=1}^{\infty}$ an ONB of $H$. By Lemma \[lemma:gaussian-Y\], ${\langle}Y(t),e_n{\rangle}$ is a Gaussian random variable for all $n$, and $P\left({\langle}Y(t),e_n{\rangle}=0\right)=0$. If $|Y(t)|=0$, then we must have ${\langle}Y(t),e_n{\rangle}^2=0$ for all $n$. But this happens with probability zero, and it follows that $P(|Y(t)|=0)=0$. We move our attention to a Cholesky-type of decomposition of the tensor Heston stochastic variance process $\{\mathcal V(t)\}_{t\geq 0}$. To this end, introduce an $\mathcal F_t$-adapted $H$-valued stochastic process $\{Z(t)\}_{t\geq 0}$ for which $|Z(t)|=1$, i.e., a process living on the unit ball of $H$. 
Define the operator $\Gamma_Z(t) :H\rightarrow H$ for $t\geq 0$ by $$\label{def:gammaZ} \Gamma_Z(t) := Z(t){\otimes}Y(t). $$ The following lemma collects the elementary properties of this operator-valued process. \[lemma:gamma-prop\] It holds that $\{\Gamma_Z(t)\}_{t\geq 0}$ is an $\mathcal F_t$-adapted stochastic process with values in $\mathcal H$. By definition $\Gamma_Z(t)$ becomes a linear operator, where boundedness follows readily from the Cauchy-Schwartz inequality. For an ONB $\{e_n\}_{n\in\mathbb N}$ in $H$, we have from Parseval’s identity $$\| \Gamma_Z(t)\|^2 = \sum_{n=1}^{\infty}|\Gamma_Z(t) e_n|^2 = \sum_{n=1}^{\infty} |\langle Z(t),e_n\rangle Y(t)|^2 = |Y(t)|^2 |Z(t)|^2=|Y(t)|^2< \infty.$$ The $\mathcal F_t$-measurability follows by assumption on $Z(t)$ and definition of $Y(t)$. The proof is complete. We notice that with the convention $0/0=1$, we can define $Z(t)=Y(t)/|Y(t)|$ and recover $\mathcal V^{1/2}(t)=\Gamma_{Z}(t)$ for all $t\geq 0$ such that $Y(t)\neq 0$. We show that for general unitary processes, $\{\Gamma_Z(t)\}_{t\geq 0}$ defines a Cholesky decomposition of the tensor Heston stochastic variance process: \[prop:tensor-heston-cholesky\] The tensor Heston stochastic variance process $\{{\mathcal{V}}(t)\}_{t\geq 0}$ can be decomposed as $$\begin{aligned} {\mathcal{V}}(t) &= \Gamma_Z(t) \Gamma^*_Z(t), \end{aligned}$$ for all $t\geq 0$, where $\Gamma_Z^*(t)=Y(t){\otimes}Z(t)$. Since, for any $f,g\in H$ and $t\geq 0$, $$\begin{aligned} \langle\Gamma^*_Z(t)f,g\rangle&= \langle f,\Gamma_Z(t) g\rangle=\langle f,\langle Z(t), g\rangle Y(t)\rangle=\langle Z(t), g\rangle\langle f, Y(t)\rangle = \langle\langle f, Y(t)\rangle Z(t), g\rangle, \end{aligned}$$ we have that $\Gamma_Z^*(t)=Y(t){\otimes}Z(t)$. It follows that for any $f\in H$, $$\begin{aligned} \Gamma_Z(t)\Gamma_Z^*(t)f &= \Gamma_Z(t) ( \langle Y(t), f\rangle Z(t))=\langle Y(t),f\rangle|Z(t)|^2Y(t) =Y^{{\otimes}2}(t)(f)=\mathcal V(t)(f).\end{aligned}$$ The result follows. A simple choice of an $H$-valued stochastic process $\{Z(t)\}_{t\geq 0}$ is $Z(t)=\gamma$, where $\gamma\in H$ with $|\gamma|=1$. Ornstein-Uhlenbeck process with stochastic volatility ===================================================== Define the $H$-valued Ornstein-Uhlenbeck process $\{X(t)\}_{t\geq 0}$ by $$\label{X} dX(t) = \mathcal C X(t)dt + \Gamma_Z(t) dB(t), \mspace{40mu} X(0)=X_0 \in H,$$ where $\mathcal C$ is a densely defined operator on $H$ generating a $C_0$-semigroup $\mathcal{S}$, and $\{B(t)\}_{t\geq 0}$ is a Wiener process in $H$ with covariance operator $Q_B\in L(H)$ (i.e., $Q_B$ is a symmetric and positive definite trace class operator). We assume that $\{B(t)\}_{t\geq 0}$ is independent of $\{W(t)\}_{t\geq 0}$ and recall $\{\Gamma_Z(t)\}_{t\geq 0}$ from . The next lemma validates the existence of the stochastic integral in : \[lem:finite-integral-norm\] It holds that $$\begin{aligned} \mathbb{E}\left[\int_0^t \lVert \Gamma_Z(s)Q_B^{1/2} \rVert^2 ds\right] &\leq \mathrm{Tr}(Q_B)\,\mathbb{E}\left[\int_0^t | Y(s)|^2 ds \right]<\infty.\end{aligned}$$ Let $\{e_n\}_{n\in\mathbb N}$ be an ONB in $H$. By Parseval’s identity, we have $$\begin{aligned} \lVert \Gamma_Z(s) Q_B^{1/2}\rVert^2 &= \sum_{n=1}^{\infty}|\Gamma_Z(s) Q_B^{1/2} e_n|^2 \\ &= \sum_{n=1}^{\infty}|{\langle}Z(t),Q_B^{1/2}e_n{\rangle}_H Y(s)|^2 \\ &= |Y(s)|^2 \sum_{n=1}^{\infty} {\langle}Z(t),Q_B^{1/2}e_n{\rangle}^2 \\ &= |Y(s)|^2 \sum_{n=1}^{\infty} {\langle}Q_B^{1/2}Z(t),e_n{\rangle}^2 \\ &= |Y(s)|^2 |Q_B^{1/2}Z(t)|^2. 
$$ As $Q_B$ is a symmetric, positive definite trace class operator, we can find an ONB of eigenvectors $\{v_n\}_{n\in\mathbb N}$ in $H$ with corresponding positive eigenvalues $\{\lambda_n\}_{n\in\mathbb N}$ of $Q_B$, such that $Q_Bv_n=\lambda_nv_n$ for all $n\in\mathbb N$ and therefore $\text{Tr}(Q_B)=\sum_{n=1}^{\infty}\lambda_n<\infty$. We have by Parseval’s identity and Cauchy-Schwartz’ inequality, $$\begin{aligned} |Q_B^{1/2}Z(t)|^2&=\langle Q_B^{1/2}Z(t),Q_B^{1/2}Z(t)\rangle \\ &=\langle Q_BZ(t),Z(t)\rangle \\ &=\sum_{n=1}^{\infty}\langle Z(t),Q_B v_n\rangle\langle Z(t),v_n\rangle \\ &=\sum_{n=1}^{\infty}\lambda_n\langle Z(t),v_n\rangle^2 \\ &\leq\sum_{n=1}^{\infty}\lambda_n|Z(t)|^2|v_n|^2 \\ &=\text{Tr}(Q_B),\end{aligned}$$ since by assumption $|Z(t)|=|v_n|=1$. Next we show that ${\mathbb{E}}[\int_0^t|Y(s)|^2\,ds]<\infty$. From the expression in , it follows from an elementary inequality that $$\begin{aligned} {\mathbb{E}}\left[|Y(t)|^2\right]&={\mathbb{E}}\left[|\mathcal U(t)Y_0+\int_0^t\mathcal U(t-s)\eta\,dW(s)|^2\right] \\ &\leq 2|\mathcal U(t)Y_0|^2+2{\mathbb{E}}\left[|\int_0^t\mathcal U(t-s)\eta\,dW(s)|^2\right] \\ &=2|\mathcal U(t)Y_0|^2+2\int_0^t\|\mathcal U(t-s)\eta Q_W^{1/2}\|^2\,ds,\end{aligned}$$ where the last equality is a consequence of the Itô isometry. The Hille-Yoshida Theorem (see Engel and Nagel [@EN Prop. I.5.5]) implies that $\|\mathcal U(t)\|_{\text{op}}\leq K\exp(w t)$ for constants $K>0$ and $w$. Thus, $$\begin{aligned} {\mathbb{E}}\left[|Y(t)|^2\right]&\leq 2K^2{\text{e}}^{2wt}|Y_0|^2+2\int_0^tK^2{\text{e}}^{2w(t-s)}\,ds\|\eta\|_{\text{op}}^2\|Q^{1/2}_W\|^2.\end{aligned}$$ Finally, we observe that $\|Q^{1/2}_W\|^2=\text{Tr}(Q_W)<\infty$, and hence the lemma follows. The integral $\int_0^t \Gamma_Z(s)dB(s)$ is well-defined, and therefore according to Peszat and Zabczyk [@PZ Sect. 9.4] has a unique mild solution given by $$\label{OU-mild-sol} X(t) = \mathcal{S}(t)X_0 + \int_0^t \mathcal{S}(t-s)\Gamma_Z(s)\,dB(s), \qquad t\geq 0.$$ We remark in passing that the stochastic integral is well-defined in since $\mathcal S(t)\in L(H)$ with an operator norm which is growing at most exponentially by the Hille-Yoshida Theorem (see Engel and Nagel [@EN Prop. I.5.5]). We analyse the characteristic functional of $\{X(t)\}_{t\geq 0}$. To this end, denote by $\{\mathcal F_t^Y\}_{t\geq 0}$ the filtration generated by $\{Y(t)\}_{t\geq 0}$. \[prop:ch-funct-OU\] Assume that the process $\{Z(t)\}_{t\geq 0}$ in $\{\Gamma_Z(t)\}_{t\geq 0}$ defined in is $\mathcal F_t^Y$-adapted. It holds for any $f\in H$ $$\begin{aligned} \mathbb{E}\left[{\text{e}}^{\mathrm{i}\left<X(t),f\right>}\right] &={\text{e}}^{\mathrm{i}\left<\mathcal{S}(t)X_0,f\right>}\mathbb{E}\left[\exp\left(-\frac{1}{2} \langle\int_0^t|Q_B^{1/2}Z(s)|^2\mathcal S(t-s)\mathcal V(s)\mathcal S^*(t-s)\,dsf,f\rangle\right)\right], \end{aligned}$$ where the $ds$-integral on the right-hand side is a Bochner integral in $L(H)$. With $f\in H$ we get from , $${\mathbb{E}}\left[{\text{e}}^{\mathrm{i}\langle X(t),f\rangle}\right]={\text{e}}^{\mathrm{i}\langle\mathcal S(t)X_0,f\rangle}{\mathbb{E}}\left[{\text{e}}^{\mathrm{i}\langle\int_0^t \mathcal S(t-s)\Gamma_Z(s)\,dB(s),f\rangle}\right].$$ Recall that $\{B(t)\}_{t\geq 0}$ and $\{W(t)\}_{t\geq 0}$ are independent. Since $\{Z(t)\}_{t\geq 0}$ is assumed $\mathcal F_t^Y$-adapted, we will have that $\{Z(t)\}_{t\geq 0}$ and $\{Y(t)\}_{t\geq 0}$, and therefore $\{\Gamma_Z(t)\}_{t\geq 0}$, are independent of $\{B(t)\}_{t\geq 0}$. 
By the tower property of conditional expectation and the Gaussianity of $\int_0^t\mathcal S(t-s)\Gamma_Z(s)\,dB(s)$ conditional on $\mathcal F_t^Y$, $$\begin{aligned} {\mathbb{E}}\left[{\text{e}}^{\mathrm{i}\langle\int_0^t \mathcal S(t-s)\Gamma_Z(s)\,dB(s),f\rangle}\right]&={\mathbb{E}}\left[{\mathbb{E}}\left[{\text{e}}^{\mathrm{i}\langle\int_0^t\mathcal S(t-s)\Gamma_Z(s)\,dB(s),f\rangle}\vert\mathcal F_t^Y\right]\right] \\ &={\mathbb{E}}\left[{\text{e}}^{-\frac12\int_0^t|Q_B^{1/2}\Gamma_Z^*(s)\mathcal S^*(t-s)f|^2\,ds}\right].\end{aligned}$$ Recalling from Prop. \[prop:tensor-heston-cholesky\], we have $\Gamma_Z^*(s)=Y(s){\otimes}Z(s)$. Hence, $$Q^{1/2}_B\Gamma_Z^*(s)(\mathcal S^*(t-s)f)=Q^{1/2}_B(\langle\mathcal S^*(t-s)f,Y(s)\rangle Z(s))=\langle Y(s), \mathcal S^*(t-s)f\rangle Q^{1/2}_BZ(s),$$ and $$\begin{aligned} |Q^{1/2}_B\Gamma_Z^*(s)\mathcal S^*(t-s)f|^2&=\langle Y(s),\mathcal S^*(t-s)f\rangle^2|Q^{1/2}_BZ(s)|^2 \\ &=\langle\mathcal V(s)\mathcal S^*(t-s)f,\mathcal S^*(t-s)f\rangle|Q^{1/2}_BZ(s)|^2 \\ &=\langle\mathcal S(t-s)\mathcal V(s)\mathcal S^*(t-s)f,f\rangle|Q^{1/2}_BZ(s)|^2.\end{aligned}$$ The proof is complete. From the proposition we see that $\{X(t)\}_{t\geq 0}$ is a Gaussian process conditional on $\mathcal F_t^Y$, with mean $\mathcal S(t)X_0$. The covariance operator $Q_{X(t)}$ of $X(t)$, defined by the relationship $$\langle Q_{X(t)}f,g\rangle={\mathbb{E}}\left[\langle X(t)-{\mathbb{E}}[X(t)],f\rangle\langle X(t)-{\mathbb{E}}[X(t)],g\rangle\right],$$ for $f,g\in H$, can be computed as follows: since $X(t)-{\mathbb{E}}[X(t)]=\int_0^t\mathcal S(t-s)\Gamma_Z(s)\,dB(s)$, and for a fixed $T>0$, the process $t\mapsto\int_0^t\mathcal S(T-s)\Gamma_Z(s)\,dB(s)\, t\leq T$ is an $\mathcal F_t$-martingale, it follows from Peszat and Zabczyk [@PZ Thm. 8.7 (iv)], $$\begin{aligned} &{\mathbb{E}}\left[\langle\int_0^t\mathcal S(T-s)\Gamma_Z(s)\,dB(s),f\rangle\langle\int_0^t\mathcal S(T-s)\Gamma_Z(s)\,dB(s),g\rangle\right] \\ &\qquad\qquad= {\mathbb{E}}\left[\left\langle\left(\int_0^t\mathcal S(T-s)\Gamma_Z(s)\,dB(s)\right)^{{\otimes}2}f,g\right\rangle\right] \\ &\qquad\qquad={\mathbb{E}}\left[\langle\int_0^t\mathcal S(T-s)\Gamma_Z(s)Q_B\Gamma_Z^*(s)\mathcal S^*(T-s)\,ds f,g\rangle\right] \\ &\qquad\qquad=\langle\int_0^t\mathcal S(T-s){\mathbb{E}}\left[\Gamma_Z(s)Q_B\Gamma_Z^*(s)\right]\mathcal S^*(T-s)\,ds f,g\rangle,\end{aligned}$$ for $t\leq T$. Now, let $T=t$, and we find $$Q_{X(t)}=\int_0^t\mathcal S(t-s){\mathbb{E}}\left[\Gamma_Z(s)Q_B\Gamma_Z^*(s)\right]\mathcal S^*(t-s)\,ds.$$ Note that $$\begin{aligned} \Gamma_Z(s)Q_B\Gamma_Z^*(s)f&=\Gamma_Z(s)Q_B(\langle Y(s),f\rangle Z(s)) \\ &=\langle Y(s),f\rangle\Gamma_Z(s)(Q_B Z(s)) \\ &=\langle Y(s),f\rangle\langle Z(s),Q_B Z(s)\rangle Y(s) \\ &=|Q^{1/2}_BZ(s)|^2\langle Y(s),f\rangle Y(s) \\ &=|Q^{1/2}_BZ(s)|^2\mathcal V(s) f\end{aligned}$$ for any $f\in H$. Thus we recover the covariance functional that we can read off from Prop. \[prop:ch-funct-OU\]; $$\label{eq:covar-X(t)} Q_{X(t)}=\int_0^t\mathcal S(t-s){\mathbb{E}}\left[|Q^{1/2}_BZ(s)|^2\mathcal V(s)\right]\mathcal S^*(t-s)\,ds.$$ By Peszat and Zabczyk [@PZ Thm. 
8.7 (iv)] and the zero expectation of the stochastic integral with respect to $W$, $$\begin{aligned} {\mathbb{E}}\left[\langle\mathcal V(t) f,g\rangle\right]&={\mathbb{E}}\left[\langle Y(t),f\rangle\langle Y(t),g\rangle\right] \\ &=\langle\mathcal U(t)Y_0,f\rangle\langle\mathcal U(t)Y_0,g\rangle+{\mathbb{E}}\left[\langle\left(\int_0^t\mathcal U(t-s)\eta\,dW(s)\right)^{{\otimes}2}f,g\rangle\right] \\ &=\langle(\mathcal U(t)Y_0)^{{\otimes}2}f,g\rangle+\langle\int_0^t\mathcal U(t-s)\eta Q_W\eta^*\mathcal U^*(t-s)\,ds f,g\rangle,\end{aligned}$$ for $f,g\in H$. Hence, in the particular case of $Z(t)=\gamma\in H$, with $|\gamma|=1$, we find that the covariance becomes $$Q_{X(t)}=\int_0^t\mathcal S(t-s)\left\{(\mathcal U(s)Y_0)^{{\otimes}2}+\int_0^s\mathcal U(u)\eta Q_W\eta^*\mathcal U^*(u)\,du\right\}\mathcal S^*(t-s)\,ds.$$ We next apply our Ornstein-Uhlenbeck process $\{X(t)\}_{t\geq 0}$ with tensor Heston stochastic volatility to the modelling of forward prices in commodity markets. For this purpose, we let $H$ be the Filipovic space $H_w$, which was introduced by Filipovic in [@filipovic]. For a measurable and increasing function $w:\mathbb{R}_+\rightarrow\mathbb{R}_+$ with $w(0)=1$ and $\int_0^{\infty} w^{-1}(x)dx<\infty$, the Filipovic space $H_w$ is defined as the space of absolutely continuous functions $f:\mathbb{R}_+\rightarrow\mathbb{R}$ such that $$|f|_w^2 := f(0)^2 + \int_0^\infty w(x)|f'(x)|^2dx <\infty.$$ Here, $f'$ denotes the weak derivative of $f$. The space $H_w$ is a separable Hilbert space with inner product ${\langle}\cdot,\cdot{\rangle}_w$ and associated norm $|\cdot|_w$. We let $\{X(t)\}_{t\geq 0}$ be defined as in , with $\mathcal C$ being the derivative operator $\partial/\partial x$. The derivative operator is densely defined on $H_w$ (see Filipovic [@filipovic]), with the left-shift operator $\{\mathcal S(t)\}_{t\geq 0}$ being its $C_0$-semigroup. For $f\in H_w$, the left-shift semigroup acts as $\mathcal{S}(t)f:= f(\cdot+t)\in H_w$. Furthermore, we let $\delta_x:H_w\rightarrow\mathbb{R}$ be the evaluation functional, i.e. for $f\in H_w$ and $x\in\mathbb{R}_+$, $\delta_x(f):= f(x)$. We have that $\delta_x\in H_w^*$, that is, the evaluation map is a linear functional on $H_w$. Letting $h_x\in H_w$ be given by $$h_x(y) = 1 + \int_0^{x\wedge y} \frac{1}{w(z)}\,dz, \quad y\in\mathbb{R}_+,$$ we have that $\delta_x = {\langle}\cdot,h_x{\rangle}_w$ (see Benth and Krühner [@BK-HJM]). The arbitrage-free price $F(t,T)$ at time $t$ of a contract delivering a commodity at a future time $T\geq t$ is modelled by $F(t,T):= \delta_{T-t}(X(t))=X(t,T-t)$ (see Benth and Krühner [@BK-HJM]). We adopt the Musiela notation and express the price in terms of [*time to delivery*]{} $x\geq 0$ rather than [*time of delivery*]{} $T$, letting $f(t,x):= F(t,t+x) = \delta_x(X(t))=X(t,x)$. The next corollary gives the covariance between two contracts with different times to delivery. For all $x,y\in\mathbb{R}_+$, we have $$\begin{aligned} \textnormal{Cov}\big(f(t,x),\, f(t,y)\big)=\mathbb E\left[\int_0^t|Q_B^{1/2}Z(s)|_w^2Y(s,x+t-s)Y(s,y+t-s)\,ds\right],\end{aligned}$$ where $Y(s,z)=\delta_z(Y(s))$ for $z\in\mathbb R_+$. Since $f(t,x)=\delta_x(X(t))=\langle X(t),h_x\rangle_w$, we find $$\textnormal{Cov}(f(t,x),f(t,y))=\textnormal{Cov}(\langle X(t),h_x\rangle_w,\langle X(t),h_y\rangle_w)=\langle Q_{X(t)}h_x,h_y\rangle_w,$$ with $Q_{X(t)}$ given in . 
Since $\mathcal S^*(t)h_x=h_{x+t}$, it follows that $$\begin{aligned} \langle\mathcal S(t-s)\mathcal V(s)\mathcal S^*(t-s) h_x,h_y\rangle_w&=\langle \mathcal S(t-s)\mathcal V(s)h_{x+t-s},h_y\rangle_w \\ &=\langle Y(s),h_{x+t-s}\rangle_w\langle\mathcal S(t-s)Y(s),h_y\rangle_w \\ &=\langle Y(s),h_{x+t-s}\rangle_w\langle Y(s),\mathcal S^*(t-s)h_y\rangle_w \\ &=Y(s,x+t-s)Y(s,y+t-s).\end{aligned}$$ The claim follows. The above corollary yields that the entire covariance structure between contracts with different times of maturity is determined by $Y$ only. We notice that the choice of $\{Z(t)\}_{t\geq 0}$ in the definition of $\{\Gamma_Z(t)\}_{t\geq 0}$ only scales the covariance. Consider the special case of $Z(t)=\gamma\in H_w$. Arguments similar to those used in the derivation of $Q_{X(t)}$ yield $$\begin{aligned} &\mathbb E\left[Y(s,x+t-s)Y(s,y+t-s)\right] \\ &\qquad=\langle\mathcal U(s) Y_0,h_{x+t-s}\rangle_w\langle\mathcal U(s)Y_0,h_{y+t-s}\rangle_w \\ &\qquad\qquad+ \mathbb E\left[\langle\int_0^s\mathcal U(s-u)\eta\,dW(u),h_{x+t-s}\rangle_w\langle\int_0^s\mathcal U(s-u)\eta\,dW(u),h_{y+t-s}\rangle_w\right] \\ &\qquad=\langle\mathcal U(s) Y_0,h_{x+t-s}\rangle_w\langle\mathcal U(s)Y_0,h_{y+t-s}\rangle_w +\mathbb E\left[\langle\left(\int_0^s\mathcal U(s-u)\eta\,dW(u)\right)^{\otimes 2}h_{x+t-s},h_{y+t-s}\rangle_w\right] \\ &\qquad=\langle\mathcal U(s) Y_0,h_{x+t-s}\rangle_w\langle\mathcal U(s)Y_0,h_{y+t-s}\rangle_w +\langle\int_0^s\mathcal U(u)\eta Q_W\eta^*\mathcal U^*(u)\,du\, h_{x+t-s},h_{y+t-s}\rangle_w.\end{aligned}$$ Thus, when $Z(t)=\gamma\in H_w$, we find the covariance to be $$\begin{aligned} \textnormal{Cov}(f(t,x),f(t,y))&=|Q_B^{1/2}\gamma|_w^2\int_0^t\delta_{y+t-s}(\mathcal U(s)Y_0)^{\otimes 2}\delta_{x+t-s}^*(1)\,ds \\ &\qquad+|Q_B^{1/2}\gamma|_w^2\int_0^t\delta_{y+t-s}\int_0^s\mathcal U(u)\eta Q_W\eta^*\mathcal U^*(u)\,du\,\delta^*_{x+t-s}(1)\,ds,\end{aligned}$$ since $\delta_x^*(1)=h_x$. The covariance of $f(t,x)$ and $f(t,y)$ is determined by the parameters of the $Y$-process, more specifically, its volatility $\eta$, the operator $\mathcal A$ (yielding a semigroup $\mathcal U$), the initial field $Y_0$ and the covariance operator $Q_W$ of the Wiener noise $W$ driving its dynamics. We also observe that the time integrals sample the parameters of $Y$ over the intervals $(x,x+t)$ and $(y,y+t)$ to build up the covariance of the field $\{f(t,z)\}_{z\in\mathbb R_+}$, not only taking the values at $x$ and $y$ into account. The case when $\mathcal{A}$ is bounded ====================================== In this section, we analyse the tensor Heston stochastic variance process when $\mathcal{A}$ in is a bounded operator. Moreover, we make a comparison with the classical Heston model on the real line (see Heston [@H]) and discuss its extensions. When $\mathcal A$ is bounded, the dynamics of $Y$ has a strong solution and we can compute the dynamics of ${\mathcal{V}}(t)$ by an infinite dimensional version of Itô’s formula. \[prop:V\_dynamics-Abounded\] Assume $\mathcal{A}$ is bounded. 
Then we have the following representation of ${\mathcal{V}}(t)$, $${\mathcal{V}}(t) = {\mathcal{V}}(0) + \int_0^t \Phi(s) ds + \int_0^t \Psi(s) dW(s), \qquad t\geq 0,$$ where $\{\Phi(t)\}_{t\geq 0}$ is the $\mathcal H$-valued process $$\begin{aligned} \Phi(s) &= \mathcal{A}Y(s){\otimes}Y(s) + Y(s){\otimes}\mathcal{A}Y(s)+\eta Q_W\eta^*,\end{aligned}$$ and $\{\Psi(t)\}_{t\geq 0}$ is the $L(H,\mathcal H)$-valued process $$\begin{aligned} \Psi(s)(\cdot) &= \eta(\cdot){\otimes}Y(s) + Y(s){\otimes}\eta(\cdot).\end{aligned}$$ When $\mathcal{A}$ is bounded, the unique strong solution of is given by $$Y(t) = Y_0 + \int_0^t \mathcal{A}Y(s)\,ds + \int_0^t \eta\, dW(s).$$ Define the function $v:H\rightarrow \mathcal{H}$ by $v(y):= y^{{\otimes}2}$ and observe that ${\mathcal{V}}(t)=v(Y(t))$. To use the infinite dimensional Itô formula by Curtain and Falb [@CF], we need to find the first- and second-order Fréchet derivatives of $v$. Define the functions $g_1:H\rightarrow L(H,\mathcal{H})$ and $g_2:H\rightarrow L(H,L(H,\mathcal{H}))$ by $$\begin{aligned} g_1(y)& := \cdot{\otimes}y + y{\otimes}\cdot \end{aligned}$$ and $$\begin{aligned} g_2(y)(h) &= h {\otimes}\cdot + \cdot {\otimes}h.\end{aligned}$$ A direct calculation reveals that $$\begin{aligned} \| v(y+h) - v(y) - g_1(y)h \|_{\mathcal{H}} &= \| (y+h) {\otimes}(y+h) - y{\otimes}y - (h{\otimes}y + y{\otimes}h) \| \\ &= \| y^{{\otimes}2} + y{\otimes}h + h{\otimes}y + h^{{\otimes}2} - y^{{\otimes}2} - h{\otimes}y - y{\otimes}h \| \\ &= \| h^{{\otimes}2} \| \\ &= |h|^2.\end{aligned}$$ Thus, we find $$\frac{\| v(y+h) - v(y) - g_1(y)h \|}{|h|} \leq \frac{|h|^2}{|h|} =|h|,$$ which converges to 0 when $|h|\rightarrow 0$. This shows that $g_1$ is the Fréchet derivative of $v$, which we denote by $Dv$. Next, for any $\xi\in H$, $$\begin{aligned} Dv(y+h)(\xi)&-Dv(y)(\xi)-g_2(y)(h)(\xi) \\ &=\xi{\otimes}(y+h)+(y+h){\otimes}\xi-\xi{\otimes}y-y{\otimes}\xi-h{\otimes}\xi-\xi{\otimes}h=0,\end{aligned}$$ which shows that $g_2$ is the Fréchet derivative of $Dv$, and hence the second-order Fréchet derivative of $v$, which we denote by $D^2 v$. It follows from the infinite dimensional Itô formula in Curtain and Falb [@CF] that $$\begin{aligned} d{\mathcal{V}}(t) &= \left(Dv(Y(t))(\mathcal{A}Y(t)) + \frac{1}{2} \sum_{n=1}^{\infty} D^2 v(Y(t))(\eta(\sqrt{\lambda_n} e_n))(\eta(\sqrt{\lambda_n} e_n))\right) dt + Dv(Y(t))\eta\,dW(t), \end{aligned}$$ where $\{e_n\}_{n\in\mathbb N}$ is an ONB of $H$ of eigenvectors of $Q_W$ with corresponding eigenvalues $\{\lambda_n\}_{n\in\mathbb N}$. Inserting $g_1(Y(t))$ and $g_2(Y(t))$ for $Dv(Y(t))$ and $D^2 v(Y(t))$, respectively, gives us $$\begin{aligned} d{\mathcal{V}}(t) =& \left(\mathcal{A}Y(t){\otimes}Y(t) + Y(t){\otimes}\mathcal{A}Y(t)\right)\,dt \\ &\qquad + \frac{1}{2} \sum_{n=1}^{\infty} \Big(\eta\big(\sqrt{\lambda_n} e_n\big){\otimes}\eta\big(\sqrt{\lambda_n} e_n\big) + \eta\big(\sqrt{\lambda_n} e_n\big){\otimes}\eta\big(\sqrt{\lambda_n} e_n\big)\Big)\,dt \\ &\qquad+ \Big(\eta(\cdot){\otimes}Y(t) + Y(t){\otimes}\eta(\cdot)\Big) dW(t) \\ =& \left(\mathcal{A}Y(t){\otimes}Y(t) + Y(t){\otimes}\mathcal{A}Y(t) + \sum_{n=1}^{\infty} \lambda_n\eta(e_n)^{{\otimes}2}\right)dt + \Psi(t) dW(t). 
\end{aligned}$$ For $\xi\in H$, we find $$\begin{aligned} \eta Q_W\eta^*(\xi)&=\eta Q_W\sum_{n=1}^{\infty}\langle\eta^*(\xi),e_n\rangle e_n \\ &=\sum_{n=1}^{\infty}\langle \xi,\eta(e_n)\rangle\eta Q_W(e_n) \\ &=\sum_{n=1}^{\infty}\lambda_n\langle \xi,\eta(e_n)\rangle\eta(e_n) \\ &=\sum_{n=1}^{\infty}\lambda_n\eta(e_n)^{{\otimes}2}(\xi).\end{aligned}$$ The proof is complete. We can formulate the dynamics of $\{\mathcal V(t)\}_{t\geq 0}$ as an operator-valued stochastic differential equation. \[prop:V-dynamics\] Assume that $\mathcal A$ is bounded. Then $$\begin{aligned} d\mathcal V(t)&=\left(\mathcal A\mathcal V(t)\mathcal A^*+\mathcal V(t)-(\mathcal A-\text{Id})\mathcal V(t)(\mathcal A^*-\text{Id})+\eta Q_W\eta^*\right)\,dt \\ &\qquad+|\eta(\cdot)|\left(\Gamma_{\eta(\cdot)/|\eta(\cdot)|}(t)+\Gamma^*_{\eta(\cdot)/|\eta(\cdot)|}(t)\right)\,dW(t)\end{aligned}$$ where $\text{Id}$ is the identity operator on $H$ and $\Gamma_Z(t)$ is the Cholesky decomposition of $\mathcal V(t)$ defined in . For $y,f\in H$, we see from a direct computation that $$\begin{aligned} ((\mathcal A-\text{Id})y)^{{\otimes}2}(f)&=\langle(\mathcal A-\text{Id})y,f\rangle(\mathcal A-\text{Id})y \\ &=\langle\mathcal A y,f\rangle\mathcal A y-\langle\mathcal A y,f\rangle y-\langle y,f\rangle \mathcal A y +\langle y,f\rangle y \\ &=(\mathcal A y)^{{\otimes}2}(f)-(\mathcal A y{\otimes}y)(f)-(y{\otimes}\mathcal A y)(f)+y^{{\otimes}2}(f),\end{aligned}$$ or, $$\mathcal A y{\otimes}y+y{\otimes}\mathcal A y=(\mathcal A y)^{{\otimes}2}+y^{{\otimes}2}-((\mathcal A-\text{Id})y)^{{\otimes}2}.$$ Next, for any bounded operator $\mathcal L\in L(H)$, we have from linearity of $\mathcal L$ that $$\begin{aligned} (\mathcal L y)^{{\otimes}2}(f)&=\langle\mathcal L y,f\rangle\mathcal L y =\langle y,\mathcal L^*f\rangle\mathcal L y =\mathcal L(\langle y,\mathcal L^*f\rangle y) =\mathcal L(y^{{\otimes}2}(\mathcal L^*f)) =\mathcal L y^{{\otimes}2}\mathcal L^*(f).\end{aligned}$$ Thus, $$\mathcal A y{\otimes}y+y{\otimes}\mathcal A y=\mathcal A y^{{\otimes}2}\mathcal A^*+y^{{\otimes}2} -(\mathcal A-\text{Id}) y^{{\otimes}2} (\mathcal A^*-\text{Id}).$$ With $y=Y(t)$ and recalling the definition of $\Phi(t)$ in Prop. \[prop:V\_dynamics-Abounded\], this shows the drift of $\mathcal V(t)$. For $\xi,f\in H$, it holds that $$\Psi(t)(f)(\xi)=|\eta(f)|\left(\frac{\eta(f)}{|\eta(f)|}{\otimes}Y(t)+Y(t){\otimes}\frac{\eta(f)}{|\eta(f)|}\right)(\xi)$$ whenever $\eta(f)\neq 0$, with $\Psi(t)$ defined in Prop. \[prop:V\_dynamics-Abounded\]. The result follows. Recall from Prop. \[prop:tensor-heston-cholesky\] that $\mathcal V(t)=\Gamma_Z(t)\Gamma^*_Z(t)$. Hence, for any $f\in H$, $$\Gamma_{\eta(\cdot)/|\eta(\cdot)|}(t)\Gamma^*_{\eta(\cdot)/|\eta(\cdot)|}(t)(f)= \Gamma_{\eta(f)/|\eta(f)|}(t)\Gamma^*_{\eta(f)/|\eta(f)|}(t)(f)=\mathcal V(t)(f).$$ Informally, we can say that the diffusion term of $\{\mathcal V(t)\}_{t\geq 0}$ is given as the sum of the "square root" of $\{\mathcal V(t)\}_{t\geq 0}$ and its adjoint. Let us consider our tensor Heston stochastic variance process in the particular case of finite dimensions, that is, $H={\mathbb{R}}^d$ for $d\in\mathbb N$. We assume $\{W(t)\}_{t\geq 0}$ is a $d$-dimensional standard Brownian motion, and the $d$-dimensional stochastic process $\{Y(t)\}_{t\geq 0}$ is defined by the dynamics with $\mathcal A,\eta\in {\mathbb{R}}^{d\times d}$. It is straightforward to see that for any $x,y\in {\mathbb{R}}^d$, $x{\otimes}y=xy^{\top}$, where $y^{\top}$ means the transpose of $y$. Hence, $\mathcal V(t)=Y^{{\otimes}2}(t)=Y(t)Y^{\top}(t)$. 
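To make this finite-dimensional special case concrete, the following minimal Python sketch (purely illustrative; the matrices $\mathcal A$, $\eta$ and all numerical values are assumptions, not taken from the text) simulates the factor $Y$ with an Euler–Maruyama scheme and forms $\mathcal V(t)=Y(t)Y^{\top}(t)$, which is positive semidefinite by construction.

```python
import numpy as np

# Illustrative Euler-Maruyama simulation of the finite-dimensional case:
# dY(t) = A Y(t) dt + eta dW(t) in R^d, with V(t) = Y(t) Y(t)^T.
# All parameter values below are placeholder assumptions.
rng = np.random.default_rng(0)
d, T, n_steps = 3, 1.0, 1000
dt = T / n_steps

A = -0.5 * np.eye(d)      # mean-reverting drift matrix (assumption)
eta = 0.2 * np.eye(d)     # volatility matrix of the factor Y (assumption)
Y = np.ones(d)            # initial factor Y_0 (assumption)

for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=d)  # increments of a standard BM, Q_W = I
    Y = Y + A @ Y * dt + eta @ dW

V = np.outer(Y, Y)        # tensor Heston variance V(T) = Y(T) Y(T)^T (rank one)
print("smallest eigenvalue of V(T):", np.linalg.eigvalsh(V).min())  # >= 0 up to round-off
```

Each $\mathcal V(t)$ obtained in this way has rank one, mirroring the rank-one tensor $Y^{{\otimes}2}(t)$; a higher-rank variance matrix would require summing several independent factors of this type.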
Moreover, if $x\in {\mathbb{R}}^d$, $$\Psi(t)(x)=(\eta x){\otimes}Y(t)+Y(t){\otimes}(\eta x)=\eta xY^{\top}(t)+Y(t) x^{\top}\eta^{\top},$$ and $$\mathcal A Y(t){\otimes}Y(t)+Y(t){\otimes}\mathcal A Y(t)=\mathcal A Y(t) Y^{\top}(t)+Y(t)(\mathcal A Y(t))^{\top}=\mathcal A\mathcal V(t)+\mathcal V(t)\mathcal A^{\top}.$$ Hence, since $Q_W=I$, the $d\times d$ identity matrix, we find from Prop. \[prop:V\_dynamics-Abounded\] that $$\label{finite-dim-tensor-Heston} d\mathcal V(t)=\left(\eta\eta^{\top}+\mathcal A\mathcal V(t)+\mathcal V(t)\mathcal A^{\top}\right)\,dt+\eta\,dW(t)\, Y^{\top}(t)+Y(t)\,dW^{\top}(t)\,\eta^{\top}.$$ These dynamics differ from those of the Wishart processes on ${\mathbb{R}}^{d\times d}$ defined by Bru [@Bru] and proposed as a multifactor extension of the Heston stochastic volatility model in Fonseca, Grasselli and Tebaldi [@FGT]. The drift term in the Wishart process is analogous to the one in , while the diffusion term in the Wishart process takes the form $$R\,d\bar{W}(t)\,\mathcal V^{1/2}(t)+\mathcal V^{1/2}(t)\,d\bar{W}^{\top}(t)\,R^{\top},$$ where $\{\bar{W}(t)\}_{t\geq 0}$ is a $d\times d$ matrix-valued Brownian motion and $R$ is a $d\times d$ matrix. Our tensor model in infinite dimensions yields a simplified diffusion in finite dimensions compared to the Wishart process of Bru [@Bru], which uses a Cholesky-type representation of the square root of $\mathcal V(t)$ that also involves the "volatility" $\eta$ of the Ornstein-Uhlenbeck dynamics of $Y$. To ensure a positive definite process, Bru [@Bru] introduces strong conditions on $\mathcal A$ and $R$, while our Heston model is positive definite by construction. Let us now shift the perspective slightly, going back to the general infinite dimensional situation, and study the projection of the $\mathcal H$-valued process $\{\mathcal V(t)\}_{t\geq 0}$ to the real line, in the sense of studying the process $\{\mathcal V(t)\}_{t\geq 0}$ expanded along a given element $f\in H$. To this end, for $f\in H$ introduce the linear functional $\mathcal L_f:\mathcal H\rightarrow{\mathbb{R}}$ by $$\begin{aligned} \mathcal{L}_f(\mathcal{T}) := {\langle}{\langle}\mathcal{T},f^{{\otimes}2}{\rangle}{\rangle}&= {\langle}\mathcal{T}(f),f{\rangle}.\end{aligned}$$ We note that for $h, g\in H$, $$\begin{aligned} \label{proj_tensor} \mathcal{L}_f(h{\otimes}g) = {\langle}{\langle}h{\otimes}g, f^{{\otimes}2}{\rangle}{\rangle}= {\langle}(h{\otimes}g)f, f{\rangle}= {\langle}h,f{\rangle}{\langle}g,f{\rangle},\end{aligned}$$ and, in particular, $\mathcal{L}_f(h^{{\otimes}2}) = {\langle}h,f{\rangle}^2$. We define the real-valued stochastic process $\{V(t;f)\}_{t\geq 0}$ as $$\label{eq:dyn-projected-V} V(t;f):=\mathcal L_f(\mathcal V(t))={\langle}Y(t),f{\rangle}^2,$$ for $t\geq 0$. It is immediate from the definition that $\{V(t;f)\}_{t\geq 0}$ is an $\mathcal F_t$-adapted process taking values in ${\mathbb{R}}_+$, the nonnegative real line. \[prop:V-dyn-realline\] Assume that $\mathcal{A}$ is bounded. Then the dynamics of $\{V(t;f)\}_{t\geq 0}$ defined in is $$\begin{aligned} dV(t;f)&=\left(V(t;f)+|Q^{1/2}_W\eta^*f|^2+\mathcal L_{\mathcal A^*f}({\mathcal{V}}(t))- \mathcal L_{(\mathcal A^*-\text{Id})f}({\mathcal{V}}(t))\right)\,dt\\ &\quad + 2|Q_W^{1/2} \eta^*f|\sqrt{V(t;f)}\, dw(t), \qquad t\geq 0,\end{aligned}$$ where $w(t)$ is a real-valued Wiener process. From Props. 
\[prop:V\_dynamics-Abounded\] and \[prop:V-dynamics\], we have $$\begin{aligned} dV(t;f)&=\left(\mathcal L_f(\mathcal A{\mathcal{V}}(t)\mathcal A^*)+V(t;f)- \mathcal L_f((\mathcal A-\text{Id}){\mathcal{V}}(t)(\mathcal A^*-\text{Id}))+\mathcal L_f(\eta Q_W\eta^*)\right)\,dt \\ &\qquad+\mathcal L_f\left(\Psi(t)\,dW(t)\right).\end{aligned}$$ First, $$\mathcal L_f(\eta Q_W\eta^*)={\langle}\eta Q_W\eta^* f,f{\rangle}=|Q^{1/2}_W\eta^*f|^2.$$ Next, $$\mathcal L_f(\mathcal A{\mathcal{V}}(t)\mathcal A^*)={\langle}\mathcal A{\mathcal{V}}(t)\mathcal A^*f,f{\rangle}={\langle}{\mathcal{V}}(t)\mathcal A^*f,\mathcal A^*f{\rangle}=\mathcal L_{\mathcal A^*f}({\mathcal{V}}(t)).$$ This proves the drift term of $\{V(t;f)\}_{t\geq 0}$. We finally consider the projection of the stochastic integral. From Thm. 2.1 in Benth and Krühner [@BK-HJM], $$\begin{aligned} \mathcal{L}_{f}\left(\int_0^t \Psi(s)dW(s)\right) &= \int_0^t \sigma(s;f) dw(s),\end{aligned}$$ where $\{w(t)\}_{t\geq 0}$ is a real-valued Wiener process and $\sigma(t;f) =|Q^{1/2}_W\gamma(t;f)|$ with $\{\gamma(t;f)\}_{t\geq 0}$ being the $H$-valued stochastic process defined by $\mathcal{L}_{f}(\Psi(t)(\cdot)) = {\langle}\gamma(t;f),\cdot{\rangle}$. Since $$\begin{aligned} \mathcal{L}_{f}(\Psi(t)(\cdot)) &= \mathcal{L}_{f}(\eta(\cdot){\otimes}Y(t)) + \mathcal{L}_{f}(Y(t){\otimes}\eta(\cdot))\\ &= {\langle}\eta(\cdot),f{\rangle}{\langle}Y(t),f{\rangle}+{\langle}Y(t),f{\rangle}{\langle}\eta(\cdot),f{\rangle}\\ &= 2 {\langle}\cdot,\eta^*f{\rangle}{\langle}Y(t),f{\rangle}\\ &= {\langle}\cdot,2{\langle}Y(t),f{\rangle}\eta^*f{\rangle},\end{aligned}$$ we have $\gamma(t;f) = 2{\langle}Y(t),f{\rangle}\eta^*f$. Observe in passing, recalling Lemma \[lem:finite-integral-norm\], that $\{\gamma(t;f)\}_{t\geq 0}$ is an $\mathcal F_t$-adapted stochastic process such that ${\mathbb{E}}[\int_0^t|\gamma(s;f)|^2\,ds]<\infty$ for any $t>0$, and thus $w$-integrable. The integrand $\sigma(t;f)$ is therefore given by $$\begin{aligned} \sigma^2(t;f) &= |Q^{1/2}_W\gamma(t;f)|^2=4{\langle}Y(t),f{\rangle}^2|Q^{1/2}_W\eta^*f|^2 =4\mathcal L_f({\mathcal{V}}(t))|Q^{1/2}_W\eta^*f|^2.\end{aligned}$$ Thus, $\sigma(t;f)=2\sqrt{V(t;f)}|Q^{1/2}_W\eta^*f|$ and the proof is complete. We see that the process $\{V(t;f)\}_{t\geq 0}$ shares some similarities with a classical real-valued Heston volatility model (see Heston [@H]). $\{V(t;f)\}_{t\geq 0}$ has a square-root diffusion term and a linear drift term. However, there are also some additional drift terms which are not expressible in $V(t;f)$. If $f\in H$ is an eigenvector of $\mathcal A^*$ with an eigenvalue $\lambda\in{\mathbb{R}}$, we find that $\mathcal L_{\mathcal A^*f}({\mathcal{V}}(t))=\lambda^2V(t;f)$ and $\mathcal L_{(\mathcal A^*-\text{Id})f}({\mathcal{V}}(t))=(\lambda-1)^2V(t;f)$, and hence by Prop. \[prop:V-dyn-realline\], $$dV(t;f)=\left(|Q^{1/2}_W\eta^*f|^2+2\lambda V(t;f)\right)\,dt+2|Q^{1/2}_W\eta^*f|\sqrt{V(t;f)}\,dw(t),$$ which corresponds to a classical Heston stochastic variance process. [XX]{} D. Applebaum (2015). Infinite dimensional Ornstein-Uhlenbeck processes driven by Lévy processes. [*Probab. Surveys*]{}, [**12**]{}, pp. 33–54. O. E. Barndorff-Nielsen and N. Shephard (2001). Non-Gaussian Ornstein-Uhlenbeck-based models and some of their uses in financial economics. [*J. Royal Stat. Soc.: Series B (Stat. Method.)*]{}, [**63**]{}, pp. 167–241. F. E. Benth and P. Krühner (2014). Representation of infinite-dimensional forward price models in commodity markets. [*Comm. Math. Statist.*]{}, [**2**]{}(1), pp. 47–106. F. E. 
Benth, B. Rüdiger and A. Süss (2015). Ornstein-Uhlenbeck processes in Hilbert space with non-Gaussian stochastic volatility. To appear in [*Stoch. Proc. Applic.*]{} Available on arXiv:1506.07245. M. F. Bru (1991). Wishart Processes. [*J. Theor. Probab.*]{}, [**4**]{}, pp. 725–743. R. F. Curtain and P. L. Falb (1970). Itô’s Lemma in infinite dimensions. [*J. Math. Analysis Appl.*]{}, [**31**]{}, pp. 434–448. K.-J. Engel and R. Nagel (2000). [*One-Parameter Semigroups for Linear Evolution Equations*]{}. Springer Verlag, New York. X. Fernique (1975). Régularité des trajectoires des fonctions aléatoires gaussiennes. In [*École d’Été de Probabilités de Saint-Flour, IV-1974*]{}, Vol. 480, Lecture Notes in Mathematics, Springer-Verlag, Berlin Heidelberg, pp. 1–96. D. Filipovic (2001). [*Consistency Problems for Heath-Jarrow-Morton Interest Rate Models*]{}. Springer Verlag, Berlin Heidelberg. J. da Fonseca, M. Grasselli and C. Tebaldi (2008). A multifactor volatility Heston model. [*Quant. Finance*]{}, [**8**]{}(6), pp. 591–604. D. Heath, R. Jarrow and A. Morton (1992). Bond pricing and the term structure of interest rates: a new methodology for contingent claims valuation. [*Econometrica*]{}, [**60**]{}, pp. 77–105. S. L. Heston (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options. [*Rev. Financial Studies*]{}, [**6**]{}(2), pp. 327–343. P. Kotelenez (1987). A maximal inequality for stochastic convolution integrals on Hilbert space and space-time regularity of linear stochastic partial differential equations. [*Stochastics*]{}, [**21**]{}, pp. 345–458. S. Peszat and J. Zabczyk (2007). [*Stochastic Partial Differential Equations with Lévy Noise*]{}. Cambridge University Press, Cambridge.
--- abstract: 'The proposed electron-proton collider LHeC is a unique facility where electroweak interactions can be studied with a very high precision in a largely unexplored kinematic regime of spacelike momentum transfer. We have simulated inclusive neutral- and charged-current deep-inelastic lepton proton scattering cross section data at center-of-mass energies of 1.2 and 1.3[$\mathrm{TeV}$]{}including their systematic uncertainties. Based on simultaneous fits of electroweak physics parameters and parton distribution functions, we estimate the uncertainties of Standard Model parameters as well as a number of parameters describing physics beyond the Standard Model, for instance the oblique parameters $S$, $T$, and $U$. An unprecedented precision at the sub-percent level is expected for the measurement of the weak neutral-current couplings of the light-quarks to the $Z$ boson, $g_{A/V}^{u/d}$, improving their present precision by more than an order of magnitude. The weak mixing angle can be determined with a precision of about $\Delta {\ensuremath{\sin^2\hspace*{-0.15em}\theta_W}}= \pm 0.00015$, and its scale dependence can be studied in the range between about $25$ and $700\,{\ensuremath{\mathrm{GeV}}\xspace}$. An indirect determination of the $W$-boson mass in the on-shell scheme is possible with an experimental uncertainty down to $\Delta{{\ensuremath{m_W}}}=\pm6\,{\ensuremath{\mathrm{MeV}}\xspace}$. We discuss how the uncertainties of such measurements in deep-inelastic scattering compare with those from measurements in the timelike domain, e.g. at the $Z$-pole, and which aspects of the electroweak interaction are unique to measurements at the LHeC, for instance electroweak parameters in charged-current interactions. We conclude that the LHeC will determine electroweak physics parameters, in the spacelike region, with unprecedented precision leading to thorough tests of the Standard Model and possibly beyond.' bibliography: - 'lhec\_ew.bib' --- MITP/20-038\ MPP-2020-110 [ **Electroweak Physics in Inclusive Deep Inelastic\ Scattering at the LHeC**]{} Daniel Britzger$^1$, Max Klein$^{2}$ and Hubert Spiesberger$^{3}$\ *$^1$  Max-Planck-Institut f[ü]{}r Physik, F[ö]{}hringer Ring 6, D-80805 M[ü]{}nchen, Germany\ $^2$  University of Liverpool, Oxford Street, UK-L69 7ZE Liverpool, United Kingdom\ $^3$  PRISMA$^+$ Cluster of Excellence, Institut f[ü]{}r Physik, Johannes-Gutenberg-Universit[ä]{}t, Staudinger Weg 7, D-55099 Mainz, Germany*
--- abstract: 'During the past decade $\eta$ Car has brightened markedly, possibly indicating a change of state. Here we summarize photometry gathered by the Hubble Space Telescope as part of the HST Treasury Project on this object. Our data include STIS/CCD acquisition images, ACS/HRC images in four filters, and synthetic photometry in flux-calibrated STIS spectra. The HST’s spatial resolution allows us to examine the central star separate from the bright circumstellar ejecta. [*Its apparent brightness continued to increase briskly during 2002–06, especially after the mid-2003 spectroscopic event.*]{} If this trend continues, the central star will soon become brighter than its ejecta, quite different from the state that existed only a few years ago. One precedent may be the rapid change observed in 1938–1953. We conjecture that the star’s mass-loss rate has been decreasing throughout the past century.' author: - 'J. C. Martin' - Kris Davidson - 'M. D. Koppelman' title: 'The Chrysalis Opens? Photometry from the Eta Carinae HST Treasury Project, 2002–2006' --- Introduction ============ Eta Carinae’s photometric record is unparalleled among well-studied objects, especially since it has been near or exceeded the classical Eddington limit during the past two or three centuries. From 1700 to 1800 it gradually brightened from 4th to 2nd magnitude, and then experienced its famous Great Eruption or “supernova impostor event” beginning about 1837. For twenty years it was one of the brightest stars in the sky, rapidly fluctuating between magnitudes 1.5 and 0.0, briefly attaining $V \approx -1.0$. After 1858 it faded below 7th magnitude, presumably enshrouded in the nascent Homunculus nebula. Subsequent behavior, however, has been more complex than one might have expected. A mysterious secondary eruption occurred in 1887–1900; then the apparent brightness leveled off around $m_{pg} \approx 8$ for about 40 years, followed by a rapid increase in 1938–53; after that it brightened at a fairly constant rate for another 40-year interval, and most recently the rate accelerated in the 1990’s. Some, but not all, of the secular brightening can be attributed to decreasing obscuration as the Homunculus nebula expands. However, in truth this is more complex than it appears. The star’s physical structure has been changing in a decidedly non-trivial way which is, at best, only dimly understood. For historical and observational details see @dh97 [@frew05; @dvauc52; @oconnell56; @feinstein67; @feinstein74; @aavso; @kdetal99a; @kdetal99b; @vangenderen99; @sterken99; @phot1]. Spectroscopic changes have occurred along with the brightness variations. The 5.5-year spectroscopic/photometric cycle [@gaviola53; @zanella84; @whitelock94; @whitelock04; @damineli96; @phot1] is not apparent in data obtained before the 1940’s [@feast01; @rmh05]. Brief “spectroscopic events” marking the cycle are most likely mass-ejection or wind-disturbance episodes, probably regulated by a companion star [@zanella84; @kd99; @smith03; @heii]. At visual wavelengths, the associated ephemeral brightness changes represent mainly emission lines in the stellar wind, while the longer-term secular brightening trend involves the continuum [@phot1; @martinconf]. @rmh05, @halpha05, and @kd05 have speculated that the four obvious disruptions in the photometric record – c. 1843, 1893, 1948, and 2000 – might indicate a quasi-periodicity of the order of 50 years.[^1] In any case the star has not yet recovered from its Great Eruption seen 160 years ago. 
The Hubble Space Telescope (HST) Treasury Program for $\eta$ Car was planned specifically to study the 2003.5 spectroscopic event. We employed the Space Telescope Imaging Spectrograph (STIS) and Advanced Camera for Surveys (ACS), following earlier STIS observations that began in 1998 [@etatp04]. Fortuitously, the STIS data almost coincide with a rapid secular brightening which began shortly before 1998 (see Section 4 below). Those and the ACS images are of unique photometric value for at least two reasons: 1. [At visual wavelengths, normal ground-based observations have been dominated by the surrounding Homunculus ejecta-nebula, which, until recently, appeared much brighter than the central star and which has structure at all size scales from 0.1 to 8 arcseconds. [*So far, only the HST has provided well-defined measurements of just the central star.*]{}[^2] The Homunculus is primarily a reflection nebula, but the Homunculus/star brightness ratio has changed substantially. During 1998-99, for instance, the star nearly tripled in apparent brightness while ground-based observations showed only about a 0.3-magnitude brightening of Homunculus plus star [@kdetal99b]. This rather mysterious development is known from HST/STIS and HST/ACS data. ]{} 2. [Numerous strong emission lines perturb the results for standard photometric systems. H$\alpha$ and H$\beta$ emission, for example, have equivalent widths of about 800 and 180 respectively in spectra of $\eta$ Car. Broad-band $U$, $B$, $R$, and $I$ magnitudes, and most medium-band systems as well, are therefore poorly defined for this object. Photometry around 5500 , e.g. broad-band $V$, is relatively free of strong emission lines, but transformations from instrumental magnitudes to a standard system require the other filters [@kdetal99b; @sterken01; @vangenderen03]. This difficulty is somewhat lessened for HST observations restricted to the central star, whose spectrum has fewer emission lines than the bright ejecta; and some of the HST/ACS filters are fairly well-adapted to the case. At any rate the STIS and ACS data appear to be stable and internally consistent. Detector and filter systems used in most ground-based work, on the other hand, require fluctuating instrumental and atmospheric corrections, and do not give any major advantage for this object. ]{} In this paper we present the complete set of photometric data gathered for the $\eta$ Car HST Treasury Project. The central star has brightened, especially in the UV, since the HST results described by @phot1. We report three types of later measurements: 1. [The star’s brightness in acquisition images made with STIS before that instrument’s failure in early 2004. These images represent a broad, non-standard wavelength range from 6500 to 9000 [Å]{}.]{} 2. [The star’s brightness in ACS/HRC images made in four filters (F220W, F250W, F330W, & F550M) from October 2002 to the present.]{} 3. [Synthetic photometry in flux-calibrated STIS spectra.]{} After presenting the data below, we briefly discuss the observed trends and what they bode for the near future of $\eta$ Car. Our principal reason for reporting these data now is that the Treasury Program observations have been completed; future HST/ACS observations are possible but not assured. 
These last Treasury Program observations are essential for demonstrating the secular trend in brightness (see Section \[trend\]), for showing that these brightness changes suggest a fundamental change in the state of the star (see Section \[discussion\]), and for anticipating what the near future may hold for $\eta$ Carinae (see Section \[predictions\]). Data ==== STIS Acquisition Images ----------------------- Each set of STIS observations included a pair of acquisition images, which are 100 x 100 pixel sub-frames ($5{\arcsec} \times 5{\arcsec}$) centered on the middle row and column of the CCD [@clampin96; @downes97; @kimquijano03]. The STIS acquisition images were taken with a neutral density filter (F25ND3) which, combined with the CCD response, covered the wavelength range 2000–11000 [Å]{}. Since the star's apparent color is moderately red, these images were dominated by fluxes at wavelengths 6500–9000 [Å]{}. Fig. \[filterfig\] shows that although most of the measured flux comes from the continuum, several prominent emission features including H$\alpha$ contributed to the measured brightness. We measure the star's brightness within radius $R$ in the following manner: If $f(x,y)$ denotes the flux level in an image where the star was centered at $x_0$, $y_0$, we integrate the product $w(x-x_0,y-y_0) f(x,y)$, where $w$ is a radially symmetric weighting function of the form $(1-r^2/R^2)$ for $r<R$, and zero outside. In effect $w$ is a "parabolic virtual field aperture." For the STIS acquisition images we chose $R = 0.3{\arcsec}$ (6 pixels). A detailed discussion of these reduction procedures, including a non-standard bias level correction which we applied, is given in @phot1. Initially we expected to add a few more points of acquisition photometry after 2004.5, but the failure of the STIS in August 2004 curtailed our observing plans. Thus we have only two additional STIS data points to report (Table \[acqdata\]) beyond those given in @phot1. They are useful, however, concerning the end of the post-event recovery (see Sections \[trend\] and \[discussion\] below). HST ACS ------- HST ACS/HRC observations of $\eta$ Car were obtained for the Treasury Project beginning in October 2002. We have also examined publicly available data from HST proposal 9721[^3] and HST proposal 10844[^4]. The bias-corrected, dark-subtracted, and flat-fielded data were obtained from the Space Telescope Science Institute via the Multi-Mission Archive (MAST) [@acscal].[^5] Treasury Program ACS/HRC images were taken in four filters that cover near-UV to visual wavelengths (Fig. \[filterfig\]): - [HRC/F220W and HRC/F250W: These near-UV filters sample the " forest" [@cassatella79; @altamore86; @viotti89], whose opacity increases dramatically during a spectroscopic event [@kdetal99c; @ironcurtain].]{} - [HRC/F330W: This filter includes the Balmer continuum in emission, supplemented by various emission lines. It also attained the best spatial resolution among the observations reported here.]{} - [HRC/F550M: With a medium-width (not broad) bandpass, this filter samples the visual-wavelength continuum flux with only minor contamination by emission features.]{} The brightness of the central star was measured using the same 0.3$\arcsec$ ($\sim$ 10 ACS/HRC pixels) weighted virtual aperture used for the STIS acquisition images (a schematic illustration of this measurement is sketched below). CCD flux values were converted to the STMAG system [@stmag] using the keywords provided in the MAST archive's FITS headers. 
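As a concrete illustration of the weighted virtual aperture described above, the short Python sketch below evaluates the parabolic weight $w(r)=1-r^2/R^2$ for $r<R$ on a pixel grid and sums the weighted flux. The image array, the Gaussian "star", and the centroid are placeholders; none of the actual reduction steps (bias correction, flux calibration, aperture corrections) are reproduced here.

```python
import numpy as np

def parabolic_aperture_flux(image, x0, y0, R):
    """Weighted aperture sum with w(r) = 1 - r^2/R^2 for r < R, and zero outside.

    image  : 2-D array of pixel fluxes f(x, y) (axis 0 = y, axis 1 = x)
    x0, y0 : centroid of the star in pixel coordinates
    R      : aperture radius in pixels (e.g. 6 px for the STIS frames, ~10 px for ACS/HRC)
    """
    y_idx, x_idx = np.indices(image.shape)
    r2 = (x_idx - x0) ** 2 + (y_idx - y0) ** 2
    w = np.clip(1.0 - r2 / R ** 2, 0.0, None)   # parabolic weight, zero beyond r = R
    return float(np.sum(w * image))

# Toy example: a Gaussian "star" on a 100 x 100 sub-frame (placeholder data only).
yy, xx = np.indices((100, 100))
star = 1000.0 * np.exp(-((xx - 50.0) ** 2 + (yy - 50.0) ** 2) / (2.0 * 2.0 ** 2))
print(parabolic_aperture_flux(star, x0=50.0, y0=50.0, R=6.0))
```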
An aperture correction à la @acscal, calculated from observations of the white dwarf GD 71 in each of the filters (Table \[acscaltab\]), was applied to the measurements (Table \[acsdata\]). ACS fluxes and magnitudes measured prior to MJD 52958 (2003.87) can be found in our first paper [@phot1]. [*Caveat:*]{} We did not apply the aperture corrections to the magnitudes in that paper, but we have done so in the plots shown here. The F220W and F250W filters have known red leaks [@acshandbook] that can affect photometry of red sources. We convolved extracted STIS spectra (see Sec. \[stisphot\]) with the response function for those filters and the ACS/HRC. In the case of the central star of $\eta$ Car, the flux redward of 4000 [Å]{} in the F220W and F250W filters contributed only about 0.25% and 0.06% respectively, insignificant compared to other sources of error. STIS Synthetic Photometry\[stisphot\] ------------------------------------- Originally the ACS/HRC images were meant to supplement the STIS spectra. After the untimely demise of the STIS, however, the ACS/HRC became the most suitable mode for observing $\eta$ Car with HST. This presented a problem for continuous monitoring in the same band passes over the entire program, since there were no ACS/HRC images prior to 2002.78 while the STIS data ended at 2004.18. ACS photometry can be synthesized from the flux-calibrated STIS data, since nearly every grating tilt was observed during each STIS visit (Table \[stisspectra\]). The spectra were extracted with a weighted parabolic cross-dispersion profile similar to the virtual aperture used to measure the ACS/HRC images, convolved with the published filter functions (Fig. \[filterfig\]), and integrated. Because STMAG is computed from flux density, the integrated fluxes were divided by the effective band passes of each filter (see Table \[acscaltab\]); a schematic version of this step is sketched below. The effective aperture for the extracted STIS spectral data is not a rotated parabola but a parabolic cylinder having the width of the slit (0.1). To correct for the difference in aperture as well as the differences in instrumental PSF, slit throughput, and the extraction height, we converted the STIS spectral fluxes to the ACS/HRC flux scale using suitable correction factors (Table \[acscaltab\]). Those factors were computed by comparing the results from ACS/HRC images and photometry synthesized from STIS data obtained at time MJD 52683. The resulting synthetic ACS/HRC photometry is given in Table \[fakeacs\]. Plots of these various data will be discussed in Sections \[trend\] and \[discussion\] below. Distribution of Surface Brightness in the Homunculus ==================================================== Since HST and ground-based photometry have shown different rates of change for different-sized areas, it is useful to view the spatial distribution of the brightness. For this purpose we have used three ACS images made with filter F550M at $t =$ 2004.93, 2005.53, and 2005.85. Together these give a reasonably valid picture of the average visual-wavelength appearance during 2005. Fig. \[brt-distr-1\] shows the fraction of apparent brightness within projected radius $R$ measured from the central star. The solid curve represents the HST data, while a companion dashed curve incorporates Gaussian blurring with FWHM = 0.8, simulating ground-based photometry with fairly good atmospheric conditions. 
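As a complement to the synthetic-photometry procedure of Section \[stisphot\], the sketch below convolves a flux-calibrated spectrum with a filter throughput curve, divides by an approximate effective bandpass, and converts the resulting mean flux density to the STMAG system via $m_{\mathrm{ST}}=-2.5\log_{10}F_\lambda-21.10$. The spectrum and throughput arrays are placeholders, the division by $\int T\,d\lambda$ is only a stand-in for the tabulated effective bandpasses, and the STIS-to-ACS correction factors of Table \[acscaltab\] are not applied.

```python
import numpy as np

def synthetic_stmag(wave, flam, filt_wave, filt_throughput):
    """Schematic synthetic STMAG of a spectrum observed through a filter.

    wave, flam                 : wavelength [Angstrom] and F_lambda [erg / s / cm^2 / Angstrom]
    filt_wave, filt_throughput : filter response sampled on its own wavelength grid
    """
    T = np.interp(wave, filt_wave, filt_throughput, left=0.0, right=0.0)
    band_flux = np.trapz(flam * T, wave)        # throughput-weighted, integrated flux
    bandwidth = np.trapz(T, wave)               # crude stand-in for the effective bandpass
    mean_flam = band_flux / bandwidth           # mean flux density over the filter
    return -2.5 * np.log10(mean_flam) - 21.10   # STMAG zero point

# Placeholder example: a flat F_lambda spectrum through a box-shaped medium-band filter.
wave = np.linspace(4000.0, 7000.0, 3000)
flam = np.full_like(wave, 1.0e-13)
filt_wave = np.linspace(5200.0, 5800.0, 200)
filt_throughput = np.full_like(filt_wave, 0.2)
print(synthetic_stmag(wave, flam, filt_wave, filt_throughput))
```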
Half the total light originates within $R \lesssim 0.5{\arcsec}$ – which is very different from $\eta$ Car’s appearance a few decades ago [@gaviola50; @adt53; @gn72; @kdmtr75; @vgt84]. Fig. \[brt-distr-1\] also shows a curve based on photographs that Gaviola obtained in 1944 [@gaviola50; @kdmtr75], with a magnified spatial scale to compensate for subsequent expansion; even allowing for mediocre “seeing” and other uncertainties, the degree of central condensation was obviously less then. Before 1980 the central star accounted for less than 10% of the total apparent brightness; now its fraction has grown to about 40% and continues to increase. Fig. \[brt-distr-2\] is a map of the surface brightness, based on the HST/ACS F550M data mentioned above. In order to produce simple well-defined isophotes, we have Gaussian-blurred the image using FWHM 0.5. Apart from the central star and nearby compact ejecta, most of the light comes from a comma- or crescent-shaped region about 5 across, marked by the 80% isophote in the figure. Presumably the high intensity in this area results from strong forward scattering by dust grains in that part of the southeast Homunculus lobe. Surface brightnesses in the outer lobe regions, on the other hand, are fainter than the 50% contour by factors typically between 100 and 200. About half of the projected area of the Homunculus provides only 5% of the total visual-wavelength brightness. The Eight-year Trend\[trend\] ============================= Fig. \[plot1\] shows the main HST photometric data on the central star; the most significant result is a secular brightening trend superimposed on the 5.5-year pattern of spectroscopic events. The latest observations are essential in this regard, because the data reported earlier by @phot1 ended before the star had emerged from the 2003.5 event and we could not be sure of the long term trend. From 1999 to 2006 the average trend was about 0.15 magnitude per year at visual wavelengths and 0.19 mag yr$^{-1}$ at 2200 [Å]{} – much faster than the rate of $\sim$ 0.025 mag yr$^{-1}$ recorded for the Homunculus plus star from 1955 to 1995 [@kdetal99b; @martinconf]. The Weigelt condensations northwest of the central star @speckle86 have [*not*]{} brightened rapidly. Located in the equatorial plane only about 800 AU from the star[^6], their light is intrinsic emission with some reflection [@kdfos95]. Fig. \[knotd\] shows the brightness of “Weigelt D,” measured in the same way as the star but centering the virtual aperture at offset location $r = 0.25{\arcsec}$, position angle 336$^{\circ}$. The absence of a strong secular trend is significant in the following way. Extrapolating the recent trend of the star/ejecta brightness ratio back to the mid-1980’s, one would expect that the star should have been fainter than each of the Weigelt blobs at that time. But this is contradicted by early speckle observations [@speckle86; @speckle88]; [*therefore the star cannot have brightened at the present-day rate through the entire 20-year interval.*]{} Moreover, the earliest HST/FOS spectroscopy in 1991 [@fos95] and the ultraviolet spectra of the star plus inner ejecta obtained with the International Ultraviolet Explorer (IUE) from 1979 to 1990 show absolute fluxes which, though uncertain, appear comparable to both the speckle observations and the 1998 STIS results. These facts imply that the central star’s brightening rate was relatively modest from 1980 until sometime in the 1990’s. 
We suspect that the present-day rate began in 1994–97, when ground-based photometry showed unusual behavior (see, e.g., Fig. 2 in @kdetal99b). The last F330W and F550M observations in Fig. \[plot1\] confirm the sudden 0.2 magnitude increase observed at La Plata in late 2005 [@laplata]. Discussion\[discussion\] ======================== The observed brightening of $\eta$ Car is not easy to explain. It cannot signify a major increase in the star’s luminosity, because that would exceed the Eddington limit, producing a giant eruption. It cannot be a standard LBV-like eruption; in that case the energy distribution should have shifted to longer wavelengths, the Balmer emission lines should have decreased, and the spectrum should have begun to resemble an A- or F-type supergiant [@lbv94]. In fact, qualitatively the star’s spectrum has changed little in the past decade, and it has become bluer, not redder.[^7] The most obvious remaining explanation involves a change in the circumstellar extinction, which, in turn, probably requires a subtle change in the stellar wind. Mere “clearing of the dust” – i.e., motion of a localized concentration of dusty ejecta – cannot occur fast enough [@kdetal99b]. Therefore one must consider either destruction of dust grains, or a decrease in the formation of new dust, or both; and, if these account for the observations, why should they happen now? Dust Near the Star ------------------ The hypothetical decreasing extinction probably occurs within 2000 AU ($\sim$ 1) of the star, and preferably closer, because: 1. [In various observations between 1980 and 1995, the star did not appear as bright as expected relative to the Weigelt blobs; the discrepancy was a factor of the order of 10, based on simple theoretical arguments [@dh86; @fos95]. Evidently, then, our line of sight to the star had substantially larger extinction at visual wavelengths, even though its projected separation from the blobs was less than 0.3. The required extra extinction was of the order of 3 magnitudes. Since then the star has brightened far more than the Weigelt objects have; therefore, if this involves localized extinction, its size scale must be a fraction of an arcsec, only a few hundred AU. ]{} 2. [No known process seems likely to destroy dust more than 2000 AU from the star in a timescale of only a few years.]{} 3. [Ground-based photometry and HST images have shown only a modest, fraction-of-a-magnitude increase in the brightness of the large-scale Homunculus lobes during the past decade [@laplata; @phot1]. ]{} Dust grains should condense in $\eta$ Car’s wind at a distance of 200–600 AU, 2 to 10 years after the material has been ejected.[^8] Since newly-formed dust moves outward in a timescale of several years, the circumstellar extinction seen at any time depends partly on the current dust formation rate. This, in turn, depends on local wind density, radiation density, etc., and newly formed hot grains ($T_d > 800$ K) are susceptible to destruction. The dust column density can thus be sensitive to small changes in the stellar parameters. Moreover, the wind is latitude-dependent and our line of sight is close to the critical latitude where wind parameters can vary rapidly [@smith03]. All these factors appear suitable for the proposed explanation. On the other hand, near-infrared observations imply that extinction within $r < 2000$ AU has been quite small along most paths outward from the star. In Fig. 
3 of @cox95, for instance, the 2–6 $\mu$m flux indicates the high end of the dust temperature distribution. Modeling this in a conventional way, we find that less than 5% of the total luminosity was absorbed and re-emitted by inner dust with $T_d > 500$ K during the years 1973 to 1990 when those observations were made.[^9] Therefore, [*our line of sight must be abnormal in order to have a large amount of extinction near the star.*]{} In principle one might view this as an argument against our proposed scenario, but no plausible alternative has been suggested to explain the apparent faintness of the central star before 1999 and its ratio to the Weigelt blobs [@speckle86; @dh86; @speckle88; @fos95; @kdetal99b], The spatial distribution of dust is probably quite inhomogeneous near the star. The Homunculus lobes have a conspicuously “granular” appearance; the equatorial ejecta are clumpy, including the Weigelt knots; and stars near and above the Eddington limit tend to produce clumpy outflows [@shaviv05]. Consequently the radiative transfer problem includes macroscopic effects which have not yet been modeled. If the grain albedo is sufficient, light may escape mainly by scattering through interstices between condensations. In that case, high-extinction lines of sight may be fairly common in the inner region even though most of the light escapes along other paths, not necessarily radial. Incidentally, the near-infrared photometric trends reported by @whitelock94 [@whitelock04] are not straightforward to interpret. The fairly-constant 3.5 $\mu$m flux, for instance, represents a complicated mixture of dust formation parameters and does not necessarily indicate a constant amount of dust; see comments by @kdetal99b. The Role of the Stellar Parameters \[stellarparams\] ---------------------------------------------------- If the observed brightening represents a decrease in circumstellar extinction, the likeliest reason for this to occur is through some change in the star – no one has yet proposed a suitable alternative. The most relevant stellar parameters are the radius, current luminosity, and surface rotation rate, which together determine the wind’s velocity, density, and latitude structure. All of these may still be changing today, 160 years after the Great Eruption; thermal and rotational equilibrium in particular are likely to be poor assumptions for the star’s internal structure [@smith03; @kd05]. As a working hypothesis to explain $\eta$ Car’s photometric and spectroscopic record in the past 100 years, let us tentatively suppose that [*the mass-loss rate is gradually decreasing,*]{} while the surface rotation rate may be increasing. Historical considerations include: 1. [High-excitation emission, now observed at most times, was consistently absent before 1920 [@feast01] and probably before 1940 [@rmh05]. If a hot companion star is present as most authors suppose, then the most obvious way to hide or suppress its helium ionization is to immerse the entire system in an extremely dense wind – i.e., the primary star’s mass-loss rate must have been larger then. This idea is far from straightforward [@kd99], but so far as we know it is the only qualitative explanation yet proposed. Informally, based on Zanstra-style arguments (i.e., assessing the volume emission measure $n_{He} n_e V$ needed to absorb all the photons above 25 eV), we estimate that a rate of the order of 10 times the present value, i.e. 
$\sim 10^{-2}$ $M_\odot$ y$^{-1}$, would have been required early in the twentieth century in order to suppress the helium recombination emission. ]{} 2. [Twenty years ago the amount of fresh dust, indicated by the near-infrared flux, appeared consistent with a mass-loss rate somewhat above $10^{-3}$ $M_{\odot}$ yr$^{-1}$ [@iue86]. This absorbed only a small fraction of the luminosity (Section 5.1 above), but the substantially higher mass-loss rate suspected for earlier times would have produced enough hot inner dust to absorb a non-negligible fraction. ]{} 3. [The brightness observed between 1900 and 1940 is rather mysterious. Judging from its mass and present-day optical thickness, around 1920 the Homunculus (then only half as large as it is today) should have had at least 5 magnitudes of visual-wavelength extinction; in a simple model the object should have been fainter than 10th magnitude instead of $m_{pg} \approx 8$ as was observed. No doubt the inhomogeneities mentioned earlier played a role, but no model has been calculated. Moreover, why did the brightness remain fairly constant even though the Homunculus expanded by about 70% in 1900–1940? This interesting problem has received practically no theoretical attention.]{} 4. [ emission first appeared, and $\eta$ Car’s brightness suddenly increased, between 1938 and 1953 as we mentioned in Section 1. This might conceivably be explained by a decrease in the wind density; but @kd05 and @halpha05 have conjectured that 1940–1950 may have been the time when rotation became fast enough to produce latitude structure in the wind. If so, a higher-excitation, lower-density zone then developed at low latitudes [@smith03]. ]{} The above points inspire two hypotheses that may explain the rapid brightening trend shown in Fig. \[plot1\]. First, if the mass-loss rate has been decreasing, this tends to reduce the column density of recently-formed dust along our line of sight. Meanwhile (or alternatively), perhaps the wind’s latitude structure is continuing to evolve so that its dense zone is now moving out of the line of sight. HST data suggest that our line of sight has been fairly close to the critical boundary latitude separating the two phases [@smith03]. A small increase in surface rotation rate, or some other parameter change, might conceivably move the dense zone to higher latitudes, decreasing the amount of dust that forms along our line of sight. This idea is appealing because it suggests a way in which the effective extinction may be very sensitive to the stellar parameters. This problem obviously requires detailed models far beyond the scope of this paper, combining the star’s changing structure, its wind, dust formation, and possibly dust destruction. Concerning the 5.5-year Cycle ----------------------------- Figs. \[plot1\] and \[plot2\] reveal no major surprises about the 2003.5 spectroscopic event, but several comments are worthwhile. First, the sharp drop in UV brightness (filters F220W and F250W) is qualitatively understood and does not involve circumstellar dust. During both the 1998 and the 2003 events, STIS data showed very heavy ultraviolet blanketing by ionized metal lines; indeed the star became quite dark at some wavelengths between 2000 and 3000 [Å]{} [@ironcurtain]. We further note that just before the spectroscopic event, a slight increase occurred at wavelengths below 4000 [Å]{} (filters F220W, F250W, F330W), but not at visual and far-red wavelengths (F550M and F25ND3). 
Ground-based visual-wavelength and near-IR photometry showed a qualitatively similar effect [@vangenderen03; @whitelock04; @laplata]. The brightening is particularly prominent in J, H, and K which are dominated by free-free emission [@whitelock04]. The ACS F550M and STIS F25ND3 data primarily measure the continuum brightness, while the other HST filters are heavily influenced by strong emission or absorption lines. At about the same time emission in the central star also went through a similar increase in brightness [@martinconf]. The minor pre-event brightening thus appears to represent an increase in some emission features implying a temporary increase in ionizing flux. The primary star may provide additional UV photons or the hypothetical hot companion star may excite the primary wind more than usual at that time (just before periastron), but no quantitative model has been attempted. Figs. \[plot1\] and \[plot2\] contain interesting hints about the timescale for the star’s post-event recovery. Four months after the 2003.5 event, for instance, the 2–10 keV X-ray flux had increased almost to a normal level [@xray]. The HST/ACS F220W and F250W brightnesses, however, were still quite low at that time, and they required about eight months to recover. This timescale must be explained in any valid model for the spectroscopic events. @halpha05 noted serious differences between STIS spectra of the 1998.0 and 2003.5 events, and interpreted them as evidence for a rapid secular physical change in $\eta$ Car. @damineli99 had earlier found that emission became progressively weaker after each of the last few spectroscopic events. These clues are obviously pertinent to our comments in Section \[stellarparams\] above. Fluctuations [*between*]{} spectroscopic events have received little attention in the past. For instance, Fig. \[plot1\] shows a brief 0.2-magnitude brightening at 2001.3; measured by the STIS in both imaging and spectroscopic mode. It was correlated with the behavior of a strange unidentified emission line near 6307 [Å]{}, and with other subtle changes in the spectrum [@uemit]. This is interesting because mid-cycle events have not been predicted in any of the competing scenarios for the 5.5-year cycle. Perhaps the effects seen in 2001 indicate the level of basic, LBV-like fluctuations in $\eta$ Car. Eta Carinae in the Near Future \[predictions\] ---------------------------------------------- The appearance of $\eta$ Car and the Homunculus nebula has changed dramatically. Twenty years ago the entire object could have been described as “a bright, compact nebula having an indistinct eighth-magnitude central core;” but a few years in the future, if recent trends continue, it will be seen instead as “a fifth- or even fourth-magnitude star accompanied by some visible nebulosity.” Meanwhile the color is gradually becoming bluer. This overall development has long been expected [@kd87], but now appears to be occurring 20 years ahead of schedule. If it signals an irregularity in the star’s recovery from the Great Eruption, then this may be a highly unusual clue to the highly abnormal internal structure. Unsteady diffusion of either the thermal or the rotational parameters would be significant for stellar astrophysics in general. There are several practical implications for future observations of this object. 
Valid ground-based spectroscopy of the star (strictly speaking its wind) is becoming feasible for the first time, as its increased brightness overwhelms the emission-line contamination by inner ejecta. Unfortunately this implies that the inner ejecta – particularly the mysterious Weigelt knots – are becoming difficult to observe. In fact, since the HST/STIS is no longer available, they are now practically impossible to observe. When some new high-spatial-resolution spectrograph becomes available in the future, the inner ejecta will probably be much fainter than the star. The expected future of the larger-scale Homunculus nebula is also interesting. At present it is essentially a reflection nebula. However, based on the presence of high-excitation emission lines such as \[\] close to the star, the system almost certainly contains a source of hydrogen-ionizing photons with energies above 13.6 eV, and helium-ionizing photons above 25 eV. (See, e.g., @zanella84; most recent authors assume that this source is a hot companion star.) Eventually, when circumstellar extinction has decreased sufficiently due to expansion and other effects, the UV source will begin to photoionize the Homunculus. This is especially true if the primary stellar wind is weakening as we conjectured above. First the inner “Little Homunculus” will become a bright compact region, and then the bipolar Homunculus lobes. The time when that will occur is not obvious, but it may be within the next few decades if current trends continue. Acknowledgments =============== This research was conducted as part of the HST Treasury Project on $\eta$ Carinae via grant no. GO-9973 from the Space Telescope Science Institute. We are grateful to T.R. Gull and Beth Perriello for assisting with the HST observing plans. We also especially thank Roberta Humphreys, J.T. Olds, and Matt Gray at the University of Minnesota for discussions and helping with non-routine steps in the data preparation and analysis. Altamore, A., Baratta, G. B., Cassatella, A., Rossi, L., & Viotti, R. 1986, New Insights in Astrophysics: Eight Years of UV Astronomy with IUE, 303 Cassatella, A., Giangrande, A., & Viotti, R. 1979, , 71, L9 Clampin, M., Hartig, G., Baum, S., Kraemer, S., Kinney, E., Kutina, R., Pitts, R., & Balzano, V. 1996, STIS Instrument Science Report 96-030, (Baltimore: STSci) Corcoran, M. F. 2005, , 129, 2018 Cox, P., Mezger, P. G., Sievers, A., Najarro, F., Bronfman, L., Kreysa, E., & Haslam, G. 1995, , 297, 168 Damineli, A. 1996, , 460, L49 Damineli, A., Stahl, O., Wolf, B., Kaufer, A., & Jablonski, F. J. 1999, ASP Conf. Ser. 179: Eta Carinae at The Millennium, 179, 221 Davidson, K. 1987, ASSL Vol. 136: Instabilities in Luminous Early Type Stars, 127 Davidson, K. 1999, ASP Conf. Ser. 179: Eta Carinae at The Millennium, 179, 304 Davidson, K. 2004, STScI Newsletter, Spring 2004, 1 Davidson, K. 2005, ASP Conf. Ser. 332: The Fate of the Most Massive Stars, 332, 103 Davidson, K., Dufour, R. J., Walborn, N. R., & Gull, T. R. 1986, , 305, 867 Davidson, K., Ebbets, D., Weigelt, G., Humphreys, R. M., Hajian, A. R., Walborn, N. R., & Rosa, M. 1995, , 109, 1784 Davidson, K., & Humphreys, R. M. 1986, , 164, L7 Davidson, K., & Humphreys, R. M. 1997, , 35, 1 Davidson, K., Humphreys, R. M., Ishibashi, K., Gull, T. R., Hamuy, M., Berdnikov, L., & Whitelock, P. 1999a, , 7146, 1 Davidson, K., Gull, T. R., Humphreys, R. 
M., Ishibashi, K., Whitelock, P., Berdnikov, L., McGregor, P. J., Metcalfe, T. S., Polomski, E., & Hamuy, M. 1999b, , 118, 1777 Davidson, K., Ishibashi, K., Gull, T. R., & Humphreys, R. M. 1999c, ASP Conf. Ser. 179: Eta Carinae at The Millennium, 179, 227 Davidson, K., Martin, J., Humphreys, R. M., Ishibashi, K., Gull, T. R., Stahl, O., Weis, K., Hillier, D. J., Damineli, A., Corcoran, M., & Hamann, F. 2005, , 129, 900 Davidson, K., & Ruiz, M.-T.  1975, , 202, 421 de Vaucouleurs, G., & Eggen, O. J. 1952, , 64, 185 Downes, R., Clampin, M., Shaw, R., Baum, S., Kinney, E. & McGrath, M. 1997,STIS Instrument Science Report 97-03B, (Baltimore: STSci) Feast, M., Whitelock, P., & Marang, F. 2001, , 322, 741 Feinstein, A., & Marraco, H. G. 1974, , 30, 271 Feinstein, A. 1967, The Observatory, 87, 287 Fernandez Lajus, E., Gamen, R., Schwartz, M., Salerno, N., Llinares, C., Farina, C., Amor[í]{}n, R., & Niemela, V. 2003, Informational Bulletin on Variable Stars, 5477, 1 [^10] Frew, D. J. 2005, ASP Conf. Ser. 332: The Fate of the Most Massive Stars, 332, 158 Gaviola, E. 1950, , 111, 408 Gaviola, E. 1953, , 118, 234 Gehrz, R.D., & Ney, E.P. 1972, [*Sky & Tel.,*]{} 44, 4 Gonzaga, S., et al. 2005,[*ACS Instrument Handbook,*]{} Version 6.0 (Baltimore: STScI) Gull, T. R., Davidson, K., & Ishibashi, K. 2000, American Institute of Physics Conference Series, 522, 439 Hofmann, K.-H., & Weigelt, G. 1988, , 203, L21 Humphreys, R. M., & Davidson, K. 1994, , 106, 1025 Humphreys, R. M., & Koppelman, M. 2005, ASP Conf. Ser. 332: The Fate of the Most Massive Stars, 332, 159 Kim Quijano, J., et al. 2003, STIS Instrument Handbook, Version 7.0 (Baltimore: STScI) Koornneef, J., Bohlin, R., Buser, R., Horne, K., & Turnshek, D. 1986, Highlights of Astronomy, 7, 833 Mattei, J. & Foster, G. 1998, IAPPP Comm., 72, 53 Martin, J. C., & Koppelman, M. D. 2004, , 127, 2352 Martin, J. C. 2005, ASP Conf. Ser. 332: The Fate of the Most Massive Stars, 332, 111 Martin, J. C., Davidson, K., Humphreys, R. M., Hillier, D. J., & Ishibashi, K. 2006a, , 640, 474 Martin, J. C., Davidson, K., Hamann, F., Stahl, O., & Weis, K. 2006, , 118, 697 O’Connell, D. J. K. 1956, [*Vistas in Astronomy,*]{} 2, 1165 Shaviv, N. J. 2005, ASP Conf. Ser. 332: The Fate of the Most Massive Stars, 332, 180 Sirianni, M., et al. 2005, , 117, 1049 Smith, N., Davidson, K., Gull, T. R., Ishibashi, K., & Hillier, D. J. 2003, , 586, 432 Sterken, C., Freyhammer, L., Arentoft, T., & van Genderen, A. M. 1999, , 346, L33 Sterken, C., et al. 2001, ASP Conf. Ser. 233: P Cygni 2000: 400 Years of Progress, 233, 71 Thackeray, A. D. 1953, , 113, 237 van Genderen, A. M., & Th[é]{}, P. S. 1984, Space Sci. Rev., 39, 317 van Genderen, A. M., Sterken, C., de Groot, M., & Burki, G. 1999, , 343, 847 van Genderen, A. M., Sterken, C., & Allen, W. H. 2003, , 405, 1057 Viotti, R., Rossi, L., Cassatella, A., Altamore, A., & Baratta, G. B. 1989, , 71, 983 Weigelt, G., & Ebersberger, J. 1986, , 163, L5 Whitelock, P. A., Feast, M. W., Koen, C., Roberts, G., & Carter, B. S. 1994, , 270, 364 Whitelock, P. A., Feast, M. W., Marang, F., & Breedt, E. 2004, , 352, 447 Zanella, R., Wolf, B., & Stahl, O. 1984, , 137, 79 ![Photometric response functions. The top panel shows the relative spectral flux from the central star of $\eta$ Carinae. The bottom panel shows the total relative response of each CCD and filter combination used in this study on the same wavelength scale as the top panel. For plotting purposes the curves are not representative of relative responses between filters. 
STIS filters are plotted with a dashed line and ACS/HRC filters are plotted with a solid line. The dotted line represents the product of the STIS CCD+F25ND3 response curve and the photon flux from the central star.[]{data-label="filterfig"}](f1.eps) [lllrrrrrrr]{} o8ma93xjq&53070.199&2004.177&9.753E-12&-0.56&-0.54&0.02\ o8ma93hjq&53071.230&2004.180&9.403E-12&-0.52&&\ [lcccc]{} ACS/HRC Aperture Correction& 0.593$\pm$0.014&0.594$\pm$0.013&0.625$\pm$0.001&0.619$\pm$0.022\ ACS/HRC Effective Bandwidth& 187.29&239.41&173.75&165.20\ STIS Throughput Correction& 1.6318&1.2903&0.6003&0.5601\ [lllrrrr]{}\ \ j8ma7ac7q&53345.391&2004.931&5.0&0.392&7.417&7.410$\pm$0.017\ j8ma7acbq&53345.395&2004.931&5.0&0.391&7.418&\ j8ma7acfq&53345.398&2004.931&5.0&0.389&7.424&\ j8ma7acrq&53345.434&2004.931&5.0&0.405&7.382&\ j8ma8aorq&53565.266&2005.534&5.0&0.462&7.239&7.228$\pm$0.011\ j8ma8ap4q&53565.301&2005.534&5.0&0.472&7.216&\ j8ma9aetq&53680.328&2005.849&5.0&0.467&7.226&7.230$\pm$0.006\ j8ma9af1q&53680.344&2005.849&5.0&0.462&7.239&\ j8ma9afaq&53680.355&2005.849&5.0&0.465&7.231&\ j8ma9afnq&53680.371&2005.849&5.0&0.468&7.225&\ j9p602req&53951.121&2006.591&4.0&0.496&7.162&7.166$\pm$0.002\ j9p602rhq&53951.125&2006.591&4.0&0.493&7.169&\ j9p602rkq&53951.125&2006.591&4.0&0.494&7.165&\ j9p602rnq&53951.129&2006.591&4.0&0.493&7.168&\ \ \ \ j8ma7ac8q&53345.391&2004.931&1.4&0.939&6.469&6.463$\pm$0.016\ j8ma7accq&53345.395&2004.931&1.4&0.941&6.466&\ j8ma7achq&53345.402&2004.931&1.4&0.928&6.481&\ j8ma7actq&53345.438&2004.931&1.4&0.967&6.437&\ j8ma8aouq&53565.273&2005.534&1.4&1.029&6.369&6.353$\pm$0.016\ j8ma8ap7q&53565.309&2005.534&1.4&1.060&6.337&\ j8ma9aewq&53680.336&2005.849&1.4&1.126&6.271&6.270$\pm$0.007\ j8ma9af5q&53680.348&2005.849&1.4&1.119&6.277&\ j8ma9afqq&53680.348&2005.849&1.4&1.137&6.261&\ j9p601qnq&53951.051&2006.591&1.0&1.261&6.148&6.149$\pm$0.003\ j9p601qqq&53951.055&2006.591&1.0&1.264&6.145&\ j9p601qtq&53951.059&2006.591&1.0&1.258&6.151&\ j9p601qwq&53951.063&2006.591&1.0&1.256&6.152&\ \ \ \ j8ma7ac9q&53345.391&2004.931&0.8&1.150&6.248&6.243$\pm$0.019\ j8ma7acdq&53345.395&2004.931&0.8&1.140&6.258&\ j8ma7aclq&53345.406&2004.931&0.8&1.142&6.256&\ j8ma7acxq&53345.441&2004.931&0.8&1.190&6.211&\ j8ma8aoyq&53565.277&2005.534&0.8&1.190&6.211&6.201$\pm$0.011\ j8ma8apbq&53565.313&2005.534&0.8&1.213&6.190&\ j8ma9aezq&53680.340&2005.849&0.8&1.440&6.004&6.002$\pm$0.004\ j8ma9af8q&53680.352&2005.849&0.8&1.440&6.004&\ j8ma9afhq&53680.367&2005.849&0.8&1.441&6.003&\ j8ma9afuq&53680.383&2005.849&0.8&1.452&5.995&\ j9p601qxq&53951.066&2006.591&0.2&1.671&5.843&5.850$\pm$0.010\ j9p601qzq&53951.066&2006.591&0.2&1.688&5.831&\ j9p601r0q&53951.066&2006.591&0.2&1.666&5.846&\ j9p602roq&53951.129&2006.591&0.3&1.638&5.864&\ j9p602rpq&53951.133&2006.591&0.3&1.656&5.852&\ j9p602rqq&53951.133&2006.591&0.3&1.643&5.861&\ j9p602rrq&53951.133&2006.591&0.3&1.653&5.854&\ \ \ \ j8ma7acaq&53345.395&2004.931&0.1&1.337&6.084&6.085$\pm$0.016\ j8ma7aceq&53345.398&2004.931&0.1&1.324&6.095&\ j8ma7acoq&53345.410&2004.931&0.1&1.318&6.100&\ j8ma7ad0q&53345.445&2004.931&0.1&1.370&6.059&\ j8ma8ap1q&53565.281&2005.534&0.1&1.342&6.081&6.077$\pm$0.004\ j8ma8apeq&53565.281&2005.534&0.1&1.351&6.073&\ j8ma9af0q&53680.340&2005.849&0.1&1.589&5.897&5.891$\pm$0.009\ j8ma9af9q&53680.355&2005.849&0.1&1.582&5.902&\ j8ma9afkq&53680.371&2005.849&0.1&1.606&5.886&\ j8ma9afxq&53680.387&2005.849&0.1&1.614&5.880&\ j9p602ryq&53951.145&2006.591&0.1&1.812&5.755&5.757$\pm$0.005\ j9p602s0q&53951.145&2006.591&0.1&1.798&5.763&\ j9p602s2q&53951.145&2006.591&0.1&1.819&5.750&\ 
j9p602s4q&53951.148&2006.591&0.1&1.801&5.761&\ [llrrrr]{} o4j8010y0&50891.6& -28&G230MB&1854&456.0\ o4j8010o0&50891.6& -28&G230MB&1995&360.0\ o4j8011b0&50891.7& -28&G230MB&2135&180.0\ o4j8011c0&50891.7& -28&G230MB&2276&180.0\ o4j8011a0&50891.7& -28&G230MB&2416&240.0\ o4j801040&50891.4& -28&G230MB&2557&300.0\ o4j8010e0&50891.5& -28&G230MB&2697&290.0\ o4j801050&50891.4& -28&G230MB&2836&330.0\ o4j8010f0&50891.5& -28&G230MB&2976&348.0\ o4j8010g0&50891.5& -28&G230MB&3115&336.0\ o4j8010h0&50891.5& -28&G430M&3165&144.0\ o4j801170&50891.7& -28&G430M&3423& 60.0\ o4j801180&50891.7& -28&G430M&3680& 72.0\ o4j8010z0&50891.6& -28&G430M&3936& 72.0\ o4j801060&50891.4& -28&G430M&4194& 36.0\ o4j8010x0&50891.6& -28&G430M&4961& 36.0\ o4j8010i0&50891.5& -28&G430M&5216& 36.0\ o4j801190&50891.7& -28&G430M&5471& 36.0\ o4j8010d0&50891.5& -28&G750M&5734& 15.0\ o4j801120&50891.7& -28&G750M&6252& 9.4\ o556020t0&51230.6& -28&G230MB&1854&380.0\ o556020k0&51230.5& -28&G230MB&1995&278.0\ o55602110&51230.7& -28&G230MB&2135&150.0\ o55602120&51230.7& -28&G230MB&2276&150.0\ o55602100&51230.7& -28&G230MB&2416&200.0\ o556020b0&51230.5& -28&G230MB&2557&300.0\ o556020e0&51230.5& -28&G230MB&2697&280.0\ o55602040&51230.5& -28&G230MB&2836&307.0\ o556020z0&51230.6& -28&G230MB&2976&360.0\ o556020c0&51230.5& -28&G230MB&3115&280.0\ o556020v0&51230.6& -28&G430M&3165&120.0\ o556020f0&51230.5& -28&G430M&3423& 50.0\ o556020d0&51230.5& -28&G430M&3680& 60.0\ o556020s0&51230.6& -28&G430M&3936& 60.0\ o556020y0&51230.6& -28&G430M&4961& 30.0\ o556020g0&51230.5& -28&G430M&5216& 30.0\ o556020w0&51230.6& -28&G430M&5471& 30.0\ o55602090&51230.5& -28&G750M&5734& 15.0\ o556020p0&51230.6& -28&G750M&6252& 8.0\ o62r010p0&52016.9& 22&G230MB&1854&340.0\ o62r010i0&52016.8& 22&G230MB&1995&300.0\ o62r010m0&52016.8& 22&G230MB&2135&280.0\ o62r010x0&52016.9& 22&G230MB&2276&300.0\ o62r010y0&52016.9& 22&G230MB&2416&200.0\ o62r010a0&52016.8& 22&G230MB&2557&420.0\ o62r010c0&52016.8& 22&G230MB&2697&280.0\ o62r01040&52016.8& 22&G230MB&2836&300.0\ o62r010t0&52016.9& 22&G230MB&2976&360.0\ o62r01080&52016.8& 22&G230MB&3115&350.0\ o62r010r0&52016.9& 22&G430M&3165&100.0\ o62r010d0&52016.8& 22&G430M&3423& 70.0\ o62r010b0&52016.8& 22&G430M&3680& 60.0\ o62r010o0&52016.8& 22&G430M&3936& 32.0\ o62r01030&52016.7& 22&G430M&4194& 30.0\ o62r010v0&52016.9& 22&G430M&4961& 36.0\ o62r010e0&52016.8& 22&G430M&5216& 14.0\ o62r010s0&52016.9& 22&G430M&5471& 30.0\ o62r01090&52016.8& 22&G750M&5734& 8.0\ o62r010l0&52016.8& 22&G750M&6252& 10.0\ o6ex030e0&52183.2& 165&G430M&4961& 36.0\ o6ex030c0&52183.1& 165&G430M&5216& 36.0\ o6ex030b0&52183.1& 165&G750M&5734& 15.0\ o6ex02080&52294.0& -82&G230MB&1854&800.0\ o6ex020i0&52294.1& -82&G230MB&1995&600.0\ o6ex020m0&52294.1& -82&G230MB&2135&600.0\ o6ex020x0&52294.2& -82&G230MB&2276&600.0\ o6ex020y0&52294.2& -82&G230MB&2416&320.0\ o6ex020a0&52294.0& -82&G230MB&2557&1200.0\ o6ex020c0&52294.1& -82&G230MB&2697&280.0\ o6ex02040&52294.0& -82&G230MB&2836&300.0\ o6ex020t0&52294.1& -82&G230MB&2976&340.0\ o6ex020p0&52294.1& -82&G230MB&3115&300.0\ o6ex020r0&52294.1& -82&G430M&3165& 90.0\ o6ex020d0&52294.1& -82&G430M&3423& 90.0\ o6ex020b0&52294.1& -82&G430M&3680& 52.0\ o6ex020o0&52294.1& -82&G430M&3936& 26.0\ o6ex02030&52294.0& -82&G430M&4194& 18.0\ o6ex020v0&52294.2& -82&G430M&4961& 36.0\ o6ex020e0&52294.1& -82&G430M&5216& 16.0\ o6ex020s0&52294.1& -82&G430M&5471& 34.0\ o6ex02090&52294.0& -82&G750M&5734& 6.0\ o6ex020l0&52294.1& -82&G750M&6252& 8.0\ o6mo020a0&52459.5& 69&G230MB&1854&400.0\ o6mo020x0&52459.6& 69&G230MB&1995&300.0\ o6mo02120&52459.6& 
69&G230MB&2135&300.0\ o6mo021n0&52459.7& 69&G230MB&2276&300.0\ o6mo021e0&52459.7& 69&G230MB&2416&320.0\ o6mo020h0&52459.6& 69&G230MB&2557&400.0\ o6mo020m0&52459.6& 69&G230MB&2697&340.0\ o6mo02050&52459.5& 69&G230MB&2836&300.0\ o6mo021i0&52459.7& 69&G230MB&2976&320.0\ o6mo02190&52459.7& 69&G230MB&3115&300.0\ o6mo021r0&52459.7& 69&G430M&3165& 90.0\ o6mo020p0&52459.6& 69&G430M&3423& 90.0\ o6mo020l0&52459.6& 69&G430M&3680& 52.0\ o6mo021a0&52459.7& 69&G430M&3936& 26.0\ o6mo02060&52459.5& 69&G430M&4194& 18.0\ o6mo021m0&52459.7& 69&G430M&4961& 36.0\ o6mo020q0&52459.6& 69&G430M&5216& 16.0\ o6mo021h0&52459.7& 69&G430M&5471& 34.0\ o6mo020i0&52459.6& 69&G750M&5734& 9.0\ o6mo02150&52459.7& 69&G750M&6252& 8.0\ o8gm12060&52682.9& -57&G230MB&1854&600.0\ o8gm120h0&52683.0& -57&G230MB&1995&600.0\ o8gm120l0&52683.0& -57&G230MB&2135&600.0\ o8gm120w0&52683.0& -57&G230MB&2276&600.0\ o8gm120r0&52683.0& -57&G230MB&2416&320.0\ o8gm12090&52682.9& -57&G230MB&2557&800.0\ o8gm120c0&52682.9& -57&G230MB&2697&340.0\ o8gm12030&52682.9& -57&G230MB&2836&300.0\ o8gm120t0&52683.0& -57&G230MB&2976&340.0\ o8gm120o0&52683.0& -57&G230MB&3115&300.0\ o8gm120n0&52683.0& -57&G430M&3165& 90.0\ o8gm120d0&52682.9& -57&G430M&3423& 90.0\ o8gm120b0&52682.9& -57&G430M&3680& 52.0\ o8gm120p0&52683.0& -57&G430M&3936& 26.0\ o8gm12040&52682.9& -57&G430M&4194& 18.0\ o8gm120v0&52683.0& -57&G430M&4961& 36.0\ o8gm120e0&52682.9& -57&G430M&5216& 16.0\ o8gm120s0&52683.0& -57&G430M&5471& 34.0\ o8gm120a0&52682.9& -57&G750M&5734& 6.0\ o8gm12050&52682.9& -57&G750M&6252& 8.0\ o8gm210g0&52727.3& -28&G430M&4961& 16.0\ o8gm210e0&52727.3& -28&G430M&5216& 16.0\ o8gm210b0&52727.3& -28&G430M&5471& 34.0\ o8gm210d0&52727.3& -28&G750M&5734& 9.0\ o8gm410g0&52764.4& 27&G430M&4961& 16.0\ o8gm410e0&52764.4& 27&G430M&5216& 16.0\ o8gm410b0&52764.3& 27&G430M&5471& 34.0\ o8gm410d0&52764.3& 27&G750M&5734& 9.0\ o8gm320a0&52778.5& 38&G230MB&1854&400.0\ o8gm320x0&52778.9& 38&G230MB&1995&300.0\ o8gm33020&52776.4& 38&G230MB&2135&300.0\ o8gm330n0&52776.6& 38&G230MB&2276&300.0\ o8gm330e0&52776.5& 38&G230MB&2416&320.0\ o8gm320h0&52778.6& 38&G230MB&2557&400.0\ o8gm320m0&52778.7& 38&G230MB&2697&340.0\ o8gm32050&52778.5& 38&G230MB&2836&300.0\ o8gm330i0&52776.5& 38&G230MB&2976&320.0\ o8gm33090&52776.5& 38&G230MB&3115&300.0\ o8gm33060&52776.4& 38&G430M&3165& 90.0\ o8gm320p0&52778.8& 38&G430M&3423& 90.0\ o8gm320l0&52778.7& 38&G430M&3680& 52.0\ o8gm330a0&52776.5& 38&G430M&3936& 26.0\ o8gm32060&52778.5& 38&G430M&4194& 18.0\ o8gm330m0&52776.6& 38&G430M&4961& 36.0\ o8gm320q0&52778.8& 38&G430M&5216& 16.0\ o8gm330h0&52776.5& 38&G430M&5471& 34.0\ o8gm320i0&52778.6& 38&G750M&5734& 9.0\ o8gm330r0&52776.6& 38&G750M&6252& 8.0\ o8gm520a0&52791.7& 62&G230MB&1854&400.0\ o8gm520x0&52791.8& 62&G230MB&1995&300.0\ o8gm52100&52791.9& 62&G230MB&2135&400.0\ o8gm521o0&52792.0& 62&G230MB&2276&300.0\ o8gm521f0&52791.9& 62&G230MB&2416&320.0\ o8gm520h0&52791.8& 62&G230MB&2557&400.0\ o8gm520m0&52791.8& 62&G230MB&2697&340.0\ o8gm52050&52791.7& 62&G230MB&2836&300.0\ o8gm521j0&52791.9& 62&G230MB&2976&340.0\ o8gm52180&52791.9& 62&G230MB&3115&300.0\ o8gm52170&52791.9& 62&G430M&3165& 90.0\ o8gm520p0&52791.8& 62&G430M&3423& 90.0\ o8gm520l0&52791.8& 62&G430M&3680& 52.0\ o8gm521b0&52791.9& 62&G430M&3936& 26.0\ o8gm52060&52791.7& 62&G430M&4194& 18.0\ o8gm521k0&52791.9& 62&G430M&4961& 36.0\ o8gm520q0&52791.8& 62&G430M&5216& 16.0\ o8gm521g0&52791.9& 62&G430M&5471& 34.0\ o8gm520i0&52791.8& 62&G750M&5734& 9.0\ o8gm521s0&52792.0& 62&G750M&6252& 8.0\ o8gm620a0&52813.8& 70&G230MB&1854&400.0\ o8gm620x0&52814.2& 70&G230MB&1995&300.0\ 
o8gm62100&52814.2& 70&G230MB&2135&300.0\ o8gm630d0&52812.2& 70&G230MB&2276&300.0\ o8gm63040&52812.1& 70&G230MB&2416&350.0\ o8gm620h0&52814.0& 70&G230MB&2557&400.0\ o8gm620m0&52814.1& 70&G230MB&2697&340.0\ o8gm62050&52813.8& 70&G230MB&2836&300.0\ o8gm63080&52812.2& 70&G230MB&2976&320.0\ o8gm62170&52814.3& 70&G230MB&3115&260.0\ o8gm62140&52814.3& 70&G430M&3165& 90.0\ o8gm620p0&52814.1& 70&G430M&3423& 90.0\ o8gm620l0&52814.1& 70&G430M&3680& 52.0\ o8gm62180&52814.3& 70&G430M&3936& 26.0\ o8gm62060&52813.8& 70&G430M&4194& 18.0\ o8gm630c0&52812.2& 70&G430M&4961& 36.0\ o8gm620q0&52814.1& 70&G430M&5216& 16.0\ o8gm63070&52812.1& 70&G430M&5471& 34.0\ o8gm620i0&52814.0& 70&G750M&5734& 9.0\ o8gm630h0&52812.2& 70&G750M&6252& 15.0\ o8ma720q0&52825.5& 69&G430M&4961& 16.0\ o8ma720p0&52825.5& 69&G430M&5216& 16.0\ o8ma720h0&52825.4& 69&G750M&5734& 9.0\ o8ma820b0&52852.0& 105&G230MB&1854&400.0\ o8ma820y0&52852.2& 105&G230MB&1995&300.0\ o8ma82110&52852.2& 105&G230MB&2135&300.0\ o8ma821m0&52852.4& 105&G230MB&2276&300.0\ o8ma821b0&52852.3& 105&G230MB&2416&320.0\ o8ma820i0&52852.0& 105&G230MB&2557&400.0\ o8ma820n0&52852.1& 105&G230MB&2697&340.0\ o8ma82060&52851.9& 105&G230MB&2836&300.0\ o8ma821i0&52852.4& 105&G230MB&2976&300.0\ o8ma82160&52852.3& 105&G230MB&3115&300.0\ o8ma821a0&52852.3& 105&G430M&3165& 90.0\ o8ma820q0&52852.1& 105&G430M&3423& 90.0\ o8ma820m0&52852.1& 105&G430M&3680& 52.0\ o8ma821e0&52852.3& 105&G430M&3936& 26.0\ o8ma82070&52851.9& 105&G430M&4194& 18.0\ o8ma821o0&52852.4& 105&G430M&4961& 32.0\ o8ma820r0&52852.1& 105&G430M&5216& 16.0\ o8ma821j0&52852.4& 105&G430M&5471& 34.0\ o8ma820j0&52852.1& 105&G750M&5734& 6.0\ o8ma820a0&52852.0& 105&G750M&6252& 8.0\ o8ma92070&52904.3& 153&G230MB&1854&600.0\ o8ma920i0&52904.4& 153&G230MB&1995&600.0\ o8ma920m0&52904.4& 153&G230MB&2135&600.0\ o8ma920x0&52904.5& 153&G230MB&2276&300.0\ o8ma920s0&52904.4& 153&G230MB&2416&600.0\ o8ma920a0&52904.3& 153&G230MB&2557&800.0\ o8ma920d0&52904.4& 153&G230MB&2697&340.0\ o8ma92040&52904.3& 153&G230MB&2836&300.0\ o8ma920u0&52904.5& 153&G230MB&2976&340.0\ o8ma920p0&52904.4& 153&G230MB&3115&300.0\ o8ma920o0&52904.4& 153&G430M&3305& 90.0\ o8ma920e0&52904.4& 153&G430M&3423& 90.0\ o8ma920c0&52904.4& 153&G430M&3680& 52.0\ o8ma920q0&52904.4& 153&G430M&3936& 26.0\ o8ma92050&52904.3& 153&G430M&4194& 18.0\ o8ma920w0&52904.5& 153&G430M&4961& 36.0\ o8ma920f0&52904.4& 153&G430M&5216& 16.0\ o8ma920t0&52904.5& 153&G430M&5471& 34.0\ o8ma920b0&52904.4& 153&G750M&5734& 6.0\ o8ma920z0&52904.5& 153&G750M&6252& 8.0\ o8ma830g0&52960.7&-142&G430M&4961& 18.0\ o8ma830e0&52960.6&-142&G430M&5216& 16.0\ o8ma830b0&52960.6&-142&G430M&5471& 32.0\ o8ma830d0&52960.6&-142&G750M&5734& 8.0\ o8ma940g0&53071.3& -28&G230MB&1854&430.0\ o8ma940i0&53071.3& -28&G230MB&2135&320.0\ o8ma940m0&53071.3& -28&G230MB&2416&450.0\ o8ma94070&53071.3& -28&G230MB&2557&410.0\ o8ma940e0&53071.3& -28&G230MB&2697&323.0\ o8ma94020&53071.2& -28&G230MB&2836&320.0\ o8ma940s0&53071.3& -28&G230MB&2976&323.0\ o8ma940j0&53071.3& -28&G230MB&3115&255.0\ o8ma940h0&53071.3& -28&G430M&3165& 90.0\ o8ma940a0&53071.3& -28&G430M&3423& 90.0\ o8ma94090&53071.3& -28&G430M&3680& 52.0\ o8ma940k0&53071.3& -28&G430M&3936& 26.0\ o8ma94030&53071.2& -28&G430M&4194& 18.0\ o8ma940p0&53071.3& -28&G430M&4961& 36.0\ o8ma940b0&53071.3& -28&G430M&5216& 16.0\ o8ma940n0&53071.3& -28&G430M&5471& 34.0\ o8ma94080&53071.3& -28&G750M&5734& 6.0\ o8ma940r0&53071.3& -28&G750M&6252& 10.0\ [rrrr]{}\ \ 50891.7&1998.21&0.076&9.194\ 51230.6&1999.14&0.155&8.425\ 52016.9&2001.29&0.189&8.211\ 52294.2&2002.05&0.230&7.996\ 
53459.7&2002.51&0.255&7.885\ 52683.0&2003.12&0.263&7.850\ 52777.6&2003.37&0.224&8.025\ 52791.9&2003.41&0.239&7.953\ 52813.1&2003.47&0.202&8.136\ 52852.2&2003.58&0.126&8.652\ 52904.4&2003.72&0.132&8.599\ 53071.3&2004.18&0.218&8.053\ \ \ \ 50891.6&1998.21&0.167&8.346\ 51230.6&1999.14&0.403&7.387\ 52016.9&2001.29&0.448&7.273\ 52294.2&2002.05&0.550&7.048\ 53459.7&2002.51&0.592&6.969\ 52683.0&2003.12&0.535&7.080\ 52777.6&2003.37&0.498&7.156\ 52791.9&2003.41&0.546&7.058\ 52813.1&2003.47&0.478&7.202\ 52852.2&2003.58&0.326&7.618\ 52904.4&2003.72&0.346&7.552\ 53071.3&2004.18&0.687&6.807\ \ \ \ 50891.6&1998.21&0.237&7.964\ 51230.6&1999.14&0.516&7.119\ 52016.9&2001.29&0.524&7.101\ 52294.1&2002.05&0.625&6.910\ 53459.7&2002.51&0.626&6.908\ 52683.0&2003.12&0.531&7.088\ 52777.6&2003.37&0.553&7.043\ 52791.9&2003.41&0.619&6.920\ 52813.1&2003.47&0.644&6.878\ 52852.2&2003.58&0.572&7.007\ 52904.4&2003.72&0.653&6.862\ 53071.3&2004.18&0.894&6.521\ \ \ \ 50891.6&1998.21&0.352&7.533\ 51230.6&1999.14&0.605&6.945\ 52016.9&2001.29&0.691&6.802\ 52183.2&2001.75&0.823&6.611\ 52294.1&2002.05&0.703&6.782\ 53459.7&2002.51&0.711&6.771\ 52683.0&2003.12&0.668&6.838\ 52727.3&2003.24&0.697&6.792\ 52764.4&2003.34&0.688&6.807\ 52777.6&2003.37&0.732&6.738\ 52791.9&2003.41&0.754&6.707\ 52813.1&2003.47&0.849&6.578\ 52825.5&2003.51&0.906&6.507\ 52852.2&2003.58&0.851&6.575\ 52904.5&2003.72&1.015&6.384\ 52960.6&2003.88&1.124&6.273\ 53071.3&2004.18&1.185&6.215\ [^1]: So far as we know, this idea was first voiced by Humphreys at two meetings in 2002, but it did not appear in the published proceedings [^2]: At least this is true for visual and UV wavelengths. The near-infrared photometry reported by @whitelock94 and @whitelock04 may be strongly dominated by the central star. Those observations probably represent free-free emission in the wind at larger radii than the visual wavelength data. They show both the spectroscopic events and the brightening trend better than other ground-based measurements. [^3]: “The Kinematics and Dynamics of the Material Surrounding Eta Carinae,” B. Dorland, principal investigator [^4]: “Following Eta Carinae’s Change of State,” K. Davidson, principal investigator [^5]: http://archive.stsci.edu [^6]: For conversions between apparent and linear size scales we assume that $\eta$ Car’s distance is 2300 pc [@dh97]. [^7]: The change in color is modest, however, too small to confidently quote here. Dust near $\eta$ Car has long been known to have an abnormally small reddening/extinction ratio, see @dh97, @fos95, and refs. cited therein. [^8]: Here, lacking a specific dust-formation model for the unusual case of $\eta$ Car, we suppose that appreciable grain condensation begins in the outward flowing material at the location where the equilibrium grain temperature is around 1000 K. This is a fairly conventional assumption and the precise choice of temperature has little effect on our reasoning. The quoted time-after-ejection assumes typical ejecta speeds of 200–700 km s$^{-1}$. [^9]: The measured flux was approximately a power law $f_{\nu} \sim {\nu}^{-3.7}$ at wavelengths around 4 ${\mu}$m. Assuming a typical emission efficiency dependence $Q_{\nu} \sim {\nu}$, the observed spectral slope can be explained by a grain temperature distribution $dN/dT \sim T^{-8.7}$. The result noted in the text is obtained by normalizing this to match the observed flux around 4 or 5 $\mu$m and then integrating the total emitted flux at all wavelengths due to grains above 500 K. [^10]: http://lilen.fcaglp.unlp.edu.ar/EtaCar/
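As a quick consistency check of the grain-temperature distribution quoted in the preceding footnote about the measured $f_{\nu} \sim {\nu}^{-3.7}$ power law (this is our own algebra, under the assumptions stated there: an emission efficiency $Q_{\nu} \sim \nu$, grains emitting as blackbodies modified by $Q_{\nu}$, where $B_{\nu}(T)$ denotes the Planck function, and a temperature range broad enough that the dimensionless integral below can be treated as a constant): $$f_{\nu}\;\propto\;\int Q_{\nu}\,B_{\nu}(T)\,\frac{dN}{dT}\,dT\;\propto\;\nu\cdot\nu^{3}\int\frac{T^{-8.7}}{e^{h\nu/kT}-1}\,dT\;\propto\;\nu^{-3.7}\int_{0}^{\infty}\frac{x^{6.7}}{e^{x}-1}\,dx,$$ using the substitution $x=h\nu/kT$. The frequency dependence collapses to the observed slope $f_{\nu}\sim{\nu}^{-3.7}$, so a distribution $dN/dT\sim T^{-8.7}$ is indeed consistent with the measured 4–5 ${\mu}$m spectrum. 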
--- abstract: 'For $n\geq 3$, define $T_n$ to be the theory of the generic $K_n$-free graph, where $K_n$ is the complete graph on $n$ vertices. We prove a graph theoretic characterization of dividing in $T_n$, and use it to show that forking and dividing are the same for complete types. We then give an example of a forking and nondividing formula. Altogether, $T_n$ provides a counterexample to a recent question of Chernikov and Kaplan.' address: | Department of Mathematics, Statistics and Computer Science\ University of Illinois at Chicago\ Chicago, IL, 60607, USA author: - Gabriel Conant bibliography: - 'D:/Gabe/UIC/gconant/Math/BibTeX/biblio.bib' title: Forking and dividing in Henson graphs --- Introduction ============ Classification in model theory, beginning with stability theory, is strongly fueled by the study of abstract notions of independence, the frontrunners of which are forking and dividing. These notions have proved useful in the abstract treatment of independence and dimension in the stable setting, and initiated a quest to understand when they are useful in the unstable context. Significant success was achieved in the class of simple theories (see [@KiPi]). Meaningful results have also been found for $\NIP$ theories and, more generally, $\NTP_2$ theories, which include both simple and $\NIP$. A notable example is the following recent result from [@ChKa]. \[ChKa\] Suppose $\M$ is a sufficiently saturated monster model of an $\NTP_2$ theory. Given $C\subset\M$, the following are equivalent. 1. A partial type forks over $C$ if and only if it divides over $C$. 2. $C$ is an **extension base for nonforking**, i.e. if $\pi(\xbar)$ is a partial type with parameters from $C$, then $\pi(\xbar)$ does not fork over $C$. In general, if condition $(i)$ holds for a set $C$, then condition $(ii)$ does as well. In fact, condition $(ii)$ should be thought of as the minimal requirement for nonforking to be meaningful for types over $C$. In particular, if $C$ is *not* an extension base for nonforking, then there are types with no nonforking extensions. There are few examples where condition $(ii)$ fails, and most of them do so by exploiting some kind of circular ordering (see e.g [@TeZi Exercise 7.1.6]). On the other hand, condition $(i)$ is, *a priori*, harder to achieve. It is useful because it allows us to ignore the subtlety of forking versus dividing. However, in every textbook example where condition $(i)$ fails, it is because condition $(ii)$ also fails. This leads to the natural question, which is asked in [@ChKa], of whether the result above extends to classes of theories other than $\NTP_2$. In this paper, we give an example of an $\NSOP_4$ theory in which condition $(ii)$ holds for all sets, but condition $(i)$ fails. We will consider forking and dividing in the theory of a well-known structure: the generic $K_n$-free graph, also known as the *Henson graph*, a theory with $\TP_2$ and $\NSOP_4$. Our main goal is to characterize forking and dividing in the theory of the Henson graph. We will show that dividing independence has a meaningful graph-theoretic interpretation, and has something to say about the combinatorics of the structure. Using this characterization, we will show that despite the complexity of the theory, forking and dividing are the same for complete types. As a consequence, every set is an extension base for nonforking, and so nonforking/nondividing extensions always exist. On the other hand, we will show that there are formulas which fork, but do not divide. 
Acknowledgements {#acknowledgements .unnumbered} ---------------- I would like to thank Lynn Scow, John Baldwin, Artem Chernikov, Dave Marker, Caroline Terry, and Phil Wesolek for their part in the development of this project. Model Theoretic Preliminaries {#model theory} ============================= This section contains the definitions and basic facts concerning forking and dividing. We first specify some conventions that will be maintained throughout the paper. If $T$ is a complete first order theory and $\M$ is a monster model of $T$, we write $A\subset\M$ to mean that $A$ is a “small" subset of $\M$, i.e. $A\seq\M$ and $\M$ is $|A|^+$-saturated. We use the letters $a,b,c,\ldots$ to denote singletons, and $\abar,\bbar,\cbar,\ldots$ to denote tuples (of possibly infinite length). Suppose $C\subset\M$, $\pi(\xbar,\ybar)$ is a partial type with parameters from $C$, and $\bbar\in\M$. 1. $\pi(\xbar,\bbar)$ **divides over $C$** if there is a $C$-indiscernible sequence $(\bbar^l)_{l<\omega}$, with $\bbar^0=\bbar$, such that $\bigcup_{l<\omega}\pi(\xbar,\bbar^l)$ is inconsistent. 2. $\pi(\xbar,\bbar)$ **forks over $C$** if there is some $D\supseteq \bbar C$ such that if $p\in S_n(D)$ extends $\pi(\xbar,\bbar)$ then $p$ divides over $C$. 3. A formula $\vphi(\xbar,\bbar)$ **forks** (resp. **divides**) over $C$ if $\{\vphi(\xbar,\bbar)\}$ forks (resp. divides) over $C$. The following basic facts can be found in [@TeZi]. \[facts\] Let $C\subset\M$. (a) If a complete type forks (resp. divides) over $C$ then it contains some formula that forks (resp. divides) over $C$. (b) If $\pi(\xbar)$ is a consistent type over $C$ then $\pi(\xbar)$ does not divide over $C$. Nondividing and nonforking are used to define ternary relations on small subsets of $\M$, given by 1. $A\ind^d_C B$ if and only if $\tp(A/BC)$ does not divide over $C$, 2. $A\ind^f_C B$ if and only if $\tp(A/BC)$ does not fork over $C$. These relations were originally defined to abstractly capture notions of independence and dimension in stable theories, and have been found to still be meaningful in more complicated theories as well. In particular, we will consider the interpretation of these notions in the unstable theories of certain homogeneous graphs. Graphs ====== Recall that a countable graph $G$ is **universal** if any countable graph is isomorphic to an induced subgraph of $G$; and $G$ is **homogeneous** if any graph isomorphism between finite subsets of $G$ extends to an automorphism of $G$. The canonical example of such a graph is the countable *random graph*, i.e. the Fraïssé limit of the class of finite graphs. In [@Henson], a new family of countable homogeneous graphs was introduced: the generic $K_n$-free graphs, for $n\geq 3$, which are often called the *Henson graphs*. For a particular $n\geq 3$, there is a unique such graph up to isomorphism. Fix $n\geq 3$ and let $K_n$ be the complete graph on $n$ vertices. The generic $K_n$-free graph, $\cH_n$, is the unique countable graph such that 1. $\cH_n$ is $K_n$-free, 2. any finite $K_n$-free graph is isomorphic to an induced subgraph of $\cH_n$, 3. any graph isomorphism between finite subsets of $\cH_n$ extends to an automorphism of $\cH_n$. Given $n\geq 3$, $\cH_n$ can also be constructed as the Fraïssé limit of the class of finite $K_n$-free graphs. We study graph structures in the graph language $\cL=\{R\}$, where $R$ is interpreted as the binary edge relation. We consider 1. $T_0$, the complete theory of the random graph, 2. $T_n=\Th(\cH_n)$, for $n\geq 3$. 
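It may help to record the extension property behind this genericity, since it is what is used later whenever a new vertex with prescribed edges is produced (for instance, a vertex adjacent to every element of a small $K_{n-1}$-free set). This reformulation is standard, though it is not stated explicitly in this paper: for all finite disjoint $U,V\subseteq\cH_n$ such that $U$ is $K_{n-1}$-free, $$\cH_n\models\exists v\,\Big(\bigwedge_{u\in U\cup V}v\neq u\;\wedge\;\bigwedge_{u\in U}v{\makebox[.14in]{$R$}}u\;\wedge\;\bigwedge_{w\in V}\neg\, v{\makebox[.14in]{$R$}}w\Big).$$ Together with $K_n$-freeness, these extension axioms axiomatize $T_n$, and in a sufficiently saturated model the same conclusion holds for small infinite $U$ and $V$ by compactness. 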
It is a well-known fact (and a standard exercise) that each of these theories is $\aleph_0$-categorical with quantifier elimination. Fix $n\geq 3$ and fix $\H_n\models T_n$, a sufficiently saturated “monster" model of $T_n$. As $\H_n$ is a graph, we can embed it in a *larger* sufficiently saturated “monster" model $\G\models T_0$. Note that $\H_n$ is then a subgraph of $\G$, but not an elementary substructure. Let $\kappa(\H_n)=\sup\{\kappa:\textnormal{$\H_n$ is $\kappa$-saturated}\}$. For the rest of the paper, $n$, $\H_n$, and $\G$ are fixed. By saturation, we have the following fact. \[gext\] Suppose $C\subset\H$ and $X\seq\G$, such that $X$ is $K_n$-free, $C\seq X$, and $|X|\leq\kappa(\H)$. Then there is a graph embedding $f:X\func\H$ such that $f|_C=\id_C$. The remainder of this section is devoted to specifying notation and conventions concerning the language $\cL$. First, we consider types. Suppose $C\subset\G$, with $|C|<\kappa(\H_n)$. 1. We only consider partial types $\pi(\xbar)$ such that $|\xbar|\leq\kappa(\H_n)$. Furthermore, we will assume types are “symmetrically closed". For example if $c\in C$ then $x{\makebox[.14in]{$R$}}c\in\pi(\xbar)$ if and only if $c{\makebox[.14in]{$R$}}x\in\pi(\xbar)$. 2. An **$R$-type over $C$** is a collection $\pi(\xbar)$ of atomic and negated atomic $\cL$-formulas, none of which is of the form $x_i=c$, where $c\in C$. When we say that $\pi(\xbar,\ybar)$ is an $R$-type over $C$, we will assume further that $\pi(\xbar,\ybar)$ does not contain $x_i=x_j$ or $y_i=y_j$, for some $i\neq j$. 3. Suppose $\pi(\xbar)$ is an $R$-type over $C$. An **optimal solution** of $\pi(\xbar)$ is a tuple $\abar\models\pi(\xbar)$ such that 1. $a_i\neq a_j$ for all $i\neq j$ and $a_i\not\in C$ for all $i$, 2. $a_i{\makebox[.14in]{$R$}}a_j$ if and only if $x_i{\makebox[.14in]{$R$}}x_j\in\pi(\xbar)$, 3. given $c\in C$, $a_i{\makebox[.14in]{$R$}}c$ if and only if $x_i{\makebox[.14in]{$R$}}c\in\pi(\xbar)$. We will frequently use the following fact, which says that we can always find optimal solutions of $R$-types. Suppose $C\subset\H_n$ and $\pi(\xbar)$ is an $R$-type over $C$. 1. $\pi(\xbar)$ is consistent with $T_0$ if and only if it has an optimal solution in $\G$. 2. $\pi(\xbar)$ is consistent with $T_n$ if and only if it has an optimal solution in $\H_n$. This is a straightforward exercise, which we leave to the reader. The idea is that a type cannot prove that an edge exists in a graph without explicitly saying so. Moreover, removing extra edges to “optimize" the solution of a consistent type is always possible and, in the case of $T_n$, will not conflict with the requirement that the solution be $K_n$-free. Next, we specify notation and conventions concerning $\cL$-formulas. Suppose $C\subset\G$. 1. Let $\cL_0(C)$ be the collection of conjunctions of atomic and negated atomic $\cL$-formulas, with parameters from $C$, such that no conjunct is of the form $x=c$, where $x$ is a variable and $c\in C$. When we write $\vphi(\xbar,\ybar)\in\cL_0(C)$, we will assume further that no conjunct of $\vphi(\xbar,\ybar)$ is of the form $x_i=x_j$ or $y_i=y_j$, for some $i\neq j$. 2. Given $\vphi(\xbar)\in\cL_0(C)$ and $\theta(\xbar)$, an atomic or negated atomic formula, we write “$\vphi(\xbar)\rhd\theta(\xbar)$" if $\theta(\xbar)$ is a conjunct of $\vphi(\xbar)$. 3. We will assume $\cL_0(C)$-formulas are “symmetrically closed". For example $\vphi(\xbar)\rhd x{\makebox[.14in]{$R$}}c$ if and only if $\vphi(\xbar)\rhd c{\makebox[.14in]{$R$}}x$. 4. 
Let $\cL_R(C)$ be the collection of formulas $\vphi(\xbar,\ybar)\in\cL_0(C)$ such that no conjunct is of the form $x_i=y_j$. The main result of this paper will be a characterization of forking and dividing in $T_n$. We will use the following characterization of dividing in $T_0$, which is a standard exercise (see e.g. [@TeZi]). \[T0 forking\] Fix $C\subset\G$ and $\vphi(\xbar,\ybar)\in\cL_0(C)$. Suppose $\bbar\in\G\backslash C$ is such that $\vphi(\xbar,\bbar)$ is consistent. Then $\vphi(\xbar,\bbar)$ divides over $C$ if and only if $\vphi(\xbar,\bbar)\rhd x_i=b$ for some $b\in\bbar$. Consequently, if $A,B,C\subset\G$ then $A\ind^d_C B\Leftrightarrow A\cap B\seq C$. $T_0$ is a standard example of a *simple theory*, and so the previous fact is also a characterization of forking. On the other hand, $T_n$ is non-simple. Indeed, the Henson graph is a canonical example where $\ind^f$ fails *amalgamation over models* (see [@KiPi]). A direct proof of this (for $n=3$) can be found in [@Hartstab Example 2.11(4)]. The precise classification of $T_n$ is well-known, and summarized by the following result. $T_n$ is $\TP_2$, $\SOP_3$, and $\NSOP_4$. See [@ChNTP2] and [@Sh500] for definitions of these properties. The proof of $\TP_2$ can be found in [@ChNTP2] for $n=3$. $\SOP_3$ and $\NSOP_4$ are demonstrated in [@Sh500] for $n=3$. The generalizations of these arguments to $n\geq 3$ are fairly straightforward. However, $\NSOP_4$ for all $n\geq 3$ also follows from a more general result in [@PaSOP4]. Dividing in $T_n$ {#divsec} ================= The goal of this section is to find a graph theoretic characterization of dividing independence in $T_n$. Therefore, when we say that a partial type divides over $C\subset\H_n$, we mean with respect to the theory $T_n$. \[no vertical\] Suppose $(\bbar^l)_{l<\omega}$ is an indiscernible sequence in $\H_n$. 1. $\H_n\models\neg b^k_i{\makebox[.14in]{$R$}}b^l_i$ for all $k<l<\omega$ and $1\leq i\leq |\bbar^0|$. 2. If $l(\bbar^0)<n-1$ then $\bigcup_{l<\omega}\bbar^l$ is $K_{n-1}$ free. Part $(a)$. Suppose not. By indiscernibility, $\H_n\models b^k_i{\makebox[.14in]{$R$}}b^l_i$ for all $k<l<\omega$. In particular, $\{b^1_i,\ldots,b^n_i\}\cong K_n$, which is a contradiction. Part $(b)$. Suppose $S\seq B:=\bigcup_{l<\omega}\bbar^l$ with $|S|=n-1$ and, for $1\leq i\leq |\bbar^0|$, let $S_i=S\cap\{b^l_i:l<\omega\}$. Since $l(\bbar^0)<n-1$, there is some $i$ such that $|S_i|\geq 2$. By part (a), $S\not\cong K_{n-1}$. We first define a graph theoretic binary relation on disjoint graphs, which will capture the notion of dividing in $T_n$. \[bound def\]$~$ 1. Suppose $B,C\subset\H_n$ are disjoint. Then $B$ is **$n$-bound to $C$**, written $K_n(B/C)$, if there is $B_0\seq BC$ such that 1. $|B_0|=n$ and $B_0\cap C\neq\emptyset\neq B_0\cap B$, 2. if $u,v\in B_0\cap C$ are distinct then $u{\makebox[.14in]{$R$}}v$, 3. if $u\in B_0\cap B$ and $v\in B_0\cap C$ then $u{\makebox[.14in]{$R$}}v$. We say $B_0$ **witnesses** $K_n(B/C)$. Informally, $B_0$ witnesses $K_n(B/C)$ if and only if the only thing preventing $B_0\cong K_n$ is a possible lack of edges between vertices in $B$. 2. Suppose $\vphi(\xbar,\ybar)\in\cL_R(C)$ and $\bbar\in\H_n\backslash C$ such that $\vphi(\xbar,\bbar)$ is consistent. Then $\bbar$ is **$\vphi$-$n$-bound to $C$**, written $K^\vphi_n(\bbar/C)$, if there is $B\seq\bbar$, with $0<|B|<n$, such that 1. $\neg K_n(B/C)$, 2. $K_n(B/\abar C)$ for all $\abar\models\vphi(\xbar,\bbar)$. We say $B$ **witnesses** $K^\vphi_n(\bbar/C)$. 
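To make Definition \[bound def\] concrete, here is a small worked example for $n=3$; the particular configuration is ours and is chosen only for illustration. Suppose $$C=\{c\},\qquad B=\{b_1,b_2\},\qquad b_1{\makebox[.14in]{$R$}}c,\quad b_2{\makebox[.14in]{$R$}}c,\quad \neg\, b_1{\makebox[.14in]{$R$}}b_2$$ (in $\H_3$ the last condition is automatic, since otherwise $cb_1b_2\cong K_3$). Then $B_0=\{c,b_1,b_2\}$ witnesses $K_3(B/C)$: it has size $3$ and meets both $B$ and $C$, the single vertex of $B_0\cap C$ trivially forms a clique, and both vertices of $B_0\cap B$ are adjacent to $c$. The only thing preventing $B_0\cong K_3$ is the missing edge between $b_1$ and $b_2$. If instead $\neg\, b_1{\makebox[.14in]{$R$}}c$, then $\{c,b_1,b_2\}$ is the only subset of $BC$ of size $3$, and it fails the requirement that every vertex of $B_0\cap B$ be adjacent to every vertex of $B_0\cap C$; hence $\neg K_3(B/C)$ in that case. 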
The main result of this section (Theorem \[Tn dividing\]) will show that $K^\vphi_n$ is the graph theoretic interpretation of dividing. In particular, for $\vphi(\xbar,\ybar)\in\cL_R(C)$ and $\bbar\in\H_n\backslash C$ with $\vphi(\xbar,\bbar)$ consistent, we will show that $\vphi(\xbar,\bbar)$ divides over $C$ if and only if $K_n^\vphi(\bbar/C)$. The reverse direction of the proof of this will use the following construction of a particular indiscernible sequence. \[Gamma def\] Fix $C\subset\H_n$ and $\bbar\in\H_n\backslash C$. We extend $\bbar$ to an infinite $C$-indiscernible sequence $(\bbar^l)_{l<\omega}$, such that $b^l_i\neq b^m_j$ and $\neg b^l_i{\makebox[.14in]{$R$}}b^m_j$ for all $l<m<\omega$ and $1\leq i,j\leq|\bbar|$. Note that $(\bbar^l)_{l<\omega}$ is $K_n$-free and so $(\bbar^l)_{l<\omega}$ is an indiscernible sequence in $\H_n$. Given $B\seq\bbar$, we let $\Gamma(C\bbar,B)$ be the graph expansion of $C\cup(\bbar^l)_{l<\omega}$ obtained by adding $b^l_i{\makebox[.14in]{$R$}}b^m_j$ if and only if $l<m$, $i<j$, and $b_i,b_j\in B$. We can embed $\Gamma(C\bbar,B)$ into $\G$ over $C$. Furthermore, if $\Gamma(C\bbar,B)$ is $K_n$-free, then we can embed $\Gamma(C\bbar,B)$ in $\H_n$ over $C$. In this case, if $\Gamma_0(C\bbar,B)$ is the image of $(\bbar^l)_{l<\omega}$, then $\Gamma_0(C\bbar,B)$ is a $C$-indiscernible sequence in $\H_n$. \[Gamma\] Let $C\subset\H_n$ and $\bbar\in\H_n\backslash C$. Suppose $\vphi(\xbar,\ybar)\in\cL_R(C)$ such that $\vphi(\xbar,\bbar)$ is consistent and $K_n^\vphi(\bbar/C)$, witnessed by $B\seq\bbar$. 1. $\Gamma(C\bbar,B)$ is $K_n$-free. 2. If $\Gamma_0(C\bbar,B)=(\bbar^l)_{l<\omega}$ then $\{\vphi(\xbar,\bbar^l):l<\omega\}$ is $(n-1)$-inconsistent with $T_n$. We may consider $\Gamma_0(C\bbar,B)$ as an indiscernible sequence in $\G$. Part $(a)$. Suppose $K_n\cong W\seq\Gamma(C\bbar,B)$. Since $C$ is $K_n$-free, $W\cap\Gamma_0(C\bbar,B)\neq\emptyset$. Say $W\cap\Gamma_0(C\bbar,B)=\{b^{l_1}_{i_1},\ldots,b^{l_r}_{i_r}\}$ with $l_1\leq\ldots\leq l_r$. Note that $i_s\neq i_t$ for all $1\leq s<t\leq r$ by Lemma \[no vertical\]$(a)$. Let $B_0=\{b_{i_1},\ldots,b_{i_r}\}$. Define $$V=(W\backslash\{b^{l_1}_{i_1},\ldots,b^{l_r}_{i_r}\})\cup\{b_{i_1},\ldots,b_{i_r}\}.$$ If $l_1=l_r$ then, since $\bbar^{l_1}\equiv_C\bbar$, it follows that $V\cong K_n$, which is a contradiction. Therefore $l_1<l_r$. By construction of $\Gamma(C\bbar,B)$ it follows that $b_{i_1},b_{i_r}\in B$. If $1\leq s\leq r$ then we either have $l_1<l_s$ or $l_s<l_r$, and in either case it follows that $b_{i_s}\in B$. Therefore $r\leq |B|\leq n-1$; in particular $C\cap W\neq \emptyset$. But then $V$ witnesses that $B$ is $n$-bound to $C$, which is a contradiction. Therefore $\Gamma(C\bbar,B)$ is $K_n$-free. Part $(b)$. By part $(a)$, we may consider $\Gamma_0(C\bbar,B)$ as an indiscernible sequence in $\H_n$. By indiscernibility, it suffices to show that the $R$-type $\pi(\xbar)=\{\vphi(\xbar,\bbar^l):0\leq l<n-1\}$ is inconsistent with $T_n$. So suppose, towards a contradiction, that $\pi(\xbar)$ is consistent with $T_n$ and let $\abar\in\H_n$ be an optimal solution. Then $\abar\models\vphi(\xbar,\bbar)$ so, by assumption, there is $D\seq BC\abar$ witnessing $K_n(B/C\abar)$. We have $D\cap B\neq\emptyset$. Moreover, $\neg K_n(B/C)$ implies $D\cap\abar\neq\emptyset$. To ease notation, let $$D\cap B=\{b_0,\ldots,b_k\}\text{ and }D\cap \abar=\{a_0,\ldots,a_m\}.$$ Note that $k<n-1$. Define $A_0=D\cap C\abar$ and $B_0=\{b^0_0,\ldots,b^k_k\}$. We make the following observations. 1. 
If $u,v\in A_0$ are distinct then $u{\makebox[.14in]{$R$}}v$. 2. If $b^i_i,b^j_j\in B_0$ are distinct, then $b^i_i{\makebox[.14in]{$R$}}b^j_j$ by construction of $\Gamma(C\bbar,B)$. 3. If $c\in A_0\cap C$ and $b^j_j\in B_0$ then $b^j_j{\makebox[.14in]{$R$}}c$ since $b_j{\makebox[.14in]{$R$}}c$ and $\bbar^j\equiv_C\bbar$. 4. If $a_i\in A_0\cap\abar$ and $b^j_j\in B_0$ then, since $\abar$ is an optimal solution of $\pi(\xbar)$, we have $$a_i{\makebox[.14in]{$R$}}b_j\Rightarrow\left(\vphi(\xbar,\bbar)\rhd x_i{\makebox[.14in]{$R$}}b_j\right)\Rightarrow\left(\vphi(\xbar,\bbar^j)\rhd x_i{\makebox[.14in]{$R$}}b^j_j\right)\Rightarrow a_i{\makebox[.14in]{$R$}}b^j_j.$$ These observations imply $A_0B_0\cong K_n$, which is a contradiction since $A_0B_0\subset\H$. \[Tn dividing\] Let $C\subset\H_n$, $\vphi(\xbar,\ybar)\in\cL_R(C)$, and $\bbar\in\H_n\backslash C$ such that $\vphi(\xbar,\bbar)$ is consistent. Then $\vphi(\xbar,\bbar)$ divides over $C$ if and only if $K^\vphi_n(\bbar/C)$. $(\Leftarrow)$: Suppose $B\seq\bbar$ witnesses $K^\vphi_n(\bbar/C)$. Then $\Gamma(C\bbar,B)\seq\H_n$ and $\{\vphi(\xbar,\bbar^l):l<\omega\}$ is $(n-1)$-inconsistent by Lemma \[Gamma\]. So $\vphi(\xbar,\bbar)$ divides over $C$. $(\Rightarrow)$: Suppose $\vphi(\xbar,\bbar)$ divides over $C$. Then there is a $C$-indiscernible sequence $(\bbar^l)_{l<\omega}$, with $\bbar^0=\bbar$, such that $\{\vphi(\xbar,\bbar^l):l<\omega\}$ is inconsistent. Let $G=C\cup\bigcup_{l<\omega}\bbar^l$. Consider $G$ as a subgraph of $\G$, and note that $(\bbar^l)_{l<\omega}$ is still $C$-indiscernible in $\G$. Since $\vphi(\xbar,\ybar)\in\cL_R(C)$ and $\vphi(\xbar,\bbar)$ is consistent (in $\G$), it follows from Fact \[T0 forking\] that $\vphi(\xbar,\bbar)$ does not divide over $C$ in $\G$. Therefore there is an optimal realization $\dbar\in\G$ of $\pi(\xbar):=\{\vphi(\xbar,\bbar^l):l<\omega\}$. If $G\dbar$ is $K_n$-free then $G\dbar$ embeds in $\H_n$ over $G$, which is a contradiction. Therefore there is $K_n\cong W\seq G\dbar$. Note that $W\cap\dbar\neq\emptyset$ since $G$ is $K_n$-free. Without loss of generality, let $W\cap\dbar=\{d_1,\ldots,d_m\}$. Suppose, towards a contradiction, that $W\cap G\seq C$. Let $\abar\in\H_n$ be a solution to $\vphi(\xbar,\bbar)$. Since $\dbar$ is an optimal realization of $\pi(\xbar)$, we make the following observations. 1. If $1\leq i,j\leq m$ are distinct then $a_i{\makebox[.14in]{$R$}}a_j$. 2. If $b\in W\cap G$ then $a_i{\makebox[.14in]{$R$}}b$ for all $1\leq i\leq m$. Therefore, $K_n\cong(W\cap G)\cup\{a_1,\ldots,a_m\}$, which is a contradiction, and so $W\cap(G\backslash C)\neq\emptyset$. Let $W\cap(G\backslash C)=\{b^{l_1}_{j_1},\ldots,b^{l_k}_{j_k}\}$ and note that $1\leq k<n$. By Lemma \[no vertical\], $j_s\neq j_t$ for all $s\neq t$, so without loss of generality, let $W\cap(G\backslash C)=\{b^{l_1}_1,\ldots,b^{l_k}_k\}$. Let $B=\{b_1,\ldots,b_k\}$. *Claim 1*: $\neg K_n(B/C)$. *Proof*: Suppose $X\seq BC$ witnesses $K_n(B/C)$. By indiscernibility, if $B_0=\{b^{l_s}_s:b_s\in B\cap X\}$, then $(X\cap C)\cup B_0$ witnesses that $B_0$ is $n$-bound to $C$. But $b^{l_s}_s{\makebox[.14in]{$R$}}b^{l_t}_t$ for all $s\neq t$, so $(X\cap C)\cup B_0\cong K_n$, which is a contradiction.[$_{\sslash}$]{} *Claim 2*: If $\abar\in\H_n$ is a solution of $\vphi(\xbar,\bbar)$ then $K_n(B/C\abar)$. *Proof*: We show $(W\cap C)\cup B\cup\{a_1,\ldots,a_m\}$ witnesses $K_n(B/C\abar)$, which means verifying all of the necessary relations. Recall that $\dbar$ is an optimal solution to $\pi(\xbar)$. 1. 
If $1\leq i\neq j\leq m$ then $d_i{\makebox[.14in]{$R$}}d_j\Rightarrow\left(\vphi(\xbar,\bbar)\rhd x_i{\makebox[.14in]{$R$}}x_j\right)\Rightarrow a_i{\makebox[.14in]{$R$}}a_j$. 2. If $1\leq i\leq m$ and $c\in W\cap C$ then $d_i{\makebox[.14in]{$R$}}c\Rightarrow\left(\vphi(\xbar,\bbar)\rhd x_i{\makebox[.14in]{$R$}}c\right)\Rightarrow a_i{\makebox[.14in]{$R$}}c$. 3. If $1\leq i\leq m$ and $1\leq s\leq k$ then $d_i{\makebox[.14in]{$R$}}b^{l_s}_s\Rightarrow\left(\vphi(\xbar,\bbar^{l_s})\rhd x_i{\makebox[.14in]{$R$}}b^{l_s}_s\right)\Rightarrow\left(\vphi(\xbar,\bbar )\rhd x_i{\makebox[.14in]{$R$}}b_s\right)\Rightarrow a_i{\makebox[.14in]{$R$}}b_s$.[$_{\sslash}$]{} Together, Claims 1 and 2 imply $K^\vphi_n(\bbar/C)$, as desired. We can now give the full characterization of nondividing formulas in $T_n$, and the ternary relation $\ind^d$ on sets, which gives the analogue of Fact \[T0 forking\] for $T_n$. \[div ind n\]$~$ 1. Suppose $C\subset\H_n$, $\vphi(\xbar,\ybar)\in\cL_0(C)$, and $\bbar\in\H_n\backslash C$ such that $\vphi(\xbar,\bbar)$ is consistent. Then $\vphi(\xbar,\bbar)$ divides over $C$ if and only if $\vphi(\xbar,\bbar)\rhd x_i=b$ for some $b\in\bbar$, or $\vphi(\xbar,\ybar)\in\cL_R(C)$ and $K^\vphi_n(\bbar/C)$. 2. Suppose $A,B,C\subset\H_n$. Then $A\ind^d_C B$ if and only if 1. $A\cap B\seq C$, and 2. for all $\bbar\in B\backslash C$, if $K_n(\bbar/AC)$ then $K_n(\bbar/C)$. Part $(a)$ follows immediately from Theorem \[Tn dividing\]. For part $(b)$, let $\abar$ enumerate $A$. Note that $\abar$ is an optimal solution to $\tp(\abar/BC)$. $(\Rightarrow)$: Suppose $A\ind^d_C B$. Then $A\cap B\seq C$ is immediate. For condition $(ii)$, fix $\bbar\in B\backslash C$ and suppose, towards a contradiction, that $\neg K_n(\bbar/C)$ and $K_n(\bbar/AC)$. Fix $X\seq AC\bbar$ witnessing $K_n(\bbar/AC)$. Let $C_0=X\cap C$. Without loss of generality, let $X\cap A=\{a_1,\ldots,a_m\}$ and $X\cap \bbar=\bbar_*=\{b_1,\ldots,b_k\}$. Define an $\cL_R(C)$-formula $\vphi(\xbar,\ybar)$, where $\xbar=(x_1,\ldots,x_m)$, $\ybar=(y_1,\ldots,y_k)$, and $\vphi(\xbar,\ybar)$ expresses 1. $\xbar$ is a complete graph and $\xbar{\makebox[.14in]{$R$}}\ybar$, 2. $\xbar{\makebox[.14in]{$R$}}C_0$ and $\ybar{\makebox[.14in]{$R$}}C_0$, 3. $C_0$ is a complete graph. Then $K_n^\vphi(\bbar_*/C)$ and $\vphi(\xbar,\bbar_*)\in\tp(A/BC)$. Therefore $A\nind^d_C B$ by Theorem \[Tn dividing\], which is a contradiction. $(\Leftarrow)$: Suppose $A\nind^d_C B$. Then there is some $\vphi(\xbar,\ybar)\in\cL_0(C)$ and $\bbar\in B\backslash C$ such that $\vphi(\xbar,\bbar)$ divides over $C$ and $\vphi(\xbar,\bbar)\in\tp(A/BC)$. If $\vphi(\xbar,\ybar)\rhd x_i=y_j$ for some $i,j$, then $a_i=b_j\in(A\cap B)\backslash C$ and $(i)$ fails. Otherwise, $\vphi(\xbar,\ybar)\in\cL_R(C)$ and $K^\vphi_n(\bbar/C)$. It follows that $\neg K_n(\bbar/C)$ and $K_n(\bbar/AC)$, so $(ii)$ fails. The theorem translates the model theoretic notion of dividing to the graph theoretic notion $K^\vphi_n(\bbar/C)$. Although the definition of $K^\vphi_n(\bbar/C)$ implies that we must check all solutions of $\vphi$, it suffices to check an optimal one. \[Tn dividing 2\] Let $C\subset\H_n$, $\vphi(\xbar,\ybar)\in\cL_R(C)$, and $\bbar\in\H_n\backslash C$ such that $\vphi(\xbar,\bbar)$ is consistent. Let $\abar$ be an optimal solution of $\vphi(\xbar,\bbar)$. Then $\vphi(\xbar,\bbar)$ divides over $C$ if and only if there is $B\seq\bbar$ such that $\neg K_n(B/C)$ and $K_n(B/C\abar)$. 
By Theorem \[Tn dividing\], we need to show $K_n^\vphi(\bbar/C)$ if and only if there is $B\seq\bbar$ such that $\neg K_n(B/C)$ and $K_n(B/C\abar)$. The forward direction is clear. Conversely, suppose $B\seq\bbar$ such that $\neg K_n(B/C)$ and $K_n(B/C\abar)$. Let $\dbar$ be any solution to $\vphi(\xbar,\bbar)$. We want to show $K_n(B/C\dbar)$. Let $B_0\seq BC\abar$ witness $K_n(B/C\abar)$. Define $C_0=(B_0\cap C)$ and $D=\{d_i:a_i\in B_0\cap\abar\}$. Since $\abar$ is optimal, we can make the following observations to show that $(B_0\cap B)C_0D$ witnesses $K_n(B/C\dbar)$. 1. If $c_1,c_2\in C_0$ then $c_1{\makebox[.14in]{$R$}}c_2$ by assumption. 2. If $c\in C_0$ and $d_i\in D$ then $a_i{\makebox[.14in]{$R$}}c\Rightarrow\left(\vphi(\xbar,\bbar)\rhd x_i{\makebox[.14in]{$R$}}c\right)\Rightarrow d_i{\makebox[.14in]{$R$}}c$. 3. If $c\in C_0$ and $b\in B_0\cap B$ then $b{\makebox[.14in]{$R$}}c$ by assumption. 4. If $d_i\in D$ and $b\in B_0\cap B$ then $a_i{\makebox[.14in]{$R$}}b\Rightarrow\left(\vphi(\xbar,\bbar)\rhd x_i{\makebox[.14in]{$R$}}b\right)\Rightarrow d_i{\makebox[.14in]{$R$}}b$. We end this section by giving some examples and traits of dividing formulas in $T_n$. We will begin using the following notation. If $A$ and $B$ are sets we write $A{\makebox[.14in]{$R$}}B$ to mean $a{\makebox[.14in]{$R$}}b$ for all $a\in A$ and $b\in B$. On the other hand, $A{\makebox[.14in]{$\not\!\! R$}}B$ means $\neg a{\makebox[.14in]{$R$}}b$ for all $a\in A$ and $b\in B$. \[dividing example\] Suppose $C\subset\H_n$ and $b_1,\ldots,b_{n-1}\in\H_n\backslash C$ are distinct. Then the formula $$\vphi(x,\bbar):=\bigwedge_{i=1}^{n-1}xRb_i$$ divides over $C$ if and only if $\neg K_n(\bbar/C)$. First, if $\bbar\cong K_{n-1}$ then $\vphi(x,\bbar)$ is inconsistent and thus divides over $C$. Furthermore, in this case $\neg K_n(\bbar/C)$ since, if so, then there is some $c\in C$ such that $c{\makebox[.14in]{$R$}}\bbar$ and so $c\bbar\cong K_n$. Therefore we may assume $\bbar\not\cong K_{n-1}$. $(\Rightarrow)$: If $\vphi(x,\bbar)$ divides over $C$ then, by Theorem \[Tn dividing\], there is some $B\seq\bbar$ such that $\neg K_n(B/C)$ and $K_n(B/Ca)$ for any $a\models\vphi(x,\bbar)$. Let $a\models\vphi(x,\bbar)$ such that $a{\makebox[.14in]{$\not\!\! R$}}C$. Let $X\seq CBa$ witness $K_n(B/Ca)$. Then $\neg K_n(B/C)$ implies $a\in X$, and so $X\cap C=\emptyset$ since $a{\makebox[.14in]{$\not\!\! R$}}C$. Therefore $X\seq Ba\seq\bbar a$, $|X|=n$, and $|\bbar|=n-1$. It follows that $B=\bbar$, and so $\neg K_n(\bbar/C)$. $(\Leftarrow)$: Note that if $a$ realizes $\vphi(x,\bbar)$, then $K_n(\bbar/a)$. So if $\neg K_n(\bbar/C)$ then $\bbar$ itself witnesses $K^\vphi_n(\bbar/C)$. By Theorem \[Tn dividing\], $\vphi(x,\bbar)$ divides over $C$. \[no R\] Let $C\subset\H$ and $\vphi(\xbar,\ybar)\in\cL_R(C)$ such that $\vphi(\xbar,\ybar){\!\not\!\rhd}x_i{\makebox[.14in]{$R$}}y_j$ for all $i,j$. If $\bbar\in\H\backslash C$ such that $\vphi(\xbar,\bbar)$ is consistent, then $\vphi(\xbar,\bbar)$ does not divide over $C$. Suppose $B\seq\bbar$ is such that $\neg K_n(B/C)$. Let $\abar$ be an optimal solution to $\vphi(\xbar,\bbar)$. Then $\abar{\makebox[.14in]{$\not\!\! R$}}B$, so $\neg K_n(B/C\abar)$. By Corollary \[Tn dividing 2\], $\vphi(\xbar,\bbar)$ does not divide over $C$. \[parameters n\] Let $C\subset\H$ and $\vphi(\xbar,\ybar)\in\cL_R(C)$. Suppose $\bbar\in\H\backslash C$ such that $\vphi(\xbar,\bbar)$ is consistent and divides over $C$. 
Define $$R^\vphi=\{b\in C\bbar:\vphi(\xbar,\bbar)\rhd x_i{\makebox[.14in]{$R$}}b\textnormal{ for some $i$}\}\cup\{x_i:\vphi(\xbar,\bbar)\rhd x_i{\makebox[.14in]{$R$}}b\textnormal{ for some $b\in C\bbar$}\}.$$ Then $|R^\vphi|\geq n$ and $|\bbar\cap R^\vphi|>1$. By assumption, we have $K_n^\vphi(\bbar/C)$. If $\abar$ is an optimal solution of $\vphi(\xbar,\bbar)$ then there is some $X\seq C\bbar\abar$ witnessing $K_n(\bbar/C\abar)$. Note that $X\cap\abar\neq\emptyset$ since $\neg K_n(\bbar/C)$. Set $B=(X\cap C\bbar)\cup\{x_i:a_i\in X\}$. Then $|B|\geq n$ and $B\seq R^\vphi$ since $\abar$ is optimal. Finally, $|\bbar\cap B|=|\bbar\cap X|>1$, since otherwise $X\cong K_n$. Corollary \[parameters n\] says that if a formula from $\cL_R(C)$ divides then it needs to mention edges between at least $n$ vertices (and more than one parameter). This is not surprising since no consistent formula from $\cL_R(C)$ will divide in $T_0$, and so dividing in $T_n$ should come from the creation of a graph that is too close to $K_n$. Forking for complete types {#forksec} ========================== In this section, we use our characterization of $\ind^d$ in $T_n$ to show that forking and dividing are the same for complete types. The proof takes two steps, the first of which is to prove *full existence* for the following ternary relation on graphs. We take the following definition from [@Adgeo]. Given $A,B,C\subset\H_n$, define **edge independence** by $$\textstyle A\ind^R_C B\Leftrightarrow A\cap B\seq C\text{ and there is no edge from $A\backslash C$ to $B\backslash C$}.$$ \[fullext\] For all $A,B,C\subset\H_n$ there is $A'\equiv_C A$ such that $A'\ind^R_C B$. Fix $A,B,C\subset\H_n$ and enumerate $A\backslash(BC)=(a_i)_{i<\lambda}$. We define a graph $G=BC(a'_i)_{i<\lambda}\subset\G$, where each $a'_i$ is a new vertex, and 1. for all $i,j<\lambda$, $a'_i{\makebox[.14in]{$R$}}a'_j$ if and only if $a_i{\makebox[.14in]{$R$}}a_j$, 2. for all $i<\lambda$ and $c\in C$, $a'_i{\makebox[.14in]{$R$}}c$ if and only if $a_i{\makebox[.14in]{$R$}}c$, 3. for all $i<\lambda$ and $b\in B\backslash C$, $\neg a'_iRb$. We claim that $G$ is $K_n$-free. Indeed, if $K_n\cong W\seq G$ then, by $(iii)$, it follows that $W\seq BC$ or $W\seq C(a'_i)_{i<\lambda}$. But $BC\subset\H_n$ so this means $W\seq C(a'_i)_{i<\lambda}$, which, by $(i)$ and $(ii)$, means $AC$ is not $K_n$-free, a contradiction. Therefore $G$ embeds in $\H_n$ over $BC$. Let $(a''_i)_{i<\lambda}$ be the image of $(a'_i)_{i<\lambda}$ and set $A'=(A\cap C)\cup(a''_i)_{i<\lambda}$. Then it is clear that $A'\equiv_C A$ and $A'\ind^R_C B$. Using full existence of $\ind^R$, we can prove the full characterization of forking and dividing in $T_n$. \[Tn forking ind\] Suppose $A,B,C\subset\H_n$. Then $$\textstyle A\ind^f_C B\Leftrightarrow A\ind^d_C B\Leftrightarrow \begin{array}{l} \text{$A\cap B\seq C$ and, for all $\bbar\in B\backslash C$,}\\ \text{$K_n(\bbar/AC)$ implies $K_n(\bbar/C)$.} \end{array}$$ The second equivalence is by Theorem \[div ind n\]; and dividing implies forking in any theory. Therefore we only need to show $A\ind^d_C B$ implies $A\ind^f_C B$. Suppose $A\nind^f_C B$. Then there is some $D\subset\H_n\backslash BC$ such that $A'\nind^d_C BD$ for any $A'\equiv_{BC}A$. By Lemma \[fullext\], let $A'\equiv_{BC} A$ such that $A'\ind^R_{BC} D$. By assumption, we have $A'\nind^d_C BD$. *Case 1*: $A'\cap BD\not\seq C$. We have $A'\cap BD\seq BC$ by assumption, so this means there is $b\in (A'\cap B)\backslash C$. But $A'\equiv_{BC}A$ and so $b\in (A\cap B)\backslash C$. 
Therefore $A\nind^d_C B$, as desired. *Case 2*: $A'\cap BD\seq C$. Then, since $A'\nind^d_C BD$, it follows from Theorem \[div ind n\] that there is $\bbar\in BD\backslash C$ such that $\neg K_n(\bbar/C)$ and $K_n(\bbar/A'C)$. Let $X\seq A'C\bbar$ witness $K_n(\bbar/A'C)$. Note that $X\seq A'BCD$. Moreover, note also that if $X\cap(A'\backslash BC)=\emptyset$, then $X\seq BCD$, and so $X$ witnesses $K_n(\bbar/C)$, which is a contradiction. Therefore $X\cap(A'\backslash BC)\neq\emptyset$. Then we claim that $X\seq A'BC$. Indeed, otherwise there is $u\in X\cap(A'\backslash BC)$ and $v\in X\cap(D\backslash A'BC)$. Therefore $u\neq v$, $u\in A'$, and $v\in\bbar$, and so $u{\makebox[.14in]{$R$}}v$, since $X$ witnesses $K_n(\bbar/A'C)$. But this contradicts that there is no edge from $A'\backslash BC$ to $D\backslash BC$. So we have $X\seq A'BC$. Let $\bbar_*=X\cap\bbar\in B\backslash C$. Then $\neg K_n(\bbar/C)$ implies $\neg K_n(\bbar_*/C)$, and $X$ witnesses $K_n(\bbar_*/A'C)$. Therefore $A'\nind^d_C B$. Since $A'\equiv_{BC} A$, we have $A\nind^d_C B$, as desired. It is a general fact that if $\ind^d=\ind^f$ in some theory $T$, then all sets are extension bases for nonforking. Indeed, if a partial type forks over $C$ then it can be extended to a complete type that forks (and therefore divides) over $C$. Therefore, by Proposition \[facts\]$(b)$, no partial type forks over its own set of parameters. A forking and nondividing formula in $T_n$ {#example sec} ========================================== We have shown that forking and dividing are the same for complete types in $T_n$. In this section, we show that the same result cannot be obtained for partial types, by demonstrating an example of a formula in $T_n$ that forks, but does not divide. \[4 indisc\] Suppose $(\bbar^l)_{l<\omega}$ is an indiscernible sequence in $\G$ such that $l(\bbar^0)=4$ and $\bbar^0$ is $K_2$-free (no edges). Then either there are $i<j$ such that $\{b^l_i,b^l_j:l<\omega\}$ is $K_2$-free, or $\bigcup_{l<\omega}\bbar^l$ is not $K_3$-free. Let $B=\bigcup_{l<\omega}\bbar^l$. Suppose first that for all $i<j$, we have $b^0_i{\makebox[.14in]{$R$}}b^1_j$ or $b^1_i{\makebox[.14in]{$R$}}b^0_j$. Let $$f:\{(i,j):1\leq i<j\leq 4\}\func\{0,1\}\text{ such that }f(i,j)=0\Leftrightarrow b^0_iRb^1_j.$$ *Claim*: If $B$ is $K_3$-free then for all $i<j<k$, $f(i,j)=f(j,k)$.\ *Proof*: Suppose not.\ *Case 1*: $f(i,j)=1$ and $f(j,k)=0$. If $f(i,k)=0$ then by indiscernibility we have $b^1_i{\makebox[.14in]{$R$}}b^0_j$, $b^0_j{\makebox[.14in]{$R$}}b^2_k$ and $b^1_i{\makebox[.14in]{$R$}}b^2_k$; and so $\{b^1_i,b^0_j,b^2_k\}\cong K_3$. If $f(i,k)=1$ then by indiscernibility we have $b^2_i{\makebox[.14in]{$R$}}b^0_j$, $b^0_j{\makebox[.14in]{$R$}}b^1_k$ and $b^2_i{\makebox[.14in]{$R$}}b^1_k$; and so $\{b^2_i,b^0_j,b^1_k\}\cong K_3$.\ *Case 2*: $f(i,j)=0$ and $f(j,k)=1$. If $f(i,k)=0$ then $\{b^0_i,b^2_j,b^1_k\}\cong K_3$. Otherwise, if $f(i,k)=1$ then $\{b^1_i,b^2_j,b^0_k\}\cong K_3$. [$_{\sslash}$]{}\ By the claim, if $f(1,2)=0$ then $f(2,3)$, $f(3,4)$, $f(1,3)$ and $f(2,4)$ are all 0; and so $\{b^0_1,b^1_2,b^2_3\}\cong K_3$. If $f(1,2)=1$ then $f(2,3)$, $f(3,4)$, $f(1,3)$, and $f(2,4)$ are all 1; and so $\{b^2_1,b^1_2,b^0_3\}\cong K_3$. In any case we have shown that $B$ is not $K_3$-free.\ So we may assume that there is some $i<j$ such that $\neg b^0_i{\makebox[.14in]{$R$}}b^1_j$ and $\neg b^1_i{\makebox[.14in]{$R$}}b^0_j$. 
By indiscernibility, and since $\neg b^0_i{\makebox[.14in]{$R$}}b^0_j$, it follows that $\{b^l_i,b^l_j:l<\omega\}$ is $K_2$-free. \[fork nondivide example\] Let $C,\bbar=(b_1,b_2,b_3,b_4)\subset\H_n$ such that $C\cong K_{n-3}$, $\bbar {\makebox[.14in]{$R$}}C$, and $\bbar$ is $K_2$-free. For $i<j$, let $\vphi_{i,j}(x,b_i,b_j)=``x{\makebox[.14in]{$R$}}Cb_ib_j"$. Set $\vphi(x,\bbar)=\bigvee_{i<j}\vphi_{i,j}(x,b_i,b_j)$. Then $\vphi(x,\bbar)$ forks over $C$ but does not divide over $C$. For any $i\neq j$, $|Cb_ib_j|=n-1$, and so $\neg K_n(b_i,b_j/C)$. Moreover, we clearly have that if $a\models\vphi_{i,j}(x,b_i,b_j)$ then $K_n(b_i,b_j/Ca)$. By Theorem \[Tn dividing\], $\vphi_{i,j}(x,b_i,b_j)$ divides over $C$, and therefore $\vphi(x,\bbar)$ forks over $C$. Let $(\bbar^l)_{l<\omega}$ be $C$-indiscernible, with $\bbar^0=\bbar$. If there is some $K_3\cong W\subset\bigcup_{l<\omega}\bbar^l$ then $K_n\cong CW$, since $\bbar^l\equiv_C\bbar$ for all $l<\omega$ implies $C{\makebox[.14in]{$R$}}W$; this is impossible. Therefore $\bigcup_{l<\omega}\bbar^l$ is $K_3$-free and so, by Lemma \[4 indisc\], there are $i<j$ such that $B:=\{b^l_i,b^l_j:l<\omega\}$ is $K_2$-free. Since $|C|=n-3$, it follows that $BC$ is $K_{n-1}$-free and so there is some $a\in\H$ such that $a{\makebox[.14in]{$R$}}(BC)$. But then $a\models\vphi_{i,j}(x,b^l_i,b^l_j)$ for every $l<\omega$, and hence $a$ realizes $\{\vphi(x,\bbar^l):l<\omega\}$. By Proposition \[facts\], $\vphi(x,\bbar)$ does not divide over $C$. Final remarks {#fin} ============= We have shown that $T_n$ is an $\NSOP_4$ theory in which all sets are extension bases for nonforking, but forking and dividing are not always the same. This only partially answers the question of how the results of [@ChKa] extend to theories with $\TP_2$. In particular, forking is the same as dividing for complete types in $T_n$, which means there is good behavior of nonforking beyond just the fact that all sets are extension bases. This leads to the following amended version of the main question. Suppose that in some theory all sets are extension bases for nonforking. 1. Does $\NSOP_3$ imply forking and dividing are the same for partial types? 2. For what classes of theories do we have $\ind^f=\ind^d$?
--- abstract: | Landau–Kolmogorov inequalities have been extensively studied on both continuous and discrete domains for an entire century. However, the research is limited to the study of functions and sequences on $\Bbb R$ and $\Bbb Z$, with no equivalent inequalities in higher-dimensional spaces. The aim of this paper is to obtain a new class of discrete Landau–Kolmogorov type inequalities of arbitrary dimension: $$\|\varphi\|_{\ell^\infty(\Bbb Z^d)} \leq \mu_{p,d}\|\nabla_D\varphi\|^{p/2^d}_{\ell^2(\Bbb Z^d)}\, \|\varphi\|^{1-p/2^d}_{\ell^2(\Bbb Z^d)}, $$ where the constant $\mu_{p,d}$ is explicitly specified. In fact, this also generalises the discrete Agmon inequality to higher dimension, which in the corresponding continuous case is not possible. author: - Arman Sahovic bibliography: - 'bibliography.bib' title: 'Agmon–Kolmogorov inequalities on $\ell^2(\Bbb Z^d)$' --- Introduction ============ In 1912, G. H. Hardy, J. E. Littlewood and G. Pólya (see [@Hardy]) proved the following inequalities for a function $f\in L^2(\Bbb R)$: $$\label{HLP1} \|f'\|_{L^2(-\infty,\infty)}\leq \|f\|^{1/2}_{L^2(-\infty,\infty)}\|f''\|^{1/2}_{L^2(-\infty,\infty)},$$ $$\label{HLP2} \|f'\|_{L^2(0,\infty)}\leq \sqrt{2}\|f\|^{1/2}_{L^2(0,\infty)}\|f''\|^{1/2}_{L^2(0,\infty)},$$ with the constants $1$ and $\sqrt{2}$ being sharp. These results sparked interest in inequalities involving functions, their derivatives and integrals for a century to come. Specifically, in 1913, E. Landau (see [@Lan1913]) proved the following inequality: For $\Omega\subseteq\Bbb R$, and $f\in L^\infty(\Omega)$: $$\| f'\|_{L^\infty(\Omega)} \leq \sqrt{2}\| f''\|_{L^\infty(\Omega)}^{1/2} \| f\|_{L^\infty(\Omega)}^{1/2},$$ with the constant $\sqrt{2}$ being sharp. This result in turn was motivation for A. Kolmogorov (see [@Kol39]), where in 1939 he found sharp constants for the more general case, using a simple, but very effective inductive argument to extend the case to higher order derivatives: $$\| f^{(k)}\|_{L^\infty(\Omega)} \leq C(k,n)\| f^{(n)}\|_{L^\infty(\Omega)}^{k/n} \| f\|_{L^\infty(\Omega)}^{1-k/n},$$ where, for $k, n\in\Bbb N $ with $1 \leq k < n$, he determined the best constants $C(k,n)\in\Bbb R$ for $\Omega=\Bbb R$. Since then, there has been a great deal of work on what are nowadays known as the Landau–Kolmogorov inequalities, which are in their most general form: $$\| f^{(k)}\|_{L^p} \leq K(k,n,p,q,r)\,\,\| f^{(n)}\|_{L^q}^{\alpha} \| f\|_{L^r}^{\beta},$$ with the minimal constant $K = K(k,n,p,q,r)$. The real numbers $p, q, r \geq 1$; $k, n\in\Bbb N $ with $0 \leq k < n$ and $\alpha, \beta\in\Bbb R$ take on values for which the constant $K$ is finite (see [@Gab1967]). However, literature on discrete equivalents of those inequalities remained very limited for a long time. In 1979, E. T. Copson (see [@Cop79]) was one of the first to find equivalent results for sequences, series and difference operators. Indeed, he found the discrete equivalents of (\[HLP1\]) and (\[HLP2\]). For a square summable sequence, $\{a(n)\}_{n\in\Bbb Z}\in\ell^2(\Bbb Z)$ and a difference operator $(Da)(n):=a(n+1)-a(n)$, we have: $$\label{Cop1} \|Da\|_{\ell^2(-\infty,\infty)}\leq \|a\|^{1/2}_{\ell^2(-\infty,\infty)}\|D^2a\|^{1/2}_{\ell^2(-\infty,\infty)},$$ $$\label{Cop2} \|Da\|_{\ell^2(0,\infty)}\leq \sqrt{2}\|a\|^{1/2}_{\ell^2(0,\infty)}\|D^2a\|^{1/2}_{\ell^2(0,\infty)},$$ with the constants $1$ and $\sqrt{2}$ yet again being sharp. Z.
Ditzian (see [@Dit2]) then extended those results to establish best constants for a variety of Banach spaces, adding equivalent results for continuous shift operators $f(x+h)-f(x);\,\, x\in \Bbb R, f\in L^2(\Bbb R)$. Comparing inequalities such as (\[Cop1\]) and (\[Cop2\]) with (\[HLP1\]) and (\[HLP2\]) respectively, it was suspected that sharp constants were identical for equivalent discrete and continuous Landau–Kolmogorov inequalities for $1\leq p=q=r\leq\infty$. Indeed, in the cases $p=1, 2, \infty$, this was true for the whole and semi-axis. However, the general case has since been shown to be false, as for example demonstrated in [@KKZ88] by M. K. Kwong and A. Zettl, where they prove that for many values of $p$, the discrete constants are strictly greater than the continuous ones. Another important special case of the Landau–Kolmogorov inequalities is the Agmon inequality, proven by S. Agmon (see [@Agmon10]). Viewed as an interpolation inequality between $L^{\infty}(\Bbb R)$ and $L^{2}(\Bbb R)$, it states the following: $$\|f\|_{L^\infty(\Bbb R)}\leq \|f\|^{1/2}_{L^2(\Bbb R)} \|f'\|^{1/2}_{L^2(\Bbb R)}.$$ Thus, throughout this paper, for a domain $\Omega$, a function $f\in L^2(\Omega)$, a sequence $\varphi\in\ell^2(\Omega)$, $\alpha,\,\beta$ being $\Bbb Q$-valued functions of the integers $k,\,n$ with $k\le n$, and constants $C(\Omega,k,n),\,D(\Omega,k,n)\in \Bbb R$, we shall call the inequalities $$\label{Agmon-Kolmogorov-Continuous} \|f^{(k)}\|_{L^\infty(\Omega)}\leq C(\Omega,k,n)\,\,\|f\|^{\alpha(k,n)}_{L^2(\Omega)} \,\|f^{(n)}\|^{\beta(k,n)}_{L^2(\Omega)},$$ $$\label{Agmon-Kolmogorov-Discrete} \|D^k \varphi\|_{\ell^\infty(\Omega)}\leq D(\Omega,k,n)\,\|\varphi\|^{\alpha(k,n)}_{\ell^2(\Omega)} \,\|D^n \varphi\|^{\beta(k,n)}_{\ell^2(\Omega)},$$ the Agmon–Kolmogorov inequalities; the discrete version (\[Agmon-Kolmogorov-Discrete\]), for $\Omega:=\Bbb Z^d$, will be the central concern of this paper. Specifically, we only require the case where $k=0$ and $n=1$, whereas the other inequalities, i.e. those concerned with higher order, have been discussed in [@Sah13]. Agmon–Kolmogorov Inequalities over $\Bbb Z^d$ ============================================= We introduce our notation for the $d$-dimensional inner product space of square summable sequences.
For a vector of integers $\zeta:=(\zeta_1,\ldots,\zeta_d)\in\Bbb Z^d$, we say $\{\varphi(\zeta)\}_{\zeta\in\Bbb Z^d}\in\ell^2(\Bbb Z^d)$, if and only if the following norm is finite: $$\|\varphi\|_{\ell^2({\Bbb Z^d})}:=\Big(\sum_{\zeta\in\Bbb Z^d} |\varphi(\zeta)|^2\Big)^{1/2}.$$ Then, for $\varphi,\,\phi\in\ell^2(\mathbb{Z}^d)$, we let $\langle\,\cdot\,,\,\cdot\,\rangle_d$ be the inner product on $\ell^2(\mathbb{Z}^d)$: $$\langle\varphi, \phi\rangle_d:=\sum_{\zeta\in\Bbb Z^d} \varphi(\zeta)\overline{\phi(\zeta)}.$$ We then let $D_1,\ldots,D_d$ be the partial difference operators defined by: $$(D_i \varphi) (\zeta):= \varphi(\zeta_1,\ldots,\zeta_i+1,\ldots ,\zeta_d)-\varphi(\zeta_1,\ldots,\zeta_d).$$ The discrete gradient $\nabla_D$ shall thus take the following form: $$\nabla_D \varphi(\zeta_1,\zeta_2, \ldots, \zeta_d) = \big(D_1\varphi(\zeta), D_2\varphi(\zeta),\ldots, D_d \varphi(\zeta)\big).$$ Thus, combining this definition with that of our norm above, we obtain: $$\|\nabla_D \varphi\|^2_{\ell^2({\Bbb Z^d})}=\|D_1 \varphi\|^2_{\ell^2({\Bbb Z^d})}+\ldots +\|D_d \varphi\|^2_{\ell^2({\Bbb Z^d})}.$$ Further, we require the following notation: For a sequence $\varphi(\zeta)\in\ell^2(\Bbb Z^d)$ with $\zeta:=(\zeta_1,...,\zeta_d)\in\Bbb Z^d$, for $0\leq k\leq d$ we define: $$[\varphi]_k:=\left(\sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_k\in \Bbb Z}|\varphi(\zeta)|^2\right)^{1/2}.$$ We note that $[\varphi]_0:=|\varphi(\zeta)|$ and if we apply this operator for $k=d$, i.e. sum across all coordinates, we obtain the $\ell^2(\Bbb Z^d)$-norm: $$\left[\varphi\right]_d= \|\varphi\|_{\ell^2{(\Bbb Z^d)}}.$$ We are interested in a higher-dimensional version of the discrete Agmon inequality (see [@Sah10]), which estimates the sup-norm of a sequence $\phi\in\ell^2(\Bbb Z)$ as follows: $$\|\phi\|^2_{\ell^\infty(\Bbb Z)}\leq\|\phi\|_{\ell^2(\Bbb Z)}\|D\phi\|_{\ell^2(\Bbb Z)}.$$ Thus we commence by 'lifting' this estimate to encompass more variables: \[Agmon-CauchyArbDim\] For the operator $D_{k+1}$, acting on a sequence $\varphi(\zeta)\in\ell^2(\Bbb Z^d)$, we have: $$\sup_{\zeta_{k+1}\in\Bbb Z}\left[\varphi\right]_k \le \left[D_{k+1}\varphi\right]_{k+1}^{1/2} \, \left[\varphi\right]_{k+1}^{1/2}.$$ Using the discrete Agmon inequality on the $(k+1)^{th}$ coordinate, we find: $$| \varphi(\zeta_1,\ldots,\zeta_d)|^2 \le \Big(\sum_{l\in \Bbb Z} | D_{k+1}\varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_{d})|^2\Big)^{1/2} \Big( \sum_{l\in \Bbb Z} | \varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_d)|^2\Big)^{1/2}.$$ Now we sum with respect to the other coordinates: $$\sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_k\in \Bbb Z}| \varphi(\zeta_1,\ldots,\zeta_d)|^2 \le \sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_k\in \Bbb Z}\left[\Big(\sum_{l\in \Bbb Z} | D_{k+1}\varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_{d})|^2\Big)^{1/2} \, \Big( \sum_{l\in \Bbb Z} | \varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_d)|^2\Big)^{1/2}\right],$$ and use the Cauchy–Schwarz inequality on the $k^{th}$ coordinate: $$\begin{aligned} \sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_k\in \Bbb Z}| \varphi(\zeta_1,\ldots,\zeta_d)|^2 &\le & \sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_{k-1}\in \Bbb Z}\Big[\Big(\sum_{\zeta_k\in \Bbb Z}\sum_{l\in \Bbb Z} | D_{k+1}\varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_{d})|^2\Big)^{1/2} \cdot\\ && \,\Big( \sum_{\zeta_k\in \Bbb Z}\sum_{l\in \Bbb Z} |
\varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_d)|^2\Big)^{1/2}\Big].\end{aligned}$$ We repeat this process to finally obtain: $$\begin{aligned} \sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_k\in \Bbb Z}| \varphi(\zeta_1,\ldots,\zeta_d)|^2 &\le & \Big(\sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_k\in \Bbb Z}\sum_{l\in \Bbb Z} | D_{k+1} \varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_{d})|^2\Big)^{1/2} \cdot \\ && \,\Big( \sum_{\zeta_1\in \Bbb Z}...\sum_{\zeta_k\in \Bbb Z}\sum_{l\in \Bbb Z} | \varphi(\zeta_1,\ldots,\zeta_k,l,\zeta_{k+2},\ldots,\zeta_d)|^2\Big)^{1/2}.\end{aligned}$$ We estimate the $\ell^2{(\Bbb Z^d)}$-norm of a partial difference operator with the $\ell^2{(\Bbb Z^d)}$-norm of the sequence itself: \[DownDifferenceN\] For a sequence $\varphi\in\ell^2(\Bbb Z^d)$ and for $i\in\{1,\ldots,d\}$, we have: $$\,\|D_i \varphi\|_{\ell^2{(\Bbb Z^d)}} \leq 2 \|\varphi\|_{\ell^2{(\Bbb Z^d)}}.$$ We show the argument for $D_1$ and note that due to symmetry the other cases follow immediately. $$\begin{aligned} \|D_1 \varphi\|^2_{\ell^2{(\Bbb Z^d)}} &=& \sum_{\zeta\in\Bbb Z^d} | \varphi(\zeta_1+1,\ldots,\zeta_d) - \varphi(\zeta_1,\ldots,\zeta_d)|^2\\ &\le & 2\, \Big( \sum_{\zeta\in\Bbb Z^d} |\varphi(\zeta_1+1,\ldots,\zeta_d)|^2 + \sum_{\zeta\in\Bbb Z^d} |\varphi(\zeta_1,\ldots,\zeta_d)|^2\Big)\\ &=& 4\, \sum_{\zeta\in\Bbb Z^d}| \varphi(\zeta_1,\ldots,\zeta_d)|^2\\ &=& 4 \|\varphi\|^2_{\ell^2{(\Bbb Z^d)}}.\end{aligned}$$ This implies that we can obtain an estimate for any mixed difference operator as follows: $$\|D_1\ldots D_k \varphi\|_{\ell^2{(\Bbb Z^d)}}\le 2\|D_1\ldots D_{l-1}D_{l+1}\ldots D_k \varphi\|_{\ell^2{(\Bbb Z^d)}}.$$ Therefore, by eliminating $l$ difference operators, our inequality will contain the constant $2^l$. We arrive at our main result, the Agmon–Kolmogorov inequalities on $\ell^2(\Bbb Z^d)$. \[AgmonArbDim\] For a sequence $\varphi\in\ell^2(\Bbb Z^d)$, and $p\in\{1,\ldots,2^{d-1}\}$: $$\|\varphi\|_{\ell^\infty(\Bbb Z^d)} \leq \mu_{p,d}\|\nabla_D\varphi\|^{p/2^d}_{\ell^2(\Bbb Z^d)}\, \|\varphi\|^{1-p/2^d}_{\ell^2(\Bbb Z^d)}, $$ where $$\mu_{p,d}:=\left(\frac{\kappa_{p,d}}{{d^{p/2}}}\right)^{1/2^d},$$ and $\kappa_{p,d}$ is a constant to be determined in the following section. We use Lemma \[Agmon-CauchyArbDim\] and Lemma \[DownDifferenceN\] repeatedly: $$\begin{aligned} \|\varphi\|_{\ell^\infty(\Bbb Z^d)} &\leq & \left[D_{1} \varphi\right]^{1/2}_{1} \, \left[\varphi\right]^{1/2}_{1}\\ &\leq & \left[D_{2}D_1\varphi\right]^{1/4}_{2} \, \left[D_1\varphi\right]^{1/4}_{2}\left[D_{2}\varphi\right]^{1/4}_{2} \, \left[\varphi\right]^{1/4}_{2}\\ &\vdots &\\ &\leq & \left[D_d\ldots D_{1}\varphi\right]^{1/2^d}_{d}\,\ldots\,\,\ldots \, \left[\varphi\right]^{1/2^d}_{d}\\ & = & \|D_d\ldots D_{1}\varphi\|^{1/2^d}_{\ell^2(\Bbb Z^d)}\,\ldots \,\,\ldots \, \|\varphi\|^{1/2^d}_{\ell^2(\Bbb Z^d)}\\ \Rightarrow \|\varphi\|_{\ell^\infty(\Bbb Z^d)}^{2^d}& \leq & \|D_d\ldots D_{1}\varphi\|_{\ell^2(\Bbb Z^d)}\,\ldots\,\,\ldots \, \|\varphi\|_{\ell^2(\Bbb Z^d)}.\end{aligned}$$ We have generated an estimate by $2^d$ norms, with exactly $2^{d-1}$ norms originating from the term $\left[D_{1}\varphi\right]^{1/2}_{1}$.
All those will thus involve the operator $D_1$, or more formally: $|\Xi_1|=2^{d-1}$, where we let $$\Xi_1:=\left\{\|D_{a_1}\ldots D_{a_k} D_1\varphi\|_{\ell^2(\Bbb Z^d)}\,\big|\,\, a_i\neq a_j \,\forall \,i\neq j\, ;\,\{a_1,\ldots,a_k\}\subset\{2,\ldots,d\}\right\}.$$ We note that we could also employ estimates by $\|D_i\varphi\|_{\ell^2(\Bbb Z^d)}$ for any $i\in\{1,\ldots,d\}$, but our inequality will not change due to our symmetrising argument. Similarly, we have $2^{d-1}$ norms originating from the term $\left[ \varphi \right]^{1/2}_{1}$, whose estimates will not involve the operator $D_1$. Hence $|\Xi_2|=2^{d-1}$, where we let $$\Xi_2:=\left\{\|D_{a_1}\ldots D_{a_k} \varphi\|_{\ell^2{(\Bbb Z^d)}}\,\big|\,\, a_i\neq a_j \,\forall \,i\neq j\, ;\,\{a_1,\ldots,a_k\}\subset\{2,\ldots,d\}\right\}.$$ We will now apply Lemma \[DownDifferenceN\] repeatedly, to reduce the order of the operator inside the norms to either 0 or 1. We recognise that we have to estimate all $^1\xi\in\Xi_1$ by $^1\xi_1:=\|D_1\varphi\|_{\ell^2(\Bbb Z^d)}$ or alternatively by $\|\varphi\|_{\ell^2(\Bbb Z^d)}$.\ Hence, we choose a $p\in\{0,\ldots, 2^{d-1}\}$ to estimate $p$ elements in $\Xi_1$ by $\|D_1\varphi\|_{\ell^2(\Bbb Z^d)}$, leaving $2^{d-1}-p$ elements in $\Xi_1$ to be estimated by $\|\varphi\|_{\ell^2(\Bbb Z^d)}$. However, for all $2^{d-1}$ elements $^2\xi\in\Xi_2$, we have to provide an estimate by $^2\xi_1:=\|\varphi\|_{\ell^2(\Bbb Z^d)}$ only. This means we have $2^d-p$ elements in $\Xi:=\Xi_1\bigcup\Xi_2$ to be estimated by $\|\varphi\|_{\ell^2(\Bbb Z^d)}$: $$\|\varphi\|_{\ell^\infty(\Bbb Z^d)} ^{2^d} \leq \kappa_{p,d}\|D_{1}\varphi\|^{p}_{\ell^2(\Bbb Z^d)} \, \|\varphi\|^{2^d-p}_{\ell^2(\Bbb Z^d)},$$ where $\kappa_{p,d}$ remains a constant of the form $2^z$ with $z\in\Bbb Q$, which we leave to be identified in the next section. We thus obtain the following estimate: $$\|\varphi\|^{2^{d+1}/p}_{\ell^\infty(\Bbb Z^d)} \leq \kappa_{p,d}^{2/p}\|D_{1}\varphi\|^{2}_{\ell^2(\Bbb Z^d)} \, \|\varphi\|^{(2^{d+1}-2p)/p}_{\ell^2(\Bbb Z^d)}.$$ We now exploit the symmetry of the argument: $$d\,\|\varphi\|^{2^{d+1}/p}_{\ell^\infty(\Bbb Z^d)} \leq \kappa_{p,d}^{2/p}\left(\|D_{1}\varphi\|^{2}_{\ell^2(\Bbb Z^d)}+\ldots+\|D_{d}\varphi\|^{2}_{\ell^2(\Bbb Z^d)} \right)\, \|\varphi\|^{(2^{d+1}-2p)/p}_{\ell^2(\Bbb Z^d)} $$ $$= \kappa_{p,d}^{2/p}\|\nabla_D\varphi\|^{2}_{\ell^2(\Bbb Z^d)}\, \|\varphi\|^{(2^{d+1}-2p)/p}_{\ell^2(\Bbb Z^d)} , $$ and finally rearrange: $$\|\varphi\|_{\ell^\infty(\Bbb Z^d)} \leq \left(\frac{\kappa_{p,d}}{{d^{p/2}}}\right)^{1/2^d}\|\nabla_D\varphi\|^{p/2^d}_{\ell^2(\Bbb Z^d)}\, \|\varphi\|^{1-p/2^d}_{\ell^2(\Bbb Z^d)}. $$ The Constant $\kappa_{p,d}$ =========================== It remains to identify the constant $\kappa_{p,d}$; we thus give: \[Agmond-constant\] We have, for arbitrary dimension $d$ and $p\in\{1,\ldots,2^{d-1}\}$: $$\kappa_{p,d}= 2^{\,d\,\cdot\,2^{d-1}-p}.$$ We will break the proof down into several steps. The method for finding $\kappa_{p,d}$ will rely largely on the following observation: Let $\tau (\xi)$ be the order of the operator contained in any given $\xi\in\Xi$. Then we let $\Omega_i:=\{\xi\,\big|\,\tau (\xi)=i\}$ be the set of all terms in the estimate whose operator has a given order $i$. In $\Xi_1$ we have $1\leq i \leq d$, and in $\Xi_2$, $0\leq i \leq d-1$.
\[PascalOmega\] For the size of $\Omega_i$, we have for $d\geq 2$: For $\Xi_1$: $$|\Omega_i|=\binom{d-1}{i-1}, \qquad 1\leq i \leq d,$$ and $\Xi_2$: $$|\Omega_i|=\binom{d-1}{i}, \qquad 0\leq i \leq d-1.$$ We proceed by induction and prove the case of $\Xi_2$, noting that the argument for $\Xi_1$ is symmetrically identical. We have already seen that the formula is correct for $d=2$, and now we assume it is true for $d=l$, i.e. for $0\leq i \leq l-1$: $$|\Omega_i|=\binom{l-1}{i},$$ and thus we have the following list: $$\begin{array}{r|ccccc|ccccc} \Xi_2 & {}^2\xi_{2^{d-1}} & \cdots & \cdots & {}^2\xi_2 & {}^2\xi_1 & |\Omega_0| & |\Omega_1| & |\Omega_2| & \cdots & |\Omega_{l-1}| \\ \hline \Bbb Z^l: & D_l\ldots D_2 & \cdots & \cdots & D_2 & 1 & \binom{l-1}{0} & \binom{l-1}{1} & \binom{l-1}{2} & \cdots & \binom{l-1}{l-1} \end{array}$$ Now each term of a given order $\tau$ will, by the Agmon–Cauchy inequality (Lemma \[Agmon-CauchyArbDim\]), generate a term of order $\tau$ and one of order $\tau+1$. Thus we have: $$\begin{array}{r|ccccc|ccccc} \Xi_2 & {}^2\xi_{2^{d}} & \cdots & \cdots & {}^2\xi_2 & {}^2\xi_1 & |\Omega_0| & |\Omega_1| & |\Omega_2| & \cdots & |\Omega_{l}| \\ \hline \Bbb Z^{l+1}: & D_{l+1}\ldots D_2 & \cdots & \cdots & D_2 & 1 & \binom{l-1}{0} & \binom{l-1}{0}+\binom{l-1}{1} & \binom{l-1}{1}+\binom{l-1}{2} & \cdots & \binom{l-1}{l-1} \end{array}$$ Now we apply the standard combinatorial identity ${}^aC_b+{}^aC_{b+1}={}^{a+1}C_{b+1}$ and consider ${}^aC_{0}={}^aC_{a}=1$, which immediately implies: $$\begin{array}{r|ccccc|ccccc} \Xi_2 & {}^2\xi_{2^{d}} & \cdots & \cdots & {}^2\xi_2 & {}^2\xi_1 & |\Omega_0| & |\Omega_1| & |\Omega_2| & \cdots & |\Omega_{l}| \\ \hline \Bbb Z^{l+1}: & D_{l+1}\ldots D_2 & \cdots & \cdots & D_2 & 1 & \binom{l}{0} & \binom{l}{1} & \binom{l}{2} & \cdots & \binom{l}{l} \end{array}$$ and hence for $d=l+1$, we have: $$|\Omega_i|=\binom{l}{i},$$ completing our inductive step. As discussed previously, if we estimate a given $\xi\in\Xi$ using Lemma \[DownDifferenceN\], we will, for example, obtain $ \|D_1\ldots D_k \varphi\|_{\ell^2{(\Bbb Z^d)}}\le 2\|D_1\ldots D_{l-1}D_{l+1}\ldots D_k \varphi\|_{\ell^2{(\Bbb Z^d)}}. $ We can see that we generate a factor of 2 for every partial difference operator we eliminate, and thus have, for $^1\xi\in\Xi_1$ and $^2\xi\in\Xi_2$ with order $\tau(^1\xi)$ and $\tau(^2\xi)$ respectively: $${}^1\xi\leq\,\, 2^{\tau(^1\xi)-1}\,\|D_1\varphi\|_{\ell^2(\Bbb Z^d)},\qquad \text{and} \qquad {}^2\xi\leq\,\, 2^{\tau(^2\xi)}\,\|\varphi\|_{\ell^2(\Bbb Z^d)}.$$ We note here that $\kappa_{p,d}$ will not depend on which $\ell^2(\Bbb Z^d)$-norms in $\Xi_1$ are chosen to be estimated by $^2\xi_1:=\,\|\varphi\|_{\ell^2(\Bbb Z^d)}$. The reason for this is transparent when considering that the sum of all the orders $\sum_{i=1}^{2^{d-1}} \tau({^1\xi_i})$ is a constant and needs to be reduced to the constant $p\cdot \tau({^1\xi_1})=p$, generating a unique $\kappa_{p,d}$. The $\min_p\kappa_{p,d}$ will be attained at $p=2^{d-1}$ and takes on the following explicit form: $$\kappa_{2^{d-1},d}=\mathlarger{\prod_{i=0}^{d-1}}\,\, 2^{2i\binom{d-1}{i}}.$$ Our minimum constant for $\Xi_1$ in fact occurs if we choose all $^1\xi\in\Xi_1$ to be estimated by $\|D_1\varphi\|_{\ell^2{(\Bbb Z^d)}}$, i.e. choose $p=2^{d-1}$, the maximum $p$ possible.
Our minimum constant, denoted by $\rho^1_{d}$, for all terms in $\Xi_1$ will thus be: $$\rho^1_{d}=\mathlarger{\prod_{k=1}^{2^{d-1}}}\,\, 2^{\tau(^1\xi_k)-1}.$$ Instead of examining each individual element $^1\xi$, we consider that all $^1\xi$ of equal order $i$ generate the same constant, namely $2^{i-1}$. Thus we collect all $^1\xi$ of the same order, and obtain: $$\rho^1_{d}=\mathlarger{\prod_{i=1}^{d}}\,\, 2^{(i-1)|\Omega_i|} =\mathlarger{\prod_{i=1}^{d}}\,\, 2^{(i-1)\binom{d-1}{i-1}}.$$ Then we need to estimate all $^2\xi\in\Xi_2$, and we proceed as for $\Xi_1$. All $^2\xi$ need to be estimated by $\|\varphi\|_{\ell^2(\Bbb Z^d)}$, each generating the constant $2^i$, forming the equivalent pattern as that of $\Xi_1$. We thus obtain, for the minimal constant $\rho^2_{d}$: $$\rho^2_{d}=\mathlarger{\prod_{i=0}^{d-1}}\,\, 2^{i|\Omega_{i}|} =\mathlarger{\prod_{i=0}^{d-1}}\,\, 2^{i\binom{d-1}{i}}.$$ We now see that $\rho^2_{d}=\rho^1_{d}$, and: $$\kappa_{2^{d-1},d}=\rho^2_{d}\rho^1_{d}=\mathlarger{\prod_{i=0}^{d-1}}\,\, 2^{2i\binom{d-1}{i}}.$$ We are now finally in a position to prove Theorem \[Agmond-constant\]: \[Proof of Theorem \[Agmond-constant\]\] We are left to analyse the constant’s dependence on our choice of $p$. First we note that in addition to the constant generated above, we will have chosen $2^{d-1}-p$ terms to be further reduced to $\|\varphi\|_{\ell^2(\Bbb Z^d)}$, each generating a power of $2$. Hence we additionally need to multiply $\kappa_{2^{d-1},d}$ by $2^{2^{d-1}-p}$. Thus our final constant will be: $$\kappa_{p,d}= 2^{2^{d-1}-p}\cdot\mathlarger{\prod_{i=0}^{d-1}}\,\, 2^{2i\binom{d-1}{i}} = 2^{2^{d-1}-p+2\sum_{i=0}^{d-1}i\binom{d-1}{i}},$$ Then we can simplify this further by considering the binomial formula $(1+X)^n=\sum_{k=0}^n \binom{n}{k}X^k$. We differentiate with respect to $X$ and set $X=1$: $$n\cdot\,2^{n-1}=\sum_{k=0}^n k\,\binom{n}{k}.$$ Thus we arrive at: $$\kappa_{p,d}=2^{d\cdot\,2^{d-1}-p}.$$
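As a purely illustrative numerical sanity check, not taken from the paper itself, one can compare both sides of the inequality in Theorem \[AgmonArbDim\] for randomly generated, finitely supported sequences on $\Bbb Z^2$, using the closed form of $\kappa_{p,d}$ just derived. The sketch below (Python/NumPy, with our own function names) assumes $d=2$, realises $D_1,D_2$ as forward differences on an array with a zero border, and also cross-checks the closed form of $\kappa_{p,d}$ against its product form.

```python
import numpy as np
from math import comb

def mu(p, d):
    """mu_{p,d} = (kappa_{p,d} / d^(p/2))^(1/2^d), with kappa_{p,d} = 2^(d*2^(d-1) - p)."""
    kappa = 2.0 ** (d * 2 ** (d - 1) - p)
    # cross-check: kappa_{p,d} = 2^(2^(d-1) - p) * prod_i 2^(2 i C(d-1, i))
    kappa_prod = 2.0 ** (2 ** (d - 1) - p) * np.prod(
        [2.0 ** (2 * i * comb(d - 1, i)) for i in range(d)])
    assert np.isclose(kappa, kappa_prod)
    return (kappa / d ** (p / 2)) ** (1.0 / 2 ** d)

def worst_ratio_d2(p, trials=500, size=12, seed=0):
    """Largest observed lhs/rhs ratio of the inequality for random phi on Z^2 (d = 2)."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        phi = np.zeros((size + 2, size + 2))           # zero border = zero extension to Z^2
        phi[1:-1, 1:-1] = rng.standard_normal((size, size))
        d1 = phi[1:, :] - phi[:-1, :]                  # forward difference D_1
        d2 = phi[:, 1:] - phi[:, :-1]                  # forward difference D_2
        grad = np.sqrt((d1 ** 2).sum() + (d2 ** 2).sum())
        lhs = np.abs(phi).max()
        rhs = mu(p, 2) * grad ** (p / 4) * np.linalg.norm(phi) ** (1 - p / 4)
        worst = max(worst, lhs / rhs)
    return worst                                        # expected to stay <= 1

print(worst_ratio_d2(p=1), worst_ratio_d2(p=2))
```

If the statement holds, both printed ratios should stay below $1$; this is only a plausibility check for small dimension, not a proof.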
--- abstract: 'For a large portion of real-life utterances, the intention cannot be solely decided by either their semantic or syntactic characteristics. Although not all the sociolinguistic and pragmatic information can be digitized, at least phonetic features are indispensable in understanding the spoken language. Especially in head-final languages such as Korean, sentence-final prosody has great importance in identifying the speaker’s intention. This paper suggests a system which identifies the inherent intention of a spoken utterance given its transcript, in some cases using auxiliary acoustic features. The main point here is a separate distinction for cases where discrimination of intention requires an acoustic cue. Thus, the proposed classification system decides whether the given utterance is a fragment, statement, question, command, or a rhetorical question/command, utilizing the intonation-dependency coming from the head-finality. Based on an intuitive understanding of the Korean language that is engaged in the data annotation, we construct a network which identifies the intention of a speech, and validate its utility with the test sentences. The system, if combined with up-to-date speech recognizers, is expected to be flexibly inserted into various language understanding modules.' author: - Won Ik Cho - Hyeon Seung Lee - Ji Won Yoon - Seok Min Kim - Nam Soo Kim bibliography: - 'my\_bib\_190624.bib' title: | Speech Intention Understanding in a Head-final Language:\ A Disambiguation Utilizing Intonation-dependency --- Introduction ============ Understanding the intention of a speech includes all aspects of phonetics, semantics, and syntax. For example, even when an utterance is assigned with its syntactic structure of declarative/interrogative/imperative forms, the speech act may differ considering semantics and pragmatics [@stolcke2000dialogue]. Besides, phonetic features such as prosody can influence the actual intention, which can be different from the illocutionary act that is grasped at first glance [@banuazizi1999real]. In text data, punctuation plays a dominant role in conveying non-textual information. However, in real life, where smart agents with microphones are widely used, critical phonetic features happen to be inadvertently omitted during transmission of data. In many cases, a language model may punctuate a transcribed sentence from the speech recognition module. However, more accurate prediction on the intention is expected to be achieved by engaging in the acoustic data, as observed in the recent approach that co-utilizes audio and text [@gu2017speech].
We were inspired by an idea that during this process, *performing speech analysis for all the input can be costly for devices*. That is, devices can *bypass the utterances whose intention is determined solely upon the text* and concentrate on acoustic analysis of the ambiguous ones. This approach might reduce the system malfunction from the speech with prosody that incurs confusion. Also, languages with low speech resource may benefit from this since a large portion of the utterances can be filtered as clear-cut ones via a text-based sieve. In this study, the language of interest is Korean, a representative one with the head-final syntax. Natural language processing on Korean is known to be burdensome; not only that the Korean language is agglutinative and morphologically rich, but also, the subject of a sentence sometimes happens to be omitted and be determined upon the context. Moreover, to make it challenging to understand the meaning only by text, the intention of certain types of sentences is significantly influenced by phonetic property of the sentence enders [@kim2005evidentiality]. Consider the following sentence, of which the meaning depends on the sentence-final intonation:\ (S1) 천천히 가고 있어 chen-chen-hi ka-ko iss-e slowly go-PROG be-SE[^1]\ With a high rise intonation, this sentence becomes a question (*Are you/they going slowly?*), and given a fall or fall-rise intonation, becomes a statement (*(I am) going slowly.*). Also, given a low rise or level intonation, the sentence becomes a command (*Go slowly.*). This phenomenon partially originates in the particular constituents of Korean utterances, such as multi-functional particle ‘-어 (-e)’, or other sentence enders determining the sentence type [@pak2008types]. Although similar tendencies are observed in other languages as well (e.g., declarative questions in English [@gunlogson2002declarative]), the syntactical and morphological properties of the Korean language strengthen the ambiguity of the spoken utterances. Here, we propose a partially improvable system that identifies the intention of a spoken Korean utterance, with disambiguation utilizing auxiliary acoustic information. The system categorizes input utterances into six categories of *fragment, statement, question, command, and rhetorical question$\cdot$command*, regarding speech act. Although the system does not contain a speech recognition module, it receives the script (assumed correctly transcribed) and the acoustic feature (if required) of an utterance and infers the intention. A similar analysis based on parallel processing of audio and text has been widely utilized so far [@gu2017speech; @gu2018hybrid]. However, ours accompanies the process of identifying and disambiguating intonation-dependent utterances, making the computation more efficient. To this end, total 61,225 lines of text utterances were semi-automatically collected or generated, including about 2K manually tagged lines. We claim the followings as our contribution: - A new kind of text annotation scheme that considers prosodic variability of the sentences (effective, but not restricted, to a head-final language), accompanying a freely-available corpus - A two-stage system that consists of a text-based sieve and an audio-aided analyzer, which significantly reduces the computation/resource burden that exists in end-to-end acoustic systems In the following section, we take a look at the literature of intention classification and demonstrate the background of our categorization. 
In Section 3 and 4, the architecture of the proposed system is described with a detailed implementation scheme. Afterward, the system is evaluated quantitatively and qualitatively with the validation and test set. Besides, we will briefly explain how our methodology can be adopted in other languages. Background ========== The most important among various areas related to this paper is the study on the sentence-level semantics. Unlike the syntactic concept of sentence form presented in @sadock1985speech, the speech act, or intention[^2], has been studied in the area of pragmatics, especially illocutionary act and dialog act [@searle1976classification; @stolcke2000dialogue]. It is controversial how the speech act should be defined, but in this study, we refer to the previous works that suggest general linguistic and domain non-specific categorization. Mainly we concentrate on lessening vague intersections between the classes, which can be noticed between e.g., *statement* and *opinion* [@stolcke2000dialogue], in the sense that some statements can be regarded as opinion and vice versa. Thus, starting from the clear boundaries between the sentence forms of *declaratives, interrogatives*, and *imperatives* [@sadock1985speech], we first extended them to syntax-semantic level adopting discourse component (DC) [@portner2004semantics]. It involves *common ground*, *question set*, and *to-do-list*: the constituent of the sentence types that comprise natural language. We interpreted them in terms of speech act, considering the obligations the sentence impose on the hearers; whether to answer (*question*), to react (*command*), or neither (*statement*). Building on the concept of reinterpreted discourse component, we took into account the rhetoricalness of question and command, which yielded additional categories of *rhetorical question* (RQ) [@rohde2006rhetorical] and *rhetorical command* (RC) [@kaufmann2016fine]. We claim that a single utterance falls into one of the five categories, if non-fragment and if given acoustic data[^3]. Our categorization is much simpler than the conventional DA tagging schemes [@stolcke2000dialogue; @bunt2010towards], and is rather close to tweet act [@vosoughi2016tweet] or situation entity types [@friedrich2016situation]. However, our methodology has strength in inferring the speech intention that less relies on the dialog history, which was possible by bringing into a new class of *intonation-dependent utterances*, as will be described afterward. At this point, it may be beneficial to point out that the term *intention* or *speech act* is going to be used as a domain non-specific indicator of the utterance type. We argue that the terms are different from *intent*, which is used as a specific action in the literature [@shimada2007case; @liu2016attention; @haghani2018audio], along with the concept of *item*, *object* and *argument*, generally for domain-specific tasks. Also, unlike dialog management where a proper response is made based on the dialog history [@li2017dailydialog], the proposed system aims to find the genuine intention of a single input utterance and recommend further action to the addressee. We expect this research to be cross-lingually extended, but for data annotation, research on the Korean sentence types was essential. 
Although the annotation process partly depended on the intuition of the annotators, we referred to the studies related to syntax-semantics and speech act of Korean [@han2000structure; @pak2006jussive; @seo2017syntax], to handle some unclear cases regarding optatives, permissives, promisives, request$\cdot$suggestions, and rhetorical questions. System Concept ============== The proposed system incorporates two modules as in Figure 1: (A) a module classifying the utterances into fragments, five clear-cut cases, and intonation-dependent utterances (FCI module), and (B) an audio-aided analyzer for disambiguation of the intonation-dependent utterances (3A module). Throughout this paper, *text* refers to the sequence of symbols (letters) with the punctuation marks removed, which usually indicates the output of a speech recognition module. Also, *sentence* and *utterance* are interchangeably used to denote an input, but usually the latter implies an object with intention while the former does not necessarily. FCI module: Text-based sieving ------------------------------ **Fragments (FR):** From a linguistic viewpoint, fragments often refer to single noun$\cdot$verb phrase where ellipsis occurred [@merchant2005fragments]. However, in this study, we also include some incomplete sentences whose intention is underspecified. If the input sentence is not a fragment, then it is assumed to belong to the clear-cut cases or be an intonation-dependent utterance[^4]. ![A brief illustration on the structure of the proposed system. []{data-label="fig:fig2"}](fig1){width="0.6\columnwidth"} It is arguable that fragments are interpreted as *command* or *question* under some circumstances. However, we found it difficult to assign a specific intention to them even given audio, since it mostly requires the dialog history which we do not handle within this study. We observed that a large portion of the intention concerning context is represented in the prosody, which led us to define prosody-sensitive cases afterward separately. Whereas for fragments, discerning such implication is not feasible in many cases. Therefore, we decided to leave the intention of the fragments underspecified, and let them be combatted with the help of the context in the real world usage.\ **Clear-cut cases (CCs):** Clear-cut cases incorporate the utterances of the five categories: *statement, question, command, rhetorical question*, and *rhetorical command*, as described detailed in the annotation guideline[^5] with the examples. Briefly, questions are the utterances that require the addressee to answer, and commands are the ones that require the addressee to act. Even if sentence form is declarative, words such as *wonder* or *should* can make the sentence question or command. Statements are descriptive and expressive sentences that do not apply to both cases. Rhetorical questions (RQ) are the questions that do not require an answer because the answer is already in the speaker’s mind [@rohde2006rhetorical]. Similarly, rhetorical commands (RC) are idiomatic expressions in which imperative structure does not convey a to-do-list that is mandatory (e.g., *Have a nice day*) [@han2000structure; @kaufmann2016fine]. The sentences in these categories are functionally similar to statements but were categorized separately since they usually show non-neutral tone. In making up the guideline, we carefully looked into the dataset so that the annotating schemes can cover the ambiguous cases. 
As stated in the previous section, we referred to @portner2004semantics to borrow the concept of discourse component and extended the formal semantic property to the level of pragmatics. That is, we searched for a question set (QS) or to-do-list (TDL) which makes an utterance a valid directive in terms of speech act [@searle1976classification], taking into account non-canonical and conversation-style sentences which contain idiomatic expressions and jargons. We provide a simplified criterion with Table 1.\ **Intonation-dependent utterances (IU):** With decision criteria for the clear-cut cases, we investigated *whether the intention of a given sentence can be determined with punctuations removed*. The decision process requires attention to the final particle and the content. There has been a study on Korean sentences which handled with final particles and adverbs [@nam2014novel]. However, up to our knowledge, there has been no explicit guideline on a text-based identification of the utterances whose intention is influenced by intonation. Thus, we set up some principles, or the rules of thumb, concerning the annotating process. Note that the last two are closely related with the maxims of conversation [@levinson2000presumptive], e.g., *“Do not say more than is required."* or *“What is generally said is stereotypically and specifically exemplified."*. - Consider mainly the sentence-final intonation, since sentence-middle intonation usually concerns the topic. Moreover, the sentence-middle intonation eventually influences the sentence-final one in general[^6]. - Consider the case where a *wh-*particle is interpreted as an existential quantifier (*wh-* intervention). - If the subject is missing, assign all the agents (1^st^ to 3^rd^ person) and exclude the awkward ones. - Note the presence of vocatives. - Avoid the loss of felicity that is induced if adverbs or numerics are used in a question. - Do not count the sentences containing excessively specific information as questions. If so, some sentences can be treated as declarative questions in an unnatural way. 3A module: Disambiguation via acoustic data ------------------------------------------- To perform an audio-aided analysis, we constructed a multi-modal[^7] network as proposed in @gu2017speech (Figure 2). The paper suggests that the multi-modal network shows much higher performance compared with *only-speech* and *only-text* systems, which will be investigated in this study as well. In our model, the multi-modal network only combats the intonation-dependent utterances, while in the previous work, it deals with all the utterances tagged with intention. The two cases, utilizing the multi-modal network as a submodule or as a whole system, are to be examined via comparison. ![A brief illustration on the structure of the 3A module that adopts multi-modal network. []{data-label="fig:fig2"}](fig3){width="0.6\columnwidth"} For **acoustic feature**, mel spectrogram (MS) was obtained by mapping a magnitude spectrogram onto mel basis. From each audio of length $L_f$ denoting the number of frames, we extracted MS of size ($L_f$, $N$) where $N$ indicates the number of frequency bins. Here the length of the fast Fourier transform (FFT) window and hop length was fixed to 2,048 and 512 respectively. Since all the utterances differ in length and the task mainly requires the tail of a speech, we utilized the last 300 frames, with zero-padding for the shorter utterances. $N$ was set to 128. 
To deal with the syllable-timedness of Korean, MS was augmented with the energy contour that emphasizes the syllable onsets, making up a final feature (MS + E) of size (300, 129). We maintain identical feature engineering for the speech analysis throughout this study. For the training and inference phase, CNN/RNN-based summarization is utilized based on an experiment. For **textual feature**, which is co-utilized with the acoustic feature in the multi-modal network and is solely used in the FCI module, a character-level encoding is utilized so as to avoid the usage of a morphological analyzer. In our pilot study adopting the morphological decomposition, the performance was shown lower in general, possibly due to the train and test utterances being colloquial. Thus, by using the raw text form as an input, we prevent the malfunction of an analyzer which may not fit with the conversation-style and non-canonical sentences. Also, this might help guarantee that our experiment, which adopts only the perfect transcript, have a similar result with the case that adopts the ASR transcription[^8]. To this end, we adopted recently distributed pre-trained Korean character vectors [@cho2018real] which were obtained based on skip-gram [@mikolov2013distributed] of fastText [@bojanowski2016enriching]. The vectors are 100-dim dense, encompassing the distributive semantics over the scripted corpus. Similar to the case of the acoustic feature, the character sequence is padded from the end of the sentence, and spaces are counted as a character to provide the segmentation information. We counted the last 50 characters, making up a feature of size (50, 100). For the training, CNN or RNN-based summarization is used as in the audio, upon performance. The chosen model is to be demonstrated in the following section, accompanied by a comparison with the other models. Experiment ========== Corpora ------- To cover a variety of topics, the utterances used for training and validation of the FCI module were collected from (i) the corpus provided by Seoul National University Speech Language Processing Lab[^9], (ii) the set of frequently used words, released from the National Institute of Korean Language[^10], and (iii) manually created questions/commands. From (i) which contains short utterances with topics covering e-mail, housework, weather, transportation, and stock, 20K lines were randomly selected, and three Seoul Korean L1 speakers classified them into the seven categories of FR, IU, and five CC cases (Table 2, Corpus 20K). Annotators were well informed on the guideline and had enough debate on the conflicts that occurred during the annotating process. The resulting inter-annotator agreement (IAA) was $\kappa$ = 0.85 [@fleiss1971measuring] and the final decision was done by majority voting. Taking into account the shortage of the utterances, (i)-(iii) were utilized in the data supplementation. (i) contains various neutral statements and the rhetorical utterances. In (ii), the single nouns were collected and augmented to FR. The utterances in (iii) which were generated with purpose, were augmented to *question* and *command* directly. The composition of the final corpus is stated in Table 2. For training of the multi-modal system, we annotated a speech corpus of size 7,000 utilized in @lee2018acoustic. The annotation process follows the guideline introduced in Section 3.1; due to the tagging being relatively straightforward with the help of the acoustic information, the corpus was annotated by one of the annotators. 
The composition of the speech corpus is also presented in the table. Implementation -------------- The system architecture can be described as a combination of convolutional neural network (CNN) [@krizhevsky2012imagenet; @kim2014convolutional] and self-attentive bidirectional long short-term memory (BiLSTM-Att) [@schuster1997bidirectional; @lin2017structured]. For CNN, five convolution layers were stacked with the max-pooling layers in between, summarizing the distributional information lying in the input of a spectrogram (acoustic features) or a character vector sequence (textual features, although used for CNN only in the pilot study). For BiLSTM, the hidden layer of a specific timestep was fed together with the input of the next timestep, to infer the subsequent hidden layer in an autoregressive manner. For a self-attentive embedding, the context vector whose length equals to that of the hidden layers of BiLSTM, was jointly trained along with the network so that it can provide the weight assigned to each hidden layer. The input format of BiLSTM equals to that of CNN except for the channel number which was set to 1 (single channel) in the CNN models. The architecture specification is provided in Table 3. First, (A) **FCI module** was constructed using a character BiLSTM-Att [@lin2017structured] alone, which shows the best performance among the implemented models (Table 4). CNN is good at recognizing a syntactic distinction that comes from the length of utterances or the presence of specific sentence enders, but not appropriate for handling the scrambling of the Korean language, worsening the performance in the concatenated network. The structure of the FCI module equals to that of the *only-text* model, which is implemented in Section 4.4 for the comparison. Next, for (B) **3A module**, especially in abstracting the acoustic features, the concatenation of CNN and BiLSTM-Att was utilized, in the sense that prosody concerns both shape-related properties (e.g., mel spectrogram) and sequential information. Also, as expected, the models which use root mean square energy (RMSE) sequence seem to emphasize the syllable onsets that mainly affect the pitch contour in Korean. For the textual features, a character BiLSTM-Att is adopted as in the FCI module. Eventually, the output layer of the acoustic feature is concatenated with the output layer of the character BiLSTM-Att, making up a thought vector that concerns both audio and text. The concatenated vector is fed as an input of an MLP to infer the final intention. The structure of the 3A module equals to that of the multi-modal network, and partially to the *only-speech* model which is implemented for the evaluation. In brief, (A) **FCI module** adopts a self attentive char-BiLSTM (acc: 0.88, F1: 0.79). For (B) **3A module**, the networks each utilizing audio (CNN and BiLSTM-Att merged) and text (char BiLSTM-Att) were jointly trained via simple concatenation, to make up a multi-modal network (acc: 0.75, F1: 0.61). For all the modules, the dataset was split into train and validation set with the ratio of 9:1. The class weight was taken into account in the training session concerning the imbalance of the volume for each utterance type. The implementation of the whole system was done with Librosa[^11], fastText[^12] and Keras [@chollet2015keras], which were used for extracting acoustic features, embedding character vectors, and making neural network models, respectively. 
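To make the preceding description concrete, the following is a minimal sketch, in Python, of the feature preparation outlined above; it is not the authors' released code. It assumes librosa and NumPy, a placeholder `wav_path`, and a placeholder dictionary `char_vectors` mapping characters to the 100-dimensional pre-trained vectors; the padding/alignment choices are our own assumptions.

```python
import numpy as np
import librosa

N_MELS, N_FFT, HOP, N_FRAMES = 128, 2048, 512, 300   # parameters stated in Section 3.2
MAX_CHARS, CHAR_DIM = 50, 100                         # parameters stated in Section 3.2

def acoustic_feature(wav_path):
    """Mel spectrogram + RMS energy, keeping the last 300 frames -> array of shape (300, 129)."""
    y, sr = librosa.load(wav_path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=N_FFT,
                                         hop_length=HOP, n_mels=N_MELS)   # (128, T)
    rms = librosa.feature.rms(y=y, frame_length=N_FFT, hop_length=HOP)    # (1, T)
    t = min(mel.shape[1], rms.shape[1])
    feat = np.concatenate([mel[:, :t], rms[:, :t]], axis=0).T             # (T, 129)
    out = np.zeros((N_FRAMES, N_MELS + 1), dtype=np.float32)
    tail = feat[-N_FRAMES:]             # keep the utterance tail (sentence-final prosody)
    out[-len(tail):] = tail             # zero-pad shorter utterances (alignment is an assumption)
    return out

def text_feature(sentence, char_vectors):
    """Last 50 characters (spaces counted) mapped to 100-dim character vectors -> (50, 100)."""
    chars = list(sentence)[-MAX_CHARS:]
    out = np.zeros((MAX_CHARS, CHAR_DIM), dtype=np.float32)
    for i, ch in enumerate(chars, start=MAX_CHARS - len(chars)):
        out[i] = char_vectors.get(ch, np.zeros(CHAR_DIM, dtype=np.float32))  # OOV -> zeros
    return out
```

Stacked into batches, arrays of shape (batch, 300, 129) and (batch, 50, 100) would feed the acoustic branch (CNN and BiLSTM-Att) and the textual branch (char BiLSTM-Att), respectively.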
Result ------ To interpret the performance, we analyzed the validation result for each module quantitatively and qualitatively. The **FCI module** receives only a text input and classifies it into seven categories; thus, we made up a confusion matrix with the validation result (Table 5). Note that FRs, statements, questions, and commands show high accuracy ($>$ 85%) while others show lower ($<$ 65%). RQs show the lowest accuracy (48%) and a large portion of the wrong answers were related to the utterances that are even difficult for a human to disambiguate since nuance is involved. It was encouraging that the frequency of false alarms regarding RCs and IUs is low in general. For RCs, the false alarms might induce an excessive movement of the addressee (e.g., AI agents) and for IUs, an unnecessary analysis on the speech data could have been performed. There were some wrong answers regarding the prediction as statements. We found that most of them show a long sentence length that can confuse the system with the descriptive expression; especially among the ones which are originally question, command, or RQ. For example, some of the misclassified commands contained a modal phrase (e.g., -야 한다 (*-ya hanta*, should)) that is frequently used in prohibition or requirement. This let the utterance be recognized as a descriptive one. Also, we could find some errors incurred by the morphological ambiguity of Korean. For example, ‘베란다 (*peylanta*, a terrace)’ was classified as a statement due to the existence of ‘란다 (*lanta*, declarative sentence ender)’, albeit the word (a single noun) has nothing to do with descriptiveness. For the **3A module**, a confusion matrix was constructed considering the multi-modal network which yields one of the six labels (after excluding IU) (Table 6) as an output. Statements showed high accuracy (88%), as it is assumed that much of the inference is affected by the textual analysis. For the others, due to the scarcity of the sentence type (FR, RC) or the identification being challenging (Q and RQ), the performance was not encouraging. However, it is notable that the accuracy regarding FR (65%) and RQ (62%) is much higher than the rest. It seems that the distinct acoustic property such as the length of speech or the fluctuation of magnitude might have positively affected the inference. This is the point where the audio-aided analysis displays its strength, as shown in the performance gap between the RQs of the FCI module (48%) [^13]. For the utterances with less distinct acoustic property, such as commands (59%) and RCs ($<$ 30%), a significant amount of misclassification was shown, mostly being identified as a statement. Despite the noticeable phonetic property such as rising intonation, the lower performance was achieved with the pure questions (53%). It seems to originate in the dominant volume of the RQs in the dataset, which regards the corpus being scripted. Overall, we found out that the utilization of acoustic features helps understand the utterances with distinct phonetic properties. However, it might deter the correct inference between the utterance types that are not, especially if the volume is small or the sentence form is similar. Evaluation ---------- To investigate the practical reliability of the whole system, we separately constructed a test set of size 2,000. Half of the set contains 1,000 challenging utterances. 
It consists of drama lines with the punctuation marks removed, recorded audio [@lee2018acoustic], and manually tagged six-class label (FR and five CCs). For another half, 1,000 sentences in the corpus (i) (not overlapping with the ones randomly chosen for FCI module) were recorded and manually tagged. The former incorporates highly scripted lines while the latter encompasses the utterances in real-life situations. By binding them, we assign fairness to the test of the models trained with both types of data. Here, we compare four systems in total: (a) **only-speech**, (b) **only-text**, (c) **multi-modal**[^14] [@gu2017speech], and (d) **text + multi-modal**. For (a), we adopted the speech corpus annotated in Section 4.1. All the IUs were tagged with their genuine intention regarding audio. For (b), we removed IUs from the corpora (labeled as *-IU* in the table) and performed 6-class classification. For a fair comparison with the speech-related modules (a, c), we utilized the corpus of small size (7,000)[^15]. For (c), we utilized both script and speech. Note that (c) equals to the structure suggested in (B) (Figure 2). For (d), the proposed system, a cascade structure of the FCI module and the multi-modal network was constructed. The overall dataset size, training scheme, neural network architecture, computation, and evaluation result are described in Table 7. Notably, the model with a text-based sieve (d) yields a comparable result with the multi-modal model (c) while reducing the computation time to about 1/20. The utility of the text-based sieve is also observed in the performance of (b), which is much higher than (a) and close to (c, d). With the large-scale corpus constructed in Section 4.1, the models which show much higher performance were obtained (b^+^, d^+^). The models were enhanced with both accuracy and F1 score by a large portion compared to the models trained with the small corpus while preserving the short inference time. This kind of advance seems to be quite tolerable, considering that many recent breakthroughs in NLU tasks accompanied pretrained language inference systems that take benefit from out-of-data information. Beyond our expectation, the model which utilizes only the text data (b^+^) showed the higher performance, both in accuracy and F1-score. This does not necessarily contradict our hypothesis since we did not assume that hybrid or multi-modal system is better than *only-speech* or *only-text* model. Instead, we interpret this result as the advantage of aggregating the concept of prosodic ambiguity into the corpus construction. The result was achieved by making up a corpus, which reduces the intervention of ambiguity (61K -IU), in a less explored head-final language. It mainly concerns disentangling the vagueness coming from some sentence enders such as ‘-어 (-e)’ or ‘-지 (-ci)’, and letting the analysis concentrate on content and nuance. Discussion ---------- Given 7K speech-script pairs for train/test in (a-d), we interpret (b^+^)/(d^+^) as also the proposed systems[^16]. They utilize the larger dataset (61K), that is easier in annotation/acquisition than speech, to boost the performance regarding low-resource speech data. Despite some weak points of the FCI/3A module, the whole proposed systems (d, b^+^, d^+^) illustrate a useful methodology for a text intention disambiguation utilizing the potential acoustic cue. Besides, we want to clarify some points about the head-final language and our work’s scalability. 
Head-final syntax regards languages such as Japanese/Korean/Tamil (considering only the rigid head-final ones). We used the term ‘a head-final language’ since we have executed the experiment only for the Korean language. However, we claim that the scheme can be expanded to the other languages that display underspecified sentence enders or *wh-* particles *in-situ*. Moreover, we expect the scheme to be adopted to non-head-final languages that incorporate the type of utterances whose intention is ambiguous without prosody/punctuation (e.g., declarative questions in English). Referring again to the literature, the result can be compared to the case of utilizing a fully multi-modal system as suggested for English [@gu2017speech], where the accuracy of 0.83 was obtained with the test set split from the English corpus. Such kind of systems are uncomplicated to construct and might be more reliable in the sense that less human factors are engaged in the implementation. Nevertheless, our approach is meaningful for the cases where there is a lack of resource in labeled speech data. The whole system can be partially improved by augmenting additional text or speech. Also, the efficiency of the proposed system lies in utilizing the acoustic data only for the text that requires additional prosodic information. Resultingly, it lets us avoid redundant computation and prevent confusion from unexpected prosody of users. We do not claim that our approach is optimum for intention identification. However, we believe that the proposed scheme might be helpful in the analysis of some low-resource languages since text data is easier to acquire and annotate. We mainly target the utilization of our approach in goal-oriented/spoken-language-based AI systems, where the computation issue is challenging to apply acoustic analysis for all the input speech. Conclusion ========== In this paper, we proposed a text-based sieving system for the identification of speech intention. The system first checks if input speech is a fragment or has a confidently determinable act. If neither, it conducts an audio-aided decision, associating the underspecified utterance with the inherent intention. For a data-driven training of the modules, 7K speech and 61K text data were collected or manually tagged, with a fairly high inter-annotator agreement. The proposed NN-based systems yield a comparable result with or outperform the conventional approaches with an additionally constructed test set. Our goal in theoretical linguistics lies in making up a new speech act categorization that aggregates potential acoustic cue. It was shown to be successful and is to be discussed more thoroughly via a separate article. However, more importantly, a promising application of the proposed system regards SLU modules of smart agents, especially the ones targeting free-style conversation with a human. It is why our speech act categorization includes directiveness and rhetoricalness of an utterance. Using such categorization in dialog managing might help the people who are not familiar with the way to talk to intelligent agents. It widens the accessibility of the speech-driven AI services and also sheds light to the flexible dialog management. Our future work aims to make the sieve more rigorous and to augment a precise multi-modal system for 3A module, which can be reliable by making up an elaborately tagged speech DB. Besides, for a real-life application, the analysis that can compensate the ASR errors will be executed. 
The models and the corpora will be distributed online. [^1]: Denotes the underspecified sentence enders; final particles whose role vary. [^2]: In this paper, intention and act are often used interchangeably. In principle, the intention of an utterance is the object of grasping, and the act of speech is a property of the utterance itself. However, we denote determining the act of a speech, such as question and demand, as inferring the intention. [^3]: More elaborate definition of *fragments* and intonation-dependency will be discussed in the next section. [^4]: There were also context-dependent cases where the intention was hard to decide even given intonation, but the portion was tiny. [^5]: Will be published online. [^6]: Except some case regarding *wh-* particles. [^7]: To be strict, the system takes a speech as an only input and makes a decision assuming perfect transcription, that it is not fully *multi-*modal; rather, it might have to be called as a co-utilization of audio and text. However, for simplicity, we denote the approach above as a multi-modal one. [^8]: ASR errors in Korean usually regard character-level mismatches, where characters approximately correspond to subwords in other languages. [^9]: http://slp.snu.ac.kr/ [^10]: https://www.korean.go.kr/ [^11]: https://github.com/librosa/librosa [^12]: https://pypi.org/project/fasttext/ [^13]: Although the fairness of the comparison is not guaranteed since the adopted corpus is different, the tendency is still appreciable. [^14]: We’re aware that our module is incomplete for ‘multi-modal’, since ours adopts script, not ASR result. However, to minimize the gap between script-given model and real-world applications, we used the character-level embeddings, which are robust to ASR errors. [^15]: The model trained with large-scale corpus will be introduced in a while. [^16]: (c+) was not available since the resource is limited, and bringing additional speech dataset for the evaluation was not aimed in this study.